How to Enable Slow Query Log
Database performance is the backbone of modern web applications. Whether you're running an e-commerce platform, a content management system, or a data-intensive SaaS product, slow database queries can cripple user experience, increase server load, and degrade overall system reliability. One of the most powerful diagnostic tools available to database administrators is the Slow Query Log. Enabling this feature allows you to capture and analyze queries that exceed a specified execution time threshold, helping you identify performance bottlenecks before they impact end users.
The Slow Query Log is a built-in feature of MySQL and MariaDB, with an equivalent mechanism in PostgreSQL. When activated, it records queries that take longer than a configured duration to execute, along with metadata like execution time, lock time, rows examined, and the SQL statement itself. This log becomes an invaluable resource for optimizing database performance, fine-tuning indexes, and improving application efficiency.
In this comprehensive guide, we'll walk you through exactly how to enable the Slow Query Log across different database systems. You'll learn practical configuration steps, industry best practices, recommended tools for log analysis, real-world examples of slow query identification, and answers to frequently asked questions. By the end of this tutorial, you'll have the knowledge and confidence to implement Slow Query Logging in your environment and use it proactively to maintain high-performance database operations.
Step-by-Step Guide
Enabling Slow Query Log in MySQL
MySQL is one of the most widely used relational databases, and enabling its Slow Query Log is straightforward. The process involves modifying the MySQL configuration file and restarting the service to apply changes.
First, locate your MySQL configuration file. On most Linux systems, this is typically found at /etc/mysql/mysql.conf.d/mysqld.cnf or /etc/my.cnf. On macOS with Homebrew, it may be at /usr/local/etc/my.cnf. You can confirm the location by running:
mysql --help | grep "Default options" -A 1
Once you've located the file, open it with a text editor such as nano or vim:
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
Add or modify the following lines under the [mysqld] section:
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
log_queries_not_using_indexes = 1
Here's what each setting does:
- slow_query_log = 1: Enables the Slow Query Log.
- slow_query_log_file: Specifies the path where the log file will be stored. Ensure the directory exists and is writable by the MySQL user.
- long_query_time = 2: Sets the threshold (in seconds) for what qualifies as a slow query. Queries taking longer than 2 seconds will be logged.
- log_queries_not_using_indexes = 1: Logs queries that do not use indexes, even if they execute quickly. This helps identify potential missing indexes.
After saving the file, restart the MySQL service to apply the changes:
sudo systemctl restart mysql
To verify the configuration is active, log into the MySQL shell and run:
SHOW VARIABLES LIKE 'slow_query_log';
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'slow_query_log_file';
If slow_query_log shows ON and the file path and threshold match what you configured, the Slow Query Log is successfully enabled.
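To confirm end to end that logging works, you can trigger an artificially slow query and check that it appears in the log (a quick sanity check, assuming the 2-second threshold configured above):
SELECT SLEEP(3);
Then inspect the log file from the shell:
sudo tail -n 20 /var/log/mysql/mysql-slow.log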
Enabling Slow Query Log in MariaDB
MariaDB, a community-developed fork of MySQL, uses the same configuration syntax. The steps are nearly identical to MySQL, making the transition seamless for users familiar with MySQL.
Locate your MariaDB configuration file. On Ubuntu/Debian, it's typically at /etc/mysql/mariadb.conf.d/50-server.cnf. On CentOS/RHEL, check /etc/my.cnf.d/server.cnf.
Edit the file:
sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf
Add the following lines under the [mysqld] section:
[mysqld]
slow_query_log = ON
slow_query_log_file = /var/log/mariadb/mariadb-slow.log
long_query_time = 2
log_queries_not_using_indexes = ON
Ensure the log directory exists and is writable:
sudo mkdir -p /var/log/mariadb
sudo chown mysql:mysql /var/log/mariadb
Restart MariaDB:
sudo systemctl restart mariadb
Verify the settings via the MariaDB client:
mysql -u root -p
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';
MariaDB also supports additional logging options such as log_slow_verbosity for more detailed output, including execution plan information. To enable verbose logging:
log_slow_verbosity = query_plan,explain
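Because log_slow_verbosity is a dynamic variable in recent MariaDB versions, you can usually also set it at runtime instead of editing the configuration file:
SET GLOBAL log_slow_verbosity = 'query_plan,explain';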
Enabling Slow Query Log in PostgreSQL
PostgreSQL handles slow query logging differently than MySQL or MariaDB. Instead of a dedicated Slow Query Log, PostgreSQL uses the log_min_duration_statement parameter to capture queries exceeding a specified duration.
Locate your PostgreSQL configuration file. It is typically found at /etc/postgresql/[version]/main/postgresql.conf on Ubuntu or /var/lib/pgsql/[version]/data/postgresql.conf on RHEL/CentOS.
Edit the file:
sudo nano /etc/postgresql/15/main/postgresql.conf
Find and modify the following lines:
# Log slow queries
log_min_duration_statement = 2000          # log queries taking longer than 2000 ms (2 seconds)
log_statement = 'none'                     # optional: set to 'mod' or 'all' for broader logging
log_destination = 'stderr'                 # ensure logs are captured
logging_collector = on                     # enable log file collection
log_directory = '/var/log/postgresql'      # directory for log files
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'   # file naming pattern
Important: In PostgreSQL, the duration is specified in milliseconds, so 2000 equals 2 seconds.
After saving the file, reload the PostgreSQL configuration. Most of these settings take effect on reload, but logging_collector is an exception: changing it requires a full restart.
sudo systemctl reload postgresql
PostgreSQL logs are stored in the specified log_directory. You can monitor them in real time using:
tail -f /var/log/postgresql/postgresql-*.log
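As a quick sanity check, you can force a statement to exceed the threshold from psql and confirm it shows up in the log (assuming the 2000 ms setting above):
SELECT pg_sleep(3);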
For even deeper insight, consider enabling track_io_timing = on to capture I/O wait times, or use the auto_explain module to log execution plans for slow queries automatically.
Verifying Log File Creation and Permissions
Regardless of the database system, ensuring the log file is created and accessible is critical. After enabling the Slow Query Log, check that the file is being written to:
ls -la /var/log/mysql/mysql-slow.log
or
ls -la /var/log/mariadb/mariadb-slow.log
or
ls -la /var/log/postgresql/postgresql-*.log
If the file does not exist or is empty, verify:
- The database service has write permissions to the directory.
- The path specified in the configuration is absolute and correct.
- The service was restarted or reloaded after configuration changes.
- No syntax errors exist in the configuration file (check the error logs with sudo journalctl -u mysql or sudo journalctl -u postgresql).
It's also good practice to rotate log files to prevent them from consuming excessive disk space. Use logrotate on Linux systems with a configuration like:
/var/log/mysql/mysql-slow.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    create 640 mysql adm
    sharedscripts
    postrotate
        # Tell MySQL to reopen its log files; credentials may be needed,
        # e.g. from /root/.my.cnf
        mysqladmin flush-logs > /dev/null 2>&1 || true
    endscript
}
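Before relying on the rotation schedule, it is worth dry-running the rules; the path below assumes you saved the snippet as /etc/logrotate.d/mysql-slow (the filename is illustrative):
sudo logrotate -d /etc/logrotate.d/mysql-slow
The -d flag prints what logrotate would do without actually rotating anything.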
Best Practices
Set an Appropriate long_query_time Threshold
Choosing the right threshold for what constitutes a slow query is crucial. Setting it too low (e.g., 0.1 seconds) may generate excessive logs, overwhelming storage and analysis tools. Setting it too high (e.g., 10 seconds) may cause you to miss performance issues that accumulate under load.
As a general rule:
- Development/Testing Environments: Set long_query_time = 0.5 to catch even minor inefficiencies.
- Production Environments: Start with long_query_time = 1 to 2 seconds. Adjust based on application response time expectations.
- High-Traffic Systems: Consider using long_query_time = 0.5 with log rotation and monitoring to detect emerging bottlenecks early.
Use application performance monitoring (APM) tools to correlate user-perceived latency with database query times. If users report delays of 1.5 seconds on page loads, investigate queries exceeding 1 second.
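During an active investigation you can tighten the threshold on the fly in MySQL/MariaDB without a restart. Note that a global change only applies to connections opened afterwards:
SET GLOBAL long_query_time = 0.5;
-- existing sessions keep the old value; reconnect to pick up the new one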
Enable log_queries_not_using_indexes
Many performance issues stem from missing or misused indexes. Enabling log_queries_not_using_indexes (MySQL/MariaDB) or using auto_explain (PostgreSQL) helps surface queries that perform full table scans even if they're fast on small datasets.
Be cautious: this setting can generate a large volume of logs if your application runs many ad-hoc queries on unindexed columns. Use it temporarily during performance audits, then disable it once problematic queries are identified and indexed.
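In MySQL (5.6 and later) you can also cap the volume with the companion throttle variable, which limits how many of these queries are written to the log per minute:
SET GLOBAL log_queries_not_using_indexes = ON;
SET GLOBAL log_throttle_queries_not_using_indexes = 10; -- at most 10 per minute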
Use Log Rotation and Monitoring
Slow query logs can grow rapidly, especially on busy systems. Without rotation, they may fill your disk and cause service outages.
Implement log rotation using logrotate (Linux) or a similar utility. Schedule daily rotation and retain 7 to 14 days of logs unless compliance requires longer retention.
Additionally, set up monitoring alerts. Tools like Prometheus with the MySQL Exporter or Datadog can monitor log file size and trigger alerts if the log grows beyond a threshold indicating a potential surge in slow queries.
Separate Log Storage from System Disk
Store slow query logs on a dedicated disk or partition, especially in high-volume environments. This prevents log growth from interfering with the operating system's ability to write critical files, and from crashing the database with full-disk errors.
Analyze Logs Regularly: Don't Just Collect Them
Enabling the Slow Query Log is only the first step. The real value comes from regularly analyzing the data. Schedule weekly reviews of slow query logs using tools like mysqldumpslow, pt-query-digest, or PostgreSQL's built-in logging utilities.
Identify patterns: Are the same queries recurring? Are they triggered by a specific feature or user action? Correlate logs with application deployment cycles: a new release may have introduced inefficient queries.
Use Read Replicas for Logging in High-Traffic Systems
In production systems with heavy read loads, consider enabling slow query logging only on read replicas, not the primary database. This reduces the performance overhead of logging on your most critical server.
Ensure your replication setup is healthy before doing this, and confirm that the replica's query patterns reflect those of the primary.
Document and Share Findings
Slow query analysis is not just a DBA task; it's a team effort. Share findings with developers. Create a shared dashboard or document that lists top slow queries, their execution plans, and recommended fixes.
Encourage a culture of query optimization. Make slow query logs part of your code review process. Require developers to explain query performance for complex data operations.
Tools and Resources
MySQL and MariaDB Tools
mysqldumpslow
Part of the MySQL distribution, mysqldumpslow is a simple command-line tool that summarizes slow query logs. It groups similar queries and provides statistics like count, average time, and total time.
mysqldumpslow -s c -t 10 /var/log/mysql/mysql-slow.log
This command sorts queries by count (-s c) and shows the top 10 (-t 10). Other sort options include t (query time), l (lock time), and r (rows sent).
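When triaging latency rather than frequency, sorting by average query time is often more revealing:
mysqldumpslow -s at -t 10 /var/log/mysql/mysql-slow.log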
pt-query-digest (Percona Toolkit)
One of the most powerful tools for analyzing MySQL slow query logs is pt-query-digest from Percona Toolkit. It provides detailed analysis, including execution plans, query fingerprints, and performance impact estimates.
Install it via:
wget https://percona.com/get/percona-toolkit.tar.gz
tar xzf percona-toolkit.tar.gz
cd percona-toolkit-*
perl Makefile.PL
make
sudo make install
Run analysis:
pt-query-digest /var/log/mysql/mysql-slow.log > analysis-report.txt
The output includes:
- Top queries by total time
- Queries with highest average latency
- Lock time and rows examined metrics
- Query fingerprints (normalized versions for grouping)
It also supports direct analysis from live MySQL servers using the --processlist option.
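The tool also accepts time filters, which helps when you only care about a recent window (the timestamp below is illustrative):
pt-query-digest --since '2024-03-15 00:00:00' /var/log/mysql/mysql-slow.log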
MySQL Workbench Performance Dashboard
MySQL Workbench includes a built-in Performance Dashboard that can connect to your database and display real-time and historical slow query data. It visualizes query execution times, index usage, and server load.
Use it to correlate slow queries with system metrics like CPU, memory, and I/O.
PostgreSQL Tools
pg_stat_statements
This built-in PostgreSQL extension tracks execution statistics for all SQL statements. It's more comprehensive than slow query logs because it captures every query, not just slow ones.
To enable it, add to postgresql.conf:
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
Restart PostgreSQL, then create the extension:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
Query the statistics:
SELECT query, calls, total_exec_time, mean_exec_time, rows, 100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
On PostgreSQL 12 and earlier, the timing columns are named total_time and mean_time instead of total_exec_time and mean_exec_time.
This reveals the most time-consuming queries in your system, even if they don't exceed the log_min_duration_statement threshold.
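To start a clean measurement window, for example right after deploying an optimization, reset the collected statistics:
SELECT pg_stat_statements_reset();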
auto_explain
This extension logs the execution plan of slow queries. Enable it in postgresql.conf; if pg_stat_statements is already loaded, list both libraries (e.g., shared_preload_libraries = 'pg_stat_statements,auto_explain'):
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 2000
auto_explain.log_analyze = true
auto_explain.log_buffers = true
auto_explain.log_timing = true
Because shared_preload_libraries cannot be changed with a reload, restart PostgreSQL for this to take effect. Afterwards, slow queries will be logged with detailed execution plans, which are invaluable for identifying full scans, inefficient joins, or missing indexes.
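For one-off testing, auto_explain can also be loaded into a single psql session without touching shared_preload_libraries; setting the duration to 0 logs the plan of every statement in that session:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_analyze = true;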
pgBadger
pgBadger is a fast, standalone log analyzer for PostgreSQL. It generates rich HTML reports with graphs, top queries, error trends, and connection statistics.
Install via:
sudo apt-get install pgbadger
Generate a report:
pgbadger -o report.html /var/log/postgresql/postgresql-*.log
Open report.html in a browser to explore interactive visualizations.
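For ongoing use, pgBadger's incremental mode avoids re-parsing old entries and builds a browsable per-day report tree (the output directory below is illustrative):
pgbadger -I -O /var/www/pgbadger /var/log/postgresql/postgresql-*.log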
Third-Party Monitoring Platforms
For enterprise environments, consider integrating slow query logs with monitoring platforms:
- Datadog: Ingests MySQL and PostgreSQL logs via agents, correlates with infrastructure metrics, and provides alerting.
- Prometheus + Grafana: Use exporters like mysqld_exporter or postgres_exporter to scrape metrics and visualize slow query trends.
- New Relic: Offers application-level tracing that links slow database queries to specific user actions or code paths.
These tools enable proactive detection of performance degradation and reduce mean time to resolution (MTTR).
Real Examples
Example 1: Missing Index on WHERE Clause
A web application running on MySQL experienced slow page loads on the product search page. The Slow Query Log recorded:
# Time: 2024-03-15T10:23:45.123456Z
# User@Host: app_user[app_user] @ localhost []
# Query_time: 4.321000  Lock_time: 0.000123  Rows_sent: 150  Rows_examined: 1250000
SELECT product_name, price FROM products WHERE category_id = 45 AND in_stock = 1;
Analysis revealed that the existing index on category_id alone was not selective enough, and in_stock was not indexed; with 1.25 million rows examined, the query was effectively scanning the whole table.
Fix: Added a composite index:
CREATE INDEX idx_category_stock ON products(category_id, in_stock);
After the change, the same query executed in 0.012 seconds. Rows examined dropped to 320.
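You can confirm the index is actually used by re-running the statement under EXPLAIN and checking that the plan references idx_category_stock instead of a full table scan:
EXPLAIN SELECT product_name, price FROM products WHERE category_id = 45 AND in_stock = 1;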
Example 2: N+1 Query Problem in ORM
A Rails application using ActiveRecord logged hundreds of nearly identical queries:
# Query_time: 0.123456  Lock_time: 0.000045  Rows_sent: 1  Rows_examined: 1
SELECT * FROM users WHERE id = 12345;
SELECT * FROM users WHERE id = 12346;
SELECT * FROM users WHERE id = 12347;
...
Each individual query was fast (around 120 ms), but 500 such queries in a single request added up to a 60-second page load. The application was fetching user records one by one instead of eager loading them with includes(:user).
Fix: Modified the Rails controller to use eager loading:
@orders = Order.includes(:user).where(status: 'completed')
The number of queries dropped from 500 to 2: one for orders, one for users. Page load time improved from 60 seconds to 1.2 seconds.
Example 3: Unoptimized JOIN in PostgreSQL
A reporting dashboard using PostgreSQL showed slow query logs like:
2024-03-15 11:05:32 UTC [12345]: [1-1] user=reporting,db=analytics,host=[local] LOG: duration: 8420.123 ms statement:
SELECT c.name, SUM(o.amount) FROM customers c
JOIN orders o ON c.id = o.customer_id
WHERE o.created_at > '2024-01-01'
GROUP BY c.name;
The execution plan revealed a nested loop join on unindexed foreign keys. The orders.customer_id column had no index.
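Plans like this come from running the statement under EXPLAIN; adding ANALYZE and BUFFERS makes PostgreSQL report actual timings and I/O as well:
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.name, SUM(o.amount) FROM customers c
JOIN orders o ON c.id = o.customer_id
WHERE o.created_at > '2024-01-01'
GROUP BY c.name;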
Fix: Added index:
CREATE INDEX idx_orders_customer_id ON orders(customer_id);
Query time dropped from 8.4 seconds to 0.8 seconds. The report now loads instantly.
Example 4: High Lock Time Due to Uncommitted Transactions
One of the slowest queries in the log had a high lock time:
# Query_time: 12.456789  Lock_time: 11.987654  Rows_sent: 0  Rows_examined: 1
UPDATE accounts SET balance = balance - 100 WHERE id = 500;
Despite affecting only one row, it took over 12 seconds, nearly all of it spent waiting for a lock. Investigation revealed a long-running transaction in another session holding a row lock.
Fix: Implemented transaction timeouts in the application and added monitoring for open transactions (shown here for PostgreSQL):
SHOW TRANSACTION ISOLATION LEVEL;
SELECT * FROM pg_stat_activity WHERE state = 'idle in transaction';
Application code was updated to enforce a 5-second timeout on all database transactions.
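In PostgreSQL, equivalent guardrails can also be enforced server-side. A sketch using the built-in timeout settings (the values, and the analytics database name taken from the log above, are examples):
-- abort any statement that runs longer than 5 seconds
ALTER DATABASE analytics SET statement_timeout = '5s';
-- terminate sessions that sit idle inside an open transaction
ALTER DATABASE analytics SET idle_in_transaction_session_timeout = '30s';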
FAQs
What is the Slow Query Log?
The Slow Query Log is a diagnostic feature in database systems like MySQL, MariaDB, and PostgreSQL that records SQL queries exceeding a specified execution time threshold. It helps identify performance bottlenecks by capturing slow-running queries along with metadata such as execution time, lock time, and rows examined.
Does enabling the Slow Query Log impact database performance?
Yes, but the impact is typically minimal when configured properly. Logging adds slight overhead due to disk I/O and parsing. To minimize impact:
- Set a reasonable long_query_time threshold (1 to 2 seconds in production).
- Use log rotation to prevent excessive file growth.
- Store logs on a separate disk from the database data files.
- Avoid enabling verbose logging (e.g., log_queries_not_using_indexes) permanently unless needed for audits.
How often should I review slow query logs?
For production systems, review slow query logs at least weekly. In high-traffic environments, consider daily automated analysis using tools like pt-query-digest or pgBadger. Set up alerts if the number of slow queries increases by more than 20% over a baseline.
Can I enable Slow Query Log without restarting the database?
In MySQL and MariaDB, you can enable the Slow Query Log dynamically using:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;
Settings changed with SET GLOBAL take effect immediately but do not survive a restart; persist them in the configuration file as well (or use SET PERSIST on MySQL 8.0+). In PostgreSQL, you can reload the configuration without restarting using pg_ctl reload or SELECT pg_reload_conf();.
Why are some queries logged even if they're fast?
If you enabled log_queries_not_using_indexes (MySQL/MariaDB), queries that don't use indexes are logged regardless of execution time. Similarly, auto_explain in PostgreSQL can be configured with a low or zero log_min_duration to log plans for fast queries too. This helps identify potential index optimization opportunities.
How do I analyze slow query logs without manual parsing?
Use automated tools:
- MySQL/MariaDB: pt-query-digest, MySQL Workbench
- PostgreSQL: pgBadger, pg_stat_statements
- Cloud platforms: Datadog, New Relic, AWS RDS Performance Insights
These tools normalize queries, group similar ones, and generate visual reports for easy interpretation.
Should I enable Slow Query Log on production servers?
Yes, but with caution. Use a conservative threshold (e.g., 2 seconds), monitor disk usage, and rotate logs regularly. The performance benefit of identifying and fixing slow queries far outweighs the minimal logging overhead.
Whats the difference between Slow Query Log and General Query Log?
The Slow Query Log records only queries that exceed a time threshold. The General Query Log records every query executed, including fast ones. The General Query Log is extremely verbose and should only be enabled temporarily for debugging, as it can severely impact performance and fill disks quickly.
Conclusion
Enabling the Slow Query Log is one of the most effective and low-cost actions you can take to improve database performance. It transforms guesswork into data-driven optimization, allowing you to pinpoint inefficiencies before they become user-facing problems. Whether you're managing a small application or a large-scale enterprise system, understanding how to configure, analyze, and act on slow query data is essential for maintaining reliability, scalability, and responsiveness.
This guide has walked you through the detailed steps to enable Slow Query Logging in MySQL, MariaDB, and PostgreSQL. You've learned best practices for setting thresholds, managing log files, and integrating analysis tools. Real-world examples demonstrated how even minor query improvements can yield dramatic performance gains.
Remember: the goal isn't just to log slow queries; it's to eliminate them. Make slow query analysis part of your regular operational rhythm. Share insights with your development team. Automate reporting. Monitor trends. Iterate.
By consistently leveraging the Slow Query Log, you're not just fixing slow queries; you're building a culture of performance awareness that elevates the entire system. Start today. Enable the log. Analyze the data. Optimize relentlessly.