Drupal watchdog logging: dblog vs syslog

Logging events to your Drupal database is not always the best way to go, and on a busy site it can be quite a performance killer. That is when you want to start using a syslog server instead.

Database logging (dblog)

A standard Drupal 7 installation comes with database logging (the dblog module) enabled, so your watchdog() calls end up in the watchdog table, a permanent storage system. You can then filter these logs in the admin interface or view them from the console with “drush ws” (watchdog-show), and get valuable feedback from them. So it’s a pretty good thing to have.
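
For reference, a watchdog() call in Drupal 7 looks roughly like the snippet below; the module name and variables are just hypothetical examples:

// Log a notice from a (hypothetical) custom module.
watchdog('mymodule', 'Order @order_id was checked out by %name.',
  array('@order_id' => $order_id, '%name' => $account->name),
  WATCHDOG_NOTICE);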

You can fine-tune this logging on the Drupal log settings page (admin/config/development/logging) as shown below:

Drupal logging settings page

On a development site you typically turn on the display of warnings and errors, while on a production site error display should be turned off. You can also configure this via your settings.php file:

// Hide all error and warning messages on screen (production).
$conf['error_level'] = 0;

// Number of watchdog log entries to keep.
$conf['dblog_row_limit'] = 10000;

If you have a big site where a lot of things are logged (node edits, user logins, API calls, order checkouts and so on), you want the “Database log messages to keep” setting to be as large as possible. Only the most recent log items are kept (cron clears older records for you), so if this number is too low you might end up losing important data as it gets pushed out by newer log entries.
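
If you want to see how full the watchdog table currently is compared to the configured limit, a quick sketch like the one below will tell you (run it for example through drush php-eval; the fallback value of 1000 is Drupal core’s default):

// Count the rows currently in the watchdog table and show the configured limit.
$count = db_query('SELECT COUNT(*) FROM {watchdog}')->fetchField();
$limit = variable_get('dblog_row_limit', 1000);
print "$count log entries kept, limit is $limit\n";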

For one of our sites this setting somehow got raised to 1000000 (one million), and that had quite a dramatic impact on our server performance. Below is a New Relic performance graph that shows the watchdog table had become a huge bottleneck for database queries:

New Relic MySQL queries performance report of a Drupal site that has its watchdog log limit set to 1 million.

As a lot of pages on this site write some kind of log entry when they are used, this had quite a negative impact on the overall performance of the site.

Syslog

A better way to handle logs for such a big site is to use an external tool like syslog.

Syslog is a UNIX service, installed on practically every Linux server, that accepts log entries (locally via a socket, or over a TCP or UDP port) and then does something with them. Most of the time that means writing the logs to a file (which then gets rotated daily by logrotate), but it can also forward them to a remote server, or do both.
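
Any application can hand messages to the local syslog daemon. In PHP that looks roughly like the sketch below, and this is essentially the mechanism Drupal’s syslog module builds on (the identity, facility and message are just examples):

// Hand a message to the local syslog daemon using PHP's built-in functions.
openlog('drupal', LOG_NDELAY, LOG_LOCAL0);   // identity and facility
syslog(LOG_NOTICE, 'mysite|example log message');
closelog();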

If you have a lot of servers you want to monitor from one central place, a remote syslog server that collects all the logs is a good choice. You can set it up yourself using a Linux server, or you can use an external service like Papertrail.

Most consumer NAS devices also come with a syslog server installed, like the Synology I use for my home network.

So for the website I mentioned before, this is what we did: we enabled Drupal’s syslog module and forwarded its output to Papertrail.

The basic configuration of the Drupal syslog module is fine as it is; we log to the LOCAL0 facility here, and that is also what we are going to send to Papertrail:

Drupal syslog configuration
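
If you prefer to keep this configuration in code, the same settings can be pinned in settings.php. The variable names below are the ones the Drupal 7 syslog module uses, but do verify them against your own install:

// Pin the syslog module settings in settings.php (Drupal 7 variable names).
$conf['syslog_identity'] = 'drupal';      // tag prepended to each log line
$conf['syslog_facility'] = LOG_LOCAL0;    // facility the entries are sent to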

The server’s syslog entry looks like this (Papertrail will guide you through this setup process):

# Drupal sites log to LOCAL0 and get their logs sent to Papertrail
local0.* @XXXX.papertrailapp.com:XXXXX
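
Note that a single @ forwards the entries over UDP, while @@ uses TCP; after editing the configuration, restart rsyslog (or your syslog daemon of choice) so the change takes effect.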

We did this for 5 servers for this client, so we can now monitor all their logs via one central dashboard on Papertrail.
