You're probably already running pgBadger to monitor your PostgreSQL logs. However, you're probably not running it incrementally throughout the day. Most likely, you've set up a `cron.daily` job that runs pgBadger against yesterday's log(s). And that's great. Except when you get the dreaded "what just happened on the db?" email. Are you going to wait until tonight's normal run of pgBadger to see what happened? Are you going to run a one-off pgBadger against today's logfile and wait for it to process the entire log? Or are you going to copy the log off somewhere, edit it down, and then run pgBadger against this cut-down version (hoping you left enough in the log to see proper trending)?
No, most likely you're going to turn to your actual real-time database monitoring tool and try to figure things out from there. You are running some kind of db monitoring tool, right?
However, let's say that for, uh, reasons, you only have pgBadger at your disposal right this instant. Well, if you were making use of pgBadger's incremental mode, you could simply fire off the next scheduled run and it would only process those log entries that are new since the last run. So, for example, a `cron.hourly` run of pgBadger would only process the last hour's worth of entries to update today's report. No waiting while it chews through hours of info you don't need, no editing the logfile to remove things outside the window you care about; just run it and done.
Sounds nice, right? So let's set this up, shall we? I'm assuming you've already set up `postgresql.conf` appropriately, but if you haven't, please do that first. The pgBadger website has good documentation on how to do so. According to the docs (quoting the `--help` output; the exact wording may differ slightly in your version):
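```
-I | --incremental     : use incremental mode, reports will be generated by
                         days in a separate directory, --outdir must be set.
```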
is how we turn on incremental mode. You'll note that we also need to specify an output directory:
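```
-O | --outdir path     : directory where out file must be saved
```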
I usually stick the pgBadger output into the `pg_log` directory. In my mind, having the logs and the report on the logs next to each other makes sense, but feel free to stick yours wherever.
Finally, we probably don't need pgBadger reports that are too old, and the docs say we can cull the cruft automatically:
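```
-R | --retention N     : number of week to keep in incremental mode. Default
                         to 0, disabled. Used to set the number of weel to
                         keep in output directory. Only incremental saved
                         files and reports older than N weeks are removed.
```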
(Ignore the typo, it's that way in the code)
On my servers, I have PostgreSQL set up to log to a different file for each day of the week, with automatic rotation and truncation. The relevant `postgresql.conf` settings look something like this (values here are illustrative; tune them for your install):
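```
logging_collector = on
# one file per day of the week, e.g. postgresql-Mon.log
log_filename = 'postgresql-%a.log'
# rotate daily (the value is in minutes)...
log_rotation_age = 1440
# ...and truncate, rather than append to, last week's file
log_truncate_on_rotation = on
```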
My `cron.hourly` pgBadger job looks like the sketch below (I'm assuming the stock `/var/lib/pgsql/data/pg_log` location and GNU `date`; adjust paths to taste):
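```
#!/bin/bash
# Hypothetical path -- point this at your real pg_log directory.
PGLOG=/var/lib/pgsql/data/pg_log

# -I: incremental mode, -q: quiet, -O: where the reports go,
# -R 4: only keep the last four weeks of reports.
# Note: 'date --date=yesterday' is GNU date syntax.
pgbadger -I -q -R 4 -O "$PGLOG" \
    "$PGLOG/postgresql-$(date --date=yesterday +%a).log" \
    "$PGLOG/postgresql-$(date +%a).log"
```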
which, as you can see, always feeds both yesterday's and today's logs into pgBadger (since the cron runs at 2300 and then again at 0000, we need yesterday's log to catch that last hour). Since we're running in incremental mode, it knows at every run where it left off in the files the last time and does a `seek` to skip over that data. This cuts the run time down significantly, even with the PostgreSQL logging cranked up.
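If you're curious how it knows where it left off, pgBadger records the last parsed timestamp and position in a `LAST_PARSED` file; in incremental mode it keeps this alongside the reports in the output directory, as far as I've seen, and `-l`/`--last-parsed` lets you place it explicitly. You can peek at the bookmark like so:

```
# Peek at pgBadger's bookmark; the path assumes the pg_log
# output directory from the cron job above.
cat /var/lib/pgsql/data/pg_log/LAST_PARSED
```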
In practice, it jumps right in at the 95% mark of the file and only processes the newest 5%. In fact, the whole run takes a mere 20 seconds on my overloaded MacBook!
So there you have it. Not counting the time it takes you to `ssh` to your server, it would have taken all of 20 seconds to have an updated report of what just happened on your database!
Keep in mind, this is also with a single thread. pgBadger has the ability to run multi-threaded; see the `--help` for details. For example, something like this (the worker count here is just a guess; tune it for your hardware):
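```
# Same incremental run, but parse with 4 parallel workers (-j/--jobs).
pgbadger -I -q -j 4 -O /var/lib/pgsql/data/pg_log \
    /var/lib/pgsql/data/pg_log/postgresql-$(date +%a).log
```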