Monthly Archives: October 2016


To reload or restart? (with a test script!)

Even the most seasoned DBA sometimes forgets if a setting change needs a restart or a reload. So, how do we check, in a PostgreSQL database, whether a setting can be reloaded or needs a restart? Config File The quickest way is to look at the default configuration file for the version of PostgreSQL that you're running. [...]
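As a quick illustration of the idea (not the post's actual test script), the context column of pg_settings answers this directly: 'postmaster' means a restart is required, while 'sighup' means a reload is enough.

# Illustrative check from the shell; shared_buffers and work_mem are just example settings
psql -c "SELECT name, context FROM pg_settings WHERE name IN ('shared_buffers', 'work_mem');"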

By | October 31st, 2016|Categories: Scott PostgreSQL|1 Comment

Announcing pgCMH

I've been kicking around the idea of founding a Columbus-based PostgreSQL User Group for a while now. I even went so far as to float the idea to people at OLF in '14. After much hemming and hawing (and no one else stepping up in the interim), I've finally gone and done it.

pgCMH is the name of my newly formed group, and we're good to go. We've got our own Twitter (@pgCMH):

#PostgreSQL 9.6.1, 9.5.5, 9.4.10, 9.3.15, 9.2.19 and 9.1.24 Released! https://t.co/RH27To33gh

— pgCMH (@pgCMH) October 27, 2016

as well as our own MeetUp page. We've got a sponsor providing food, and another providing the meeting location. Our first meeting will be in Jan, thanks to all the scheduling conflicts the upcoming holidays create.

Watch this space for updates, follow our Twitter, and join the mailing list on MeetUp. I'd love to get your participation and input. Let's make this group as wildly successful as we can!

By | October 31st, 2016|Categories: Uncategorized|Comments Off on Announcing pgCMH

Don’t forget the globals!

pg_dump is a great tool.  You can get a consistent database backup from a live database without impacting your running traffic. Many people don't realize, however, that a pg_dump isn't actually a complete backup... pg_dump pg_dump operates on a single database in a postgres cluster.  You provide it with the name of a database and [...]
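The excerpt is cut off here, but the gist (hedged, since the command below is the standard approach rather than a quote from the post) is that roles and tablespaces are cluster-wide and never appear in a single-database pg_dump; pg_dumpall can capture them separately:

# Dump only the cluster-wide objects (roles, tablespaces); the output file name is just an example
pg_dumpall --globals-only > globals.sql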

By | October 27th, 2016|Categories: Scott PostgreSQL|4 Comments

Incremental pgBadger

(reblogged from here) You're probably already running pgBadger to monitor your PostgreSQL logs. However, you're probably not running it incrementally throughout the day. Most likely, you've set up a cron.daily job that runs pgBadger against yesterday's log(s). And that's great. Except when you get the dreaded "what just happened on the db?" email. Are you going to [...]

By | October 26th, 2016|Categories: Doug PostgreSQL|0 Comments

How does a database get promoted these days?!?

These days, replicated postgres is simple to set up, easy to maintain and ultra-reliable.  Actually, the biggest question is typically "how do I convert a slave to a master?" Well, the short answer is, a couple of ways.  These days, the very simple 'pg_ctl promote' works best. Promotion Essentially, promotion is the process of converting your read-only slave [...]
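For reference, the command the excerpt mentions looks like this (a sketch; $PGDATA here is assumed to point at the standby's data directory):

# Promote the read-only standby in place
pg_ctl promote -D $PGDATA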

By | October 26th, 2016|Categories: Scott PostgreSQL|0 Comments

Incremental pgBadger

You're probably already running pgBadger to monitor your PostgreSQL logs. However, you're probably not running it incrementally throughout the day. Most likely, you've set up a cron.daily job that runs pgBadger against yesterday's log(s). And that's great. Except when you get the dreaded "what just happened on the db?" email. Are you going to wait until tonight's normal run of pgBadger to see what happened? Are you going to run a 'one off' pgBadger against today's logfile and wait for it to process the entire log? Or are you going to copy the log off somewhere, edit it to cut it down, and then run pgBadger against this cut-down version (hoping you left enough in the log to see proper trending)?

No, most likely you're going to look at your actual monitoring tool that does real-time monitoring of your db and try to figure things out from there. You are running some kind of db monitoring tool, right?

However, let's say that for, uh, reasons, you only have pgBadger at your disposal right this instant. Well, if you were making use of pgBadger's incremental mode you could simply fire off the next scheduled run and it would only process those log entries that were new since the last run. So, for example, if you had a cron.hourly run of pgBadger it would only process the last hour's worth of entries to update today's report. No waiting to process multiple hours of info that you don't need, no editing of the logfile to remove things outside the window you care about, just run it and done.

Sounds nice, right? So let's set this up, shall we? I'm assuming you've already set up postgresql.conf appropriately, but if you haven't, please do that first. The pgBadger website has good documentation on how to do so (a rough sketch of a typical logging setup follows the option descriptions below). According to the docs:

-I | --incremental     : use incremental mode, reports will be generated by days in a separate directory, --outdir must be set.

is how we turn on incremental mode. You'll note that we also need to specify an output dir:

-O | --outdir path     : directory where out file must be saved

I usually stick the pgBadger output into the pg_log directory. In my mind, having the logs and the report on the logs next to each other makes sense, but feel free to stick yours wherever.

Finally, we probably don't need pgBadger reports that are too old, and the docs say we can cull the cruft automatically:

-R | --retention N     : number of week to keep in incremental mode. Default to 0, disabled. Used to set the number of weel to keep in output directory. Older weeks and days directory are automatically removed.

(Ignore the typo, it's that way in the code)
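If you haven't touched postgresql.conf yet, something along these lines is the usual starting point (a sketch based on the general pgBadger guidance, not an exact quote from its docs; check your version's documentation for the precise log_line_prefix it expects):

log_min_duration_statement = 0    # log every statement with its duration; raise if this is too noisy
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0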

On my servers, I have PostgreSQL set up to log to a different file for each day of the week, with automatic rotation and truncation:

log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1d

so my cron.hourly pgBadger looks like:

pgbadger \
    -I \
    -O $PGDATA/pg_log \
    -R 12 \
    -q \
    $PGDATA/pg_log/postgresql-$(date --date yesterday +%a) \
    $PGDATA/pg_log/postgresql-$(date +%a)

which as you can see always feeds both yesterday's and today's log into pgBadger (since the cron runs at 2300 and then again at 0000, we need yesterday's log to catch that last hour). Since we're running in incremental mode, it knows at every run where it left off in the files the last time and does a seek to skip over that data. This cuts the run time down significantly even with the PostgreSQL logging cranked up. You can see it here:

...
DEBUG: Starting reading file postgresql-Wed.log...
DEBUG: Start parsing at offset 412677131 of file postgresql-Wed.log to 433543395
[======================>  ] Parsed 413815537 bytes of 433543395 (95.45%), queries
...

As you can see, it jumps right in at 95% of the file and only processes the newest 5%. In fact, this takes a mere 20 seconds:

statistics gathering took:20 wallclock secs (19.44 usr +  0.17 sys = 19.61 CPU)

on my overloaded Macbook!
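If you're curious where that bookkeeping lives, pgBadger tracks the last parsed position in a small state file (see the -l | --last-parsed option in --help). Assuming it sits alongside the incremental output, you can peek at it:

# Assumed name/location of the incremental-mode state file; use --last-parsed to relocate it
cat $PGDATA/pg_log/LAST_PARSED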

So there you have it. Not counting the time it takes you to ssh to your server, it would have taken all of 20 seconds to have an updated report of what just happened on your database!

Keep in mind, this is also with a single thread. pgBadger has the ability to run multi-threaded. See the --help for details.
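For example, here's a sketch of the same hourly run parsing with four parallel processes (four is an arbitrary choice; size it to your spare cores):

pgbadger -I -O $PGDATA/pg_log -R 12 -q -j 4 \
    $PGDATA/pg_log/postgresql-$(date --date yesterday +%a) \
    $PGDATA/pg_log/postgresql-$(date +%a)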

Enjoy!

By | October 26th, 2016|Categories: Uncategorized|Comments Off on Incremental pgBadger

Production psql Prompts

I deal with lots of production.  When you're dealing with multiple production machines, it's important to know many things: who you are, what you're connected to, where you are, why you are connected, and when you are connected. Most prompts don't give you any of this detail.  If you're familiar with bash, you probably have heard [...]
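The excerpt is truncated here, but as a flavor of what's possible (a sketch, not the author's actual prompt), psql's prompt variables can encode most of that list in ~/.psqlrc:

-- user@host:port/database, ending in # for superusers and > for everyone else
\set PROMPT1 '%n@%m:%>/%/%# '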

By | October 25th, 2016|Categories: Scott PostgreSQL, tip|7 Comments

Extending Postgres by BigSQL

Recently we had a customer request to build a custom extension against the Postgres by BigSQL distribution. Even though BigSQL ships with a large set of commonly used extensions and a good collection of FDWs, these kinds of user build requirements always crop up, given how powerful the Postgres extension model is. BigSQL makes it easy [...]
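The excerpt ends before the how-to, but the usual shape of it (a sketch with placeholder paths, not BigSQL's actual install layout) is to build the extension with PGXS pointed at the distribution's own pg_config:

# Run from the extension's source tree; /opt/bigsql/pg96/bin/pg_config is a placeholder path
make USE_PGXS=1 PG_CONFIG=/opt/bigsql/pg96/bin/pg_config
make install USE_PGXS=1 PG_CONFIG=/opt/bigsql/pg96/bin/pg_config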

By | October 25th, 2016|Categories: BigSQL, Jobin PostgreSQL, PostgreSQL, Uncategorized|0 Comments