
Inaugural pgCMH mtg scheduled

As announced on our Twitter, we've scheduled our inaugural meeting for Jan 24, 2017 at 1800 hrs. The folks at CoverMyMeds will graciously provide the meeting space (and parking garage) while OpenSCG is buying the pizza!

At this first meeting, we'll be discussing what you, the members, would like to get from the meetings, we'll talk about future topics, and general organizational things. I know it's not exciting, but I need everyone's input to make this group something you get value from.

Our first mtg is happening on Jan 24 at @covermymeds with @openscg buying pizza. Bring a friend! RSVP at https://t.co/600ZNbPV8e

— pgCMH (@pgCMH) October 19, 2016

Please RSVP via the MeetUP event so we have sufficient food and drink!

See you there!

November 3rd, 2016 | Categories: PostgreSQL User Group

Announcing pgCMH

I've been kicking around the idea of founding a Columbus-based PostgreSQL User Group for a while now. I even went so far as to float the idea to people at OLF in '14. After much hemming and hawing (and no one else stepping up in the interim), I've finally gone and done it.

pgCMH is the name of my newly formed group, and we're good to go. We've got our own Twitter (@pgCMH):

#PostgreSQL 9.6.1, 9.5.5, 9.4.10, 9.3.15, 9.2.19 and 9.1.24 Released! https://t.co/RH27To33gh

— pgCMH (@pgCMH) October 27, 2016

as well as our own MeetUp page. We've got a sponsor providing food, and another providing the meeting location. Our first meeting will be in Jan, thanks to all the scheduling conflicts the upcoming holidays create.

Watch this space for updates, follow our Twitter, and join the mailing list on MeetUp. I'd love to get your participation and input. Let's make this group as wildly successful as we can!

October 31st, 2016 | Categories: Uncategorized

Incremental pgBadger

You're probably already running pgBadger to monitor your PostgreSQL logs. However, you're probably not running it incrementally throughout the day. Most likely, you've set up a cron.daily job that runs pgBadger against yesterday's log(s). And that's great. Except when you get the dreaded "what just happened on the db?" email. Are you going to wait until tonight's normal run of pgBadger to see what happened? Are you going to run a 'one off' pgBadger against today's logfile and wait for it to process the entire log? Or are you going to copy the log off somewhere, edit it to cut it down, and then run pgBadger against this cut-down version (hoping you left enough in the log to see proper trending)?

No, most likely you're going to look at your actual monitoring tool that does real-time monitoring of your db and try to figure things out from there. You are running some kind of db monitoring tool, right?

However, let's say that for, uh, reasons, you only have pgBadger at your disposal right this instant. Well, if you were making use of pgBadger's incremental mode you could simply fire off the next scheduled run and it would only process those log entries that were new since the last run. So, for example, if you had a cron.hourly run of pgBadger it would only process the last hour's worth of entries to update today's report. No waiting to process multiple hours of info that you don't need, no editing of the logfile to remove things outside the window you care about, just run it and done.

Sounds nice, right? So let's set this up, shall we? I'm assuming you've already set up postgresql.conf appropriately, but if you haven't, please go do that first. The pgBadger website has good documentation on how to do so. According to the docs:

-I | --incremental     : use incremental mode, reports will be generated by days in a separate directory, --outdir must be set.

is how we turn on incremental mode. You'll note that we also need to specify an output dir:

-O | --outdir path     : directory where out file must be saved

I usually stick the pgBadger output into the pg_log directory. In my mind, having the logs and the report on the logs next to each other makes sense, but feel free to stick yours wherever.

Finally, we probably don't need pgBadger reports that are too old, and the docs say we can cull the cruft automatically:

-R | --retention N     : number of week to keep in incremental mode. Default to 0, disabled. Used to set the number of weel to keep in output directory. Older weeks and days directory are automatically removed.

(Ignore the typo, it's that way in the code)

On my servers, I have PostgreSQL setup to log into a different file for each day of the week, with automatic rotation and truncation:

```
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
```

so my cron.hourly pgBadger looks like:

```bash
pgbadger \
    -I \
    -O $PGDATA/pg_log \
    -R 12 \
    -q \
    $PGDATA/pg_log/postgresql-$(date --date yesterday +%a) \
    $PGDATA/pg_log/postgresql-$(date +%a)
```

which as you can see always feeds both yesterday's and today's log into pgBadger (since the cron runs at 2300 and then again at 0000, we need yesterday's log to catch that last hour). Since we're running in incremental mode, it knows at every run where it left off in the files the last time and does a seek to skip over that data. This cuts the run time down significantly even with the PostgreSQL logging cranked up. You can see it here:

...
DEBUG: Starting reading file postgresql-Wed.log...
DEBUG: Start parsing at offset 412677131 of file postgresql-Wed.log to 433543395
[======================>  ] Parsed 413815537 bytes of 433543395 (95.45%), queries
...

As you can see, it jumps right in at 95% of the file and only processes the newest 5%. In fact, this takes a mere 20 seconds:

statistics gathering took:20 wallclock secs (19.44 usr +  0.17 sys = 19.61 CPU)

on my overloaded Macbook!

So there you have it. Not counting the time it takes you to ssh to your server, it would have taken all of 20 seconds to have an updated report of what just happened on your database!

Keep in mind, this is also with a single thread. pgBadger has the ability to run multi-threaded. See the --help for details.
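
As a sketch only (in the pgBadger versions I've used, -j is the jobs flag; double-check your --help before copying this), the same run spread across four cores would look like:

```bash
pgbadger \
    -I \
    -O $PGDATA/pg_log \
    -R 12 \
    -q \
    -j 4 \
    $PGDATA/pg_log/postgresql-$(date --date yesterday +%a) \
    $PGDATA/pg_log/postgresql-$(date +%a)
```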

Enjoy!

October 26th, 2016 | Categories: Uncategorized

I’m syndicated

Someone at work thought it would be a good idea to give me access to the corporate blog so that I might post PostgreSQL-related things there and have them syndicated to Planet PostgreSQL. So my PostgreSQL ramblings will show up there now instead of here...

October 8th, 2016 | Categories: Uncategorized

Where Not To Put Your Tablespaces

From the PostgreSQL docs:

Tablespaces in PostgreSQL allow database administrators to define locations in the file system where the files representing database objects can be stored. Once created, a tablespace can be referred to by name when creating database objects.

By using tablespaces, an administrator can control the disk layout of a PostgreSQL installation. This is useful in at least two ways. First, if the partition or volume on which the cluster was initialized runs out of space and cannot be extended, a tablespace can be created on a different partition and used until the system can be reconfigured.

Second, tablespaces allow an administrator to use knowledge of the usage pattern of database objects to optimize performance. For example, an index which is very heavily used can be placed on a very fast, highly available disk, such as an expensive solid state device. At the same time a table storing archived data which is rarely used or not performance critical could be stored on a less expensive, slower disk system.

As you can see, while not as powerful as tablespaces in, say, Oracle, they do still have their uses in PostgreSQL. You can use them to make use of different filesystems, or different mount options, or different disk types and, in doing so, intelligently apply performance characteristics to subsets of your data. For example, you could put your highest volume tables in a tablespace that is mounted from SSDs while the rest of your db is mounted from spinning rust.
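
As a rough sketch (the mountpoint, tablespace name, and table names here are all made up for illustration), that might look like:

```sql
-- assumes /mnt/ssd/pg_ts already exists, is empty, and is owned by the postgres OS user
CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pg_ts';

-- new objects can be placed there explicitly...
CREATE TABLE hot_orders (id bigint PRIMARY KEY, payload text) TABLESPACE fast_ssd;

-- ...and existing ones can be moved (this takes a lock and rewrites the relation)
ALTER TABLE big_busy_table SET TABLESPACE fast_ssd;
```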

Sounds decent, right? Now, before you go off and be "clever" and create an SSD-backed mountpoint for your new tablespace, understand that there are places you should not create the tablespace. You shouldn't create tablespaces on any kind of ephemeral storage, for example on a tmpfs or a ramfs or similar. You also should not create your new tablespaces under $PGDATA. Yes, I'm aware there is $PGDATA/pg_tblspc but that directory is not for you. The system will auto-populate that directory with pointers to the real location of your tablespaces!
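
Those pointers are just symlinks, named after each tablespace's OID, so you can go look at them yourself:

```bash
# each entry points at the real on-disk location of a tablespace
ls -l $PGDATA/pg_tblspc
```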

So what happens when you create a tablespace inside $PGDATA? Let's find out. First, we'll create the directory for the tablespace:

```bash
doug.hunley ~ $ mkdir $PGDATA/tablespaces
doug.hunley ~ $ cd $PGDATA/tablespaces
doug.hunley ~/pgdata/tablespaces $ pwd
/Users/doug.hunley/pgdata/tablespaces
```

And we see that nothing bad has happened yet. So, let's pop over into psql and actually create the tablespace:

```sql
(doug.hunley@[local]:5432/doug.hunley) # CREATE TABLESPACE ts1 LOCATION '/Users/doug.hunley/pgdata/tablespaces';
WARNING:  42P17: tablespace location should not be inside the data directory
LOCATION:  CreateTableSpace, tablespace.c:295
CREATE TABLESPACE
Time: 7.797 ms
(doug.hunley@[local]:5432/doug.hunley) #
```

We get a warning (not an error, for some reason) but it works and all appears fine. Now you can spend minutes/days/months/years using your new tablespace and never notice that you've got a problem. So where does the problem come in?

Let's try to make a backup of our cluster:

```bash
doug.hunley ~ $ pg_basebackup -D pgdata2 -Fp -R -Xs -c fast -l 'clone for slave' -P -v
transaction log start point: 2/17000028 on timeline 1
pg_basebackup: directory "/Users/doug.hunley/pgdata/tablespaces" exists but is not empty
doug.hunley ~ $
```

There it is.

When creating the backup, pg_basebackup tries to use the same tablespace location, but it won't write to a non-empty directory. My example uses two different $PGDATA locations on the same box, but the issue is the same across machines: pg_basebackup backs up everything in $PGDATA, which means your tablespace directory gets cloned before it gets to the actual cloning of the data in the tablespace. So you end up with "stuff" in the dir, making it non-empty, which gives you the same error and output.

OK, so it breaks backups. I can work around that by using another backup method. What else?

How about using pg_upgrade to do an upgrade? No matter if you run in link mode or not, pg_upgrade will not move your tablespace location. So you may have ~/pgdata95 and ~/pgdata96 after the upgrade, but your tablespaces are still in ~/pgdata95/tablespaces. So, as per the docs:

Once you are satisfied with the upgrade, you can delete the old cluster's data directories by running the script mentioned when pg_upgrade completes.

And boom you've just deleted your tablespaces off disk. Congratulations!

So there you have it. Two very good reasons to not create tablespaces inside $PGDATA. Please, don't do this. Everyone who admins that cluster going forward will thank you.

August 24th, 2016 | Categories: Uncategorized

Happy 20th Birthday, PostgreSQL

Just a quick shout out to PostgreSQL, the world's most advanced open source database, as it celebrates its 20th year in existence. Happy birthday, y'all!

May 12th, 2016 | Categories: Uncategorized

PostgreSQL Partitioning Quick Tip

Partitioning in PostgreSQL can be a little daunting at times. In fact, you should probably just use pg_partman and be done with it. However, if you're trying to learn, can't use pg_partman, or are a masochist, you'll probably be following the docs and thinking 'seriously? i have to create indexes on each child? why don't they copy the indexes of the parent? why isn't this easier?'. Here's a little tip to make things slightly easier:

Instead of creating your child tables like the doc says:

```sql
CREATE TABLE child1 (
  CHECK (blah blah)
) INHERITS (parent);
```

Create your child tables thusly:

```sql
CREATE TABLE child1 (
  LIKE parent INCLUDING ALL,
  CHECK (blah blah)
) INHERITS (parent);
```

and PostgreSQL will copy all your indexes, primary keys, etc from the parent to the child. Which is what you wanted, right?
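
If it helps to see that with real column names, here's a made-up monthly range partition (not anything from the docs):

```sql
CREATE TABLE measurements (
    id      bigserial PRIMARY KEY,
    logdate date NOT NULL,
    reading numeric
);
CREATE INDEX ON measurements (logdate);

-- the child picks up the primary key, the index, defaults, etc. via LIKE ... INCLUDING ALL
CREATE TABLE measurements_2016_01 (
    LIKE measurements INCLUDING ALL,
    CHECK (logdate >= DATE '2016-01-01' AND logdate < DATE '2016-02-01')
) INHERITS (measurements);
```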

Enjoy.

April 21st, 2016 | Categories: Uncategorized

Logical Replication with Skytools3

UPDATE: My coworker Richard liked this write up, and Skytools, so much he threw together a demo script. You can get it here.

I recently had to do a near-zero downtime upgrade from PostgreSQL 8.4.x to PostgreSQL 9.4.x for a customer. I couldn't use streaming replication because of the change in major version (and because it's simply not present in 8.x), so that left me looking at logical replication options. Usually, everyone else would be thinking Slony right here. I've only messed with Slony a few times, but each time was a PITA, and the whole thing just seemed overly complicated to me. So I decided to give Londiste a look.

Londiste is part of the Skytools suite, originally developed by Skype back when they were a 'no central node' setup. As such, the thing was literally born to be "master-master" and assumes nodes come and go at will, so it's got all the tools to handle bringing nodes up/down, marking them active/inactive, catching them up, etc. It's written in Python, and uses plain text ini files for configuration.

There are really only two hurdles that I found with using Londiste. First, if you can't get the rpms from the PGDG Yum Repo, you're looking at compiling from Git. And second, the online documentation for it is hard to find, hard to follow, and practically no one has used it, so you can't ask RandomPostgresPerson for help.

Which is exactly why I'm writing this blog post. Here's what I needed to get me through the migration in question. I hope it helps you, should you consider using Londiste for your own replication needs. To wit:

  • As with other logical replication tools, you must ensure that all the tables to be replicated have a valid primary key. So before you even get started, determine which tables are missing them and then pass that list to your junior DBA and have them create pkeys while you continue on:

    ```sql
    SELECT
        n.nspname AS schema,
        c.relname AS table
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
    AND NOT EXISTS (
        SELECT 1
        FROM pg_constraint con
        WHERE con.conrelid = c.oid
        AND con.contype = 'p'
    )
    AND n.nspname <> ALL (
        ARRAY [
            'pg_catalog',
            'sys',
            'dbo',
            'information_schema'
        ]
    );
    ```
    
  • On the PostgreSQL 9.4.x server that will be receiving the replicated data, we need to ensure that all roles are pre-created. We want all ownerships and grants to be identical when we're done, right? You can use pg_dumpall -g on the PostgreSQL 8.4.x to get a listing of roles (there's a rough sketch of this just after this list).

  • Again, like Slony, we should pre-build the schema on the PostgreSQL 9.4.x server. I think you can actually get Londiste to do this for you as part of the replication, but I couldn't find anything online for sure, and I didn't have time to add more experimentation here (we're on the customer's dime here, remember). So, use pg_dump over the network and pipe it to pg_restore to transfer the schema thusly:

    ```bash
    pg_dump \
    -Fc \
    -C \
    -s \
    -h IPofOldServer \
    -U postgres \
    myapp | \
    pg_restore \
    -d template1 \
    -v \
    -C \
    -e \
    -s
    ```
    
  • Install Skytools on the PostgreSQL 9.4.x server using the PGDG repo:

    ```bash
    yum -y install \
    skytools-94 \
    skytools-94-modules
    ```
    
  • Install Skytools from source on the PostgreSQL 8.4.x server:

    ```bash
    yum -y install \
    python-devel \
    asciidoc \
    xmlto \
    postgresql-devel
    git clone git://github.com/markokr/skytools.git
    cd skytools
    git submodule init
    git submodule update
    ./autogen.sh
    ./configure
    make
    make install
    ```
    
  • Restart the PostgreSQL 8.4.x cluster to load the new libs and modules

  • Now we configure the Londiste ticker. Note, we have trust setup for the postgres user in pg_hba.conf so there is no password= in the connection strings. Adjust to meet your setup:

    ```bash
    mkdir -pv ~postgres/londiste-config/{log,pid}
    cd ~postgres/londiste-config
    cat << EOF > ticker.ini
    [pgqd]
    baseconnstr = user=postgres host=IPofOldServer
    databaselist = myapp
    logfile = log/ticker.log
    pidfile = pid/ticker.pid
    EOF
    ```
  • Start up the ticker, which provides the replication "heartbeat", by running pgqd -d ticker.ini

  • Check the ticker.log to ensure there are no warnings or errors! You can stop the ticker with pgqd -s ticker.ini while you fix things.

  • Now, we tell Londiste about the master node (same note applies about the lack of password in the connection string):

    ```bash
    cd ~postgres/londiste-config
    cat << EOF > master.ini
    [londiste3]
    db = user=postgres host=IPofOldServer dbname=myapp
    queue_name = myappq
    loop_delay = 0.5
    logfile = log/master.log
    pidfile = pid/master.pid
    EOF
    ```
    
  • We have to actually create the master node as the root node by doing:

    ```bash
    londiste3 \
    master.ini \
    create-root \
    master 'user=postgres host=IPofOldServer dbname=myapp'
    ```
    
  • Check the master.log to see if you have a line like INFO Node "master" initialized for queue "myappq" with type "root"

  • Now, spin up the master's replication worker process by running londiste3 -d master.ini worker

  • Next, we configure our slave node (same note applies about the lack of password in the connection string):

    ```bash
    cd ~postgres/londiste-config
    cat << EOF > slave.ini
    [londiste3]
    db = user=postgres host=127.0.0.1 dbname=myapp
    queue_name = myappq
    loop_delay = 0.5
    logfile = log/slave.log
    pidfile = pid/slave.pid
    EOF
    ```
    
  • Like the master, we have to create the slave node. I created it as a leaf but I could have created it as a branch if we were going to cascade replication:

    ```bash
    londiste3 \
    slave.ini \
    create-leaf slave 'user=postgres host=127.0.0.1 dbname=myapp' \
    --provider='user=postgres host=IPofOldServer dbname=myapp'
    ```
    
  • Check the slave.log to see if you have the line INFO Node "slave" initialized for queue "myappq" with type "branch"

  • Spin up the slave's replication worker process by running londiste3 -d slave.ini worker

  • Tell the master node that we want to replicate all the tables in the db (londiste3 master.ini add-table --all) as well as all the sequences (londiste3 master.ini add-seq --all). Note that this only adds the tables that currently exist. If you add new tables to the master db, you need to londiste3 master.ini add-table tablename to add them to replication. Ditto for new sequences.

  • For the slave node, also replicate all the tables (londiste3 slave.ini add-table --all) and all the sequences (londiste3 slave.ini add-seq --all). Note that this only adds the tables that currently exist. If you add new tables to the master db, you need to londiste3 slave.ini add-table tablename to add them to replication. Ditto for new sequences.
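
As promised above, here's a rough sketch of pre-creating the roles on the new server (the hostnames are the same placeholders used throughout this post):

```bash
# dump only the global objects (roles, in particular) from the old 8.4 server...
pg_dumpall -g -h IPofOldServer -U postgres > globals.sql
# ...and replay them on the new 9.4 server before restoring the schema
psql -d postgres -U postgres -f globals.sql
```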

At this point, replication is actually up and running. Any changes occurring on the master node are being replicated to the slave node. That's all you need to do.

But what about the data that was already in the master db? You don't need to do anything. It's already replicating. You can forcibly tell Londiste to 'catch things up' by doing londiste3 slave.ini resync --all if you like though.

If you want to check on the replication at any point, simply issue londiste3 slave.ini status or to be more pedantic londiste3 slave.ini compare which will examine row counts and md5sums between master and slave.

Enjoy your new cross-version logical replication!

April 19th, 2016 | Categories: Uncategorized

PostgreSQL Streaming Replication in 10 Minutes

While there's absolutely nothing new in this blog post that isn't covered by the wonderful docs, I've been asked multiple times now by customers if we had some kind of 'crib notes' format for how to get replication up and running. And since I just had to set this up and document it for a customer, I figured I might as well post it so that I can simply point people to it in the future. So here we are.

Now, let's get started. I assume you already have two PostgreSQL servers up with the binaries installed. For simplicity's sake, we will call these machines 'master' and 'standby'. Note too that I'm using replication slots, which need PostgreSQL 9.4.0 or later; if you're using something earlier, simply ignore the slot stuff.

Let's get started!

On the master, do the following:

```bash
cat << EOF >> postgresql.conf
    wal_level = hot_standby
    full_page_writes = on
    wal_log_hints = on
    max_wal_senders = 6
    max_replication_slots = 6
    hot_standby = on
    hot_standby_feedback = on
EOF
```

On the master, add the external IP addresses of the servers to pg_hba.conf:

```bash
cat << EOF >> pg_hba.conf
    host replication repl_user IP_of_master/32 md5
    host replication repl_user IP_of_standby/32 md5
EOF
```

Restart PostgreSQL on the master for the changes to take effect
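
How you do that depends on your packaging; assuming the same layout as the standby example below (adjust the path, or use your init system's service command instead), a plain pg_ctl restart works:

```bash
pg_ctl -D /var/lib/pgsql/9.4/data restart -m fast
```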

On the master, create the replication user:

```bash
psql \
  -d postgres \
  -U postgres \
  -c "CREATE ROLE repl_user LOGIN REPLICATION ENCRYPTED PASSWORD 'secretpasswordhere';"
```

On the master, create the replication slot for the standby:

```bash
psql \
  -d postgres \
  -U postgres \
  -c "SELECT * FROM pg_create_physical_replication_slot('standby1');"
```

On the standby, wipe the existing cluster:

```bash
cd /var/lib/pgsql/9.4/data
pg_ctl -D $PWD -mf stop
cd ..
rm -rfv data
```

On the standby, use the pg_basebackup command to clone the master (enter the repl_user's password from above when prompted):

```bash
pg_basebackup \
  -D data \
  -Fp \
  -R \
  -Xs \
  -c fast \
  -l 'initial clone' \
  -P \
  -v \
  -h IP_of_master \
  -U repl_user
```

On the standby, tweak the recovery.conf that was created for you and add the replication slot name:

```bash
cd data
cat << EOF >> recovery.conf
primary_slot_name = 'standby1'
EOF
```

Start the standby up
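
A minimal sketch using pg_ctl (substitute your init system's equivalent if you prefer); the follow-up query on the master is just a sanity check that the standby is streaming:

```bash
# on the standby
pg_ctl -D /var/lib/pgsql/9.4/data start

# on the master, confirm the standby shows up as a streaming client
psql -d postgres -U postgres -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"
```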

And that's it. You should be all done. Easy, right?

April 18th, 2016 | Categories: Uncategorized