Monthly Archives: April 2016


PostgreSQL Partitioning Quick Tip

Partitioning in PostgreSQL can be a little daunting at times. In fact, you should probably just use pg_partman and be done with it. However, if you're trying to learn, can't use pg_partman, or are a masochist, you'll probably be following the docs and thinking 'Seriously? I have to create indexes on each child? Why don't they copy the indexes of the parent? Why isn't this easier?'. Here's a little tip to make things slightly easier:

Instead of creating your child tables like the doc says:

CREATE TABLE child (
  CHECK (blah blah)
) INHERITS (parent);

Create your child tables thusly:

CREATE TABLE child (
  LIKE parent INCLUDING ALL,
  CHECK (blah blah)
) INHERITS (parent);

and PostgreSQL will copy all your indexes, primary keys, etc. from the parent to the child. Which is what you wanted, right?
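If you want to confirm the copy actually happened, pg_indexes makes it easy to eyeball (the parent/child names here are placeholders for your own tables):

    -- compare the indexes on parent and child; they should mirror each other
    SELECT tablename, indexname, indexdef
    FROM pg_indexes
    WHERE tablename IN ('parent', 'child')
    ORDER BY tablename, indexname;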


By | April 21st, 2016 | Categories: Uncategorized

Logical Replication with Skytools3

UPDATE: My coworker Richard liked this write up, and Skytools, so much he threw together a demo script. You can get it here.

I recently had to do a near-zero downtime upgrade from PostgreSQL 8.4.x to PostgreSQL 9.4.x for a customer. I couldn't use streaming replication because of the change in major version (and because it's simply not present in 8.x), so that left me looking at logical replication options. Usually, everyone else would be thinking Slony right here. I've only messed with Slony a few times, but each time was a pain, and the whole thing just seemed overly complicated to me. So I decided to give Londiste a look.

Londiste is part of the Skytools suite, originally developed by Skype back when they were a 'no central node' setup. As such, the thing was literally born to be "master-master" and assumes nodes come and go at will, so it's got all the tools to handle bringing nodes up/down, marking them active/inactive, catching them up, etc. It's written in Python, and uses plain text ini files for configuration.

There are really only two hurdles that I found with using Londiste. First, if you can't get the RPMs from the PGDG Yum repo, you're looking at compiling from Git. And second, the online documentation for it is hard to find, hard to follow, and practically no one has used it, so you can't ask RandomPostgresPerson for help.

Which is exactly why I'm writing this blog post. Here's what I needed to get me through the migration in question. I hope it helps you, should you consider using Londiste for your own replication needs. To wit:

  • As with other logical replication tools, you must ensure that all the tables to be replicated have a valid primary key. So before you even get started, determine which tables are missing them and then pass that list to your junior DBA and have them create pkeys while you continue on:

    SELECT
        n.nspname AS schema,
        c.relname AS "table"
    FROM
        pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE
        c.relkind = 'r'
        AND NOT EXISTS (
            SELECT 1
            FROM pg_constraint con
            WHERE con.conrelid = c.oid
              AND con.contype = 'p'
        )
        AND n.nspname <> ALL (
            ARRAY ['pg_catalog', 'information_schema']
        );
  • On the PostgreSQL 9.4.x server that will be receiving the replicated data, we need to ensure that all roles are pre-created. We want all ownerships and grants to be identical when we're done, right? You can use pg_dumpall -g on the PostgreSQL 8.4.x server to get a listing of roles.
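    As a sketch (the hostname placeholder matches the rest of this post; run the second command on the new server):

    pg_dumpall -g -h IPofOldServer -U postgres > globals.sql
    psql -U postgres -d postgres -f globals.sql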

  • Again, like Slony, we should pre-build the schema on the PostgreSQL 9.4.x server. I think you can actually get Londiste to do this for you as part of the replication, but I couldn't find anything online for sure, and I didn't have time to add more experimentation here (we're on the customer's dime, remember). So, use pg_dump over the network and pipe it to psql to transfer the schema thusly:

    pg_dump -s -C \
        -h IPofOldServer \
        -U postgres \
        myapp | \
    psql -U postgres \
        -d template1
  • Install Skytools on the PostgreSQL 9.4.x server using the PGDG repo:

    yum -y install skytools-94
  • Install Skytools from source on the PostgreSQL 8.4.x server:

    yum -y install git gcc python-devel postgresql-devel  # build deps; adjust for your setup
    git clone git://github.com/markokr/skytools.git
    cd skytools
    git submodule init
    git submodule update
    ./configure
    make
    make install
  • Restart the PostgreSQL 8.4.x cluster to load the new libs and modules

  • Now we configure the Londiste ticker. Note, we have trust set up for the postgres user in pg_hba.conf, so there is no password= in the connection strings. Adjust to match your setup:

    mkdir -pv ~postgres/londiste-config/{log,pid}
    cd ~postgres/londiste-config
    cat << EOF > ticker.ini
    [pgqd]
    base_connstr = user=postgres host=IPofOldServer
    database_list = myapp
    logfile = log/ticker.log
    pidfile = pid/ticker.pid
    EOF
  • Start up the ticker, to provide the replication "heartbeat" by running pgqd -d ticker.ini

  • Check the ticker.log to ensure there are no warnings or errors! You can stop the ticker with pgqd -s ticker.ini while you fix things.

  • Now, we tell Londiste about the master node (same note applies about the lack of password in the connection string):

    cd ~postgres/londiste-config
    cat << EOF > master.ini
    [londiste3]
    job_name = master
    db = user=postgres host=IPofOldServer dbname=myapp
    queue_name = myappq
    loop_delay = 0.5
    logfile = log/master.log
    pidfile = pid/master.pid
    EOF
  • We have to actually create the master node as the root node by doing:

    londiste3 master.ini create-root master 'user=postgres host=IPofOldServer dbname=myapp'
  • Check the master.log to see if you have a line like INFO Node "master" initialized for queue "myappq" with type "root"

  • Now, spin up the master's replication worker process by running londiste3 -d master.ini worker

  • Next, we configure our slave node (same note applies about the lack of password in the connection string):

    cd ~postgres/londiste-config
    cat << EOF > slave.ini
    [londiste3]
    job_name = slave
    db = user=postgres host=IPofNewServer dbname=myapp
    queue_name = myappq
    loop_delay = 0.5
    logfile = log/slave.log
    pidfile = pid/slave.pid
    EOF
  • Like the master, we have to create the slave node. I created it as a leaf, but I could have created it as a branch if we were going to cascade replication:

    londiste3 slave.ini create-leaf slave 'user=postgres host=IPofNewServer dbname=myapp' \
        --provider='user=postgres host=IPofOldServer dbname=myapp'
  • Check the slave.log to see if you have the line INFO Node "slave" initialized for queue "myappq" with type "leaf"

  • Spin up the slave's replication worker process by running londiste3 -d slave.ini worker

  • Tell the master node that we want to replicate all the tables in the db (londiste3 master.ini add-table --all) as well as all the sequences (londiste3 master.ini add-seq --all). Note that this only adds the tables that currently exist. If you add new tables to the master db, you need to londiste3 master.ini add-table tablename to add them to replication. Ditto for new sequences.

  • For the slave node, also replicate all the tables (londiste3 slave.ini add-table --all) and all the sequences (londiste3 slave.ini add-seq --all). Note that this only adds the tables that currently exist. If you add new tables to the master db, you need to londiste3 slave.ini add-table tablename to add them to replication. Ditto for new sequences.

At this point, replication is actually up and running. Any changes occurring on the master node are being replicated to the slave node. That's all you need to do.

But what about the data that was already in the master db? You don't need to do anything. It's already replicating. You can forcibly tell Londiste to 'catch things up' by doing londiste3 slave.ini resync --all if you like though.

If you want to check on the replication at any point, simply issue londiste3 slave.ini status or to be more pedantic londiste3 slave.ini compare which will examine row counts and md5sums between master and slave.

Enjoy your new cross-version logical replication!

By | April 19th, 2016 | Categories: Uncategorized

PostgreSQL Streaming Replication in 10 Minutes

While there's absolutely nothing new in this blog post that isn't covered by the wonderful docs, I've been asked multiple times now by customers if we have some kind of 'crib notes' format for how to get replication up and running. And since I just had to set this up and document it for a customer, I figured I might as well post it so that I can simply point people to it in the future. So here we are.

I assume you already have two PostgreSQL servers up with the binaries installed. For simplicity's sake, we will call these machines 'master' and 'standby'. Note too that I'm using replication slots, which need PostgreSQL 9.4.0 or later; if you're using something earlier, simply ignore the slot stuff.

Let's get started!

On the master, do the following:

cat << EOF >> postgresql.conf
    wal_level = hot_standby
    full_page_writes = on
    wal_log_hints = on
    max_wal_senders = 6
    max_replication_slots = 6
    hot_standby = on
    hot_standby_feedback = on
EOF

On the master, add the external IP addresses of the servers to pg_hba.conf:

cat << EOF >> pg_hba.conf
    host replication repl_user IP_of_master/32 md5
    host replication repl_user IP_of_standby/32 md5
EOF

Restart PostgreSQL on the master for the changes to take effect.
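For example (the service name here is an assumption; adjust it, or use pg_ctl with your data directory, to match how your PostgreSQL was installed):

service postgresql-9.4 restart
# or, equivalently:
pg_ctl -D /var/lib/pgsql/9.4/data restart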

On the master, create the replication user:

psql \
  -d postgres \
  -U postgres \
  -c "CREATE ROLE repl_user LOGIN REPLICATION ENCRYPTED PASSWORD 'secretpasswordhere';"

On the master, create the replication slot for the standby:

psql \
  -d postgres \
  -U postgres \
  -c "SELECT * FROM pg_create_physical_replication_slot('standby1');"

On the standby, wipe the existing cluster:

cd /var/lib/pgsql/9.4/data
pg_ctl -D $PWD -mf stop
cd ..
rm -rfv data

On the standby, use the pg_basebackup command to clone the master (enter the repl_user's password from above when prompted):

pg_basebackup \
  -D data \
  -R \
  -X stream \
  -c fast \
  -l 'initial clone' \
  -h IP_of_master \
  -U repl_user

On the standby, tweak the recovery.conf that was created for you and add the replication slot name:

cd data
cat << EOF >> recovery.conf
primary_slot_name = 'standby1'
EOF
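Once edited, the whole recovery.conf should look something like this (the primary_conninfo line is whatever pg_basebackup generated for your particular setup):

standby_mode = 'on'
primary_conninfo = 'user=repl_user host=IP_of_master port=5432'
primary_slot_name = 'standby1'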

Start the standby up
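For example (same caveat about the service name), followed by a quick sanity check run on the master:

service postgresql-9.4 start
psql -d postgres -U postgres -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"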

And that's it. You should be all done. Easy, right?

By | April 18th, 2016 | Categories: Uncategorized

How to install and configure the Oracle Foreign Data Wrapper

Preinstall checklist: If you have not already, please download and install the "Microsoft Visual C++ 2015 Redistributable" and a Java Runtime Environment (version 7 or 8). Oracle Instant Client install and config: 01. Download and unzip into the same directory: [...]

By | April 7th, 2016 | Categories: BigSQL