Production psql Prompts

I deal with a lot of production systems.  When you’re working across multiple production machines, it’s important to know several things:

  1. Who you are
  2. What you’re connected to
  3. Where you are
  4. Why you are connected
  5. When you are connected

Most prompts don’t give you any of this detail.  If you’re familiar with bash, you’ve probably heard of PS1, the environment variable that controls what your prompt looks like.  psql has something very similar.  The question becomes: how do I get a useful prompt that is compact and doesn’t cause all of my commands to wrap off the screen?

Let’s start with bash.  Most people are familiar with the default Linux prompt (Debian and Red Hat differ slightly in their defaults) of:

[root@ansible-test-1 ~]#

One day, I realized that wasn’t enough.  I had a lot of tabs open in my terminal emulator and I was ssh’d three ways from Sunday.  Many of those tabs were executing long-running commands, and I couldn’t quickly look back to see how long a command had been running (unless I remembered to execute the `date` command before starting my job).  That’s when I came up with a new prompt.


It tells me most of what I’d like to know:

[user@host : Full path to current directory : Time the prompt was generated]
$ <your command gets entered here>


You can accomplish this with:

export PS1='[\u@\h : \[\e[4;33m\]\w\[\e[0m\] : \[\e[0;94m\]\t\[\e[0m\]]\n$ '

I did have one issue: in order to see the full path, I had to put the next command on the line below.  I’ve actually come to appreciate that quite a bit.  Essentially, I have a line that tells me Who @ What : Where : When.  (I guess why I’m there is something I should probably already know :-) )

Now, I decided to apply the same design to my psql prompt so that I’d know what was going on there too.

You can accomplish this by putting the following in your .bashrc:

alias psql='psql -v PROMPT1=[`hostname -s`'\''@%m:%>]
%n@%/%=%# '\'''

It’s important to note the line wrap: you have to actually enter that newline or this won’t work.  Essentially, bash has to be the one to put the hostname into the prompt (which is why this lives in .bashrc and not in .psqlrc).
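If you can live without the client-side hostname (and the line break), a similar prompt can live in ~/.psqlrc instead; a minimal sketch using only psql’s own prompt escapes:

\set PROMPT1 '[%m:%>] %n@%/%# '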

Why do I think this prompt is better than the default?

This prompt tells me:

  1. scottsmac : the host psql is running from
  2. [local] : the host psql is connected to
  3. 9520 : the port psql is connected to
  4. newline
  5. postgres : the user psql is connected as
  6. @postgres : the database psql is connected to
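
Put together, the rendered prompt looks roughly like this (your hostname, port and user will differ):

[scottsmac@[local]:9520]
postgres@postgres=#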

This, again gives me ‘who, what, where, why and when’ … okay, not the ‘why’ but that’s really up to you.





Quick stats with pgBadger

A while back, I wrote about ‘Helpful Postgres logging and defaults‘.  The real question is, how helpful are these, really?  So, let’s take a quick look at how we can use the increased logging in order to do something useful.  If the previous post is TL;DR, here’s the quick list:

logging_collector = on
log_filename = 'postgresql-%a.log'
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
log_checkpoints = on
log_autovacuum_min_duration = 0
log_temp_files = 0
log_lock_waits = on
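
To give a feel for what that log_line_prefix buys you, here’s an illustrative line (timestamp, PID and size are made up) for a temporary-file event coming from a local pgbench session:

2016-10-21 16:45:47 EDT [8675]: [7-1] user=postgres,db=postgres,app=pgbench,client=[local] LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp8675.0", size 14073856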

Essentially, these cover the basics when it comes to monitoring, without adding a whole lot of noise to your logs.  These settings are the defaults in the BigSQL sandboxes, so you’re ready to go right out of the gate.  If you’ve installed PostgreSQL another way, you can simply modify your postgresql.conf file to include these items.

Follow along

Quick Install

If you’d like to follow along, you can quickly setup a test pg96 environment by using the ‘standalone’ installer for postgres that BigSQL provides.  If you already have PostgreSQL, you can skip this section.

Grab BigSQL

scottm: ~ $ python -c "$(curl -fsSL ...)"
Downloading BigSQL PGC 3.0.0 ...
Unpacking ...
Cleaning up
Setting REPO to
Updating Metadata
 BigSQL PGC installed. Try 'bigsql/pgc help' to get started.

Install PostgreSQL 9.6.current

scottm: ~ $ cd bigsql/
 scottm: bigsql $ ./pgc init pg96
 scottm: bigsql $ ./pgc install pg96
 Get:1 pg96-9.6.0-1b-osx64
 Unpacking pg96-9.6.0-1b-osx64.tar.bz2

Init pg96

scottm: bigsql $ ./pgc init pg96
## Initializing pg96 #######################
Superuser Password [password]:
 Giving current user permission to data dir
Initializing Postgres DB at:
 -D "/Users/scottm/bigsql/data/pg96"
Using PostgreSQL Port 5432
Password securely remembered in the file: /Users/scottm/.pgpass
to load this postgres into your environment, source the env file:

Start pg96

scottm: bigsql $ ./pgc start pg96
 pg96 starting on port 5432

Since the BigSQL package automatically sets the appropriate logging defaults, we’re set to go!

Modify Existing PG

  1. open up your postgresql.conf file
  2. place the above values at the end of the file
  3. save & close postgresql.conf
  4. restart postgres (if you already had ‘logging_collector=on’, you can just reload).
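
If you’d rather script those steps, something along these lines works on a typical self-managed install (the data directory path here is an assumption; adjust it for your layout):

# append the logging settings to postgresql.conf (path is an assumption)
cat >> /var/lib/pgsql/data/postgresql.conf <<'EOF'
logging_collector = on
log_filename = 'postgresql-%a.log'
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
log_checkpoints = on
log_autovacuum_min_duration = 0
log_temp_files = 0
log_lock_waits = on
EOF

# restart so logging_collector takes effect (a reload is enough if it was already on)
pg_ctl -D /var/lib/pgsql/data restart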


Now that you’re logging, it’s time to generate some workload.  My database isn’t actually serving clients at this point, so I’m going to use the included pgbench tool to generate some data and traffic.  (NB: if you run this step, you’ll use up about 2.5 GB of disk space.)

pgbench -i -s 100 -U postgres

This initializes a new pgbench database with a ‘scaling factor’ (i.e., sizing factor) of 100 bank branches (pgbench simulates a debit/credit workload).  You’ll see a bunch of output fly by as you initialize 10,000,000 rows (tuples) of data.  Don’t worry, this only translates to about 2.5 GB of space.


Great!  Now we’re logging at a verbosity that lets us see what’s going on in PostgreSQL.  How do we take advantage of it?  Enter pgBadger: this tool is essentially a high-performance parser for the PostgreSQL logs.  If you’re following along with my BigSQL install from above, you can grab pgBadger with:

scottm: bigsql $ ./pgc install pgbadger
Get:1 pgbadger-9.0
 Unpacking pgbadger-9.0.tar.bz2

If you’re not using the BigSQL package, you can:

git clone

Now, let’s run pgBadger against the logs.

If you’re following along:

scottm: bigsql $ perl pgbadger/pgbadger data/logs/pg96/postgresql-*.log
[========================>] Parsed 8993 bytes of 8993 (100.00%), queries: 0, events: 2
LOG: Ok, generating html report...
scottm: bigsql $

By default, pgBadger puts its output in a file called ‘out.html’ (see perl pgbadger/pgbadger --help for the flag to change this).
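
For example, to pick the output name yourself (the report name here is just an example):

scottm: bigsql $ perl pgbadger/pgbadger -o bench-report.html data/logs/pg96/postgresql-*.log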

I’m going to open up the ‘out.html’ file in my web browser and view my report (I’ve attached the out.html report I generated for this blog so you can download it and play if you’d like).


pgBadger empty overview

Hmmm, looks like I have an empty report?  Strange?  Well, keep in mind, we’ve told postgres to log certain things:

  • checkpoints
  • vacuums
  • temp files
  • lock waits

We never told postgres to log queries (we’ll come back to this in a minute).  For now, let’s take a look at the ‘checkpoints’ and see what’s there.

Choose Checkpoints


Once we choose the ‘Checkpoints’ item from the top menu, we’ll see that our data-load created some activity for us (your data will most likely look somewhat different).

Checkpoint activity from the data load

Now I’m starting to see data about my checkpoint buffer utilization!  Cool!  I can actually see how memory is being used in the server.  This is coming out of the ‘log_checkpoints’ parameter that we set earlier.

To get richer data, we need to push more activity against our DB server.  With pgbench:

scottm: bigsql $ pg96/bin/pgbench -T 300 -c 10

This will run a 5 minute (300 second) test with 10 concurrent client sessions.

Once that’s complete, re-run pgBadger:

scottm: bigsql $ perl pgbadger/pgbadger data/logs/pg96/postgresql-*.log
[========================>] Parsed 38636 bytes of 38636 (100.00%), queries: 0, events: 2
[========================>] Parsed 3024 bytes of 3024 (100.00%), queries: 0, events: 2
[========================>] Parsed 24694 bytes of 24694 (100.00%), queries: 0, events: 2
LOG: Ok, generating html report...

(Your output may appear slightly different based on the number of log files that you have)

Now, your ‘out.html’ will have data based on your workload.

pgBadger overview with workload

We also have some interesting data surrounding VACUUM behavior:

Vacuum data

It should be noted, these are just quick screen grabs; take a look at the attached ‘out.html’ for more.


The point is, using some wise logging defaults gets you some awesome pgBadger joy, without paying a TON in logging overhead.  Stay tuned for part 2 where I’ll crank up logging even more and show you how to get per-statement data.
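
If you want a head start on that, the knob involved is log_min_duration_statement.  A minimal (and very chatty) sketch for postgresql.conf:

# log every completed statement along with its duration; expect a lot of log volume
log_min_duration_statement = 0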



Helpful PostgreSQL Logging and Defaults

I use PostgreSQL every single day.  I develop against multiple versions of the database (four versions running in parallel as I write this), I have apps that use it, and then there’s my daily DBA-ing.  The biggest question I get from newbies and veterans alike is: “What are your PostgreSQL defaults?”

If you follow postgres, you already know that the default configuration (postgresql.conf) is very conservative from a resource (CPU, IOPS, Disk Space) perspective.  Over the last 10 years, we [the community] have developed some straightforward and easy to understand formulas that will help you tune… shared_buffers for example.  The item that always gets left out though is logging.  As a developer, I’m always looking for ways to see “How the database is answering the questions I ask”.  When I get a new postgres instance set up, I have a personal (somewhere in the cobwebs) checklist that I run.  Some are based on the purpose of the deployment (shared_buffers, work_mem, etc…), some are things that I always set.  Aside from memory, the biggest set of “standard” items I set are all related to logging.  I’m big on monitoring (my pg_stat_activity patch was accepted back for 9.2) and having the right detail presented to me is important.


logging_collector = on
log_filename = 'postgresql-%a.log'
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
log_checkpoints = on
log_autovacuum_min_duration = 0
log_temp_files = 0
log_lock_waits = on
The goal here is to have Postgres ‘bubble up’ details about what’s going on to us…

Read more

Simple CSV dumps from your Postgres database

Recently, I’ve spent a lot of time doing data shuffling.  Some of it for me and my own (mostly) malevolent purposes, but mostly the kind of stuff that data analysts want to slice and dice.  I typically prefer to push analysts to the web-based reporting setup I’ve built for them, but a lot of times they’re looking for some new angle that requires an aggregate we’re not already computing, or a slightly different take on a report that already exists.  I usually end up building a web-based BIRT report that lets them self-service the CSV output for the data’s inevitable import into M$ Office products.  There are times, however, when it’s just simpler to build a quick CSV and email it over.  It makes them feel happy and it makes me feel happy.  The question is: what’s the best way to do this straight out of Postgres?
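
For the impatient, the shortest version of that straight from psql looks something like this sketch (table, columns and output path are all hypothetical):

\copy (SELECT branch_id, sum(amount) AS total FROM transactions GROUP BY branch_id) TO '/tmp/branch_totals.csv' WITH (FORMAT csv, HEADER)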

Read more

Announcing pgHA version 9.2-1.00.20

pgHA provides automated failure detection and switchover of a PostgreSQL streaming replica pair.  Version 9.2-1.00.20 has been updated to include a more intuitive control script as well as native Nagios-compliant output for integration into your monitoring system.

Download pgHA.