PostgreSQL Replication Checklist

Replication is a key part of creating an “Enterprise” Postgres deployment, to support high availability, failover, disaster recovery, or scale-out queries.

Built-in streaming replication was added to PostgreSQL in version 9.0, but the Postgres community has had a number of trigger-based replication options for many years, with the big three being Slony-I, Londiste, and Bucardo (we’re biased towards Slony-I with its high-performance C-language triggers — and not just because of the cute slonik logo or that the original author works at OpenSCG). And a whole new generation of logical replication has been introduced in the pglogical project.

How do you choose a replication solution? Use this checklist to decide which Postgres replication approach fits your deployment.

1. Version Support

  • Are all of your PostgreSQL instances (master and all slaves) the same version? (Streaming replication requires the same major version on all nodes, so it cannot be used for major-version upgrades.)
  • Are you using a recent Postgres version?
  • Are you using an ancient Postgres 7.x?

The different solutions have very different version compatibility, so if you are working with an old Postgres version or upgrading to a new major version you can cross some off:

Approach              | Mechanism    | Supported Versions
Streaming Replication | WAL file     | 9.0+ (all nodes must be the same version)
Slony-I v1.2.23       | Triggers     | 7.3.3 up to 8.4
Slony-I v2.0.8        | Triggers     | 8.3+
Bucardo               | Triggers     | 8.1+
PG Logical            | WAL decoding | 9.4+

If you’re trying to upgrade an ancient 7.x or pre-8.3 version, you’ll probably need a two-step upgrade using Slony-I (sorry: Slony-I 1.2.23 works on 7.3.3 through 8.4, and Slony-I 2.x supports 8.3+). Starting from 8.1 or 8.2, Bucardo is likely the best way to get to a modern version.
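If you are not sure which version a given instance is running, a quick check on each node answers the version questions above:

-- run on the master and on every prospective replica
SELECT version();       -- full version string, including platform
SHOW server_version;    -- just the version number, e.g. 9.5.3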

2. Topology

  • Do you need bi-directional replication, or just master -> slave?
  • Are you trying to replicate a single database or the entire Postgres instance?

Slony-I, Bucardo, and pglogical can all be configured to replicate a single database or individual tables, while streaming replication works on the entire database instance (cluster). Streaming replication only supports master -> slave; the other solutions can be used for multi-master replication, although care must be taken in configuration to ensure no conflicts are generated by concurrent changes on different write servers.

3. Performance

  • How heavily loaded is your PostgreSQL server?
  • Are you trying to replicate across a WAN connection?

WAL-based streaming replication places the smallest CPU load on the master database server, while trigger-based approaches have some performance impact on write throughput. However, streaming replication sends the entire WAL stream to the slaves, which on a heavily loaded server can saturate WAN connections to a disaster recovery data center and lead to significant replication lag on the slaves. Trigger-based and logical replication send much less data over the wire, so they perform better for remote replica scenarios.
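If you do go with streaming replication, replication lag is easy to keep an eye on from the master. A minimal example, using the function and column names that apply to the 9.x releases discussed here:

-- run on the master; one row per connected standby
SELECT client_addr,
       state,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS replay_lag_bytes
FROM pg_stat_replication;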

Conclusion

There are several good PostgreSQL replication solutions to choose from, and your specific situation will guide your choice. Creating local read replicas and standby servers on a LAN is easy using just streaming replication from PostgreSQL core. Upgrading with trigger-based replication like Slony-I can provide a nearly zero-downtime upgrade process. Disaster recovery or multi-datacenter replicas work best with lower-volume Slony-I or pglogical replication. It’s easy to install and configure Slony-I from the integrated Postgres by BigSQL distribution – so what are you waiting for? Replicate already!

Integrating Cassandra, Spark, PostgreSQL and Hadoop as a Hybrid Data Platform – Part 2

In the previous post, we provided a business and architectural
background for the Postgres FDWs that we are developing for Spark,
Hadoop and Cassandra. In particular, we highlighted the key benefits of
bringing Cassandra and PostgreSQL together.

With this post, we will start taking a more technical look at the
Cassandra FDW.

The C* FDW speaks natively with Cassandra on two levels; it:

  • uses the binary CQL protocol instead of the legacy Thrift protocol.
  • directly relies on the DataStax Native C++ driver for Cassandra.

The DataStax C++ driver is performant and feature-rich; various load
balancing and routing options are available and configurable. We are
already making use of some of these features and plan to provide more of
these to our users.

While exploring Cassandra as a Postgres user, defaults such as the
automatic inclusion of the ALLOW FILTERING clause are useful because
they allow gradual familiarity, and are especially handy in small
development environments. Our intent is to support tuning for large
environments but to default to a configuration geared toward existing
PostgreSQL users.

At this point, let us consider whether we are introducing a new SPOF by
using PostgreSQL with a Cassandra system. We believe not; a PostgreSQL
node at the edge of a Cassandra cluster – as a transactional or open-SQL
end point – is not at all the same as a central master node critical to
the operation of an entire cluster. We see some trade-offs but mostly
we see benefits of bringing PostgreSQL to Cassandra in this way as we
intend to elucidate through this series.

In the next post, we will show you how to get started with the Cassandra
FDW.


Improve PostgreSQL on Windows performance by 100%

It sounds like click-bait, or one of those late night TV promotions – “Improve your database performance by 100% – by changing just this one setting!” But in this case, it’s true – you can drastically improve PostgreSQL on Windows performance by changing one configuration setting – and we made this the default in our Postgres by BigSQL distribution for 9.2 through 9.6.

tl;dr – if you have high query load, change “update_process_title” to ‘off’ on Windows, and get 100% more throughput.

Performance improvement from turning off update_process_title

Most Postgres DBAs already know that they need to tune settings for shared buffers, WAL segments, checkpoints, etc., to get the best performance from their database. If you are running PostgreSQL on Windows, there’s another setting that you need to look at: update_process_title. Changing this setting from “on” to “off” can improve throughput on a heavy query load by close to 100%.
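If you want to try this on your own Windows instance, something along these lines should do it (ALTER SYSTEM is available in 9.4+; on older versions edit postgresql.conf instead):

-- run as a superuser
ALTER SYSTEM SET update_process_title = off;
SELECT pg_reload_conf();    -- picked up on reload, no restart required
SHOW update_process_title;  -- verify the new value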

We ran a series of benchmark tests in our performance lab and you can see the dramatic improvement in the graphs displayed. We tested PostgreSQL 9.5 on a 16-core Windows server with fast SSD drives using a standard pgbench run in both read-only and read-write modes. Scaling from 4 to 40 clients shows a plateau in throughput (measured by TPS) after 8 clients when the setting is set to “on”. Changing the update_process_title setting to “off” allows PostgreSQL to continue to scale throughput, showing increasing TPS up to 40 clients. The throughput at 32 read-only clients increases from 20K TPS to 58K TPS (180% higher) and at 40 clients continues to climb to 76K TPS (270% higher).

Improvement in read-write transactions from turning off update_process_title

This performance gain is seen for both read-only and read-write workloads. With 32 clients, the write throughput increases from 2,700 TPS to 7,700 TPS (180% higher) and at 40 clients continues to climb to 8,200 (200% higher).

The update_process_title setting controls whether Postgres updates the process description that you see when querying the system list of running commands, based on the current SQL statement being processed. On Linux this is visible using ps; on Windows it requires the Process Explorer tool. Updating the process description becomes a bottleneck on Windows, and limits throughput even on a high-end server. Not many Windows admins actually use this information on a regular basis, so unless you are actively debugging a slow or long-running process using this process information, you should leave it turned off.
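Turning it off doesn’t cost you visibility into running statements from within the database; pg_stat_activity still shows the current query for each backend:

-- current statement per backend, independent of the process title
SELECT pid, state, query FROM pg_stat_activity;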

Takayuki Tsunakawa originally tracked down this bottleneck and created a patch for PostgreSQL 9.6 that changed the default to ‘off’ on Windows. We have made the same setting change in BigSQL distributions of versions 9.2 through 9.5 as well as 9.6. So even if you’re not ready to move to the new 9.6 version, when you install Postgres by BigSQL on Windows you are getting the best performance out-of-the-box.

 

Integrating Cassandra, Spark, PostgreSQL and Hadoop as a Hybrid Data Platform

Today many organizations struggle to keep up with their database
requirements, for example, to:

  • store and analyze high-velocity and rapidly-growing data such as logs,
    package tracking events, sensor readings and machine-generated
    streams.
  • ensure 24/7 availability of customer-facing websites, services and
    apps even when a subset of their data centers, servers or data are
    offline.
  • support fast-growing internet-scale businesses by adding relatively
    inexpensive data servers rather than requiring million-dollar
    investments in high-end servers and storage.

Our industry is increasingly producing and exploring various open
source systems to provide solutions for requirements like these.
However, many such systems intending to offer degrees of scalability
and availability choose architectures that impose inherent limitations.

Many of these architectures have a node or a collection of nodes that
are treated as special. Think Master-Slave, NameNode-DataNode and so
forth. While each of these models serves a different set of use cases,
a common attribute across them is that they have a SPOF (Single Point
of Failure). Even when they offer some level of multiplicity to deal
with the SPOF issue, the problems continue: these special nodes can
become bottlenecks for the operations that only they are allowed to
carry out. Capacity planning, backup and recovery, fault tolerance,
disaster recovery and similar areas of operation all become more
complex. Moreover, the non-special nodes are typically
underutilized or entirely passive. Many of these architectures make it
virtually impossible to achieve peta-scale, multi-thousand-node clusters
with linear growth and failure tolerance atop today’s
dynamically-orchestrated infrastructure.

Enter Cassandra – A peer-to-peer, multi-datacenter active-active,
peta-scale, fault-tolerant distributed database system. Nowadays, it is
hard not to have heard of this excellent system as its user-base
continues to grow. The key thing to realize is that its peer-to-peer
architecture is the basis for its SPOF-free operation, with the
understanding that failures are the norm in clustered environments.
Cassandra is also well known for lower latency relative to many other
big data systems. It is in use by over 1,500 organizations
including Netflix, eBay, Instagram and CERN. To get an idea of the
scale, Apple’s production deployment has been well known in the
Cassandra community to comprise 75,000 nodes storing over 10 PB but in
September last year at the Cassandra Summit, their deployment was
reported to have exceeded 100,000 nodes.

We are great believers in Cassandra and Spark and are building a hybrid
data platform bringing the benefits of these systems to PostgreSQL. We
also hope that the benefits of the PostgreSQL platform will have a wider
reach through this. Our distribution, Postgres by BigSQL, provides easy
access to these two systems through our FDW extensions CassandraFDW and
HadoopFDW. The HadoopFDW extension provides not just access to Hadoop
but also to Spark, which uses the same underlying network protocol and
SQL parser.

The combined array of advanced features that these two FDWs support is
impressive: write support (INSERT/UPDATE/DELETE), predicate pushdown,
IMPORT FOREIGN SCHEMA, and JOIN pushdown. We believe that of all the
externally-maintained FDWs, these two FDW extensions represent the
cutting-edge in terms of the PostgreSQL FDW technology as an
implementation of SQL/MED for big data systems.
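As a flavor of what those features look like in practice, IMPORT FOREIGN SCHEMA follows the standard SQL/MED syntax (the server and schema names below are hypothetical placeholders, not from a real setup):

-- create local foreign table definitions for an entire remote schema/keyspace
-- 'remote_ks' and 'cass_server' are placeholder names
IMPORT FOREIGN SCHEMA remote_ks
  FROM SERVER cass_server
  INTO public;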

With that context, we will focus on the CassandraFDW in the next blog
post in this series.

Read environment variables from PostgreSQL using plpython

Sometimes in your PostgreSQL code you want to read an environment variable – whether to get the input file location for a data load, or check that the postgres user has the right path set, or verify that the TDSDUMP environment variable is set when configuring the tds_fdw to connect to SQL Server. Here’s a plpython-based function that will do just that (or the gist):

create extension plpythonu;
  
create type py_environ_type as (name text, value text);

create or replace function py_environ(name varchar DEFAULT NULL)
  returns setof py_environ_type
as $$
  import os
  aev = []
  if name is None:
    for k, v in os.environ.items():
      aev.append((k, v))
  else:
    v = os.getenv(name)
    if v is not None:
      aev.append((name,v))
  return aev
$$ language plpythonu;

You can use it to get a list of all environment variables:

select * from py_environ();

Or to get just a single variable:

select value from py_environ('PATH');

If you want other filtering, just add a where clause:

select name, value from py_environ() where name like 'TDS%';

Inspired by an idea from Scott Mead and the pal_environ() function; tested to work on Linux, OS X, and Windows.

pgBackRest with PostgreSQL Sandbox on Debian / Ubuntu

pgBackRest is one of the most powerful backup solutions available for PostgreSQL. It has enterprise-level features like compression, multiple channels (threads) of backup execution, incremental and differential backups, and more.
The official documentation is Debian-centric in its focus. I wanted to test it out with the PostgreSQL sandbox from the BigSQL project.

Setting up PostgreSQL Sandbox and Installing pgBackRest

The BigSQL Project makes it easy to install and set up PostgreSQL and its associated components across different operating systems. In this document, we are going to look at how to set it up on Ubuntu 14.04. Linux binaries of the sandbox can be downloaded from the BigSQL download page.

The sandbox installation requires only the unpacking of the downloaded file.

tar -xvf bigsql-9.5.3-5-linux64.tar.bz2
cd bigsql/

Using the command line utility (pgc) supplied with the sandbox, it’s very easy to initialize and start a PostgreSQL instance.

./pgc init pg95
./pgc start pg95

A PostgreSQL instance should now be up and running.
The same pgc utility can be used to install pgBackRest.

./pgc install backrest

Install Perl Dependencies

An important aspect to keep in mind is that pgBackRest is written in Perl and has many dependencies on different Perl libraries and modules.
An easy way to install all the dependencies in one shot is to instruct the apt-get utility to install one of the leaf components in the dependency chain.

sudo apt-get install libdbd-pg-perl

This command should fetch all the perl dependencies of pgBackRest.

Setting Up a Backup Repository Directory

Set up a backup repository directory for pgBackRest with the following commands.

sudo mkdir /var/log/pgbackrest
sudo chmod 750 /var/log/pgbackrest

IMPORTANT for this test:

  1. pgbackrest and the postgres server process should run as the same OS user.
  2. The backup repository directory should be owned by the same OS user.

Change the ownership of the repository directory to the user under which the postgres process is running. If the user is “postgres” and the group is “postgres” then:

sudo chown -R postgres:postgres /var/log/pgbackrest

pgBackRest configuration

sudo vi /etc/pgbackrest.conf

Append the following entries to this file.

[demo]
db-path=/home/postgres/bigsql/data/pg95

[global]
repo-path=/var/log/pgbackrest

Note: if the entries already exist, modify them accordingly.

Change the ownership of this configuration file to the OS user that owns the postgres and pgbackrest processes:

sudo chown -R postgres:postgres /etc/pgbackrest.conf
sudo chmod 640 /etc/pgbackrest.conf

Modifying Database Parameters

The archive_command needs to be modified to use pgbackrest. If the pgbackrest executable is not on the PATH, make sure the full path is specified.

alter system set archive_command = '/home/postgres/bigsql/backrest/bin/pgbackrest --stanza=demo archive-push %p';

A few other parameters that are also important for the proper working of pgBackRest:

alter system set archive_mode=on;
alter system set listen_addresses = '*';
alter system set max_wal_senders=3;
alter system set wal_level = 'hot_standby';

Modification of all these parameters requires a restart of the PostgreSQL instance.

./pgc restart pg95
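After the restart, a quick catalog query confirms that the archiving-related settings have taken effect:

select name, setting
from pg_settings
where name in ('archive_mode', 'archive_command', 'wal_level', 'max_wal_senders');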

In the event that our operating system user doesn’t exist as a superuser in our database, we need to create the user and assign superuser privileges:

postgres=# create user vagrant with password 'vagrant';
postgres=# alter user vagrant with superuser;

Backing up database using pgBackRest

pgBackRest uses the .pgpass file for authentication.
Add a line to .pgpass with the password of the superuser in the following format:

*:*:*:*:vagrant

Once this is done, we are ready to back up the PostgreSQL instance.

backrest/bin/pgbackrest --stanza=demo --db-socket-path=/tmp --log-level-console=info backup

Restoring from backup

Imagine a scenario where the files in your data directory are corrupt or lost and you want to restore it from backup.
The first step is to bring down the PostgreSQL instance. This should release all file descriptors pointing to the current data directory.

Clean up the Data directory:
Before restoring the backup, make sure that the data directory is clean and is stored on a reliable medium. The full path to the new data directory should be the same as the previous one (we can override this default, but for the sake of simplicity let’s assume that the location remains the same).

Run the pgBackRest “restore” command to restore the data directory from the latest backup.

backrest/bin/pgbackrest --stanza=demo --db-socket-path=/tmp --log-level-console=info restore

Now we should be able to start up the PostgreSQL instance with the restored data directory.

./pgc start pg95

Our PostgreSQL cluster is now back online from the backup we restored.

My PL/pgSQL code is stuck, but where?

In the short time that I have been enhancing the PL profiler (https://bitbucket.org/openscg/plprofiler) I have been asked multiple times if it can be abused as a debugging aid. Not directly. The conversation goes something like this:

Q: When my PL code is stuck somewhere, can I turn on profiling and see where?
A: No.
Q: Will this feature be added?
A: No.

Of course that would be a useful feature. I don't argue that. And it seems that this is precisely how Oracle users find out where their PL/SQL code gets stuck or moves like molasses. However, the reason I am against adding this is that a profiler, or even parts of it, should not be enabled 24x7 on a production database server for the purpose of eventually using it to extract some debugging information some day. There are other ways to get that information and I will show you one of them.

If a program is stuck somewhere, one uses a debugger to get a backtrace. This works with C programs, like the database server, provided that symbol information is available. From that backtrace (and a few variables) we can also recover the PL/pgSQL backtrace. Having symbol information available is also useful in case of a program crash, to find out why it crashed by loading the core dump into the debugger.

Every line of PL/pgSQL code that is executed goes through the executor function exec_stmt(). At that place we find the current PL code's line number in a variable. Likewise, every function execution goes through either plpgsql_exec_function() or plpgsql_exec_trigger(). In those stack frames we find the OID of the function as well as its signature (name and call argument types).

Doing this by hand, possibly several times for a deeply nested PL/pgSQL program, is tedious and no fun. So here is a little script called plbacktrace (https://github.com/wieck/plbacktrace.git) that does all of that. It is invoked with the PID of the PostgreSQL backend and will output information like:



[postgres@db1 tmp]$ ./plbacktrace.py 13239
fn_oid=105672 lineno=5 func="life_get_cell_pl(life_board,integer,integer)"
fn_oid=105673 lineno=12 func="life_num_neighbors_pl(life_board,integer,integer)"
fn_oid=105675 lineno=11 func="life_tick_pl(life_board)"
fn_oid=105674 lineno=22 func="life_pl(integer,integer,boolean)"
I will explain another time why I have a database server playing Conway's Game of Life written in PL/pgSQL.


Regards, Jan

Compiling new Postgres (9.6.x) on newest LTS Ubuntu (16.04)

It is great to confirm that the latest version of Postgres cleanly builds on the latest Ubuntu without any errors or warnings. Hmmm, I wonder now if GCC 5 makes anything measurably run faster?

Prerequisites:

$ sudo apt-get install build-essential  libreadline-dev  zlib1g-dev flex bison libxml2-dev libxslt-dev libssl-dev

With this complete, you can then do an industrial strength build of PostgreSQL as per below:


$ tar -xvf postgresql-9.6.x.tar.bz2

$ cd postgresql-9.6.x

$ ./configure --with-libxml --with-libxslt --with-openssl

$ make

MySQL Foreign Data Wrapper : A quick tour

Data centers are no longer dominated by a single DBMS. Many companies have heterogeneous environments and may want their Postgres database to talk to other database systems. Foreign Data Wrappers can be the right solution for many of these scenarios. The BigSQL Project provides a well-tested, ready-to-use MySQL FDW with Postgres. This makes life easy for DevOps and data center teams.

Here is a quick tour on how to configure Foreign Data Wrappers for MySQL, so that Postgres can query a MySQL table. For this quick guide, I use a CentOS Linux machine. This, or a similar setup, should work fine on all other operating systems.

Setting up a MySQL server for the test

In this demo I’m going to create a table in MySQL which should be available to Postgres through the FDW.
The FDW can talk to any MySQL distribution, including Oracle’s MySQL, Percona Server and MariaDB. I’m going to use MariaDB, which is more community friendly.

Install MariaDB Server and Start the service

$ sudo yum install mariadb-server.x86_64
$ sudo systemctl start mariadb

Connect as the MariaDB root user and create a database

$ mysql -uroot
MariaDB [(none)]> create database postgres;

Connect to Database and create a table

MariaDB [(none)]> use postgres;
MariaDB [postgres]> create table t1m(id int,name varchar(30));

Insert some data in the table:

MariaDB [postgres]> insert into t1m values (1,'abc');
Query OK, 1 row affected (0.04 sec)

MariaDB [postgres]> insert into t1m values (2,'def');
Query OK, 1 row affected (0.00 sec)

MariaDB [postgres]> insert into t1m values (3,'hij');
Query OK, 1 row affected (0.03 sec)

Setting up Postgres Database

Install Postgres

For this test, I’m going to use the Postgres DevOps Sandbox from the BigSQL project.
Download the Sandbox from BigSQL
Since this is a sandbox, you just need to unpack it:

$ tar -xvf bigsql-9.5.3-5-linux64.tar.bz2

Install MySQL FDW

Go to the unpacked directory and invoke the bigsql command line tool to install MySQL FDW

$ cd bigsql
$ ./pgc list
Category | Component | Version | Status | Port | Updates
PostgreSQL pg92 9.2.17-5 NotInstalled
PostgreSQL pg93 9.3.13-5 NotInstalled
PostgreSQL pg94 9.4.8-5 NotInstalled
PostgreSQL pg95 9.5.3-5 NotInitialized
Extensions cassandra_fdw3-pg95 3.0.0-1 NotInstalled
Extensions hadoop_fdw2-pg95 2.5.0-1 NotInstalled
Extensions mysql_fdw2-pg95 2.1.2-1 NotInstalled
Extensions oracle_fdw1-pg95 1.4.0-1 NotInstalled
Extensions orafce3-pg95 3.3.0-1 NotInstalled
Extensions pgtsql9-pg95 9.5-1 NotInstalled
Extensions pljava15-pg95 1.5.0-1 NotInstalled
Extensions plv814-pg95 1.4.8-1 NotInstalled
Extensions postgis22-pg95 2.2.2-2 NotInstalled
Extensions slony22-pg95 2.2.5-2 NotInstalled
Extensions tds_fdw1-pg95 1.0.7-1 NotInstalled
Servers bam2 1.5.0 NotInstalled
Servers cassandra30 3.0.6 NotInstalled
Servers hadoop26 2.6.4 NotInstalled
Servers hive2 2.0.1 NotInstalled
Servers pgbouncer17 1.7.2-1 NotInstalled
Servers pgha2 2.1b NotInstalled
Servers pgstudio2 2.0.1-2 NotInstalled
Servers spark16 1.6.1 NotInstalled
Servers tomcat8 8.0.35 NotInstalled
Servers zookeeper34 3.4.8 NotInstalled
Applications backrest 1.02 NotInstalled
Applications birt 4.5.0 NotInstalled
Applications ora2pg 17.4 NotInstalled
Applications pgbadger 8.1 NotInstalled
Frameworks java8 8u92 NotInstalled
$ ./pgc install mysql_fdw2-pg95
['mysql_fdw2-pg95']
Get:1 http://s3.amazonaws.com/pgcentral mysql_fdw2-pg95-2.1.2-1-linux64
Unpacking mysql_fdw2-pg95-2.1.2-1-linux64.tar.bz2

Note:- We can use the same command line tool to initialize a new Postgres cluster:

$ ./pgc init pg95

## Initializing pg95 #######################

Superuser Password [password]:
Confirm Password:
Giving current user permission to data dir

Initializing Postgres DB at:
-D "/home/vagrant/bigsql/data/pg95"

Using PostgreSQL Port 5432

Password securely remembered in the file: /home/vagrant/.pgpass

to load this postgres into your environment, source the env file:
/home/vagrant/bigsql/pg95/pg95.env

Create the extension in the postgres database

create extension mysql_fdw;

Create foreign server

postgres=# CREATE SERVER mysql_svr
FOREIGN DATA WRAPPER mysql_fdw
OPTIONS (host 'localhost', port '3306');
CREATE SERVER
postgres=#

Create foreign table

postgres=# CREATE FOREIGN TABLE mysql_tab (
postgres(# id int,
postgres(# name varchar(30)
postgres(# )
postgres-# SERVER mysql_svr
postgres-# OPTIONS (dbname 'postgres', table_name 't1m');
CREATE FOREIGN TABLE
postgres=#

Create user mapping

postgres=# CREATE USER MAPPING FOR PUBLIC
postgres-# SERVER mysql_svr
postgres-# OPTIONS (username 'root');

(If your MySQL user requires password authentication, the password must be included in the user mapping as well, as shown below.)
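For example (the user name and password here are placeholders):

-- 'appuser' / 'secret' are placeholders; substitute your MySQL credentials
CREATE USER MAPPING FOR PUBLIC
SERVER mysql_svr
OPTIONS (username 'appuser', password 'secret');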

Now everything is set. You can test by querying the table.

postgres=# select * from mysql_tab;
id | name
----+-------
1 | abc
2 | def
3 | hij
(3 rows)


Note:- MySQL FDW for Postgres requires the MySQL client libraries. Please make sure that libmysqlclient.so is present in the LD_LIBRARY_PATH. If the file name is something different, like "libmysqlclient.so.18.0.0", you may have to create a symlink named "libmysqlclient.so".

Welcome Affan

Affan Salman joins the team
East Brunswick, NJ, March 1, 2016

OpenSCG, a leading provider of subscriptions and services for PostgreSQL, announced today that Affan Salman has joined its team of top-tier PostgreSQL talent as Senior Database Architect. Affan is the primary original author of EnterpriseDB’s Oracle compatibility from 10 years ago and has spent much of the last six years rounding out his database internals credentials by becoming a Cassandra, Spark & Hadoop rock star.

“We are very excited that Affan has re-joined the family,” said Denis Lussier, CEO of OpenSCG. “Affan put EnterpriseDB on the map for Oracle compatibility back in the day. Having him as a key part of the OpenSCG team will enable us to move faster, and further enhance our strategic plans for the future.” As part of his new responsibilities, Salman is expected to contribute to BigSQL, a new PostgreSQL Distribution that OpenSCG is sponsoring.

“I was thrilled to join the OpenSCG/BigSQL team,” said Salman. “I have worked with the management team for over a decade, and was looking for a balance between new product development and working closely with customers on data-driven business problems. My new role offers the chance to do both.”

About OpenSCG
Open Source Consulting Group, Inc. (OpenSCG) is a leading provider of subscriptions and services for PostgreSQL, the world’s most advanced open source database. The OpenSCG PostgreSQL team is one of the largest and most experienced in the world, and the management team is respected and trusted across the global Postgres community. OpenSCG offers PostgreSQL consulting and migration services, as well as subscription services that provide 24×7 global support and remote DBA Services.