There is good demand for Hadoop integration with Postgres using an FDW, which is why Postgres by BigSQL ships the Hadoop FDW package along with its other binaries.
Let me give quick step-by-step instructions (with an example) on how to set it up.

For preparing these steps, I used a Hortonworks VM with Docker.
My Hive server (10.150.11.66) is running on port 10000.

Step 1. Get Client Library Jars from Hive Server

We should be able to locate the jar files on the Hive server:

$ ls /usr/hdp/*/hadoop/hadoop-common*[0-9].jar
/usr/hdp/2.5.0.0-1245/hadoop/hadoop-common-2.7.3.2.5.0.0-1245.jar
$ ls /usr/hdp/*/hive/lib/hive*jdbc*standalone.jar
/usr/hdp/2.5.0.0-1245/hive/lib/hive-jdbc-1.2.1000.2.5.0.0-1245-standalone.jar

Create a directory on the Postgres server and copy those two jars into it:

$ mkdir hive-client-lib
$ cd hive-client-lib
$ scp -P 2222 root@10.150.11.66:/usr/hdp/2.5.0.0-1245/hadoop/hadoop-common-2.7.3.2.5.0.0-1245.jar .
$ scp -P 2222 root@10.150.11.66:/usr/hdp/2.5.0.0-1245/hive/lib/hive-jdbc-1.2.1000.2.5.0.0-1245-standalone.jar .
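
A quick check that both jars landed on the Postgres server (the file names below are from my HDP 2.5 sandbox; yours will vary with the HDP version):

$ ls ~/hive-client-lib
hadoop-common-2.7.3.2.5.0.0-1245.jar  hive-jdbc-1.2.1000.2.5.0.0-1245-standalone.jar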

Step 2. Install Java

Hadoop FDW needs Java (a JRE) available on the Postgres server. In case you don't have it already, I would recommend installing it with the following pgc command:

$ ./pgc install java8
['java8']
Get:1 http://10.150.2.165 java8-8u121-linux64
Unpacking java8-8u121-linux64.tar.bz2
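
To confirm the JRE is usable, you can invoke the bundled java binary directly (the path assumes the default BigSQL layout under /home/vagrant/bigsql, as in my setup):

$ /home/vagrant/bigsql/java8/bin/java -version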

Step 3. Set environment variables and restart the PostgreSQL server

Set LD_LIBRARY_PATH to the directory containing the JVM's libjvm.so, and HADOOP_FDW_CLASSPATH to Hadoop_FDW.jar plus the two Hive client jars we copied in Step 1:

export LD_LIBRARY_PATH=/home/vagrant/bigsql/java8/jre/lib/amd64/server
export HADOOP_FDW_CLASSPATH=/home/vagrant/bigsql/pg96/lib/postgresql/Hadoop_FDW.jar:/home/vagrant/hive-client-lib/hadoop-common-2.7.3.2.5.0.0-1245.jar:/home/vagrant/hive-client-lib/hive-jdbc-1.2.1000.2.5.0.0-1245-standalone.jar
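
Before restarting anything, it is worth confirming that the paths these variables point to actually exist (all paths here are from my environment; adjust them to match yours):

$ ls $LD_LIBRARY_PATH/libjvm.so
$ ls /home/vagrant/hive-client-lib/*.jar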

To avoid setting these environment variables every time you open a shell, I would suggest adding them to .profile, .bash_profile, or .bashrc:

echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH" >> ~/.bash_profile
echo "export HADOOP_FDW_CLASSPATH=$LD_LIBRARY_PATH" >> ~/.bash_profile

Restart the PostgreSQL server. This step ensures that the environment variables are known to the PostgreSQL server:

$ ./pgc restart pg96
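
Once the server is back up, a simple connection test verifies that it restarted cleanly (the psql path below assumes the default BigSQL layout; use whichever psql you normally connect with):

$ /home/vagrant/bigsql/pg96/bin/psql -c "SELECT version();" postgres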

Step 4. Create the extension and import the schema / foreign tables

Install the extension using the pgc command line utility:

$ ./pgc install hadoop_fdw

Create the extension, the server definition, and a user mapping:

postgres=# CREATE EXTENSION hadoop_fdw;

postgres=# CREATE SERVER hadoop_server FOREIGN DATA WRAPPER hadoop_fdw OPTIONS (HOST '10.150.11.66', PORT '10000');

postgres=# CREATE USER MAPPING FOR PUBLIC SERVER hadoop_server;

We have the option to import all tables from a Hive database into a Postgres schema:

postgres=# IMPORT FOREIGN SCHEMA "default" FROM SERVER hadoop_server INTO public;

If we import the entire Hive database, all tables will be visible in the Postgres schema:

postgres=# \det
        List of foreign tables
 Schema |    Table    |    Server
--------+-------------+---------------
 public | geolocation | hadoop_server
 public | sample_07   | hadoop_server
 public | sample_08   | hadoop_server
 public | t1          | hadoop_server
 public | trucks      | hadoop_server
(5 rows)

Alternatively, we can create a foreign table in Postgres for a specific table in Hive:

postgres=# CREATE FOREIGN TABLE T1 (
ID INT,
NM VARCHAR(50)
) SERVER hadoop_server OPTIONS (TABLE 'T1');
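
For reference, the Hive side of this example is just a small two-column table. Something like the following HiveQL would line up with the foreign table definition above (a sketch of how t1 could be created in Hive, not the exact DDL from my sandbox):

hive> CREATE TABLE t1 (id INT, nm VARCHAR(50));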

Now we should be able to query the tables in Hive:

postgres=# SELECT * FROM T1;
 id |       nm
----+----------------
  1 | HADOOP
  2 | TESTING USING
  3 | HIVE INTERFACE
  4 | AND HADOOP FDW
(4 rows)
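
Since foreign tables behave like ordinary Postgres tables, they can also be joined with local data, which is where the FDW really pays off. Here is a minimal sketch, assuming a hypothetical local table local_notes that is not part of the Hive sample data:

postgres=# CREATE TABLE local_notes (id INT, note TEXT);
postgres=# INSERT INTO local_notes VALUES (1, 'verified');
postgres=# SELECT t.id, t.nm, l.note FROM t1 t JOIN local_notes l ON t.id = l.id;
 id |   nm   |   note
----+--------+----------
  1 | HADOOP | verified
(1 row)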