This tutorial will show you how to import PostgreSQL data into the Hadoop File System (HDFS) using the Sqoop 2 application within Hue. Hue, the Hadoop User Experience, lets you work with Oozie, Hive, Pig, Postgres, Sqoop, HBase, ZooKeeper, and Hadoop from a browser. Since release three of the BigSQL Quick-start Virtual Machine comes with Hue already installed, I will be using it for this tutorial.
In the virtual machine, open a terminal window, go to the bigsql-9.3r2-b3 directory and start BigSQL:
$ cd bigsql-9.3r2-b3
$ ./bigsql.sh start
Next, set up the benchmarksql.customer history table:
$ . ./setENV.sh
$ psql postgres
postgres=# CREATE USER benchmarksql WITH SUPERUSER PASSWORD 'password';
postgres=# GRANT ALL PRIVILEGES ON DATABASE postgres TO benchmarksql;
$ cd examples/run/
$ ./runSQL.sh props.pg sqlTableCreates
$ ./runLoader.sh props.pg numWarehouses 1
$ ./runSQL.sh props.pg sqlIndexCreates
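Before moving on, it can help to confirm the loader actually populated the history table. The following is a sketch, not part of the tutorial's scripts: it assumes psql is on your PATH (setENV.sh should take care of that) and that the local server accepts connections; the guard simply skips the check otherwise.

```shell
# Sanity check: count the rows the loader inserted into benchmarksql.history.
# Assumes psql is on the PATH and the BigSQL server is running locally.
if command -v psql >/dev/null 2>&1; then
  ROWS=$(psql -U benchmarksql -d postgres -t -A \
    -c "SELECT count(*) FROM benchmarksql.history;" 2>/dev/null || echo "unknown")
else
  ROWS="unknown (psql not found)"
fi
echo "history rows: $ROWS"
```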
Now, open Hue by going to localhost:8000/ in the virtual machine's web browser. Log in with the username bigsql and the password bigsql.
In Hue, open the Sqoop application and create a new job; the first step is defining the connection. Name the connection, make sure generic-jdbc-connector is selected, and set the JDBC driver class to org.postgresql.Driver. For this example we're connecting to the postgres database as the benchmarksql user, so enter jdbc:postgresql://localhost:5432/postgres as the connection string, benchmarksql as the username, and password as the password, then click Save.
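The connection string follows the standard PostgreSQL JDBC URL shape, jdbc:postgresql://host:port/database. A quick sketch of how the tutorial's value is assembled; the host, port, and database here are the ones used above, so substitute your own if they differ:

```shell
# Assemble the JDBC connection string from its parts.
PGHOST=localhost
PGPORT=5432
PGDATABASE=postgres
JDBC_URL="jdbc:postgresql://${PGHOST}:${PGPORT}/${PGDATABASE}"
echo "$JDBC_URL"   # jdbc:postgresql://localhost:5432/postgres
```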
The next page is where you describe what you are importing. For this tutorial, specify benchmarksql as the schema name and history as the table name, then click Next.
Next, describe where you want Sqoop to move the data. Choose HDFS as the storage type, Text File as the output format, and no compression, and set the output directory to /user/data/example/hive_customer_history. Then, click Save and run.
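Once the job completes, the imported rows land as plain text files under that output directory. A hedged way to check, assuming the hadoop client is on your PATH inside the VM (the guard skips the listing elsewhere):

```shell
# List the files Sqoop wrote; each mapper produces its own part file.
OUTDIR=/user/data/example/hive_customer_history
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -ls "$OUTDIR" || echo "no output yet (job may still be running)"
else
  echo "hadoop client not found; run this inside the BigSQL VM"
fi
```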
Once the job starts running, a notification will pop up next to the Job Browser. There you will be able to see the status of the mappers and reducers, along with other information about the job.