[ https://issues.apache.org/jira/browse/SQOOP-1393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133234#comment-14133234 ]

Pratik Khadloya commented on SQOOP-1393:
----------------------------------------

For my Hive 0.13 environment, I actually have these environment variables set:

{code}
# Locations Sqoop and Hive use to find the Hadoop, Hive, and HCatalog installs.
export HADOOP_COMMON_HOME=/usr/lib/hadoop-0.20
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
export HIVE_HOME=/usr/lib/hive
export HADOOP_HOME=/usr/lib/hadoop
export HIVE_CONF_DIR=/etc/hive/conf
export HCAT_HOME=/usr/lib/hive/hcatalog
{code}
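
With those variables set, a Parquet import into Hive can be launched roughly like this (a minimal sketch, not taken from this issue; the JDBC URL, credentials, and table name are placeholders):

{code}
# Hypothetical invocation: import one RDBMS table into Hive as Parquet files.
# --as-parquetfile selects Parquet output; --hive-import loads it into Hive.
sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --username myuser -P \
  --table mytable \
  --hive-import \
  --as-parquetfile
{code}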

> Import data from database to Hive as Parquet files
> --------------------------------------------------
>
>                 Key: SQOOP-1393
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1393
>             Project: Sqoop
>          Issue Type: Sub-task
>          Components: tools
>            Reporter: Qian Xu
>            Assignee: Richard
>             Fix For: 1.4.6
>
>         Attachments: patch.diff, patch_v2.diff, patch_v3.diff
>
>
> Importing data to Hive as Parquet files can be separated into two steps:
> 1. Import an individual table from an RDBMS to HDFS as a set of Parquet files.
> 2. Import the data into Hive by generating and executing a CREATE TABLE
> statement that defines the data's layout in Hive using the Parquet table format
> (a sketch of such a statement follows below).
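
For reference, step 2 boils down to running a CREATE TABLE statement against Hive. A minimal sketch of its shape (the table name and columns are hypothetical, and the statement Sqoop actually generates may differ):

{code}
# Hypothetical shape of the step-2 DDL; STORED AS PARQUET works on Hive 0.13+.
hive -e "CREATE TABLE mytable (id INT, name STRING) STORED AS PARQUET;"
{code}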



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)