[ https://issues.apache.org/jira/browse/SQOOP-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194203#comment-14194203 ]

Qian Xu commented on SQOOP-1390:
--------------------------------

If [~tispratik]'s solution does not help, you might also add the JARs of two 
more packages to the classpath: joda-time and fasterxml (Jackson).
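
For example (the paths and JAR names below are placeholders; adjust them to 
your installation):

{{export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/to/joda-time.jar:/path/to/jackson-core.jar}}

or pass them per job via the generic {{-libjars}} option, which must come 
right after the tool name:

{{sqoop import -libjars /path/to/joda-time.jar,/path/to/jackson-core.jar 
--connect JDBC_URI --table TABLE --as-parquetfile --target-dir /path/to/files}}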

> Import data to HDFS as a set of Parquet files
> ---------------------------------------------
>
>                 Key: SQOOP-1390
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1390
>             Project: Sqoop
>          Issue Type: Sub-task
>          Components: tools
>            Reporter: Qian Xu
>            Assignee: Qian Xu
>             Fix For: 1.4.6
>
>         Attachments: SQOOP-1390.patch
>
>
> Parquet files keep data in contiguous chunks by column, so appending new 
> records to a dataset requires rewriting substantial portions of an existing 
> file or buffering records to create a new file. 
> This JIRA proposes to add the ability to import an individual table from an 
> RDBMS into HDFS as a set of Parquet files. We will also provide a 
> command-line interface with a new argument {{--as-parquetfile}}.
> Example invocation: 
> {{sqoop import --connect JDBC_URI --table TABLE --as-parquetfile --target-dir 
> /path/to/files}}
> The major items are listed as follows:
> * Implement ParquetImportMapper (a sketch follows at the end of this 
> description)
> * Hook up the ParquetOutputFormat and ParquetImportMapper in the import job.
> * Support imports from scratch as well as in append mode (see the example 
> below)
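> For the append case, the new argument could be combined with Sqoop's 
> existing {{--append}} option, for example:
> {{sqoop import --connect JDBC_URI --table TABLE --as-parquetfile --append 
> --target-dir /path/to/files}}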
> Note that as Parquet is a columnar storage format, it doesn't make sense to 
> write to it directly from record-based tools. We therefore consider using 
> the Kite SDK to simplify the handling of Parquet-specific details.
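> A minimal sketch of what ParquetImportMapper could look like (the job 
> property {{parquet.import.avro.schema}} and the column-to-field mapping are 
> illustrative assumptions, not the final implementation):
> {code:java}
> import java.io.IOException;
> import java.util.Map;
> 
> import org.apache.avro.Schema;
> import org.apache.avro.generic.GenericData;
> import org.apache.avro.generic.GenericRecord;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.mapreduce.Mapper;
> 
> import com.cloudera.sqoop.lib.SqoopRecord;
> 
> /**
>  * Converts each SqoopRecord into an Avro GenericRecord, which a
>  * Parquet output format can then write column by column.
>  */
> public class ParquetImportMapper
>     extends Mapper<LongWritable, SqoopRecord, Void, GenericRecord> {
> 
>   private Schema schema;
> 
>   @Override
>   protected void setup(Context context) {
>     // Hypothetical job property carrying the Avro schema derived
>     // from the table definition.
>     schema = new Schema.Parser().parse(
>         context.getConfiguration().get("parquet.import.avro.schema"));
>   }
> 
>   @Override
>   protected void map(LongWritable key, SqoopRecord val, Context context)
>       throws IOException, InterruptedException {
>     GenericRecord record = new GenericData.Record(schema);
>     // Copy each column value into the Avro record; this assumes the
>     // Avro field names match the column names in getFieldMap().
>     for (Map.Entry<String, Object> entry : val.getFieldMap().entrySet()) {
>       record.put(entry.getKey(), entry.getValue());
>     }
>     context.write(null, record);
>   }
> }
> {code}
> The import job would then pair this mapper with a Parquet output format 
> (e.g. {{AvroParquetOutputFormat}} from parquet-avro, or a Kite dataset 
> output format) and register the derived Avro schema with the job.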



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
