[ https://issues.apache.org/jira/browse/SQOOP-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14106379#comment-14106379 ]
Pratik Khadloya commented on SQOOP-1390:
----------------------------------------

Another option that worked was to specify -libjars, like below:

{code}
bin/sqoop import -libjars <jar1>,<jar2>,<jar3> .....
{code}

> Import data to HDFS as a set of Parquet files
> ---------------------------------------------
>
>                 Key: SQOOP-1390
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1390
>             Project: Sqoop
>          Issue Type: Sub-task
>          Components: tools
>            Reporter: Qian Xu
>            Assignee: Qian Xu
>             Fix For: 1.4.6
>
>         Attachments: SQOOP-1390.patch
>
>
> Parquet files keep data in contiguous chunks by column, so appending new records to a dataset requires rewriting substantial portions of an existing file, or buffering records to create a new file.
> This JIRA proposes adding the ability to import an individual table from an RDBMS into HDFS as a set of Parquet files. We will also provide a command-line interface with a new argument {{--as-parquetfile}}.
> Example invocation:
> {{sqoop import --connect JDBC_URI --table TABLE --as-parquetfile --target-dir /path/to/files}}
> The major items are as follows (illustrative sketches follow the quoted description):
> * Implement ParquetImportMapper
> * Hook up the ParquetOutputFormat and ParquetImportMapper in the import job
> * Support imports both from scratch and in append mode
> Note that because Parquet is a columnar storage format, it doesn't make sense to write to it directly from record-based tools. We would therefore consider using the Kite SDK to simplify the handling of Parquet-specific details.
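For readers following along, here is a minimal sketch of what the ParquetImportMapper item above could look like. This is not the attached patch: the configuration key {{parquetjob.avro.schema}} is invented for illustration, and the direct field copy glosses over the type conversions (dates, decimals, BLOBs) a real implementation would need. It relies only on {{SqoopRecord#getFieldMap()}} and the standard Avro/Hadoop APIs:

{code}
import java.io.IOException;
import java.util.Map;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.sqoop.lib.SqoopRecord;

public class ParquetImportMapper
    extends Mapper<LongWritable, SqoopRecord, GenericRecord, NullWritable> {

  private Schema schema;

  @Override
  protected void setup(Context context) {
    // Avro schema describing the imported table; assumed to have been
    // serialized into the job configuration during job setup (the key
    // name here is illustrative).
    schema = new Schema.Parser().parse(
        context.getConfiguration().get("parquetjob.avro.schema"));
  }

  @Override
  protected void map(LongWritable key, SqoopRecord val, Context context)
      throws IOException, InterruptedException {
    // Copy each database column into an Avro GenericRecord; the
    // Parquet-backed output format takes care of the columnar encoding.
    // A real mapper would convert JDBC types to Avro types here.
    GenericRecord record = new GenericData.Record(schema);
    for (Map.Entry<String, Object> field : val.getFieldMap().entrySet()) {
      record.put(field.getKey(), field.getValue());
    }
    context.write(record, NullWritable.get());
  }
}
{code}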
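Likewise, a sketch of how the job hook-up and append mode could be handled through the Kite SDK's {{kite-data-mapreduce}} module. The dataset URI scheme and the decision to branch on an {{append}} flag are assumptions; the {{Datasets}} and {{DatasetKeyOutputFormat}} calls are Kite's documented MapReduce API:

{code}
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.mapreduce.Job;
import org.kitesdk.data.Dataset;
import org.kitesdk.data.DatasetDescriptor;
import org.kitesdk.data.Datasets;
import org.kitesdk.data.Formats;
import org.kitesdk.data.mapreduce.DatasetKeyOutputFormat;

public class ParquetImportJobConfigurator {

  /** Wires the mapper and a Kite-managed Parquet dataset into the job. */
  public static void configureOutput(Job job, Schema schema, String uri,
      boolean append) {
    // uri would be something like "dataset:hdfs:/path/to/files".
    if (append) {
      // Append mode: reuse the existing dataset (and its schema) and add
      // new Parquet files to it instead of starting over.
      Dataset<GenericRecord> existing =
          Datasets.load(uri, GenericRecord.class);
      DatasetKeyOutputFormat.configure(job).appendTo(existing);
    } else {
      // Import from scratch: create a fresh dataset stored as Parquet.
      DatasetDescriptor descriptor = new DatasetDescriptor.Builder()
          .schema(schema)
          .format(Formats.PARQUET)
          .build();
      Dataset<GenericRecord> created =
          Datasets.create(uri, descriptor, GenericRecord.class);
      DatasetKeyOutputFormat.configure(job).writeTo(created);
    }
    job.setOutputFormatClass(DatasetKeyOutputFormat.class);
    job.setMapperClass(ParquetImportMapper.class);
    job.setNumReduceTasks(0); // map-only import
  }
}
{code}

Letting Kite own the dataset keeps the Parquet-specific buffering and file layout out of Sqoop itself, which is the point of the Kite SDK suggestion in the description.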