[ https://issues.apache.org/jira/browse/SQOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766630#comment-15766630 ]

chen kai edited comment on SQOOP-2907 at 12/21/16 3:48 PM:
-----------------------------------------------------------

We are using Hive tables stored as Parquet files, and exporting them with Sqoop 
failed. This patch resolves it; the changes are based on `sqoop-1.4.6-cdh5.7.1`. 
If you want to export recursively, see [SQOOP-951], and then change the code at 
[ExportJobBase.java line:159], which has a bug when finding child files 
recursively.
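
For reference, a minimal sketch of what a recursive child-file lookup against the 
Hadoop FileSystem API could look like; this only illustrates the idea, it is not 
the actual SQOOP-951 change, and the class name RecursiveFileLister is made up:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class RecursiveFileLister {

      /** Collects every data file under dir, descending into subdirectories. */
      public static List<Path> listDataFiles(FileSystem fs, Path dir)
          throws IOException {
        List<Path> files = new ArrayList<Path>();
        for (FileStatus stat : fs.listStatus(dir)) {
          String name = stat.getPath().getName();
          if (stat.isDirectory()) {
            files.addAll(listDataFiles(fs, stat.getPath()));
          } else if (!name.startsWith("_") && !name.startsWith(".")) {
            // Skip _SUCCESS, _metadata and other hidden files; keep data files.
            files.add(stat.getPath());
          }
        }
        return files;
      }
    }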



> Export parquet files to RDBMS: don't require .metadata for parquet files
> ------------------------------------------------------------------------
>
>                 Key: SQOOP-2907
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2907
>             Project: Sqoop
>          Issue Type: Improvement
>          Components: metastore
>    Affects Versions: 1.4.6
>         Environment: sqoop 1.4.6
> export parquet files to Oracle
>            Reporter: Ruslan Dautkhanov
>         Attachments: SQOOP-2907.patch
>
>
> Kite currently requires .metadata.
> Parquet files have their own metadata stored alongside the data files.
> It would be great if the export operation for parquet files to an RDBMS did 
> not require .metadata.
> Most of our files are created by Spark and Hive, which don't create 
> .metadata; only Kite does.
> This makes sqoop export of parquet files very limited in usability.
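
For context, a minimal sketch (not from the attached patch) of reading a schema 
straight from a Parquet file footer with parquet-mr, i.e. without any Kite 
.metadata directory; the class name and argument handling are illustrative, and 
in older CDH builds the reader classes live under the `parquet.hadoop` package 
rather than `org.apache.parquet.hadoop`:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.hadoop.ParquetFileReader;
    import org.apache.parquet.hadoop.metadata.ParquetMetadata;
    import org.apache.parquet.schema.MessageType;

    public class ParquetFooterSchema {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path file = new Path(args[0]); // a .parquet file written by Hive/Spark
        // Every Parquet file carries its schema in the footer, so no
        // Kite-style .metadata directory is needed to discover it.
        ParquetMetadata footer = ParquetFileReader.readFooter(conf, file);
        MessageType schema = footer.getFileMetaData().getSchema();
        System.out.println(schema);
      }
    }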



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
