[ 
https://issues.apache.org/jira/browse/SQOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15580833#comment-15580833
 ] 

Ruslan Dautkhanov commented on SQOOP-2907:
------------------------------------------

See the last comment from [~b...@cloudera.com] at 
https://issues.cloudera.org/browse/KITE-1121:

{quote}
... that's not how Kite is intended to work. Kite works with datasets that have 
metadata, whether stored in the file system or in Hive. It's up to the caller 
to create that metadata, and Kite provides an API for inferring it. The pieces 
are there, but I don't think it is a good idea for Kite to do this 
automatically. That's up to Sqoop.
{quote}

Would it be hard to add this .metadata generation to Sqoop?
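
Not sure how involved it would be on the Sqoop side, but here is a minimal sketch of what that generation could look like, assuming the parquet-avro and Kite SDK calls behave the way I recall (the HDFS paths, dataset URI, and class name below are placeholders, not anything Sqoop does today): read the schema from a Parquet file footer, convert it to Avro, and let Kite write the .metadata by creating a dataset over the existing directory.

{code:java}
// Rough sketch only -- paths and URIs are placeholders, and the exact
// package names are assumptions (parquet-avro later moved from
// parquet.* to org.apache.parquet.*).
import org.apache.avro.Schema;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroSchemaConverter;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.kitesdk.data.DatasetDescriptor;
import org.kitesdk.data.Datasets;
import org.kitesdk.data.Formats;

public class KiteMetadataBootstrap {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Parquet already keeps the schema in each file's footer.
    Path dataFile = new Path("hdfs:///user/etl/exports/part-00000.parquet");
    ParquetMetadata footer = ParquetFileReader.readFooter(conf, dataFile);
    Schema avroSchema = new AvroSchemaConverter()
        .convert(footer.getFileMetaData().getSchema());

    // Have Kite persist that schema as the .metadata it expects,
    // pointing the dataset at the directory that already holds the data.
    DatasetDescriptor descriptor = new DatasetDescriptor.Builder()
        .schema(avroSchema)
        .format(Formats.PARQUET)
        .location("hdfs:///user/etl/exports")
        .build();
    Datasets.create("dataset:hdfs:/user/etl/exports", descriptor);
  }
}
{code}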

> Export parquet files to RDBMS: don't require .metadata for parquet files
> ------------------------------------------------------------------------
>
>                 Key: SQOOP-2907
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2907
>             Project: Sqoop
>          Issue Type: Improvement
>          Components: metastore
>    Affects Versions: 1.4.6
>         Environment: sqoop 1.4.6
> export parquet files to Oracle
>            Reporter: Ruslan Dautkhanov
>
> Kite currently requires a .metadata directory.
> Parquet files already store their own metadata alongside the data files.
> It would be great if exporting parquet files to an RDBMS did not require 
> .metadata.
> Most of our files are created by Spark and Hive, which do not write 
> .metadata; only Kite does.
> This makes sqoop export of parquet files very limited in practice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
