[ https://issues.apache.org/jira/browse/PARQUET-1142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16775448#comment-16775448 ]

Ryan Blue commented on PARQUET-1142:
------------------------------------

The next step is to get compression working without relying on Hadoop. 
After that, it is a matter of some fairly simple refactoring of the file 
writer. But that refactoring doesn't help much unless the compression 
implementations are also Hadoop-free.
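To make that concrete, here is a minimal sketch of what a Hadoop-free codec 
abstraction could look like. The interfaces ({{BytesCompressor}}, 
{{CodecFactory}}) are illustrative assumptions, not Parquet's actual API; the 
gzip implementation uses only the JDK, which is the point: codecs need not 
pull in Hadoop.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.zip.GZIPOutputStream;

// Hypothetical interfaces for illustration only; these are not
// Parquet's real classes. Note: no org.apache.hadoop imports.
interface BytesCompressor {
  ByteBuffer compress(ByteBuffer uncompressed) throws IOException;
}

interface CodecFactory {
  BytesCompressor getCompressor(String codecName); // e.g. "gzip", "snappy"
}

// JDK-only gzip codec: demonstrates that compression can be
// implemented without any Hadoop dependency.
class GzipCompressor implements BytesCompressor {
  @Override
  public ByteBuffer compress(ByteBuffer uncompressed) throws IOException {
    byte[] bytes = new byte[uncompressed.remaining()];
    uncompressed.get(bytes);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
      gzip.write(bytes);
    }
    return ByteBuffer.wrap(out.toByteArray());
  }
}
{code}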

> Avoid leaking Hadoop API to downstream libraries
> ------------------------------------------------
>
>                 Key: PARQUET-1142
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1142
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-mr
>    Affects Versions: 1.9.0
>            Reporter: Ryan Blue
>            Assignee: Ryan Blue
>            Priority: Major
>             Fix For: 1.10.0
>
>
> Parquet currently leaks the Hadoop API by requiring callers to pass {{Path}} 
> and {{Configuration}} instances, and by using Hadoop codecs. {{InputFile}} 
> and {{SeekableInputStream}} add alternatives to Hadoop classes in some parts 
> of the read path, but this needs to be extended to the write path, and 
> options should not need to be passed through {{Configuration}}.
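On the read side, the {{InputFile}} / {{SeekableInputStream}} interfaces 
already make this possible. A minimal sketch of a caller-supplied 
implementation over {{java.nio}}, assuming the {{DelegatingSeekableInputStream}} 
helper from parquet-common; the {{NioInputFile}} class itself is hypothetical:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

import org.apache.parquet.io.DelegatingSeekableInputStream;
import org.apache.parquet.io.InputFile;
import org.apache.parquet.io.SeekableInputStream;

// Hypothetical InputFile over java.nio.file.Path, so callers never
// touch org.apache.hadoop.fs.Path or Configuration.
public class NioInputFile implements InputFile {
  private final Path path; // java.nio.file.Path, not Hadoop's Path

  public NioInputFile(Path path) {
    this.path = path;
  }

  @Override
  public long getLength() throws IOException {
    return Files.size(path);
  }

  @Override
  public SeekableInputStream newStream() throws IOException {
    FileChannel channel = FileChannel.open(path, StandardOpenOption.READ);
    InputStream in = Channels.newInputStream(channel);
    // DelegatingSeekableInputStream forwards reads to the wrapped
    // stream; we only supply position tracking and seeking.
    return new DelegatingSeekableInputStream(in) {
      @Override
      public long getPos() throws IOException {
        return channel.position();
      }

      @Override
      public void seek(long newPos) throws IOException {
        channel.position(newPos);
      }
    };
  }
}
{code}

A reader can then consume such an {{InputFile}} with no Hadoop 
{{Configuration}} in sight; the remaining work described above is the 
write-side equivalent (an output abstraction plus a non-{{Configuration}} 
way to pass options).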


