[ https://issues.apache.org/jira/browse/PARQUET-1142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16775097#comment-16775097 ]

Hadrien Kohl commented on PARQUET-1142:
---------------------------------------

 

Hi. I've faced the same problem and still need to drag in Hadoop dependencies to be able to use ParquetWriter and ParquetReader.

I am wrapping a [SeekableByteChannel|https://docs.oracle.com/javase/8/docs/api/java/nio/channels/SeekableByteChannel.html] in InputFile/OutputFile implementations so that I can call:

 
{code:java}
// Files.newByteChannel takes a Path, not a String.
ParquetWriter.builder(new Wrapper(Files.newByteChannel(Paths.get("/foo/bar.parquet"))));
{code}
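For reference, the SeekableByteChannel operations such a wrapper would delegate to ({{size()}} backing {{InputFile.getLength()}}, {{position()}}/{{position(long)}} backing {{getPos()}}/{{seek()}}) can be exercised with the JDK alone. A minimal sketch; the class and file names are just examples:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChannelDemo {
    public static void main(String[] args) throws IOException {
        // Write a small scratch file to read back through the channel.
        Path tmp = Files.createTempFile("channel-demo", ".bin");
        Files.write(tmp, new byte[] {0, 1, 2, 3, 4, 5, 6, 7});

        try (SeekableByteChannel ch = Files.newByteChannel(tmp)) {
            // size() is what a getLength() delegate would return.
            System.out.println(ch.size());      // prints 8

            // position(long) is the seek(); position() is the getPos().
            ch.position(4);
            ByteBuffer buf = ByteBuffer.allocate(2);
            ch.read(buf);
            System.out.println(buf.get(0));     // prints 4
            System.out.println(ch.position());  // prints 6
        }
        Files.delete(tmp);
    }
}
{code}

Since SeekableByteChannel already carries length, position, and seek semantics, the wrapper is mostly plumbing between the two APIs.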
 

Do you plan to work on this further? I'd be happy to help.

I saw that prestodb has also implemented its own [reader/writer|https://github.com/prestodb/presto/tree/master/presto-parquet/src/main/java/com/facebook/presto/parquet].

> Avoid leaking Hadoop API to downstream libraries
> ------------------------------------------------
>
>                 Key: PARQUET-1142
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1142
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-mr
>    Affects Versions: 1.9.0
>            Reporter: Ryan Blue
>            Assignee: Ryan Blue
>            Priority: Major
>             Fix For: 1.10.0
>
>
> Parquet currently leaks the Hadoop API by requiring callers to pass {{Path}} 
> and {{Configuration}} instances, and by using Hadoop codecs. {{InputFile}} 
> and {{SeekableInputStream}} add alternatives to Hadoop classes in some parts 
> of the read path, but this needs to be extended to the write path and to 
> avoid passing options through {{Configuration}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
