It would help to have AvroParquetReader/Writer also provide the ability to work with the new OutputFile class directly.
Also: any suggestions as to when this might be officially released?
On Tue, Feb 13, 2018 at 5:02 PM Ryan Blue <rb...@netflix.com> wrote:
> We're planning a release that will include the new OutputFile class, which
> I think you should be able to use. Is there anything you'd change to make
> this work more easily with Beam?
> On Tue, Feb 13, 2018 at 12:31 PM, Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
>> Hi guys,
>> I'm working on the Apache Beam ParquetIO:
>> In Beam, thanks to FileIO, we support several filesystems (HDFS, S3, ...).
>> While I was able to implement the Read part using AvroParquetReader
>> leveraging Beam FileIO, I'm struggling with the writing part.
>> I have to create a ParquetSink implementing FileIO.Sink. In particular, I have
>> to implement the open(WritableByteChannel channel) method.
>> It's not possible to use AvroParquetWriter here, as it takes a Path as argument
>> (and from the channel, I can only get an OutputStream).
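[As an aside on the channel-to-stream step: the JDK itself can adapt a WritableByteChannel back to a plain OutputStream via java.nio.channels.Channels, so the remaining gap is only on the Parquet side. A minimal, self-contained sketch; the class and method names are illustrative, not Beam or Parquet API:]

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class ChannelToStream {
    /**
     * Adapt a WritableByteChannel (as handed out by a Beam FileIO.Sink)
     * back to the OutputStream that stream-based writers expect,
     * and write some bytes through it.
     */
    static byte[] writeThroughChannel(byte[] data) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // In Beam the channel comes from open(); here we fabricate one.
        WritableByteChannel channel = Channels.newChannel(sink);
        OutputStream out = Channels.newOutputStream(channel);
        out.write(data);
        out.close();
        return sink.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(new String(writeThroughChannel("parquet".getBytes())));
        // prints "parquet"
    }
}
```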
>> As a workaround, I wanted to use ParquetFileWriter directly,
>> providing my own implementation of org.apache.parquet.io.OutputFile.
>> Unfortunately, OutputFile (and the corresponding updated method in
>> ParquetFileWriter) exists on the Parquet master branch, but the API was
>> different in Parquet 1.9.0.
>> So, I have two questions:
>> - do you plan a Parquet 1.9.1 release including org.apache.parquet.io
>> and the updated org.apache.parquet.hadoop.ParquetFileWriter?
>> - using Parquet 1.9.0, do you have any advice on how to use
>> AvroParquetWriter/ParquetFileWriter with an OutputStream (or any object
>> that I can get from a WritableByteChannel)?
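[For the OutputFile workaround discussed above, one possible shape for such an adapter is sketched below. It is written against stub types that mirror the master-branch org.apache.parquet.io interfaces (the real OutputFile also declares createOrOverwrite, supportsBlockSize, and defaultBlockSize, elided here); since 1.9.0 lacks these types, this is an assumption-laden sketch, not the actual Beam ParquetIO implementation:]

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Stub mirroring org.apache.parquet.io.PositionOutputStream (master branch):
// an OutputStream that can report how many bytes it has written.
abstract class PositionOutputStream extends OutputStream {
    public abstract long getPos() throws IOException;
}

// Stub mirroring (part of) org.apache.parquet.io.OutputFile.
interface OutputFile {
    PositionOutputStream create(long blockSizeHint) throws IOException;
}

/** Hypothetical adapter: exposes a plain OutputStream as an OutputFile. */
class StreamOutputFile implements OutputFile {
    private final OutputStream out;

    StreamOutputFile(OutputStream out) { this.out = out; }

    @Override
    public PositionOutputStream create(long blockSizeHint) {
        return new PositionOutputStream() {
            // Parquet needs the byte position to record footer/column offsets.
            private long pos = 0;
            @Override public long getPos() { return pos; }
            @Override public void write(int b) throws IOException {
                out.write(b); pos++;
            }
            @Override public void write(byte[] b, int off, int len) throws IOException {
                out.write(b, off, len); pos += len;
            }
            @Override public void flush() throws IOException { out.flush(); }
            @Override public void close() throws IOException { out.close(); }
        };
    }
}

public class StreamOutputFileDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        PositionOutputStream pos = new StreamOutputFile(sink).create(0);
        pos.write("PAR1".getBytes());  // the Parquet magic bytes, as sample data
        System.out.println(pos.getPos() + " " + sink.toString()); // prints "4 PAR1"
    }
}
```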
>> Thanks !
>> Jean-Baptiste Onofré
>> Talend - http://www.talend.com
> Ryan Blue
> Software Engineer