Sorry, I sent an unfinished message.

Regarding schemas and the ParquetIO source/sink, this was answered in a
previous thread [1]. Currently (without changing ParquetIO) there is no way
to avoid passing the Avro schema. It will probably be replaced with Beam's
schemas in the future [2].

[1]
https://lists.apache.org/thread.html/a466ddeb55e47fd780be3bcd8eec9d6b6eaf1dfd566ae5278b5fb9e8@%3Cuser.beam.apache.org%3E
[2] https://issues.apache.org/jira/browse/BEAM-4812
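
To illustrate, here is a minimal sketch of the current API (Beam Java SDK;
the path and field names are made up). ParquetIO.read() takes the Avro
schema up front, which is why it cannot be omitted:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.parquet.ParquetIO;
import org.apache.beam.sdk.values.PCollection;

public class ParquetReadSketch {
  public static void main(String[] args) {
    // The Avro schema has to be known up front -- this is the part that
    // currently cannot be avoided. The fields here are hypothetical.
    Schema schema = new Schema.Parser().parse(
        "{\"type\": \"record\", \"name\": \"Record\", \"fields\": ["
            + "{\"name\": \"id\", \"type\": \"long\"},"
            + "{\"name\": \"name\", \"type\": \"string\"}]}");

    Pipeline p = Pipeline.create();
    PCollection<GenericRecord> records =
        p.apply(ParquetIO.read(schema).from("/path/to/input-*.parquet"));
    p.run().waitUntilFinish();
  }
}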

Tue, 31 Jul 2018 at 12:43 Łukasz Gajowy <[email protected]> wrote:

> Regarding schemas and the ParquetIO source/sink, this was answered in a
> previous thread:
>
> Currently (without changing ParquetIO) there is no way to avoid passing
> the Avro schema. It will probably be replaced with Beam's schemas in the
> future ()
>
> [1]
> https://lists.apache.org/thread.html/a466ddeb55e47fd780be3bcd8eec9d6b6eaf1dfd566ae5278b5fb9e8@%3Cuser.beam.apache.org%3E
>
>
> Tue, 31 Jul 2018 at 10:19 Akanksha Sharma B <[email protected]>
> wrote:
>
>> Hi,
>>
>>
>> I am hoping to get some hints/pointers from the experts here.
>>
>> I hope the scenario described below is understandable and is a valid
>> use case. Please let me know if I need to explain it better.
>>
>>
>> Regards,
>>
>> Akanksha
>>
>> ------------------------------
>> *From:* Akanksha Sharma B
>> *Sent:* Friday, July 27, 2018 9:44 AM
>> *To:* [email protected]
>> *Subject:* Re: pipeline with parquet and sql
>>
>>
>> Hi,
>>
>>
>> Please consider the following pipeline:
>>
>>
>> The source is a Parquet file with hundreds of columns.
>>
>> The sink is also Parquet. Multiple output Parquet files are generated
>> after applying SQL joins, and the joins to apply differ for each output
>> file. Let's assume we have a SQL query generator or a configuration file
>> with the needed info.
>>
>>
>> Can this be implemented generically, so that there is no need for the
>> schema of the Parquet files involved, nor for any intermediate POJO or
>> Beam schema?
>>
>> That is, the way Spark can handle it: read Parquet into a DataFrame,
>> create a temp view, apply SQL queries to it, and write it back to
>> Parquet.
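>>
>> For comparison, that Spark flow looks roughly like this (Spark Java API;
>> the paths and the query are made up):
>>
>> import org.apache.spark.sql.SparkSession;
>>
>> public class SparkSqlSketch {
>>   public static void main(String[] args) {
>>     SparkSession spark =
>>         SparkSession.builder().appName("sketch").getOrCreate();
>>     // No schema needed here: Parquet files are self-describing, so
>>     // Spark infers the schema on read.
>>     spark.read().parquet("/path/to/input").createOrReplaceTempView("input");
>>     spark.sql("SELECT a.id, b.name FROM input a JOIN input b ON a.id = b.id")
>>         .write().parquet("/path/to/output");
>>   }
>> }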
>>
>> As I understand it, Beam SQL needs a Beam Schema or POJOs, and ParquetIO
>> needs Avro schemas. Ideally we don't want to deal with POJOs or schemas
>> at all. If there is a way to achieve this with Beam, please do help.
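>>
>> For concreteness, the schema-dependent wiring we would like to avoid
>> looks roughly like this (a sketch only; the AvroUtils conversions and
>> SqlTransform are assumed to be available, and all paths and field names
>> are made up):
>>
>> import org.apache.avro.generic.GenericRecord;
>> import org.apache.beam.sdk.Pipeline;
>> import org.apache.beam.sdk.extensions.sql.SqlTransform;
>> import org.apache.beam.sdk.io.parquet.ParquetIO;
>> import org.apache.beam.sdk.schemas.Schema;
>> import org.apache.beam.sdk.schemas.utils.AvroUtils;
>> import org.apache.beam.sdk.transforms.MapElements;
>> import org.apache.beam.sdk.values.PCollection;
>> import org.apache.beam.sdk.values.Row;
>> import org.apache.beam.sdk.values.TypeDescriptor;
>>
>> public class BeamSqlSketch {
>>   public static void main(String[] args) {
>>     // The Avro schema must still be supplied explicitly (e.g. read from
>>     // the config file mentioned above) -- this is the unavoidable part.
>>     org.apache.avro.Schema avroSchema = new org.apache.avro.Schema.Parser()
>>         .parse("{\"type\": \"record\", \"name\": \"Record\", \"fields\": ["
>>             + "{\"name\": \"id\", \"type\": \"long\"},"
>>             + "{\"name\": \"name\", \"type\": \"string\"}]}");
>>     Schema beamSchema = AvroUtils.toBeamSchema(avroSchema);
>>
>>     Pipeline p = Pipeline.create();
>>     PCollection<Row> rows = p
>>         .apply(ParquetIO.read(avroSchema).from("/path/to/input-*.parquet"))
>>         // Convert GenericRecords to Rows so Beam SQL can consume them.
>>         .apply(MapElements.into(TypeDescriptor.of(Row.class))
>>             .via((GenericRecord r) -> AvroUtils.toBeamRowStrict(r, beamSchema)))
>>         .setRowSchema(beamSchema);
>>
>>     // Each output file would get its own generated query; writing back
>>     // to Parquet again needs an Avro schema for the sink.
>>     rows.apply(SqlTransform.query(
>>         "SELECT id, name FROM PCOLLECTION WHERE id > 0"));
>>     p.run().waitUntilFinish();
>>   }
>> }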
>>
>> Regards,
>> Akanksha
>>
