[ 
https://issues.apache.org/jira/browse/BEAM-11460?focusedWorklogId=528351&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-528351
 ]

ASF GitHub Bot logged work on BEAM-11460:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 25/Dec/20 09:09
            Start Date: 25/Dec/20 09:09
    Worklog Time Spent: 10m 
      Work Description: iemejia merged pull request #13616:
URL: https://github.com/apache/beam/pull/13616

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 528351)
    Time Spent: 2h 10m  (was: 2h)

> Support reading Parquet files with unknown schema
> -------------------------------------------------
>
>                 Key: BEAM-11460
>                 URL: https://issues.apache.org/jira/browse/BEAM-11460
>             Project: Beam
>          Issue Type: New Feature
>          Components: io-java-parquet
>            Reporter: Anant Damle
>            Priority: P1
>              Labels: Parquet
>   Original Estimate: 336h
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Data engineers often face situations where the schema of a Parquet file is unknown
> at the time the pipeline is written, or where different files carry different
> schemas. Reading Parquet files with ParquetIO requires providing an Avro
> (equivalent) schema, and in many cases it is not possible to know the schema of
> the Parquet files in advance.
> On the other hand,
> [AvroIO|https://beam.apache.org/releases/javadoc/2.26.0/org/apache/beam/sdk/io/AvroIO.html]
> supports reading files with an unknown schema by providing a parse function:
> {{*#parseGenericRecords(SerializableFunction<GenericRecord,T>)*}}.
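> For comparison, a minimal sketch of that AvroIO parse path (the String output
> type and the GCS file pattern below are assumptions for illustration only):
> {code}
> import org.apache.avro.generic.GenericRecord;
> import org.apache.beam.sdk.Pipeline;
> import org.apache.beam.sdk.io.AvroIO;
> import org.apache.beam.sdk.transforms.SerializableFunction;
> import org.apache.beam.sdk.values.PCollection;
>
> Pipeline p = Pipeline.create();
>
> // Each GenericRecord carries its own schema, so nothing has to be supplied up
> // front; the schema is available via record.getSchema() at parse time.
> PCollection<String> parsed =
>     p.apply(
>         AvroIO.parseGenericRecords(
>                 new SerializableFunction<GenericRecord, String>() {
>                   @Override
>                   public String apply(GenericRecord record) {
>                     // Render the record together with the name of its embedded schema.
>                     return record.getSchema().getName() + ": " + record;
>                   }
>                 })
>             .from("gs://my-bucket/records-*.avro")); // hypothetical file pattern
>
> p.run().waitUntilFinish();
> {code}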
> Supporting this functionality in ParquetIO is simple and requires minimal 
> changes to the ParquetIO surface.
> {code}
> Pipeline p = ...;
> PCollection<String> filepatterns = p.apply(...);
> PCollection<Foo> records =
>     filepatterns
>         .apply(FileIO.matchAll())
>         .apply(FileIO.readMatches())
>         .apply(ParquetIO.parseGenericRecords(
>             new SerializableFunction<GenericRecord, Foo>() {
>               public Foo apply(GenericRecord record) {
>                 // If needed, access the schema of the record using record.getSchema()
>                 return ...;
>               }
>             }));
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
