[
https://issues.apache.org/jira/browse/BEAM-2879?focusedWorklogId=323921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323921
]
ASF GitHub Bot logged work on BEAM-2879:
----------------------------------------
Author: ASF GitHub Bot
Created on: 05/Oct/19 13:13
Start Date: 05/Oct/19 13:13
Worklog Time Spent: 10m
Work Description: steveniemitz commented on issue #9665: [BEAM-2879]
Support writing data to BigQuery via avro
URL: https://github.com/apache/beam/pull/9665#issuecomment-538648713
Thanks for the thoughts! My comments are inline.
> This is just a brain dump of what I'm thinking...
>
> I wonder whether we need the `AvroWriteRequest`, and the Avro schema. I
guess we do, as the `InputElement` (whatever it is) + the Avro schema are all
one needs to build the `GenericRecord`. Having the `AvroWriteRequest` may help
make the formatting function as concise as possible....
I really went back and forth on this a few times. We could use
`SerializableBiFunction` here, but if in the future we ever wanted to add
another parameter, it'd be a breaking change. This way we can just add a field
to the class. It follows the same pattern as the read side, which takes a
`SchemaAndRecord` as input. You do need both the Avro schema and the element,
though, in order to support more advanced cases with `DynamicDestinations`,
etc. Plus, Avro schemas themselves aren't easily serializable (until Avro 1.9),
so users can't simply create a closure over them.
I do hate the name, though; if you can think of anything better I'd love to
rename it!
> As for supporting Beam schemas + avro files, one could have a
`useBeamSchemaForAvroFiles()`... though it's a little strange....
>
> Another option is to have `useBeamSchema`, and a pre-coded avro formatting
function called something like ... `BigQueryIOUtils.beamRowToAvroRecord()`.
Though this is a little awkward too.
Yeah, I struggled with this as well. The only thing stopping us from having
a version that supports Beam schemas is the interface.
`useBeamSchemaForAvroFiles` is a pretty reasonable name.
> Overall, I like using `AvroWriteRequest` as input for the avro format
function... and for supporting Beam schemas, it may be that
`useBeamSchemaForAvroFiles` (or some better name) is the more reasonable
option.
> WDYT?
I'd be up for adding that in a follow-up PR. I also have some ideas around
`writeGenericRecords()` that I want to play around with (that would also use
Beam schemas, similar to `AvroIO.writeGenericRecords()`).
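To make the two directions concrete, here's a minimal usage sketch, not a definitive implementation: `withAvroFormatFunction` is the entry point proposed in this PR, the table spec and `CREATE_NEVER` disposition are placeholders, `MyEvent` reuses the illustrative type from the sketch above, and `useBeamSchemaForAvroFiles()` (mentioned only in a comment) is just the candidate name from this thread, not an existing method.
```java
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.values.PCollection;

class AvroWriteUsageSketch {
  // Writes MyEvent records to BigQuery via Avro intermediary files, using the
  // avro format function proposed in this PR. CREATE_NEVER avoids needing a
  // table schema here; "project:dataset.events" is a placeholder table spec.
  static void writeEvents(PCollection<MyEvent> events) {
    events.apply(
        "WriteToBigQuery",
        BigQueryIO.<MyEvent>write()
            .to("project:dataset.events")
            .withAvroFormatFunction(
                request -> {
                  GenericRecord record = new GenericData.Record(request.getSchema());
                  record.put("name", request.getElement().name);
                  return record;
                })
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER));

    // The Beam-schema follow-up discussed above would drop the format function
    // entirely, e.g. something like .useBeamSchemaForAvroFiles() (name TBD).
  }
}
```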
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 323921)
Time Spent: 40m (was: 0.5h)
> Implement and use an Avro coder rather than the JSON one for intermediary
> files to be loaded in BigQuery
> --------------------------------------------------------------------------------------------------------
>
> Key: BEAM-2879
> URL: https://issues.apache.org/jira/browse/BEAM-2879
> Project: Beam
> Issue Type: Improvement
> Components: io-java-gcp
> Reporter: Black Phoenix
> Assignee: Steve Niemitz
> Priority: Minor
> Labels: starter
> Time Spent: 40m
> Remaining Estimate: 0h
>
> Before being loaded in BigQuery, temporary files are created and encoded in
> JSON, which is a costly solution compared to an Avro alternative.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)