[
https://issues.apache.org/jira/browse/BEAM-92?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15949420#comment-15949420
]
Daniel Halperin commented on BEAM-92:
-------------------------------------
Progress:
* Data-dependent file-based sink is in PR
* Data-dependent BigQuery sink is in PR
Experience over the past year suggests that the Write/Sink transform is not
general enough; instead we'll need individual implementations that follow
similar design patterns in spirit but are hard to unify at a low level.
Self-assigning to clean up.
> Data-dependent sinks
> --------------------
>
> Key: BEAM-92
> URL: https://issues.apache.org/jira/browse/BEAM-92
> Project: Beam
> Issue Type: New Feature
> Components: sdk-java-core
> Reporter: Eugene Kirpichov
> Assignee: Daniel Halperin
>
> Current sink API writes all data to a single destination, but there are many
> use cases where different pieces of data need to be routed to different
> destinations where the set of destinations is data-dependent (so can't be
> implemented with a Partition transform).
> One internally discussed proposal was an API of the form:
> {code}
> PCollection<Void> PCollection<T>.apply(
>     Write.using(DoFn<T, SinkT> where,
>                 MapFn<SinkT, WriteOperation<WriteResultT, T>> how))
> {code}
> so an item T gets written to a destination (or multiple destinations)
> determined by "where", and the writing strategy is determined by "how", which
> produces a WriteOperation (as in the current API: global init / write /
> global finalize hooks) for any given destination.
> This API also has other benefits:
> * allows the SinkT to be computed dynamically (in "where"), rather than
> specified at pipeline construction time
> * removes the necessity for a Sink class entirely
> * is sequenceable w.r.t. downstream transforms (you can attach transforms to
> the returned PCollection<Void>, whereas the current Write.to() returns a PDone)
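The "where"/"how" split proposed above can be sketched outside of Beam itself. The following is a minimal, hypothetical Java model (all names are illustrative, not the actual Beam API): "where" maps each element to a destination computed from the data, and "how" supplies a per-destination WriteOperation with init/write/finalize hooks.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class DynamicSink {
    // "how": a per-destination write operation with init/write/finalize hooks,
    // mirroring the WriteOperation hooks described in the proposal.
    interface WriteOperation<T> {
        void initialize();
        void write(T element);
        String finalizeWrites(); // stand-in for a WriteResultT
    }

    // Toy in-memory operation; a real one would create files or BigQuery tables.
    static class InMemoryOperation<T> implements WriteOperation<T> {
        final String destination;
        final List<T> written = new ArrayList<>();
        InMemoryOperation(String destination) { this.destination = destination; }
        public void initialize() { /* e.g. create the destination */ }
        public void write(T element) { written.add(element); }
        public String finalizeWrites() { return destination + ":" + written.size(); }
    }

    // Routes each element with "where", lazily creates one operation per
    // destination with "how", and returns each destination's finalize result.
    static <T> Map<String, String> writeDynamic(
            Iterable<T> input,
            Function<T, String> where,
            Function<String, WriteOperation<T>> how) {
        Map<String, WriteOperation<T>> ops = new HashMap<>();
        for (T element : input) {
            String dest = where.apply(element);
            WriteOperation<T> op = ops.computeIfAbsent(dest, d -> {
                WriteOperation<T> created = how.apply(d);
                created.initialize();
                return created;
            });
            op.write(element);
        }
        Map<String, String> results = new HashMap<>();
        ops.forEach((d, op) -> results.put(d, op.finalizeWrites()));
        return results;
    }

    public static void main(String[] args) {
        List<String> events = Arrays.asList("user:a", "user:b", "audit:x");
        Map<String, String> results = writeDynamic(
                events,
                e -> e.split(":")[0],       // "where": destination from the data
                InMemoryOperation::new);    // "how": one operation per destination
        System.out.println(results);
    }
}
```

Note how the set of destinations ("user", "audit") is discovered from the data at write time rather than fixed at pipeline construction, which is exactly what a Partition transform cannot express.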
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)