[jira] [Commented] (BEAM-17) Add support for new Beam Source API

2016-09-21 Thread Amit Sela (JIRA)

[ https://issues.apache.org/jira/browse/BEAM-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15509658#comment-15509658 ]

Amit Sela commented on BEAM-17:
---

Same for streaming, with UnboundedSource, but the challenge there is greater 
since a "checkpointing" mechanism needs to be available on the workers.
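The checkpointing idea can be sketched with toy types (OffsetMark and ToyUnboundedReader are illustrative stand-ins, not Beam's actual UnboundedSource/CheckpointMark API): the reader exposes a mark describing how far it has read, the worker persists that mark, and a new reader instance resumes from it after a failure.

```java
import java.util.ArrayList;
import java.util.List;

// Toy checkpoint mark: just an offset into the stream. Illustrative only.
class OffsetMark {
    final long offset; // number of elements already consumed
    OffsetMark(long offset) { this.offset = offset; }
}

// Toy unbounded reader that can start fresh or resume from a mark.
class ToyUnboundedReader {
    private final List<String> stream; // stand-in for an unbounded stream
    private long position;
    private String current;

    ToyUnboundedReader(List<String> stream, OffsetMark mark) {
        this.stream = stream;
        this.position = (mark == null) ? 0 : mark.offset;
    }

    // Move to the next element; false means nothing is available yet.
    boolean advance() {
        if (position >= stream.size()) return false;
        current = stream.get((int) position);
        position++;
        return true;
    }

    String getCurrent() { return current; }

    // The worker would persist this before acknowledging consumed elements.
    OffsetMark getCheckpointMark() { return new OffsetMark(position); }
}

public class CheckpointSketch {
    public static void main(String[] args) {
        List<String> data = List.of("a", "b", "c", "d");

        // Consume two elements, then checkpoint and simulate a failure.
        ToyUnboundedReader reader = new ToyUnboundedReader(data, null);
        reader.advance(); // "a"
        reader.advance(); // "b"
        OffsetMark saved = reader.getCheckpointMark();

        // A new reader (possibly on another worker) resumes from the mark.
        ToyUnboundedReader resumed = new ToyUnboundedReader(data, saved);
        List<String> rest = new ArrayList<>();
        while (resumed.advance()) rest.add(resumed.getCurrent());
        System.out.println(rest); // [c, d]
    }
}
```

The hard part on Spark is exactly the persistence step: the mark has to survive worker failure, which is why this needs runner support rather than a pure translation.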

> Add support for new Beam Source API
> ---
>
> Key: BEAM-17
> URL: https://issues.apache.org/jira/browse/BEAM-17
> Project: Beam
>  Issue Type: Improvement
>  Components: runner-spark
>Reporter: Amit Sela
>Assignee: Amit Sela
>
> The API is discussed in 
> https://cloud.google.com/dataflow/model/sources-and-sinks#creating-sources
> To implement this, we need to add support for 
> com.google.cloud.dataflow.sdk.io.Read in TransformTranslator. This can be 
> done by creating a new SourceInputFormat class that translates from a DF 
> Source to a Hadoop InputFormat. The two concepts are pretty well aligned 
> since they both have the concept of splits and readers.
> Note that when there's a native HadoopSource in DF, it will need 
> special-casing in the code for Read since we'll be able to use the underlying 
> InputFormat directly.
> This could be tested using XmlSource from the SDK.
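A minimal sketch of that translation, using toy stand-ins (ToySource, ToyInputFormat, RangeSource, etc. are illustrative types, not the real Beam or Hadoop APIs): each shard produced by the source's split step becomes an input split, and the shard's own reader backs the record reader.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy stand-ins for Source and InputFormat; names and signatures
// are illustrative only, not the real Beam/Hadoop APIs.
interface ToySource<T> {
    List<ToySource<T>> split(int desiredSplits); // cf. splitting into bundles
    Iterator<T> createReader();                  // cf. a Source reader
}

interface ToyInputSplit {}

interface ToyInputFormat<T> {
    List<ToyInputSplit> getSplits(int numSplits);
    Iterator<T> createRecordReader(ToyInputSplit s);
}

// The proposed SourceInputFormat: wrap each source shard in an input split
// and delegate record reading to the shard's own reader.
class SourceInputFormat<T> implements ToyInputFormat<T> {
    private final ToySource<T> source;

    SourceInputFormat(ToySource<T> source) { this.source = source; }

    static final class SourceSplit<T> implements ToyInputSplit {
        final ToySource<T> shard;
        SourceSplit(ToySource<T> shard) { this.shard = shard; }
    }

    @Override
    public List<ToyInputSplit> getSplits(int numSplits) {
        List<ToyInputSplit> splits = new ArrayList<>();
        for (ToySource<T> shard : source.split(numSplits)) {
            splits.add(new SourceSplit<>(shard));
        }
        return splits;
    }

    @Override
    @SuppressWarnings("unchecked")
    public Iterator<T> createRecordReader(ToyInputSplit s) {
        return ((SourceSplit<T>) s).shard.createReader();
    }
}

// A trivial bounded source over an integer range, to exercise the adapter.
class RangeSource implements ToySource<Integer> {
    private final int start, end;

    RangeSource(int start, int end) { this.start = start; this.end = end; }

    @Override
    public List<ToySource<Integer>> split(int desiredSplits) {
        List<ToySource<Integer>> shards = new ArrayList<>();
        int size = Math.max(1, (end - start + desiredSplits - 1) / desiredSplits);
        for (int s = start; s < end; s += size) {
            shards.add(new RangeSource(s, Math.min(s + size, end)));
        }
        return shards;
    }

    @Override
    public Iterator<Integer> createReader() {
        List<Integer> values = new ArrayList<>();
        for (int i = start; i < end; i++) values.add(i);
        return values.iterator();
    }
}

public class SourceInputFormatSketch {
    public static void main(String[] args) {
        ToyInputFormat<Integer> format =
            new SourceInputFormat<>(new RangeSource(0, 10));
        List<Integer> all = new ArrayList<>();
        for (ToyInputSplit split : format.getSplits(3)) {
            format.createRecordReader(split).forEachRemaining(all::add);
        }
        System.out.println(all); // 0..9, read split by split
    }
}
```

In a real implementation the input split would carry the serialized Source and the record reader would drive the reader's start/advance/getCurrent loop, but the shape of the mapping is the same.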



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (BEAM-17) Add support for new Beam Source API

2016-06-24 Thread Amit Sela (JIRA)

[ https://issues.apache.org/jira/browse/BEAM-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348444#comment-15348444 ]

Amit Sela commented on BEAM-17:
---

Implementing for the next-gen Spark runner (Spark 2.x).



