On Wed, Apr 27, 2016 at 11:12 PM, Jean-Baptiste Onofré <[email protected]> 
wrote:
>
> generally speaking, we have to check that all runners work fine with the 
> provided IO. I don't think it's a good idea that the runners themselves 
> implement any IO: they should use "out of the box" IO.

In the long run, a big yes, and I'd like to help make it possible!
However, there is still a gap between what Beam and its Runners
provide and what users want to do. For the time being, I think the
solution we have is fine. It gives users the option to try out Beam
with sources and sinks that they expect to be available in streaming
systems.


@Kanisha: I've created a class here in my branch that demonstrates how
to Read/Write data to Kafka using String or Avro serialization. There
is also a small bug fix included for generic types. I'll try to
contribute this back to the Beam repository as soon as possible. In
the meantime, please see if you can use this branch for your
experiments.

https://github.com/mxm/incubator-beam/tree/kafkaSink

In particular, please see the KafkaIOExamples:
https://github.com/mxm/incubator-beam/blob/kafkaSink/runners/flink/examples/src/main/java/org/apache/beam/runners/flink/examples/streaming/KafkaIOExamples.java

You can run the main methods in the KafkaString and KafkaAvro classes
to execute the examples, i.e.:

KafkaString: ReadStringFromKafka, WriteStringToKafka

KafkaAvro: ReadAvroFromKafka, WriteAvroToKafka
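
For orientation, here is a rough sketch of what the String read/write
round trip looks like as a Beam pipeline. It uses the generic KafkaIO
connector API purely for illustration, with placeholder broker and
topic names; the classes in the branch above are Flink-runner-specific,
so the exact names and setup there will differ:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaStringRoundTrip {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Read key/value String records from an input topic
    // (broker address and topic names are placeholders).
    p.apply("ReadStringsFromKafka",
            KafkaIO.<String, String>read()
                .withBootstrapServers("localhost:9092")
                .withTopic("beam-input")
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withoutMetadata())  // yields PCollection<KV<String, String>>
        // Write the same key/value pairs back out to another topic.
        .apply("WriteStringsToKafka",
            KafkaIO.<String, String>write()
                .withBootstrapServers("localhost:9092")
                .withTopic("beam-output")
                .withKeySerializer(StringSerializer.class)
                .withValueSerializer(StringSerializer.class));

    p.run();
  }
}

The Avro variants follow the same shape, just with Avro
(de)serialization in place of the String classes.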

Again, I tested that these work with Kafka. Hope that helps to get
everything running.

Cheers,
Max

On Thu, Apr 28, 2016 at 9:35 AM, Raghu Angadi <[email protected]> wrote:
>
> On Wed, Apr 27, 2016 at 11:12 PM, Jean-Baptiste Onofré <[email protected]>
> wrote:
>>
>> generally speaking, we have to check that all runners work fine with the
>> provided IO. I don't think it's a good idea that the runners themselves
>> implement any IO: they should use "out of the box" IO.
>
>
> +1
