Hi!

Is this possible in Spark 2.1.1?

Sent from my iPhone

> On May 19, 2017, at 5:55 AM, Patrick McGloin <mcgloin.patr...@gmail.com> 
> wrote:
> 
> # Write key-value data from a DataFrame to a Kafka topic specified in an option
> query = df \
>   .selectExpr("CAST(userId AS STRING) AS key", "to_json(struct(*)) AS value") \
>   .writeStream \
>   .format("kafka") \
>   .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
>   .option("topic", "topic1") \
>   .option("checkpointLocation", "/path/to/HDFS/dir") \
>   .start()
> Described here:
> https://databricks.com/blog/2017/04/26/processing-data-in-apache-kafka-with-structured-streaming-in-apache-spark-2-2.html
> 
> 
>> On 19 May 2017 at 10:45, <kanth...@gmail.com> wrote:
>> Is there a Kafka sink for Spark Structured Streaming?
>> 
>> Sent from my iPhone
> 

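For reference, a minimal self-contained version of the snippet quoted above, assuming Spark 2.2+ (the version the linked blog post targets, where the Kafka sink for Structured Streaming is available) and the spark-sql-kafka-0-10 connector on the classpath. The application name, input topic, and bootstrap servers below are placeholders:

# Sketch only: submit with the Kafka SQL connector, e.g.
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0 kafka_sink_example.py

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-sink-example").getOrCreate()

# Any streaming DataFrame with a userId column would do; here a Kafka source
# is read back as strings purely so the example is runnable end to end.
df = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
    .option("subscribe", "input-topic") \
    .load() \
    .selectExpr("CAST(key AS STRING) AS userId", "CAST(value AS STRING) AS value")

# The Kafka sink expects 'key' and 'value' columns (string or binary).
query = df \
    .selectExpr("CAST(userId AS STRING) AS key", "to_json(struct(*)) AS value") \
    .writeStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
    .option("topic", "topic1") \
    .option("checkpointLocation", "/path/to/HDFS/dir") \
    .start()

query.awaitTermination()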