1) We are consuming from Kafka using Structured Streaming and writing the
processed data set to S3.
We also want to write the processed data to Kafka going forward. Is it
possible to do that from the same streaming query? (Spark version 2.1.1)
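For context, here is a minimal sketch of the kind of query described above. All broker addresses, topic names, paths, and the processing step are placeholders, not our actual setup:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("kafka-to-s3").getOrCreate()

// Read from Kafka as a streaming source (hypothetical broker/topic names)
val input = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "input-topic")
  .load()

// Placeholder for the actual processing logic
val processed = input.selectExpr("CAST(value AS STRING) AS value")

// Write the processed stream out to S3 (hypothetical bucket/paths)
val query = processed.writeStream
  .format("parquet")
  .option("path", "s3a://my-bucket/output")
  .option("checkpointLocation", "s3a://my-bucket/checkpoints")
  .start()
```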


2) In the logs, I see the streaming query progress output, and I have a
sample durationMs JSON from the log (below). Can someone please clarify
the difference between *addBatch* and *getBatch*?

3) triggerExecution - is it the time taken to both process the fetched data
and write it to the sink?




"durationMs" : {
    "addBatch" : 2263426,
    "getBatch" : 12,
    "getOffset" : 273,
    "queryPlanning" : 13,
    "triggerExecution" : 2264288,
    "walCommit" : 552
  },
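As a rough sanity check on how these fields relate (assuming the values in the JSON above), the individual phase timings add up to almost exactly triggerExecution, with addBatch dominating. A quick illustration of that arithmetic:

```python
# Phase timings copied from the durationMs JSON above (all in milliseconds)
durations = {
    "addBatch": 2263426,
    "getBatch": 12,
    "getOffset": 273,
    "queryPlanning": 13,
    "triggerExecution": 2264288,
    "walCommit": 552,
}

# Sum the component phases, excluding the overall triggerExecution total
components = sum(v for k, v in durations.items() if k != "triggerExecution")
print(components)                                  # total of the phases
print(durations["triggerExecution"] - components)  # small unaccounted overhead
```

In this sample the components sum to 2264276 ms against a triggerExecution of 2264288 ms, so nearly all of the trigger time is spent inside addBatch.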

regards
aravias



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Structured-Streaming-multiple-sinks-tp29056.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
