Hi All,

Does Spark Structured Streaming have a JDBC sink, or do I need to use
ForeachWriter? I see the following code at this link
<https://databricks.com/blog/2016/07/28/structured-streaming-in-apache-spark.html>
and I can see that the database name can be passed in the connection string;
however, I wonder how to pass a table name?

inputDF.groupBy($"action", window($"time", "1 hour")).count()
       .writeStream.format("jdbc")
       .save("jdbc:mysql://…")
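For context, a sketch of the ForeachWriter route (not from the blog post; the URL, table name, and row layout below are hypothetical placeholders): Structured Streaming has no built-in JDBC sink, so the usual workaround is a custom ForeachWriter that opens a JDBC connection per partition, with the table name supplied as an ordinary constructor argument rather than in the URL.

```scala
import java.sql.{Connection, DriverManager, PreparedStatement}
import org.apache.spark.sql.{ForeachWriter, Row}

// Hypothetical sink: writes each row of a streaming query to a JDBC table.
// The table name is just a constructor parameter, separate from the URL.
class JdbcSink(url: String, table: String) extends ForeachWriter[Row] {
  var conn: Connection = _
  var stmt: PreparedStatement = _

  // Called once per partition per epoch; open the connection here.
  override def open(partitionId: Long, version: Long): Boolean = {
    conn = DriverManager.getConnection(url)
    stmt = conn.prepareStatement(s"INSERT INTO $table VALUES (?, ?, ?)")
    true
  }

  // Called for every row; assumes (action: String, window: String, count: Long).
  override def process(row: Row): Unit = {
    stmt.setString(1, row.getString(0))
    stmt.setString(2, row.get(1).toString)
    stmt.setLong(3, row.getLong(2))
    stmt.executeUpdate()
  }

  override def close(errorOrNull: Throwable): Unit = {
    if (conn != null) conn.close()
  }
}

// Usage sketch, reusing the aggregation above:
// inputDF.groupBy($"action", window($"time", "1 hour")).count()
//   .writeStream
//   .foreach(new JdbcSink("jdbc:mysql://host:3306/mydb", "hourly_counts"))
//   .start()
```

Batching inserts or reusing pooled connections would matter for throughput in practice; this only illustrates where the table name goes.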


Thanks,
Kant
