Has anyone been able to run the code from "The Future of Real-Time in Spark"
(http://rxin.github.io/talks/2016-02-18_spark_summit_streaming.pdf), Slide 24,
"Continuous Aggregation"?

Specifically, the line: stream("jdbc:mysql//...").

Using the Spark 2.0 preview build, I get the following error when writing to MySQL:
Exception in thread "main" java.lang.UnsupportedOperationException: Data
source jdbc does not support streamed writing
        at
org.apache.spark.sql.execution.datasources.DataSource.createSink(DataSource.scala:201)

My code:

    val logsDF = sparkSession.read.format("json")
      .stream("file:///xxx/xxx/spark-2.0.0-preview-bin-hadoop2.4/examples/src/main/resources/people.json")
    val logsDS = logsDF.as[Person]

    logsDS.groupBy("name").sum("age")
      .write.format("jdbc")
      .option("checkpointLocation", "/xxx/xxx/temp")
      .startStream("jdbc:mysql://localhost/test")

Looking at the Spark DataSource.scala source code, it looks like only
ParquetFileFormat is supported. Am I missing something? Which data sources
support streamed writes? Is the example code referring to features that are
not yet in the 2.0 preview?
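
For anyone hitting the same wall, here is the untested workaround sketch I am
considering. It uses the writeStream/ForeachWriter API, which I understand
replaced write/startStream after the preview, so it may not compile against
the preview build. The MySQL table name_age_sum and the credentials are
placeholders I made up for illustration:

    import java.sql.{Connection, DriverManager, PreparedStatement}
    import org.apache.spark.sql.{ForeachWriter, Row}

    // Hypothetical writer: one connection per partition, one upsert per row.
    val jdbcWriter = new ForeachWriter[Row] {
      var conn: Connection = _
      var stmt: PreparedStatement = _

      def open(partitionId: Long, version: Long): Boolean = {
        conn = DriverManager.getConnection(
          "jdbc:mysql://localhost/test", "user", "password")  // placeholders
        stmt = conn.prepareStatement(
          "REPLACE INTO name_age_sum (name, sum_age) VALUES (?, ?)")
        true
      }

      def process(row: Row): Unit = {
        stmt.setString(1, row.getString(0))
        stmt.setLong(2, row.getLong(1))
        stmt.executeUpdate()
      }

      def close(errorOrNull: Throwable): Unit = {
        if (conn != null) conn.close()
      }
    }

    val query = logsDS.groupBy("name").sum("age")
      .writeStream
      .outputMode("complete")  // streaming aggregations need complete mode
      .option("checkpointLocation", "/xxx/xxx/temp")
      .foreach(jdbcWriter)
      .start()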

Thanks in advance for your help.

Chang



