[ https://issues.apache.org/jira/browse/SPARK-19768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889472#comment-15889472 ]
Amit Baghel edited comment on SPARK-19768 at 3/1/17 4:14 AM:
-------------------------------------------------------------
Thanks [~zsxwing] for the clarification. The Structured Streaming documentation is
missing this piece of information, and the error thrown when a console sink is
used with a checkpoint should be more meaningful. I have one more question: does
the file sink with "parquet" format and a checkpoint work only for non-aggregate
queries? I tried both aggregate and non-aggregate queries, and I get an exception
for the aggregate query.
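For reference, here is a minimal sketch of the file-sink variant I am asking about, assuming the same streaming Dataset "lines" as in the example below; the output and checkpoint paths are placeholders:
{code}
// Sketch only: non-aggregate query written to a parquet file sink with a
// checkpoint. The file sink supports only the "append" output mode.
Dataset<Row> words = lines.flatMap(new FlatMapFunction<String, String>() {
  @Override
  public Iterator<String> call(String x) {
    return Arrays.asList(x.split(" ")).iterator();
  }
}, Encoders.STRING()).select("value");

StreamingQuery query = words.writeStream()
    .outputMode("append")
    .format("parquet")
    .option("path", "/tmp/parquet-output")            // placeholder output dir
    .option("checkpointLocation", "/tmp/checkpoint-data")
    .start();
{code}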
> Error for both aggregate and non-aggregate queries in Structured Streaming - "This query does not support recovering from checkpoint location"
> -------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-19768
> URL: https://issues.apache.org/jira/browse/SPARK-19768
> Project: Spark
> Issue Type: Question
> Components: Structured Streaming
> Affects Versions: 2.1.0
> Reporter: Amit Baghel
>
> I am running the JavaStructuredKafkaWordCount.java example with a
> checkpointLocation. The output mode is "complete". Below is the relevant code.
> {code}
> // Generate running word count
> Dataset<Row> wordCounts = lines.flatMap(new FlatMapFunction<String, String>() {
>   @Override
>   public Iterator<String> call(String x) {
>     return Arrays.asList(x.split(" ")).iterator();
>   }
> }, Encoders.STRING()).groupBy("value").count();
>
> // Start running the query that prints the running counts to the console
> StreamingQuery query = wordCounts.writeStream()
>     .outputMode("complete")
>     .format("console")
>     .option("checkpointLocation", "/tmp/checkpoint-data")
>     .start();
> {code}
> This example runs successfully and writes data to the checkpoint directory.
> When I re-run the program, it throws the exception below:
> {code}
> Exception in thread "main" org.apache.spark.sql.AnalysisException: This query does not support recovering from checkpoint location. Delete /tmp/checkpoint-data/offsets to start over.;
>     at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:219)
>     at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:269)
>     at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:262)
>     at com.spark.JavaStructuredKafkaWordCount.main(JavaStructuredKafkaWordCount.java:85)
> {code}
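> The error message itself suggests deleting /tmp/checkpoint-data/offsets to
> start over. A minimal sketch of that cleanup step before restarting the query
> (the helper class name is illustrative, not part of the example):
> {code}
> import java.io.IOException;
> import java.nio.file.*;
> import java.util.Comparator;
> import java.util.stream.Stream;
>
> // Removes the offsets metadata so the query starts from scratch.
> public class ClearCheckpointOffsets {
>   public static void main(String[] args) throws IOException {
>     Path offsets = Paths.get("/tmp/checkpoint-data/offsets");
>     if (Files.exists(offsets)) {
>       try (Stream<Path> walk = Files.walk(offsets)) {
>         walk.sorted(Comparator.reverseOrder())   // children before parents
>             .forEach(p -> p.toFile().delete());  // best-effort delete
>       }
>     }
>   }
> }
> {code}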
> Then I modified JavaStructuredKafkaWordCount.java to use a non-aggregate query
> with output mode "append". Please see the code below.
> {code}
> // no aggregations
> Dataset<Row> wordCounts = lines.flatMap(new FlatMapFunction<String, String>() {
>   @Override
>   public Iterator<String> call(String x) {
>     return Arrays.asList(x.split(" ")).iterator();
>   }
> }, Encoders.STRING()).select("value");
>
> // append mode with console
> StreamingQuery query = wordCounts.writeStream()
>     .outputMode("append")
>     .format("console")
>     .option("checkpointLocation", "/tmp/checkpoint-data")
>     .start();
> {code}
> This modified code also runs successfully and writes data to the checkpoint
> directory. When I re-run the program, it throws the same exception:
> {code}
> Exception in thread "main" org.apache.spark.sql.AnalysisException: This query does not support recovering from checkpoint location. Delete /tmp/checkpoint-data/offsets to start over.;
>     at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:219)
>     at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:269)
>     at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:262)
>     at com.spark.JavaStructuredKafkaWordCount.main(JavaStructuredKafkaWordCount.java:85)
> {code}