Thanks Ted.

I didn't try it, but I think SaveMode and OutputMode are different things.
Currently the Spark code contains two output modes, Append and Update. Append
is the default, but it looks like there is no way to change it to Update.
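
To illustrate the difference (just a sketch; the batchRecords name, the paths
and the Parquet sink are made up for illustration, and the same imports as in
the snippet below are assumed): SaveMode only applies to batch writes and says
what to do when the target already exists, while the streaming path goes
through startStream() and is driven by an OutputMode, with no public knob that
maps to Update:

    // Batch write: SaveMode (Append/Overwrite/ErrorIfExists/Ignore) controls
    // what happens when the output location already exists.
    batchRecords.groupBy("name").count()
      .write
      .mode(SaveMode.Overwrite)
      .parquet("file:///tmp/batch-counts")

    // Streaming write: startStream() runs the query under an OutputMode.
    // With the default Append mode, the aggregation is rejected with the
    // AnalysisException quoted below.
    records.groupBy("name").count()
      .write
      .trigger(ProcessingTime("30 seconds"))
      .option("checkpointLocation", "file:///tmp/agg-checkpoint")
      .startStream("file:///tmp/agg-result")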

Take a look at DataFrameWriter#startStream, which hands the query off to ContinuousQueryManager#startQuery.
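
If I am reading the current snapshot right, the hand-off looks roughly like
this (a heavily simplified sketch from memory, not the exact source; the local
names are mine):

    // DataFrameWriter#startStream, simplified: the output mode that reaches
    // ContinuousQueryManager#startQuery is always Append, and the writer
    // exposes no outputMode(...) setter, so the Update mode suggested by the
    // exception cannot actually be selected from the public API yet.
    def startStream(path: String): ContinuousQuery = {
      // ... resolve the sink, query name and checkpoint location ...
      continuousQueryManager.startQuery(
        queryName, checkpointLocation, df, sink,
        Append)   // effectively hard-coded; no way to pass Update here
    }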

Thanks.

At 2016-05-18 12:10:11, "Ted Yu" <yuzhih...@gmail.com> wrote:

Have you tried adding:


    .mode(SaveMode.Overwrite)



On Tue, May 17, 2016 at 8:55 PM, Todd <bit1...@163.com> wrote:

scala> records.groupBy("name").count().write.trigger(ProcessingTime("30 seconds")).option("checkpointLocation", "file:///home/hadoop/jsoncheckpoint").startStream("file:///home/hadoop/jsonresult")
org.apache.spark.sql.AnalysisException: Aggregations are not supported on streaming DataFrames/Datasets in Append output mode. Consider changing output mode to Update.;
  at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.org$apache$spark$sql$catalyst$analysis$UnsupportedOperationChecker$$throwError(UnsupportedOperationChecker.scala:142)
  at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForStreaming$1.apply(UnsupportedOperationChecker.scala:59)
  at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForStreaming$1.apply(UnsupportedOperationChecker.scala:46)
  at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:125)
  at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.checkForStreaming(UnsupportedOperationChecker.scala:46)
  at org.apache.spark.sql.ContinuousQueryManager.startQuery(ContinuousQueryManager.scala:190)
  at org.apache.spark.sql.DataFrameWriter.startStream(DataFrameWriter.scala:351)
  at org.apache.spark.sql.DataFrameWriter.startStream(DataFrameWriter.scala:279)



I briefly went through the Spark code; it looks like there is no way to change
the output mode to Update?

