Have you tried clearing out the checkpoint directory? Can you also give
the full stack trace?
On Wed, May 24, 2017 at 3:45 PM, kant kodali wrote:
Even if I do a simple count aggregation like the one below, I get the same error as
https://issues.apache.org/jira/browse/SPARK-19268
Dataset<Row> df2 = df1.groupBy(functions.window(df1.col("Timestamp5"),
"24 hours", "24 hours"), df1.col("AppName")).count();
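For what it's worth, because the window duration and slide interval above are both "24 hours", this is a tumbling window: every event falls into exactly one 24-hour bucket, and the query counts rows per (window, AppName). A minimal plain-Java sketch of that bucketing semantics (no Spark involved; the class and helper names here are my own, not part of the Spark API):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WindowCount {

    // Start of the tumbling window containing ts, for a given window size.
    // With equal window duration and slide, each timestamp maps to exactly
    // one bucket, aligned to multiples of the window size since the epoch.
    static Instant windowStart(Instant ts, Duration size) {
        long sizeMs = size.toMillis();
        long bucketMs = Math.floorDiv(ts.toEpochMilli(), sizeMs) * sizeMs;
        return Instant.ofEpochMilli(bucketMs);
    }

    // Count events per (windowStart, appName), mirroring
    // groupBy(window(col, "24 hours", "24 hours"), col("AppName")).count().
    // Each event is a {timestampIso, appName} pair.
    static Map<String, Long> countByWindowAndApp(List<String[]> events, Duration size) {
        Map<String, Long> counts = new HashMap<>();
        for (String[] e : events) {
            String key = windowStart(Instant.parse(e[0]), size) + "|" + e[1];
            counts.merge(key, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String[]> events = new ArrayList<>();
        events.add(new String[] {"2017-05-24T10:00:00Z", "app1"});
        events.add(new String[] {"2017-05-24T23:00:00Z", "app1"});
        events.add(new String[] {"2017-05-25T01:00:00Z", "app1"});
        // The first two events share the 2017-05-24T00:00:00Z window;
        // the third lands in the next day's window.
        System.out.println(countByWindowAndApp(events, Duration.ofHours(24)));
    }
}
```

The streaming failure in SPARK-19268 is about restoring this aggregation's state from the checkpoint directory, not about the windowing logic itself.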
On Wed, May 24, 2017 at 3:35 PM, kant kodali wrote:
Hi All,
I am using Spark 2.1.1, running in standalone mode with HDFS and Kafka.
I am running into the same problem as
https://issues.apache.org/jira/browse/SPARK-19268 with my app (not
KafkaWordCount).
Here is my sample code.
*Here is how I create the ReadStream:*
sparkSession.readStream()