Hi,
I think I found part of the issue: I had written
dstream.transform(rdd => { rdd.foreachPartition(...); rdd })
instead of
dstream.transform(rdd => { rdd.mapPartitions(...) }),
which is why stop() would not stop the processing.
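For reference, a minimal sketch of the two variants (the StreamingContext `ssc`, the DStream, and the per-record logic are placeholders, not taken from the original code). The key difference: foreachPartition is an action that runs eagerly as a side effect inside transform's closure while the original RDD is passed through unchanged, whereas mapPartitions is a lazy transformation whose result becomes the content of the transformed DStream, so the streaming scheduler stays in control of when work happens:

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.SparkConf

// Sketch only; assumes a runnable Spark setup and an input directory.
val conf = new SparkConf().setAppName("TransformSketch").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(10))
val dstream = ssc.textFileStream("/path/to/input") // placeholder path

// Problematic pattern: foreachPartition is an action, so it triggers a job
// immediately inside transform and returns Unit; the unmodified rdd has to
// be returned by hand:
//   dstream.transform { rdd => rdd.foreachPartition(iter => iter.foreach(println)); rdd }

// Corrected pattern: mapPartitions is lazy and returns a new RDD:
val processed = dstream.transform { rdd =>
  rdd.mapPartitions { iter =>
    iter.map(_.toUpperCase) // placeholder per-record processing
  }
}
processed.print()
```

With the lazy version, `ssc.stop(...)` can interrupt processing between batches, because no work runs outside the jobs the scheduler itself launches.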
Now, with the new version, a non-graceful shutdown works in the sense that
Hi,
I am processing a bunch of HDFS data using the StreamingContext (Spark
1.1.0), which means that all files existing in the directory at start()
time are processed in the first batch. Now when I try to stop this stream
processing using `streamingContext.stop(false, false)` (that is, even with
s