Hi,
I think I found part of the issue: I wrote
dstream.transform(rdd => { rdd.foreachPartition(...); rdd })
instead of
dstream.transform(rdd => { rdd.mapPartitions(...) }),
which is why stop() would not stop the processing.
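For reference, my understanding of the difference: foreachPartition is an action, so calling it inside the transform closure submits a job eagerly every batch, outside the streaming job scheduler's control, whereas mapPartitions is a lazy transformation that becomes part of the scheduled streaming job. A minimal sketch (process() is a placeholder for the actual per-partition logic, not a real API):

```scala
// Problematic: foreachPartition is an action and runs eagerly when the
// transform closure is evaluated, outside the streaming scheduler, so
// streamingContext.stop() cannot account for it:
//
//   dstream.transform { rdd =>
//     rdd.foreachPartition(part => process(part))  // eager side effect
//     rdd
//   }

// Fixed: mapPartitions is lazy; the per-partition work is executed as
// part of the streaming job that the scheduler starts and stops:
val processed = dstream.transform { rdd =>
  rdd.mapPartitions(part => process(part))
}
```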
Now, with the new version, a non-graceful shutdown works in the sense that
Spark does not wait for my processing to complete; the job generator, job
scheduler, job executor etc. all seem to shut down fine, only the threads
that do the actual processing do not. Even after streamingContext.stop()
has returned, I still see logging output from my processing tasks.
Is there any way to signal to my processing tasks that they should stop
processing?
Thanks
Tobias