Is there a way to change the streaming context batch interval after reloading
from checkpoint?
I would like to be able to change the batch interval after restarting the
application without losing the checkpoint, of course.
Thanks!
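
As far as I can tell, the batch interval is part of the DStream graph that is
serialized into the checkpoint, so a context restored via
StreamingContext.getOrCreate keeps the old interval; the interval passed in the
creating function only applies on a fresh start. A minimal sketch (checkpoint
directory and app name are made up):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object IntervalDemo {
  // Hypothetical checkpoint location.
  val checkpointDir = "/tmp/streaming-checkpoint"

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("interval-demo")
    // Seconds(10) is honored only when no checkpoint exists yet; on a
    // restart, the batch duration deserialized from the checkpoint wins.
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir)
    // ... define the DStream graph here ...
    ssc
  }

  def main(args: Array[String]): Unit = {
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}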
There are cases where Spark Streaming job tasks fail (one, several, or all of
them) and there is not much sense in progressing to the next job while
discarding the failed one. For example, when the application fails to connect
to the remote target DB, I would like to fail fast and relaunch the
application from the last checkpoint rather than skip the failed batch.
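
One workaround that is sometimes suggested (a sketch, not code from this
thread; writeToTargetDb is a hypothetical writer that throws when the DB is
unreachable): catch the failure inside the output operation and exit before
the driver moves on, so that a supervised relaunch resumes from the last
completed batch:

import org.apache.spark.streaming.dstream.DStream

object FailFast {
  // Hypothetical per-partition writer; assumed to throw on connection failure.
  def writeToTargetDb(records: Iterator[String]): Unit = ???

  def output(stream: DStream[String]): Unit =
    stream.foreachRDD { rdd =>
      try {
        // Throws a SparkException on the driver once a task exceeds
        // spark.task.maxFailures.
        rdd.foreachPartition(writeToTargetDb _)
      } catch {
        case e: Exception =>
          // Exit non-zero before this batch's checkpoint is written, so a
          // relaunch (e.g. spark-submit --supervise, or a YARN restart)
          // resumes from the last successfully completed offsets.
          System.exit(1)
      }
    }
}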
When a Spark Streaming task fails (after exceeding spark.task.maxFailures),
the related batch job is considered failed and the driver continues to the
next batch in the pipeline after updating the checkpoint to the next positions
(the new offsets, when using Kafka direct streaming).
I'm looking for a way to fail the whole application in that case rather than
silently skip the failed batch.
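
For reference, the retry threshold mentioned above is a plain config knob; an
illustrative setting (values made up):

import org.apache.spark.SparkConf

// Give transient failures (e.g. a brief DB outage) more retries per task
// before the whole batch job is declared failed. The default is 4.
val conf = new SparkConf()
  .setAppName("kafka-direct-app")
  .set("spark.task.maxFailures", "8")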