smrosenberry commented on issue #23926: [SPARK-26872][STREAMING] Use a configurable value for final termination in the JobScheduler.stop() method
URL: https://github.com/apache/spark/pull/23926#issuecomment-469307029

Unfortunately, the batch interval introduces different issues for this use case (see [my previous message](https://github.com/apache/spark/pull/23926#issuecomment-469093849)), since it controls the ongoing streaming process. What this use case needs is a way to stop the streaming gracefully.

From questions on StackOverflow, I know that others besides myself would find it useful to limit the number of batches created and processed, followed by a clean termination of the application. Without much research, I expect such a change would reach deeper into the core code than the proposed spark.streaming.jobTimeout, which simply reuses existing code by replacing a hard-coded magic number with a configurable value.
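For reference, the bounded-batch shutdown described above can be roughly approximated today from application code with a StreamingListener, though it is clumsier than a built-in configuration would be. The sketch below is only illustrative: `maxBatches` and the 5-second batch interval are hypothetical values, not part of the proposed change.

```scala
import java.util.concurrent.atomic.AtomicLong

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

object BoundedBatchesExample {
  def main(args: Array[String]): Unit = {
    // Hypothetical limit on the number of batches to process before shutting down.
    val maxBatches = 10L

    val conf = new SparkConf().setAppName("BoundedBatchesExample")
    val ssc = new StreamingContext(conf, Seconds(5))

    val completedBatches = new AtomicLong(0)

    ssc.addStreamingListener(new StreamingListener {
      override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit = {
        if (completedBatches.incrementAndGet() >= maxBatches) {
          // Stop from a separate thread so the listener bus is not blocked
          // while the context shuts down; request a graceful stop so
          // in-flight batches finish before termination.
          new Thread {
            override def run(): Unit =
              ssc.stop(stopSparkContext = true, stopGracefully = true)
          }.start()
        }
      }
    })

    // ... define the DStream pipeline here, then:
    ssc.start()
    ssc.awaitTermination()
  }
}
```

Even so, this only bounds the batch count from user code; it does not address the hard-coded wait inside JobScheduler.stop() that this PR makes configurable.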
