> *I found that task retries are currently not supported
> <https://github.com/apache/spark/blob/5264164a67df498b73facae207eda12ee133be7d/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousTaskRetryException.scala>
> in continuous processing mode. Is there another way to recover from
> continuous task failures currently?*
Yes, task-level retry is currently not supported in CP (continuous processing) mode; the recovery strategy is instead stage restart.

> *If not, are there plans to support this in a future release?*

Task-level retry in CP mode would be easy to implement for map-only operators, but it needs more discussion once we plan to support shuffled stateful operators in CP. There is more discussion in https://github.com/apache/spark/pull/20675.

Basil Hariri <basil.har...@microsoft.com.invalid> wrote on Sat, Nov 3, 2018 at 3:09 AM:

> *Hi all,*
>
> *I found that task retries are currently not supported
> <https://github.com/apache/spark/blob/5264164a67df498b73facae207eda12ee133be7d/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousTaskRetryException.scala>
> in continuous processing mode. Is there another way to recover from
> continuous task failures currently? If not, are there plans to support this
> in a future release?*
>
> Thanks,
>
> Basil
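Since CP mode recovers by restarting rather than retrying individual tasks, the practical advice is to always set a checkpoint location so a restarted query can resume from the last committed epoch. A minimal sketch of a continuous-mode, map-only pipeline (broker addresses, topic names, and paths below are placeholders, not from the thread):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("cp-restart-demo").getOrCreate()

// Map-only pipeline: continuous mode currently supports only map-like operations.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host:9092") // placeholder broker
  .option("subscribe", "events-in")               // placeholder topic
  .load()

df.writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host:9092")
  .option("topic", "events-out")
  // Checkpointing is what makes stage-restart recovery possible:
  // on failure, restart the query and it resumes from the checkpoint.
  .option("checkpointLocation", "/tmp/cp-checkpoint")
  .trigger(Trigger.Continuous("1 second")) // enables continuous processing mode
  .start()
```

If a continuous task fails, the whole query (stage) is restarted from the checkpointed offsets rather than the individual task being retried, which is why `ContinuousTaskRetryException` is thrown when a task-level retry is attempted.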