xuanyuanking opened a new pull request #25420: [SPARK-28699][Core] Cache an indeterminate RDD could lead to incorrect result while stage rerun
URL: https://github.com/apache/spark/pull/25420
 
 
   ## What changes were proposed in this pull request?
   
   This is another case of an indeterminate stage/RDD producing an incorrect result when a stage rerun happens. In `CachedRDDBuilder`, we miss propagating the `isOrderSensitive` characteristic to the newly created `MapPartitionsRDD`.
   This patch is only a safeguard; if we need full support for stage rerun, it should be done after #24892.
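   
   To illustrate why the flag matters, here is a small self-contained Scala sketch. It is not Spark code: the enum and the propagation rule below are a simplified model of how Spark derives an RDD's deterministic level, and the names (`DeterminismSketch`, `mapOutputLevel`) are made up for the example. The point is that an order-sensitive computation over an unordered parent should be reported as indeterminate, and a wrapper that drops the flag makes the result look safer than it really is.
    ```
    object DeterminismSketch {
      // Simplified stand-in for Spark's DeterministicLevel (higher = less deterministic).
      object Level extends Enumeration {
        val DETERMINATE, UNORDERED, INDETERMINATE = Value
      }

      // Toy version of the rule a MapPartitionsRDD-like wrapper should apply:
      // an order-sensitive function over UNORDERED input yields INDETERMINATE
      // output; otherwise the parent's level is propagated unchanged.
      def mapOutputLevel(parent: Level.Value, isOrderSensitive: Boolean): Level.Value =
        if (isOrderSensitive && parent == Level.UNORDERED) Level.INDETERMINATE
        else parent

      def main(args: Array[String]): Unit = {
        // e.g. the output of a shuffle, fetched in arbitrary order.
        val shuffled = Level.UNORDERED
        // Flag tracked: the result is INDETERMINATE, so a stage rerun knows it
        // must also recompute everything downstream.
        println(mapOutputLevel(shuffled, isOrderSensitive = true))   // INDETERMINATE
        // Flag dropped (the situation this patch guards against): the result
        // still claims UNORDERED, and a rerun can silently change the answer.
        println(mapOutputLevel(shuffled, isOrderSensitive = false))  // UNORDERED
      }
    }
    ```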
   
   ## How was this patch tested?
   
   Integration test: with this patch the job fails with an exception instead of returning a wrong answer.
    ```
    import scala.sys.process._

    import org.apache.spark.TaskContext

    // Shuffle the data twice; the second map injects a failure on the first
    // attempt so the earlier stages have to be rerun.
    val res = spark.range(0, 1000 * 1000, 1).repartition(200).map { x =>
      x
    }.repartition(200).map { x =>
      // On the first attempt of the first two partitions, run "pkill -f java"
      // to kill the local JVMs and fail the tasks, forcing a stage rerun.
      if (TaskContext.get.attemptNumber == 0 && TaskContext.get.partitionId < 2) {
        throw new Exception("pkill -f java".!!)
      }
      x
    }
    res.distinct().count()
    ```
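    
   Roughly what the reproduction does: killing the Java processes fails the running tasks and can invalidate their shuffle output, so Spark has to rerun the earlier shuffle stages. Since the repartitioned data is not guaranteed to be reproduced in the same order, the rerun can redistribute rows differently; without the determinism tracking, `res.distinct().count()` can quietly return something other than 1000000, whereas with this patch the rerun of the indeterminate stage is detected and the job aborts with an exception.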
   
