In writing my own RDD I ran into a few issues with respect to things being
private in Spark.

In compute I would like to return an iterator that respects task killing
(as HadoopRDD does), but the mechanics for that live inside the private
InterruptibleIterator. Also, the exception I am supposed to throw when a
task is killed (TaskKilledException) is private to Spark.
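For illustration, here is a minimal sketch of the kind of compute I mean. Since InterruptibleIterator is private, it hand-rolls the per-element interruption check, and it throws a plain RuntimeException as a stand-in for the private TaskKilledException. The MyRDD name is made up, and the availability of TaskContext.isInterrupted() from user code is an assumption here:

    import scala.reflect.ClassTag

    import org.apache.spark.{Partition, TaskContext}
    import org.apache.spark.rdd.RDD

    // Hypothetical custom RDD that wraps a parent and checks for task
    // kills on every element, roughly what InterruptibleIterator does.
    class MyRDD[T: ClassTag](parent: RDD[T]) extends RDD[T](parent) {

      override protected def getPartitions: Array[Partition] = parent.partitions

      override def compute(split: Partition, context: TaskContext): Iterator[T] = {
        val underlying = parent.iterator(split, context)
        new Iterator[T] {
          def hasNext: Boolean = {
            // InterruptibleIterator would throw TaskKilledException here,
            // but that class is private, so use a generic exception instead.
            if (context.isInterrupted()) {
              throw new RuntimeException("task killed")
            }
            underlying.hasNext
          }
          def next(): T = underlying.next()
        }
      }
    }

The downside of the generic exception is that the scheduler won't recognize it as a kill the way it recognizes TaskKilledException, so the task may be reported as failed rather than killed.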
