After checking the Spark code, I now realize that an RDD persisted to disk cannot be evicted, so I will simply persist the RDD to disk right after the random numbers are generated.
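For anyone hitting the same issue, a minimal sketch of what I mean (assuming a running Spark context; the RDD contents and names here are just illustrative, not from my actual job):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel
import scala.util.Random

object ConsistentRandomSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("consistent-random").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Attach a random number to each row.
    val base = sc.parallelize(1 to 1000000)
    val withRandom = base.map(x => (x, Random.nextDouble()))

    // Persist to disk only. Unlike memory-cached blocks, DISK_ONLY blocks
    // are not evicted under memory pressure, so the random values are
    // computed once and stay stable across later actions instead of being
    // silently regenerated with different values.
    withRandom.persist(StorageLevel.DISK_ONLY)
    withRandom.count() // materialize the RDD once

    sc.stop()
  }
}
```

An alternative that avoids persistence entirely is to seed the generator per partition (e.g. with `mapPartitionsWithIndex` and a seed derived from the partition index), which makes the values deterministic under recomputation.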
-- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Consistent-hashing-of-RDD-row-tp20820p20829.html Sent from the Apache Spark User List mailing list archive at Nabble.com.