Hello,

Is it possible in Spark to reuse cached RDDs generated in an earlier run?

Specifically, I am trying to set things up so that a first Scala script
generates cached RDDs. If another Scala script later performs the same
operations on the same dataset, it should be able to get its results from
the cache generated in the earlier run instead of recomputing them.
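
For concreteness, here is roughly what I have in mind (the input path and
the map operation are just placeholders; I am assuming spark-shell, so
that sc is predefined):

    // script1.scala -- first run: builds and caches an RDD
    val data = sc.textFile("hdfs:///data/input.txt")
    val processed = data.map(_.toUpperCase).cache()
    processed.count()  // action that materializes the cache

    // script2.scala -- later run: same operations on the same dataset
    val data2 = sc.textFile("hdfs:///data/input.txt")
    val processed2 = data2.map(_.toUpperCase).cache()
    processed2.count()  // today this recomputes everything; I would like
                        // it to reuse the cache from the earlier run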

Is there any direct or indirect way to do this?

--
Regards,
Saumitra Shahapure
