Hi
I'm using Spark 0.9.1 and Shark 0.9.1. My dataset does not fit into the
memory I have in my cluster, so I also want to use disk for caching. I
believe MEMORY_ONLY is the default storage level in Spark. If that's the
case, how can I change the storage level to MEMORY_AND_DISK in Spark?
You can change the storage level of an individual RDD with
.persist(StorageLevel.MEMORY_AND_DISK), but I don't think you can change
the default persistence level for RDDs.
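
For example, something like this (a minimal sketch in Scala; sc is assumed
to be an existing SparkContext, and the HDFS path is a placeholder):

    import org.apache.spark.storage.StorageLevel

    // Request MEMORY_AND_DISK caching: partitions that don't fit in
    // memory are spilled to disk instead of being recomputed.
    val lines = sc.textFile("hdfs://namenode:8020/path/to/data")
    val cached = lines.persist(StorageLevel.MEMORY_AND_DISK)

    // The storage level takes effect when the first action
    // materializes the RDD.
    cached.count()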
Andrew