Jim Carroll wrote
> Okay,
> 
> I have an rdd that I want to run an aggregate over but it insists on
> spilling to disk even though I structured the processing to only require a
> single pass.
> 
> In other words, I can do all of my processing one entry in the rdd at a
> time without persisting anything.
> 
> I set rdd.persist(StorageLevel.NONE) and it had no effect. When I run
> locally I get my /tmp directory filled with transient rdd data even though
> I never need the data again after the row's been processed. Is there a way
> to turn this off?
> 
> Thanks
> Jim

Hi,
Do you have many input files?
If so, try setting

conf.set("spark.shuffle.consolidateFiles", "true");
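For context, here is a minimal sketch of where that setting goes. The app name,
the local master, and the aggregate itself are just placeholders for your job,
and spark.local.dir is an optional extra in case you want shuffle scratch space
somewhere other than /tmp:

    import org.apache.spark.{SparkConf, SparkContext}

    object SinglePassAgg {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("single-pass-aggregation")
          .setMaster("local[*]")                          // adjust for your cluster
          .set("spark.shuffle.consolidateFiles", "true")  // merge shuffle map outputs into fewer files
          .set("spark.local.dir", "/path/to/scratch")     // optional: move shuffle scratch off /tmp
        val sc = new SparkContext(conf)

        // A single pass over the partitions: only the small accumulator is kept,
        // the input rows themselves are never cached or persisted.
        val rdd = sc.parallelize(1 to 1000000)
        val (sum, count) = rdd.aggregate((0L, 0L))(
          (acc, x) => (acc._1 + x, acc._2 + 1),   // fold each element into the accumulator
          (a, b)   => (a._1 + b._1, a._2 + b._2)  // merge per-partition accumulators
        )
        println(s"mean = ${sum.toDouble / count}")

        sc.stop()
      }
    }

Note that consolidateFiles only reduces the number of intermediate shuffle files;
any keyed operation (aggregateByKey, groupBy, etc.) will still write shuffle data
to disk. A plain rdd.aggregate like the sketch above has no shuffle at all: each
partition is folded locally and only the accumulators are sent to the driver, so
if you can express your job that way it should not touch /tmp.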

Hope this helps.


