Hi,

I am using PySpark (1.1) for some image processing tasks. The individual
images (held in an RDD) are on the order of a few MB up to the low/mid tens
of MB each. However, when I run operations on this data, Spark's memory
usage blows up. Is there anything I can do about that? I have played around
with serialization and RDD compression, but that didn't really help.
Any other ideas about what I can do, or what I should be particularly aware of?
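For reference, this is roughly the kind of setup I have been experimenting
with. It is only a sketch: the app name, the dummy image data, and the
partition count are placeholders, not my actual pipeline.

from pyspark import SparkConf, SparkContext, StorageLevel

# Compress serialized RDD blocks and use Kryo on the JVM side
# (Python records themselves still go through pickling).
conf = (SparkConf()
        .setAppName("image-processing")
        .set("spark.rdd.compress", "true")
        .set("spark.serializer",
             "org.apache.spark.serializer.KryoSerializer"))
sc = SparkContext(conf=conf)

# Stand-in for the real image loader: each element is one image
# as a raw byte string of a few MB.
dummy_images = [b"\x00" * (4 * 1024 * 1024) for _ in range(10)]
images = sc.parallelize(dummy_images, numSlices=10)

# Keep cached partitions serialized rather than as deserialized
# objects, trading some CPU for lower memory usage.
images.persist(StorageLevel.MEMORY_ONLY_SER)

sizes = images.map(lambda img: len(img)).collect()
print(sizes)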

Best,
 Tassilo



