-list.1001560.n3.nabble.com/master-local-vs-master-local-tp11434p11529.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
The more cores you have, the less memory each task will get.
512M is already quite small, and if you have 4 cores that means
roughly 128M per task.
Sometimes it helps to use fewer cores and more memory per task.
How many cores do you have?
André
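As a back-of-the-envelope sketch of the point above (the 512M and 4-core figures come from this thread; with master=local[N], the configured executor memory is shared by the N tasks running concurrently):

```python
# Rough per-task memory when running Spark with master=local[N]:
# spark.executor.memory is split among the N concurrently running tasks.
executor_memory_mb = 512   # e.g. spark.executor.memory=512m
cores = 4                  # e.g. master=local[4]
per_task_mb = executor_memory_mb // cores
print(f"~{per_task_mb} MB per task")  # ~128 MB per task
```

This is only an approximation; Spark also reserves memory for shuffle and caching, so the usable amount per task is somewhat lower.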
On 2014-08-05 16:43, Grzegorz Białek wrote:
Hi,
I have a Spark application which computes the join of two RDDs. One contains
around 150MB of data (7 million entries), the second around 1.5MB (80 thousand
entries), and the result of the join contains 50MB of data (2 million entries).
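For reference, RDD.join has inner-join semantics on key-value pairs: only keys present in both datasets appear in the result. A minimal plain-Python sketch of what the join computes (the sample data is hypothetical, standing in for the two RDDs above):

```python
# Inner-join semantics, as in RDD.join: keep keys present in both datasets.
big = [(1, "a"), (2, "b"), (3, "c")]   # stands in for the 150MB RDD
small = {2: "x", 3: "y", 4: "z"}       # stands in for the 1.5MB RDD
joined = [(k, (v, small[k])) for k, v in big if k in small]
print(joined)  # [(2, ('b', 'x')), (3, ('c', 'y'))]
```

Note that a join shuffles both datasets by key, which is where the memory pressure described below comes from.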
When I run it on one core (with master=local) it works correctly (whole