Hi Ravi,

Setting SPARK_MEMORY doesn't do anything; I believe you've confused it with
SPARK_MEM, which is itself now deprecated. Set SPARK_EXECUTOR_MEMORY
instead, or set "spark.executor.memory" in conf/spark-defaults.conf.
Unless you've set the executor memory through some other mechanism, your
executors will quickly run out of memory at the default of 512m.
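
For example, something like this in conf/spark-defaults.conf (the 20g
value is just an illustration, not a recommendation - pick whatever fits
your machine):

    spark.executor.memory   20g

or, equivalently, in conf/spark-env.sh:

    export SPARK_EXECUTOR_MEMORY=20g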

Let me know if setting this does the job. If so, you can also persist your
RDDs in memory for better performance, though whether that helps depends on
your workload.
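
Here's a rough sketch of what I mean, assuming a Scala job ("myRdd" is just
a placeholder for whatever RDD you reuse):

    import org.apache.spark.storage.StorageLevel

    // Keep deserialized partitions in executor memory across actions
    val cached = myRdd.persist(StorageLevel.MEMORY_ONLY)  // or simply myRdd.cache()
    cached.count()  // first action materializes the cache; later actions reuse it

This only pays off if the same RDD is hit by more than one action, of
course.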

-Andrew


2014-08-13 11:38 GMT-07:00 rpandya <r...@iecommerce.com>:

> I'm running Spark 1.0.1 with SPARK_MEMORY=60g, so 4 executors at that size
> would indeed run out of memory (the machine has 110GB). And in fact they
> would get repeatedly restarted and killed until eventually Spark gave up.
>
> I'll try with a smaller limit, but it'll be a while - somehow my HDFS got
> seriously corrupted so I need to rebuild my HDP cluster...
>
> Thanks,
>
> Ravi
