Hello,
we are running Spark 1.2.0 standalone on a cluster of 4 machines, each
running one Worker; one of them also runs the Master. They are all
connected to the same HDFS instance.
Until a few days ago, they were all configured with
SPARK_WORKER_MEMORY = 18G
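
For reference, a minimal sketch of how that setting would typically appear in `conf/spark-env.sh` on each machine (the file path and the `SPARK_MASTER_HOST` value are assumptions for illustration; only `SPARK_WORKER_MEMORY` comes from the setup described above):

```shell
# conf/spark-env.sh on each of the 4 machines (sketch, not our exact file)

# Total memory the Worker may hand out to executors on this machine
SPARK_WORKER_MEMORY=18G

# Hypothetical master address; substitute the real hostname
SPARK_MASTER_HOST=spark-master.example.com
```

In standalone mode this caps the memory a Worker can allocate across all executors it launches, independent of any per-application `spark.executor.memory` setting.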