Worker never used by our Spark applications

2015-01-26 Thread Federico Ragona
Hello, we are running Spark 1.2.0 standalone on a cluster of 4 machines. Each machine runs one Worker, and one of them also runs the Master; all of them are connected to the same HDFS instance. Until a few days ago, they were all configured with SPARK_WORKER_MEMORY = 18G
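For reference, in Spark standalone mode SPARK_WORKER_MEMORY is set in conf/spark-env.sh on each machine; a minimal sketch of the setup described above might look like the following (hostnames and core counts are hypothetical examples, not taken from the thread):

```shell
# conf/spark-env.sh -- sketch of a 4-node standalone setup like the one described.
# Hostname and core count below are hypothetical examples.
export SPARK_WORKER_MEMORY=18g   # total memory this Worker may allocate to executors
export SPARK_WORKER_CORES=8      # total cores this Worker may allocate (example value)
export SPARK_MASTER_IP=node1     # the machine that also runs the Master (Spark 1.x name)
```

Note that in standalone mode a Worker is only assigned executors whose requested spark.executor.memory fits within its SPARK_WORKER_MEMORY, so a mismatch between these two settings is a common reason a Worker sits idle.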