We have a 4-node Spark cluster with 3 GB of RAM available per executor
(via the spark.executor.memory setting).  When we run a Spark job, we see
the following output:

Using Scala version 2.9.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_21)
Initializing interpreter...
Creating SparkContext...
13/11/19 23:17:20 INFO Slf4jEventHandler: Slf4jEventHandler started
13/11/19 23:17:20 INFO SparkEnv: Registering BlockManagerMaster
13/11/19 23:17:20 INFO DiskBlockManager: Created local directory at
/opt/spark/tmp/spark-local-20131119231720-a023
13/11/19 23:17:20 INFO MemoryStore: MemoryStore started with capacity 323.9
MB.
13/11/19 23:17:20 INFO ConnectionManager: Bound socket to port 11240 with
id = ConnectionManagerId(spark-shell-01,11240)
13/11/19 23:17:20 INFO BlockManagerMaster: Trying to register BlockManager
13/11/19 23:17:20 INFO BlockManagerMasterActor$BlockManagerInfo:
Registering block manager spark-shell-01:11240 with 323.9 MB RAM

Is this right?  Given the 3 GB executor setting, I would expect much more RAM to be available than the 323.9 MB the log reports.
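One back-of-the-envelope check we tried (this is our guess, not something we have confirmed): if that log line reflects the spark-shell driver JVM rather than the executors, its capacity would be a fraction of the shell's own heap. Assuming a default storage fraction of 0.66 (the spark.storage.memoryFraction default in this Spark version, as we understand it) and a default-sized shell heap where the JVM reports roughly 491 MB usable, the numbers line up suspiciously well:

```scala
// Hypothetical estimate: MemoryStore capacity = storage fraction * usable heap.
// The 491 MB figure (what a JVM typically reports for a 512 MB -Xmx) and the
// 0.66 fraction are our assumptions, not values taken from our cluster config.
object MemEstimate {
  def storageCapacityMB(heapMB: Double, fraction: Double = 0.66): Double =
    heapMB * fraction

  def main(args: Array[String]): Unit = {
    val capacity = storageCapacityMB(491.0)
    // Comes out near the 323.9 MB in the log above.
    println(f"Estimated MemoryStore capacity: $capacity%.1f MB")
  }
}
```

If that arithmetic is the right explanation, the 3 GB setting would be applying to the executors while the shell itself runs with a much smaller default heap.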
