I have set up Apache Mesos using Mesosphere on CentOS 6 with Java 8. I have
3 slaves, which together provide 3 cores and 8 GB of RAM. No firewalls are in
place. I am trying to run the following lines of code to test whether the
setup is working:

 // Build a small collection, distribute it across the cluster,
 // then filter it and collect the result back to the driver.
 val data = 1 to 10000
 val distData = sc.parallelize(data)
 distData.filter(_ < 10).collect()
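
For reference, the shell is pointed at the Mesos master through ZooKeeper.
The snippet below is a standalone equivalent of that configuration, just to
show the shape of it; the ZooKeeper hosts and the executor URI are
placeholders, not my actual values:

 import org.apache.spark.{SparkConf, SparkContext}

 // Point the driver at the Mesos master via ZooKeeper and tell the
 // Mesos slaves where to fetch the Spark distribution from.
 val conf = new SparkConf()
   .setAppName("mesos-smoke-test")
   .setMaster("mesos://zk://zk1:2181,zk2:2181,zk3:2181/mesos")
   .set("spark.executor.uri", "hdfs:///tmp/spark-1.3.1-bin-hadoop2.4.tgz")
 val sc = new SparkContext(conf)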

I get the following on my console:

15/08/24 20:54:57 INFO SparkContext: Starting job: collect at <console>:26
15/08/24 20:54:57 INFO DAGScheduler: Got job 0 (collect at <console>:26)
with 8 output partitions (allowLocal=false)
15/08/24 20:54:57 INFO DAGScheduler: Final stage: Stage 0(collect at
<console>:26)
15/08/24 20:54:57 INFO DAGScheduler: Parents of final stage: List()
15/08/24 20:54:57 INFO DAGScheduler: Missing parents: List()
15/08/24 20:54:57 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[1]
at filter at <console>:26), which has no missing parents
15/08/24 20:54:57 INFO MemoryStore: ensureFreeSpace(1792) called with
curMem=0, maxMem=280248975
15/08/24 20:54:57 INFO MemoryStore: Block broadcast_0 stored as values in
memory (estimated size 1792.0 B, free 267.3 MB)
15/08/24 20:54:57 INFO MemoryStore: ensureFreeSpace(1293) called with
curMem=1792, maxMem=280248975
15/08/24 20:54:57 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes
in memory (estimated size 1293.0 B, free 267.3 MB)
15/08/24 20:54:57 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory
on ip-172-31-46-176.ec2.internal:33361 (size: 1293.0 B, free: 267.3 MB)
15/08/24 20:54:57 INFO BlockManagerMaster: Updated info of block
broadcast_0_piece0
15/08/24 20:54:57 INFO SparkContext: Created broadcast 0 from broadcast at
DAGScheduler.scala:839
15/08/24 20:54:57 INFO DAGScheduler: Submitting 8 missing tasks from Stage 0
(MapPartitionsRDD[1] at filter at <console>:26)
15/08/24 20:54:57 INFO TaskSchedulerImpl: Adding task set 0.0 with 8 tasks
15/08/24 20:55:12 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
15/08/24 20:55:27 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
.........

Here are the logs from /var/log/mesos, attached:

mesos-slave.22202
<http://apache-spark-user-list.1001560.n3.nabble.com/file/n24430/mesos-slave.22202>

mesos-master.22181
<http://apache-spark-user-list.1001560.n3.nabble.com/file/n24430/mesos-master.22181>