Hi all, 
I have one master and two slave nodes. I did not set any IP for the Spark driver 
because I thought it uses its default (localhost). My /etc/hosts contains the 
following:

192.168.0.1 master
192.168.0.2 slave
192.168.03 slave2
127.0.0.0 localhost
127.0.1.1 virtualbox

Should I change something in /etc/hosts, or should I set an IP for SPARK_LOCAL_IP? 
I get the following in my stderr:

Spark Executor Command: "java" "-cp" "::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar:/usr/local/hadoop/conf" "-XX:MaxPermSize=128m" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://spark@master:54477/user/CoarseGrainedScheduler" "0" "slave2" "1" "akka.tcp://sparkWorker@slave2:41483/user/Worker" "app-20140704174955-0002"
========================================
14/07/04 17:50:14 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@slave2:33758] -> [akka.tcp://spark@master:54477] disassociated! Shutting down.
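
In case it clarifies what I mean by setting an IP: I was thinking of adding 
something along these lines to conf/spark-env.sh on the master (just a guess on 
my part, using the master's address from my hosts file above; I have not tried 
it yet):

export SPARK_MASTER_IP=192.168.0.1
export SPARK_LOCAL_IP=192.168.0.1    # on each slave this would be that slave's own address

Would that be the right approach, or is the hosts file itself the real problem?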
 

 

Best Regards 

....................................................... 

Amin Mohebbi 

PhD candidate in Software Engineering 
 at University of Malaysia 

H/P : +60 18 2040 017 



E-Mail : tp025...@ex.apiit.edu.my 

              amin_...@me.com
