Hi,

I'm getting started with Spark and have installed it within CDH5 using
Cloudera Manager.
I set up one master (hadoop-pg-5) and three workers (hadoop-pg-7[-8,-9]).
The master web UI looks good, and all workers appear to be registered.

If I open "spark-shell" and try to execute the wordcount example, the
execution hangs at the step "reduceByKey" and prints the Warning
""
14/04/11 21:29:47 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
14/04/11 21:30:02 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
""
again and again. In the Web-UI the task is in state "WAITING".
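
For reference, this is roughly the wordcount I run in the shell (the input
path below is just an example, not my real one):

val file = sc.textFile("hdfs:///user/spark/input.txt")    // example input path
val counts = file.flatMap(line => line.split(" "))        // split each line into words
                 .map(word => (word, 1))                  // pair each word with a count of 1
                 .reduceByKey(_ + _)                      // sum the counts per word
counts.collect()                                          // action that actually launches the job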

Some googling suggested checking networking/DNS between the master and the
workers, but host, ping and telnet all work in both directions:
on worker hadoop-pg-7:
----------------------
[root@hadoop-pg-7 ~]# host hadoop-pg-5
hadoop-pg-5.cluster has address 10.147.210.5

[root@hadoop-pg-7 ~]# host hadoop-pg-5.cluster
hadoop-pg-5.cluster has address 10.147.210.5

[root@hadoop-pg-7 ~]# telnet hadoop-pg-5.cluster 7077
Trying 10.147.210.5...
Connected to hadoop-pg-5.cluster.
Escape character is '^]'.


on master hadoop-pg-5:
----------------------
[root@hadoop-pg-5 ~]# host hadoop-pg-7
hadoop-pg-7.cluster has address 10.147.210.7
[root@hadoop-pg-5 ~]# host hadoop-pg-7.cluster
hadoop-pg-7.cluster has address 10.147.210.7

[root@hadoop-pg-5 ~]# ping -c 1 hadoop-pg-7.cluster
PING hadoop-pg-7.cluster (10.147.210.7) 56(84) bytes of data.
64 bytes from hadoop-pg-7.cluster (10.147.210.7): icmp_seq=1 ttl=64
time=0.878 ms

--- hadoop-pg-7.cluster ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.878/0.878/0.878/0.000 ms

[root@hadoop-pg-5 ~]# telnet hadoop-pg-7.cluster 7078
Trying 10.147.210.7...
Connected to hadoop-pg-7.cluster.
Escape character is '^]'.

This is the content of spark-env.sh on all hosts:
##
# Generated by Cloudera Manager and should not be modified directly
##
export SPARK_HOME=/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/spark
export STANDALONE_SPARK_MASTER_HOST=hadoop-pg-5.cluster
export SPARK_MASTER_PORT=7077
export DEFAULT_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hadoop
### Path of Spark assembly jar in HDFS
export SPARK_JAR_HDFS_PATH=/user/spark/share/lib/spark-assembly.jar
### Let's run everything with JVM runtime, instead of Scala
export SPARK_LAUNCH_WITH_SCALA=0
export SPARK_LIBRARY_PATH=${SPARK_HOME}/lib
export SCALA_LIBRARY_PATH=${SPARK_HOME}/lib
export SPARK_MASTER_IP=$STANDALONE_SPARK_MASTER_HOST
export HADOOP_HOME=${HADOOP_HOME:-$DEFAULT_HADOOP_HOME}
if [ -n "$HADOOP_HOME" ]; then
  export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:${HADOOP_HOME}/lib/native
fi
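
As far as I understand, spark-shell should pick up the standalone master from
this config. Inside the shell, the master the context is actually bound to can
be checked like this (the URL in the comment is just what I would expect from
the settings above):

sc.master   // should print something like spark://hadoop-pg-5.cluster:7077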

What am I missing, or doing wrong?

Any help appreciated, br Gerd
