To check whether there is any issue with the Python API, I ran a Scala
application provided in the examples. Still the same error:
./bin/run-example org.apache.spark.examples.SparkPi
spark://[Master-URL]:7077
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/mn
This is the error from stderr:
Spark Executor Command: "java" "-cp"
":/root/ephemeral-hdfs/conf:/root/ephemeral-hdfs/conf:/root/ephemeral-hdfs/conf:/root/spark/conf:/root/spark/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop1.0.4.jar"
"-Djava.library.path=/root/ephemeral-hdfs/lib/nati
Well, we used the script that comes with Spark, I think v0.9.1. But I am going
to try the newer version (the 1.0.0rc2 script). I shall keep you posted about
my findings. Thanks.
This happens to me as well when using the EC2 scripts for the recent v1.0.0rc2
release. The Master connects and then disconnects immediately, eventually
saying "Master disconnected from cluster."
On Thu, Apr 24, 2014 at 4:01 PM, Matei Zaharia wrote:
> Did you launch this using our EC2 scripts (
> http://spark
Did you launch this using our EC2 scripts
(http://spark.apache.org/docs/latest/ec2-scripts.html) or did you manually set
up the daemons? My guess is that their hostnames are not being resolved
properly on all nodes, so executor processes can’t connect back to your driver
app. This error message
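If hostname resolution is the culprit, one workaround to try is pinning the
driver's address explicitly so the executors can connect back. A minimal
sketch, assuming the SparkConf API (available since 0.9); the master URL and
driver hostname are placeholders:

from pyspark import SparkConf, SparkContext

# Placeholders: substitute the real master URL and an externally
# resolvable hostname or IP for the driver machine.
conf = (SparkConf()
        .setMaster("spark://<master-hostname>:7077")
        .setAppName("connectivity-check")
        .set("spark.driver.host", "<driver-hostname>"))
sc = SparkContext(conf=conf)
print(sc.parallelize(range(100)).count())  # trivial job to exercise the executors
sc.stop()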
Same problem.
On Thu, Apr 24, 2014 at 10:54 AM, Shubhabrata wrote:
> Moreover, it seems all the workers are registered and have sufficient memory
> (2.7 GB, whereas I have asked for only 512 MB). The UI also shows the jobs are
> running on the slaves. But on the terminal it is still the same error:
> "I
Moreover, it seems all the workers are registered and have sufficient memory
(2.7 GB, whereas I have asked for only 512 MB). The UI also shows the jobs are
running on the slaves. But on the terminal it is still the same error:
"Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient memory"
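That error usually means the master could not find a worker able to satisfy
the application's resource request, so one sanity check is to pin the request
explicitly below what the workers advertise. A sketch using standard Spark
properties; the master URL and the exact values are placeholders:

from pyspark import SparkConf, SparkContext

# Request less memory per executor than the 2.7 GB each worker advertises,
# and cap the total cores so the request can always be granted.
conf = (SparkConf()
        .setMaster("spark://<master-hostname>:7077")
        .setAppName("resource-check")
        .set("spark.executor.memory", "512m")
        .set("spark.cores.max", "2"))
sc = SparkContext(conf=conf)
print(sc.parallelize(range(1000)).sum())
sc.stop()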
Spark Command: /usr/lib/jvm/java-1.7.0/bin/java -cp
:/root/ephemeral-hdfs/conf:/root/ephemeral-hdfs/conf:/root/ephemeral-hdfs/conf:/root/ephemeral-hdfs/conf:/root/spark/conf:/root/spark/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop1.0.4.jar
-Dspark.akka.logLifecycleEvents=true
-Djava.