Hello,

I have currently deployed Spark 0.9.1 using a new way of starting it up:

    exec start-stop-daemon --start --pidfile /var/run/spark.pid \
        --make-pidfile --chuid ${SPARK_USER}:${SPARK_GROUP} --chdir ${SPARK_HOME} \
        --exec /usr/bin/java -- -cp ${CLASSPATH} \
        -Dcom.sun.management.jmxremote.authenticate=false \
        -Dcom.sun.management.jmxremote.ssl=false \
        -Dcom.sun.management.jmxremote.port=10111 \
        -Dspark.akka.logLifecycleEvents=true -Djava.library.path= \
        -XX:ReservedCodeCacheSize=512M -XX:+UseCodeCacheFlushing \
        -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC \
        -Dspark.executor.memory=10G -Xmx10g ${MAIN_CLASS} ${MAIN_CLASS_ARGS}


where the classpath points to the Spark jar that we compile with sbt. When I
try to run a job I receive the following warning:

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
memory
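
For reference, a minimal driver of the kind ${MAIN_CLASS} refers to would look
roughly like this; the object name, master URL, and jar path here are
placeholders, not our real values:

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical driver; the real ${MAIN_CLASS} differs in detail.
    object MyJob {
      def main(args: Array[String]) {
        val conf = new SparkConf()
          .setMaster("spark://master-host:7077")      // placeholder master URL
          .setAppName("MyJob")
          .setJars(Seq("/path/to/our-assembly.jar"))  // jar shipped to the workers
        val sc = new SparkContext(conf)

        // Trivial action just to exercise the cluster.
        println(sc.parallelize(1 to 100).count())

        sc.stop()
      }
    }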


My first question is: do I need the entire Spark project on disk in order to
run jobs? Or, if not, what else am I doing wrong?
