Hi 齐忠,

Thanks for reporting this. You're correct that the default deploy mode is
"client". However, this looks like a bug in the YARN integration code; we
should never throw a NullPointerException, regardless of the input. What
version of Spark are you using?
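For illustration only, a minimal sketch of the kind of null guard meant here. This is not the actual Spark code; `describeQueue` and `SafeQueueLogging` are hypothetical names standing in for the logging step at `Client.scala:109` that apparently dereferences a value the ResourceManager returned as null:

```scala
// Hypothetical sketch: wrap a possibly-null value from the ResourceManager
// in an Option before formatting it for the log, instead of dereferencing
// it directly and risking a NullPointerException.
object SafeQueueLogging {
  def describeQueue(queueInfo: String): String =
    Option(queueInfo)
      .map(info => s"queue info: $info")
      .getOrElse("queue info unavailable (ResourceManager returned null)")
}
```

In a real fix the guard would go around whatever field is actually null at that line, so the client logs a warning rather than crashing.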

Andrew


2014-08-15 0:23 GMT-07:00 centerqi hu <cente...@gmail.com>:

> The following command fails:
>
> ../bin/spark-submit --class org.apache.spark.examples.SparkPi \
>   --master yarn \
>   --deploy-mode cluster \
>   --verbose \
>   --num-executors 3 \
>   --driver-memory 4g \
>   --executor-memory 2g \
>   --executor-cores 1 \
>   ../lib/spark-examples*.jar \
>   100
>
> Exception in thread "main" java.lang.NullPointerException
>   at org.apache.spark.deploy.yarn.Client$$anonfun$logClusterResourceDetails$2.apply(Client.scala:109)
>   at org.apache.spark.deploy.yarn.Client$$anonfun$logClusterResourceDetails$2.apply(Client.scala:108)
>   at org.apache.spark.Logging$class.logInfo(Logging.scala:58)
>
>
> However, when I remove "--deploy-mode cluster \", the exception
> disappears.
>
> My understanding is that "--deploy-mode cluster" runs the job in yarn
> cluster mode, and that without it the default is yarn client mode.
>
> But why does yarn cluster mode throw this exception?
>
>
> Thanks
>
> --
> cente...@gmail.com|齐忠
>
