Thanks!
I solved the problem.
spark-submit changed HADOOP_CONF_DIR to spark/conf, which was correct,
but launching with java *... didn't change HADOOP_CONF_DIR; it still used
hadoop/etc/hadoop.
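For anyone hitting the same symptom: spark-submit sources conf/spark-env.sh before launching, which is where HADOOP_CONF_DIR gets set, while a bare `java ...` launch only inherits the caller's environment. A minimal sketch of the workaround (the path below is a placeholder, not from this thread):

```shell
# spark-submit picks up HADOOP_CONF_DIR from conf/spark-env.sh;
# a direct `java` launch does not, so export it explicitly first.
export HADOOP_CONF_DIR=/opt/spark/conf   # placeholder path
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
```

Without this, Hadoop client code falls back to whatever configuration directory is already on the classpath (here, hadoop/etc/hadoop).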
At 2016-05-10 16:39:47, "Saisai Shao" wrote:
The code is in Client.scala under yarn sub-module (see the below link).
Maybe you need to check the vendor version about their changes to the
Apache Spark code.
https://github.com/apache/spark/blob/branch-1.3/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
Thanks
Saisai
On Tue, May
What version of Spark are you using? From my understanding, there's
no code in yarn#client that uploads "__hadoop_conf__" into the distributed cache.
On Tue, May 10, 2016 at 3:51 PM, 朱旻 wrote:
> Hi all:
> I found a problem using Spark.
> When I use spark-submit to launch a task, it works
>