[ 
https://issues.apache.org/jira/browse/SPARK-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103859#comment-14103859
 ] 

Thomas Graves commented on SPARK-3120:
--------------------------------------

No. I am saying that you probably do have to restart your YARN cluster. Since 
I don't know what version of YARN or what distribution you are using, it's hard 
for me to say for certain that you do. Try it without restarting and see if it 
works. If it doesn't, then restart the NodeManagers on the YARN cluster.
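As a sketch of the change being discussed (the property name is the standard one from Hadoop's yarn-site.xml; the disk paths below are hypothetical examples), yarn.nodemanager.local-dirs would be set on each NodeManager host:

```xml
<!-- yarn-site.xml on each NodeManager host; the example paths are hypothetical -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data1/yarn/local,/data2/yarn/local</value>
</property>
```

After changing it, each NodeManager would need a restart for the new directories to take effect, e.g. with Hadoop 2.x's yarn-daemon.sh stop/start nodemanager (the exact restart mechanism depends on your distribution).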

> Local Dirs is not useful in yarn-client mode
> --------------------------------------------
>
>                 Key: SPARK-3120
>                 URL: https://issues.apache.org/jira/browse/SPARK-3120
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.2
>         Environment: Spark 1.0.2
> Yarn 2.3.0
>            Reporter: hzw
>
> I was using Spark 1.0.2 and Hadoop 2.3.0 to run a Spark application on YARN.
> I expected to set spark.local.dir to spread the shuffle files across 
> many disks, so I exported LOCAL_DIRS in spark-env.sh.
> But it failed to create the local dirs in my specified path.
> It just goes to the path 
> "/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/", the Hadoop 
> default path.
> To reproduce this:
> 1. Do not set "yarn.nodemanager.local-dirs" in yarn-site.xml, which 
> influences the result.
> 2. Run a job and then find the executor log at the INFO "DiskBlockManager: 
> Created local directory at ......"
> In addition, I tried to add the "export LOCAL_DIRS" in yarn-env.sh. It 
> launches with the LOCAL_DIRS value in the ExecutorLauncher, but it still 
> gets overwritten by YARN when launching the executor container.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
