[ 
https://issues.apache.org/jira/browse/SPARK-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103241#comment-14103241
 ] 

hzw commented on SPARK-3120:
----------------------------

I don't quite understand what you mean.
Do you mean that there is no need for me to restart the YARN cluster after 
changing the value of "yarn.nodemanager.local-dirs" in yarn-site.xml? 
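For reference, spreading the NodeManager local dirs across several disks is configured in yarn-site.xml roughly like this (the paths below are hypothetical examples, not from this cluster):

```xml
<!-- yarn-site.xml: hypothetical example paths -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- comma-separated list; YARN places container local dirs on each of these -->
  <value>/data1/yarn/local,/data2/yarn/local</value>
</property>
```

In yarn-cluster/yarn-client mode YARN injects its own LOCAL_DIRS into the container environment, which is why values exported from spark-env.sh get overridden.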

> Local Dirs is not useful in yarn-client mode
> --------------------------------------------
>
>                 Key: SPARK-3120
>                 URL: https://issues.apache.org/jira/browse/SPARK-3120
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.2
>         Environment: Spark 1.0.2
> Yarn 2.3.0
>            Reporter: hzw
>
> I was using Spark 1.0.2 and Hadoop 2.3.0 to run a Spark application on YARN.
> I expected setting spark.local.dir to spread the shuffle files across many 
> disks, so I exported LOCAL_DIRS in spark-env.sh.
> But it failed to create the local dirs in my specified path.
> It just went to the Hadoop default path 
> "/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/".
> To reproduce this:
> 1. Do not set "yarn.nodemanager.local-dirs" in yarn-site.xml, which would 
> influence the result.
> 2. Run a job and then look in the executor log for the INFO message 
> "DiskBlockManager: Created local directory at ......".
> In addition, I tried exporting LOCAL_DIRS in yarn-env.sh. That does set the 
> LOCAL_DIRS value in the ExecutorLauncher, but it is still overwritten by 
> YARN when launching the executor container.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
