[ https://issues.apache.org/jira/browse/SPARK-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152526#comment-14152526 ]

Matthew Farrellee commented on SPARK-3685:
------------------------------------------

If you're going to go down this path, the best (I'd say correct) way to 
implement it is to have support from YARN: a way to tell YARN "I'm only going 
to need X, Y, Z resources from now on" without giving up the execution 
container. I bet there's a way to re-exec the JVM into a smaller form factor.

> Spark's local dir should accept only local paths
> ------------------------------------------------
>
>                 Key: SPARK-3685
>                 URL: https://issues.apache.org/jira/browse/SPARK-3685
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 1.1.0
>            Reporter: Andrew Or
>
> When you try to set the local dirs to "hdfs:/tmp/foo" it doesn't work. What it 
> will actually do is create a folder called "hdfs:" and put "tmp" inside it. 
> This is because Utils#getOrCreateLocalRootDirs uses java.io.File instead of 
> Hadoop's file system to parse the path. We also need to resolve the path 
> appropriately.
> This may not have an urgent use case, but it fails silently and does the 
> least expected thing.


