[ https://issues.apache.org/jira/browse/SPARK-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152152#comment-14152152 ]
Matthew Farrellee commented on SPARK-3685:
------------------------------------------

The root of the resource problem is how resources are handed out. YARN gives you a whole CPU, some amount of memory, some amount of network, and some amount of disk to work with. Your executor (like any program) uses different amounts of resources throughout its execution. At points in the execution the resource profile changes; call the demarcated regions "phases". So an executor may transition from a high-resource phase to a low-resource phase. In a low-resource phase, you may want to free up resources for other executors but maintain enough to do basic operations (say, serving a shuffle file). This is a problem that should be solved by the resource manager. In my opinion, a solution within Spark that isn't facilitated by the RM is a workaround/hack and should be avoided. An example of an RM-facilitated solution might be a message the executor can send to YARN to indicate that its resources can be freed, except for some minimum amount.

> Spark's local dir should accept only local paths
> ------------------------------------------------
>
>                 Key: SPARK-3685
>                 URL: https://issues.apache.org/jira/browse/SPARK-3685
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 1.1.0
>            Reporter: Andrew Or
>
> When you try to set local dirs to "hdfs:/tmp/foo" it doesn't work. What it
> will try to do is create a folder called "hdfs:" and put "tmp" inside it.
> This is because in Util#getOrCreateLocalRootDirs we use java.io.File instead
> of Hadoop's file system to parse this path. We also need to resolve the path
> appropriately.
> This may not have an urgent use case, but it fails silently and does what is
> least expected.
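The mis-parsing described in the issue can be reproduced outside Spark. This is a minimal sketch (not Spark's actual code): java.io.File treats "hdfs:/tmp/foo" as an ordinary relative path whose first component is the literal directory "hdfs:", whereas parsing the same string as a URI exposes the scheme, which a Hadoop-FileSystem-based check could use to reject non-local paths.

```java
import java.io.File;
import java.net.URI;

public class LocalDirParsing {
    public static void main(String[] args) throws Exception {
        // java.io.File does not understand URI schemes: "hdfs:" is just
        // the first component of a relative path, so mkdirs() on this
        // would create ./hdfs:/tmp/foo on the local filesystem.
        File f = new File("hdfs:/tmp/foo");
        System.out.println(f.isAbsolute());        // false
        System.out.println(f.toPath().getName(0)); // hdfs:

        // Parsing the same string as a URI recovers the scheme and path,
        // which is enough to detect (and reject) a non-local location.
        URI uri = new URI("hdfs:/tmp/foo");
        System.out.println(uri.getScheme());       // hdfs
        System.out.println(uri.getPath());         // /tmp/foo
    }
}
```

This illustrates why the fix needs to resolve the configured path with a scheme-aware parser rather than java.io.File.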