[ https://issues.apache.org/jira/browse/SPARK-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085306#comment-15085306 ]

Jean-Baptiste Onofré commented on SPARK-12430:
----------------------------------------------

I just checked Utils and DiskBlockManager.

Clearly, in Utils, the folders default to java.io.tmpdir when spark.local.dir is not set:

{code}
  def getConfiguredLocalDirs(conf: SparkConf): Array[String] = {
    val shuffleServiceEnabled = conf.getBoolean("spark.shuffle.service.enabled", false)
    if (isRunningInYarnContainer(conf)) {
      // If we are in yarn mode, systems can have different disk layouts so we must set it
      // to what Yarn on this system said was available. Note this assumes that Yarn has
      // created the directories already, and that they are secured so that only the
      // user has access to them.
      getYarnLocalDirs(conf).split(",")
    } else if (conf.getenv("SPARK_EXECUTOR_DIRS") != null) {
      conf.getenv("SPARK_EXECUTOR_DIRS").split(File.pathSeparator)
    } else if (conf.getenv("SPARK_LOCAL_DIRS") != null) {
      conf.getenv("SPARK_LOCAL_DIRS").split(",")
    } else if (conf.getenv("MESOS_DIRECTORY") != null && !shuffleServiceEnabled) {
      // Mesos already creates a directory per Mesos task. Spark should use that directory
      // instead so all temporary files are automatically cleaned up when the Mesos task ends.
      // Note that we don't want this if the shuffle service is enabled because we want to
      // continue to serve shuffle files after the executors that wrote them have already exited.
      Array(conf.getenv("MESOS_DIRECTORY"))
    } else {
      if (conf.getenv("MESOS_DIRECTORY") != null && shuffleServiceEnabled) {
        logInfo("MESOS_DIRECTORY available but not using provided Mesos sandbox because " +
          "spark.shuffle.service.enabled is enabled.")
      }
      // In non-Yarn mode (or for the driver in yarn-client mode), we cannot trust the user
      // configuration to point to a secure directory. So create a subdirectory with restricted
      // permissions under each listed directory.
      conf.get("spark.local.dir", System.getProperty("java.io.tmpdir")).split(",")
    }
  }
{code}
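
For illustration, here is a minimal standalone sketch (hypothetical code, not Spark's) of that last branch: spark.local.dir when set, otherwise the JVM's java.io.tmpdir, which is usually /tmp:

{code}
// Hypothetical sketch of the final fallback branch of getConfiguredLocalDirs:
// use spark.local.dir when configured, otherwise fall back to java.io.tmpdir.
object LocalDirsSketch {
  def resolveLocalDirs(sparkLocalDir: Option[String]): Array[String] =
    sparkLocalDir
      .getOrElse(System.getProperty("java.io.tmpdir"))
      .split(",")

  def main(args: Array[String]): Unit = {
    // With spark.local.dir unset, everything lands under the JVM temp dir (usually /tmp).
    println(resolveLocalDirs(None).mkString(", "))
    // With spark.local.dir set, each comma-separated entry becomes a root dir.
    println(resolveLocalDirs(Some("/data1/spark,/data2/spark")).mkString(", "))
  }
}
{code}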

When the DiskBlockManager stops, it deletes all local dirs (unless an external shuffle service is still serving the shuffle files):

{code}
  private def doStop(): Unit = {
    // Only perform cleanup if an external service is not serving our shuffle files.
    // Also blockManagerId could be null if block manager is not initialized properly.
    if (!blockManager.externalShuffleServiceEnabled ||
      (blockManager.blockManagerId != null && blockManager.blockManagerId.isDriver)) {
      localDirs.foreach { localDir =>
        if (localDir.isDirectory() && localDir.exists()) {
          try {
            if (!ShutdownHookManager.hasRootAsShutdownDeleteDir(localDir)) {
              Utils.deleteRecursively(localDir)
            }
          } catch {
            case e: Exception =>
              logError(s"Exception while deleting local spark dir: $localDir", e)
          }
        }
      }
    }
  }
{code}
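
So on a normal stop the dirs are removed, unless an external shuffle service is serving them on an executor. For reference, a simplified sketch of what a recursive delete like Utils.deleteRecursively boils down to (the real helper also guards against symlinks and does shutdown-hook bookkeeping):

{code}
import java.io.{File, IOException}

// Simplified sketch of a recursive delete; not the actual Spark implementation.
def deleteRecursively(file: File): Unit = {
  if (file.isDirectory) {
    // listFiles() can return null on an I/O error, so guard it.
    Option(file.listFiles()).getOrElse(Array.empty[File]).foreach(deleteRecursively)
  }
  if (!file.delete() && file.exists()) {
    throw new IOException(s"Failed to delete: ${file.getAbsolutePath}")
  }
}
{code}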

The local dirs themselves are created when the DiskBlockManager starts:

{code}
  private def createLocalDirs(conf: SparkConf): Array[File] = {
    Utils.getConfiguredLocalDirs(conf).flatMap { rootDir =>
      try {
        val localDir = Utils.createDirectory(rootDir, "blockmgr")
        logInfo(s"Created local directory at $localDir")
        Some(localDir)
      } catch {
        case e: IOException =>
          logError(s"Failed to create local dir in $rootDir. Ignoring this 
directory.", e)
          None
      }
    }
  }
{code}
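
Note that Utils.createDirectory(rootDir, "blockmgr") places the blockmgr-* folder directly under each root dir, so with the default root it shows up as /tmp/blockmgr-<UUID>, a sibling of /tmp/spark-<UUID> rather than a sub-folder of it, which matches the layout in the report. A simplified sketch of that helper (the real one retries with fresh UUIDs before giving up):

{code}
import java.io.{File, IOException}
import java.util.UUID

// Simplified sketch of Utils.createDirectory; the real helper retries a few
// times with fresh UUIDs if mkdirs() fails.
def createDirectory(root: String, namePrefix: String = "spark"): File = {
  val dir = new File(root, s"$namePrefix-${UUID.randomUUID()}")
  if (!dir.mkdirs()) {
    throw new IOException(s"Failed to create a directory under $root")
  }
  dir
}
{code}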

So it looks good on master. Let me double-check on Mesos.

> Temporary folders do not get deleted after Task completes causing problems 
> with disk space.
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-12430
>                 URL: https://issues.apache.org/jira/browse/SPARK-12430
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.5.1, 1.5.2
>         Environment: Ubuntu server
>            Reporter: Fede Bar
>
> We are experiencing an issue with automatic /tmp folder deletion after the 
> framework completes. Completing an M/R job using Spark 1.5.2 (same behavior as 
> Spark 1.5.1) over Mesos does not delete some temporary folders, which 
> eventually exhausts the free disk space on the server.
> Behavior of an M/R job using Spark 1.4.1 over a Mesos cluster:
> - Launched using spark-submit on one cluster node.
> - The following folders are created: */tmp/mesos/slaves/id#*, */tmp/spark-#/*, 
> */tmp/spark-#/blockmgr-#*
> - When the task completes, */tmp/spark-#/* gets deleted along with its 
> */tmp/spark-#/blockmgr-#* sub-folder.
> Behavior of the same identical M/R job using Spark 1.5.2 over a Mesos cluster:
> - Launched using spark-submit on one cluster node.
> - The following folders are created: */tmp/mesos/mesos/slaves/id** *, 
> */tmp/spark-***/ *, {color:red}/tmp/blockmgr-***{color}
> - When the task completes, */tmp/spark-***/ * gets deleted but NOT the shuffle 
> container folder {color:red}/tmp/blockmgr-***{color}.
> Unfortunately, {color:red}/tmp/blockmgr-***{color} can account for several GB 
> depending on the job that ran. Over time this fills the disk, with the 
> consequences that we all know.
> Running a cleanup shell script would probably work, but it is difficult to 
> tell folders in use by a running M/R job apart from stale ones. I did notice 
> similar issues opened by other users marked as "resolved", but none seems to 
> exactly match the behavior above.
> I really hope someone has insights on how to fix it.
> Thank you very much!


