Github user mridulm commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1609#discussion_r15540934
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala ---
    @@ -107,68 +109,296 @@ private[spark] class DiskBlockObjectWriter(
       private var fos: FileOutputStream = null
       private var ts: TimeTrackingOutputStream = null
       private var objOut: SerializationStream = null
    +
    +  // Did we create this file or was it already present : used in revert to decide
    +  // if we should delete this file or not. Also used to detect if file was deleted
    +  // between creation of BOW and its actual init
    +  private val initiallyExists = file.exists() && file.isFile
       private val initialPosition = file.length()
       private var lastValidPosition = initialPosition
    +
       private var initialized = false
    +  // closed explicitly ?
    +  private var closed = false
    +  // Attempt to cleanly close ? (could also be closed via revert)
    +  // Note, a cleanly closed file could be subsequently reverted
    +  private var cleanCloseAttempted = false
    +  // Was the file actually opened at least once.
    +  // Note: initialized/streams change state with close/revert.
    +  private var wasOpenedOnce = false
       private var _timeWriting = 0L
     
    -  override def open(): BlockObjectWriter = {
    -    fos = new FileOutputStream(file, true)
    -    ts = new TimeTrackingOutputStream(fos)
    -    channel = fos.getChannel()
    +  // Due to some directory creation race issues in Spark, it has been observed that
    +  // sometimes file creation happens 'before' the actual directory has been created.
    +  // So we attempt to retry at least once with a mkdirs in case the directory was missing.
    +  private def init() {
    +    init(canRetry = true)
    +  }
    +
    +  private def init(canRetry: Boolean) {
    +
    +    if (closed) throw new IOException("Already closed")
    +
    +    assert (! initialized)
    +    assert (! wasOpenedOnce)
    +    var exists = false
    +    try {
    +      exists = file.exists()
    +      if (! exists && initiallyExists && 0 != initialPosition && ! Utils.inShutdown) {
    +        // Was deleted by cleanup thread ?
    +        throw new IOException("file " + file + " cleaned up ? exists = " + exists +
    +          ", initiallyExists = " + initiallyExists + ", initialPosition = " + initialPosition)
    +      }
    +      fos = new FileOutputStream(file, true)
    +    } catch {
    +      case fEx: FileNotFoundException =>
    +        // There seems to be some race in directory creation.
    +        // Attempts to fix it don't seem to have worked : working around the problem for now.
    +        logDebug("Unable to open " + file + ", canRetry = " + canRetry + ", exists = " + exists +
    +          ", initialPosition = " + initialPosition + ", in shutdown = " + Utils.inShutdown(), fEx)
    +        if (canRetry && ! Utils.inShutdown()) {
    +          // try creating the parent directory if that is the issue.
    +          // Since there can be a race with others, don't bother checking for
    +          // success/failure - the call to init() will resolve whether fos can be created.
    +          file.getParentFile.mkdirs()
    +          // Note, if directory did not exist, then file does not either - and so
    +          // initialPosition would be zero in either case.
    +          init(canRetry = false)
    --- End diff --
    
    As mentioned in the comments, this retries once when the file could not be created because its parent directory was missing (which is what the FileNotFoundException usually indicates), except when we are already in shutdown.
    This happens because of a race in Spark between creating a directory and allowing files to be created under that directory.
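
    For illustration only - hypothetical names, not the actual Spark code: the sketch below shows how this kind of race can surface. If a lazily created sub-directory is published to other threads before mkdirs() has completed, a concurrent file create under that path fails with FileNotFoundException, which is the symptom init() retries around above.

        import java.io.File

        // Hypothetical sketch of a racy lazy directory creation (illustrative only).
        // The slot in subDirs is written before mkdirs() has run, so another thread
        // can read the File, try to create a block file under it, and fail with
        // FileNotFoundException because the directory does not exist on disk yet.
        class LocalDirManager(localDir: File, subDirCount: Int) {
          private val subDirs = new Array[File](subDirCount)

          def getSubDir(hash: Int): File = {
            val idx = math.abs(hash) % subDirCount
            val existing = subDirs(idx)        // unsynchronized read, no re-check later
            if (existing != null) {
              existing
            } else subDirs.synchronized {
              val dir = new File(localDir, "%02x".format(idx))
              subDirs(idx) = dir               // published before mkdirs() completes
              dir.mkdirs()
              dir
            }
          }
        }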
    
    We have fixed a double-checked locking bug below (in DiskBlockManager), but that does not seem to be sufficient - this was still observed even after that fix.
    (In our branch, the logDebug is actually a logError, to flush out these cases.)
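
    For contrast - again hypothetical names, not the actual DiskBlockManager fix: a sketch of the safer shape, where the re-check happens inside the lock and the File is published only after the directory is known to exist. It gives up the lock-free fast path for simplicity.

        import java.io.{File, IOException}

        // Hypothetical sketch of a corrected lazy directory creation (illustrative only):
        // check and publish under the same lock, and only after mkdirs() has succeeded,
        // so no thread can observe a directory reference that does not yet exist on disk.
        class LocalDirManagerFixed(localDir: File, subDirCount: Int) {
          private val subDirs = new Array[File](subDirCount)

          def getSubDir(hash: Int): File = {
            val idx = math.abs(hash) % subDirCount
            subDirs.synchronized {
              val existing = subDirs(idx)
              if (existing != null) {
                existing
              } else {
                val dir = new File(localDir, "%02x".format(idx))
                if (!dir.exists() && !dir.mkdirs()) {
                  throw new IOException("Failed to create directory " + dir)
                }
                subDirs(idx) = dir   // publish only once the directory exists
                dir
              }
            }
          }
        }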

