[ https://issues.apache.org/jira/browse/SPARK-17562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jianfei Wang updated SPARK-17562:
---------------------------------
    Description: 
In ExternalSorter.spillMemoryIteratorToDisk, I think the code below will never be executed, so we can remove it:
{code}
else {
  writer.revertPartialWritesAndClose()
}
{code}
The source code is as follows:
{code}
try {
      while (inMemoryIterator.hasNext) {
        val partitionId = inMemoryIterator.nextPartition()
        require(partitionId >= 0 && partitionId < numPartitions,
          s"partition Id: ${partitionId} should be in the range [0, ${numPartitions})")
        inMemoryIterator.writeNext(writer)
        elementsPerPartition(partitionId) += 1
        objectsWritten += 1

        if (objectsWritten == serializerBatchSize) {
          flush()
        }
      }
      if (objectsWritten > 0) {
        flush()
      } else {
        writer.revertPartialWritesAndClose()
      }
      success = true
    } finally {
      if (success) {
        writer.close()
      } else {
        // This code path only happens if an exception was thrown above before we set success;
        // close our stuff and let the exception be thrown further
        writer.revertPartialWritesAndClose()
        if (file.exists()) {
          if (!file.delete()) {
            logWarning(s"Error deleting ${file}")
          }
        }
      }
    }
{code}
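
To make the branch conditions easier to reason about, here is a minimal, self-contained Scala sketch of the batching control flow above. CountingWriter and spillSketch are hypothetical stand-ins rather than Spark's real classes; the only behavior assumed from ExternalSorter is that flush() commits the current batch and resets objectsWritten to 0.
{code}
// Hypothetical sketch only: CountingWriter and spillSketch are illustrative
// stand-ins, not Spark classes.
object SpillBranchSketch {

  // Stand-in for a disk writer: counts committed vs. pending writes and
  // records whether the revert branch was taken.
  final class CountingWriter {
    var committed = 0
    var pending = 0
    var reverted = false
    def write(): Unit = pending += 1
    def commitAndGet(): Unit = { committed += pending; pending = 0 }
    def revertPartialWritesAndClose(): Unit = { pending = 0; reverted = true }
    def close(): Unit = ()
  }

  // Mirrors the shape of the loop above: write each record, flush every
  // serializerBatchSize writes, then either flush the remainder or take the
  // revert branch when objectsWritten is 0 after the loop.
  def spillSketch(records: Iterator[Int], serializerBatchSize: Int): CountingWriter = {
    val writer = new CountingWriter
    var objectsWritten = 0

    // Assumed behavior of flush(): commit the batch and reset the counter.
    def flush(): Unit = {
      writer.commitAndGet()
      objectsWritten = 0
    }

    while (records.hasNext) {
      records.next()
      writer.write()
      objectsWritten += 1
      if (objectsWritten == serializerBatchSize) {
        flush()
      }
    }
    if (objectsWritten > 0) {
      flush()
    } else {
      writer.revertPartialWritesAndClose() // the branch under discussion
    }
    writer.close()
    writer
  }

  def main(args: Array[String]): Unit = {
    // Which post-loop branch runs depends on whether the record count is zero
    // or an exact multiple of the batch size.
    for (n <- Seq(0, 3, 4)) {
      val w = spillSketch(Iterator.range(0, n), serializerBatchSize = 2)
      println(s"records=$n committed=${w.committed} revert branch taken=${w.reverted}")
    }
  }
}
{code}
This only models the local control flow; whether the call sites of spillMemoryIteratorToDisk can actually hand it an empty iterator, or a record count that is an exact multiple of serializerBatchSize, is the question the removal proposal hinges on.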

  was:
In ExternalSorter.spillMemoryIteratorToDisk, I think the code below will never be executed, so we can remove it:
else {
  writer.revertPartialWritesAndClose()
}

The source code is as follows:
{code}
try {
      while (inMemoryIterator.hasNext) {
        val partitionId = inMemoryIterator.nextPartition()
        require(partitionId >= 0 && partitionId < numPartitions,
          s"partition Id: ${partitionId} should be in the range [0, ${numPartitions})")
        inMemoryIterator.writeNext(writer)
        elementsPerPartition(partitionId) += 1
        objectsWritten += 1

        if (objectsWritten == serializerBatchSize) {
          flush()
        }
      }
      if (objectsWritten > 0) {
        flush()
      } else {
        writer.revertPartialWritesAndClose()
      }
      success = true
    } finally {
      if (success) {
        writer.close()
      } else {
        // This code path only happens if an exception was thrown above before we set success;
        // close our stuff and let the exception be thrown further
        writer.revertPartialWritesAndClose()
        if (file.exists()) {
          if (!file.delete()) {
            logWarning(s"Error deleting ${file}")
          }
        }
      }
    }
{code}


> I think a little code is unnecessary to exist in ExternalSorter.spillMemoryIteratorToDisk
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-17562
>                 URL: https://issues.apache.org/jira/browse/SPARK-17562
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.0.0
>            Reporter: Jianfei Wang
>            Priority: Trivial
>              Labels: easyfix, performance
>
> In ExternalSorter.spillMemoryIteratorToDisk, I think the code below will never be executed, so we can remove it:
> {code}
> else {
>   writer.revertPartialWritesAndClose()
> }
> {code}
> The source code is as follows:
> {code}
> try {
>       while (inMemoryIterator.hasNext) {
>         val partitionId = inMemoryIterator.nextPartition()
>         require(partitionId >= 0 && partitionId < numPartitions,
>           s"partition Id: ${partitionId} should be in the range [0, ${numPartitions})")
>         inMemoryIterator.writeNext(writer)
>         elementsPerPartition(partitionId) += 1
>         objectsWritten += 1
>         if (objectsWritten == serializerBatchSize) {
>           flush()
>         }
>       }
>       if (objectsWritten > 0) {
>         flush()
>       } else {
>         writer.revertPartialWritesAndClose()
>       }
>       success = true
>     } finally {
>       if (success) {
>         writer.close()
>       } else {
>         // This code path only happens if an exception was thrown above before we set success;
>         // close our stuff and let the exception be thrown further
>         writer.revertPartialWritesAndClose()
>         if (file.exists()) {
>           if (!file.delete()) {
>             logWarning(s"Error deleting ${file}")
>           }
>         }
>       }
>     }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
