[ https://issues.apache.org/jira/browse/HADOOP-1513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12507147 ]
Devaraj Das commented on HADOOP-1513:
-------------------------------------
My bad for saying "for each component"; a single mkdirs() call returns only
after creating all the necessary parent directories.
The problem, however, is that when the call is recursively invoked for a parent
that does not yet exist, and another thread gets scheduled and creates that same
directory first, the invocation cannot complete successfully and it signals
an error (returns false).
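
A minimal sketch of the kind of race-tolerant check I mean (not the attached
patch; it relies only on java.io.File semantics, and the helper name below is
illustrative): if mkdirs() returns false, re-check whether the directory now
exists before signalling an error, since a concurrent thread or process may
have created it in the meantime.

    import java.io.File;

    public class DiskCheckerSketch {
        // Illustrative helper (not the attached patch): treat "directory
        // already exists" as success even when another thread or process
        // created it between our exists() check and the mkdirs() call.
        static boolean mkdirsWithExistsCheck(File dir) {
            if (dir.mkdirs()) {
                return true; // we created the directory and any missing parents
            }
            // mkdirs() returns false both on real failure and when the
            // directory already exists, so re-check before reporting an error.
            return dir.isDirectory();
        }

        // checkDir() could then use it along these lines:
        //     if (!mkdirsWithExistsCheck(dir)) {
        //         throw new DiskErrorException("can not create directory: " + dir);
        //     }
    }
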
> A likely race condition between the creation of a directory and checking for its existence in the DiskChecker class
> -------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-1513
> URL: https://issues.apache.org/jira/browse/HADOOP-1513
> Project: Hadoop
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.14.0
> Reporter: Devaraj Das
> Assignee: Devaraj Das
> Priority: Critical
> Fix For: 0.14.0
>
> Attachments: 1513.patch
>
>
> Got this exception in a job run. It looks like the problem is a race
> condition between the creation of a directory and checking for its existence.
> Specifically, the line
> if (!dir.exists() && !dir.mkdirs())
> doesn't seem safe when invoked by multiple processes at the same time.
> 2007-06-21 07:55:33,583 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
> 2007-06-21 07:55:33,818 WARN org.apache.hadoop.fs.AllocatorPerContext: org.apache.hadoop.util.DiskChecker$DiskErrorException: can not create directory: /export/crawlspace/kryptonite/ddas/dfs/data/tmp
> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:26)
> at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createPath(LocalDirAllocator.java:211)
> at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:248)
> at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:276)
> at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:155)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.newBackupFile(DFSClient.java:1171)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:1136)
> at org.apache.hadoop.dfs.DFSClient.create(DFSClient.java:342)
> at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.create(DistributedFileSystem.java:145)
> at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.<init>(ChecksumFileSystem.java:368)
> at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:254)
> at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:675)
> at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:165)
> at org.apache.hadoop.examples.RandomWriter$Map.map(RandomWriter.java:137)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:46)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:189)
> at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1740)
> 2007-06-21 07:55:33,821 WARN org.apache.hadoop.mapred.TaskTracker: Error running child