[jira] [Updated] (HDFS-12293) DataNode should log file name on disk error

2017-08-28 Thread Arpit Agarwal (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HDFS-12293:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1, 2.9.0
       Status: Resolved  (was: Patch Available)

Thanks, [~jojochuang]. I've committed this.

Thanks for the contribution, [~ajayydv].

> DataNode should log file name on disk error
> ---
>
> Key: HDFS-12293
> URL: https://issues.apache.org/jira/browse/HDFS-12293
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Ajay Kumar
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12293.01.patch, HDFS-12293.02.patch
>
>
> Found the following error message in a precommit build:
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/488/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {noformat}
> 2017-08-10 09:36:53,619 [DataXceiver for client DFSClient_NONMAPREDUCE_670847838_18 at /127.0.0.1:55851 [Receiving block BP-219227751-172.17.0.2-1502357801473:blk_1073741829_1005]] WARN  datanode.DataNode (BlockReceiver.java:<init>(287)) - IOException in BlockReceiver constructor. Cause is 
> java.io.IOException: Not a directory
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.createFile(FileIoProvider.java:302)
>   at org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createFileWithExistsCheck(DatanodeUtil.java:69)
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:306)
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:933)
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbw(FsVolumeImpl.java:1202)
>   at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1356)
>   at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:215)
>   at org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1291)
>   at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:758)
>   at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>   at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>   at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> {noformat}
> The log does not say which file was being created.
> What's interesting is that {{DatanodeUtil#createFileWithExistsCheck}} does
> include the file name in its exception message, but the exception handlers at
> {{DataTransfer#run()}} and {{BlockReceiver#BlockReceiver}} ignore it:
> {code:title=BlockReceiver#BlockReceiver}
>   // check if there is a disk error
>   IOException cause = DatanodeUtil.getCauseIfDiskError(ioe);
>   DataNode.LOG.warn("IOException in BlockReceiver constructor"
>   + (cause == null ? "" : ". Cause is "), cause);
>   if (cause != null) {
> ioe = cause;
> // Volume error check moved to FileIoProvider
>   }
> {code}
> The log message should include the file name in addition to the cause.
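> A minimal sketch of one possible fix (an illustration only, not necessarily
> the patch that was committed): surface {{cause.getMessage()}}, which carries
> the file name set by {{DatanodeUtil#createFileWithExistsCheck}}, in the
> warning text instead of dropping it:
> {code:title=BlockReceiver#BlockReceiver (hypothetical sketch)}
>   // check if there is a disk error
>   IOException cause = DatanodeUtil.getCauseIfDiskError(ioe);
>   // Sketch: append the cause's message (which names the file) to the
>   // warning, and log the original exception so the full chain is kept.
>   DataNode.LOG.warn("IOException in BlockReceiver constructor"
>       + (cause == null ? "" : ". Cause is " + cause.getMessage()), ioe);
>   if (cause != null) {
>     ioe = cause;
>     // Volume error check moved to FileIoProvider
>   }
> {code}
> With a change along these lines the warning itself would name the offending
> path, rather than leaving it to whatever the wrapped JDK exception happens
> to say.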



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12293) DataNode should log file name on disk error

2017-08-17 Thread Ajay Kumar (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HDFS-12293:
--
Attachment: HDFS-12293.02.patch




[jira] [Updated] (HDFS-12293) DataNode should log file name on disk error

2017-08-14 Thread Ajay Kumar (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HDFS-12293:
--
Attachment: HDFS-12293.01.patch




[jira] [Updated] (HDFS-12293) DataNode should log file name on disk error

2017-08-14 Thread Ajay Kumar (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HDFS-12293:
--
Status: Patch Available  (was: Open)




[jira] [Updated] (HDFS-12293) DataNode should log file name on disk error

2017-08-11 Thread Wei-Chiu Chuang (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDFS-12293:
---
Description: 
Found the following error message in a precommit build:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/488/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/

{noformat}
2017-08-10 09:36:53,619 [DataXceiver for client DFSClient_NONMAPREDUCE_670847838_18 at /127.0.0.1:55851 [Receiving block BP-219227751-172.17.0.2-1502357801473:blk_1073741829_1005]] WARN  datanode.DataNode (BlockReceiver.java:<init>(287)) - IOException in BlockReceiver constructor. Cause is 
java.io.IOException: Not a directory
  at java.io.UnixFileSystem.createFileExclusively(Native Method)
  at java.io.File.createNewFile(File.java:1012)
  at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.createFile(FileIoProvider.java:302)
  at org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createFileWithExistsCheck(DatanodeUtil.java:69)
  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:306)
  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:933)
  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbw(FsVolumeImpl.java:1202)
  at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1356)
  at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:215)
  at org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1291)
  at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:758)
  at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
  at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
  at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
{noformat}

The log does not say which file was being created.
What's interesting is that {{DatanodeUtil#createFileWithExistsCheck}} does
include the file name in its exception message, but the exception handlers at
{{DataTransfer#run()}} and {{BlockReceiver#BlockReceiver}} ignore it:

{code:title=BlockReceiver#BlockReceiver}
  // check if there is a disk error
  IOException cause = DatanodeUtil.getCauseIfDiskError(ioe);
  DataNode.LOG.warn("IOException in BlockReceiver constructor"
  + (cause == null ? "" : ". Cause is "), cause);
  if (cause != null) {
ioe = cause;
// Volume error check moved to FileIoProvider
  }
{code}
The log message should include the file name in addition to the cause.
