[ https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288127#comment-15288127 ]

Yiqun Lin commented on HDFS-10400:
----------------------------------

Hi, [~knoguchi], your comment looks right. I have tested the case and found 
that {{FsShell}} will also catch the exception and return -1 if the 
{{Command}} does not catch it itself.
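
To illustrate the pattern (just a sketch with made-up names, not the real 
{{FsShell}}/{{Command}} code): the runner only returns -1 when the exception 
actually reaches it; if the failure is swallowed further down, the runner 
returns 0.
{code}
// Sketch of the exit-code pattern in a shell runner (hypothetical names).
public class ShellRunnerSketch {

  interface Command {
    int run(String... args) throws Exception;
  }

  // The runner only returns -1 if the exception actually reaches it.
  static int runCommand(Command cmd, String... args) {
    try {
      return cmd.run(args);
    } catch (Exception e) {
      System.err.println("put: " + e.getMessage());
      return -1;
    }
  }

  public static void main(String[] args) {
    Command put = a -> {
      try {
        throw new java.io.IOException("Premature EOF: no length prefix available");
      } catch (java.io.IOException swallowed) {
        // logged somewhere but never rethrown, so the failure never escapes
      }
      return 0;
    };
    // Prints 0: the runner never saw the exception, so the exit code is "success".
    System.out.println(runCommand(put));
  }
}
{code}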

So one possibility is that the IOException in {{DataStreamer#run}} was 
caught and never rethrown. Look at this code:
{code}
      } catch (Throwable e) {
        // Log warning if there was a real error.
        if (!errorState.isRestartingNode()) {
          // Since their messages are descriptive enough, do not always
          // log a verbose stack-trace WARN for quota exceptions.
          if (e instanceof QuotaExceededException) {
            LOG.debug("DataStreamer Quota Exception", e);
          } else {
            LOG.warn("DataStreamer Exception", e);
          }
        }
        lastException.set(e);
        assert !(e instanceof NullPointerException);
        errorState.setInternalError();
        if (!errorState.isNodeMarked()) {
          // Not a datanode issue
          streamerClosed = true;
        }
      }
{code}
Because the IOException was not thrown out, the command executes normally 
and returns exit code 0, even though the exception actually happened while 
copying the file.
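
A minimal sketch of why the caller still sees success (hypothetical names, 
only showing the pattern of the catch block above): the worker records the 
failure in {{lastException}} and stops, and unless the writer checks that 
reference before returning, the command completes "normally".
{code}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the swallowed-exception pattern above (not the real DataStreamer).
public class SwallowedStreamerSketch {

  static final AtomicReference<Throwable> lastException = new AtomicReference<>();
  static volatile boolean streamerClosed = false;

  // Background worker: the failure is recorded and the worker stops,
  // but nothing is rethrown to the thread that started the copy.
  static void streamerRun() {
    try {
      throw new IOException("Premature EOF: no length prefix available");
    } catch (Throwable e) {
      lastException.set(e);
      streamerClosed = true;
    }
  }

  public static void main(String[] args) throws Exception {
    Thread streamer = new Thread(SwallowedStreamerSketch::streamerRun);
    streamer.start();
    streamer.join();

    // This check is the missing piece: if nobody inspects lastException
    // before declaring success, the shell exits with 0 despite the failure.
    System.exit(lastException.get() == null ? 0 : -1);
  }
}
{code}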

If I am thinking correctly, we could do a checksum comparison between the 
source and destination files.
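
Something along these lines, as a sketch only ({{FileSystem#getFileChecksum}} 
can return null, and checksums from different filesystems are only comparable 
when the algorithms match, so the length check is the safe minimum):
{code}
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

// Illustrative post-copy verification; names are made up, this is not a
// patch against the actual copy commands.
public class CopyVerifySketch {

  static boolean verifyCopy(FileSystem srcFs, Path src,
                            FileSystem dstFs, Path dst) throws IOException {
    // Cheapest check: the lengths must match.
    if (srcFs.getFileStatus(src).getLen() != dstFs.getFileStatus(dst).getLen()) {
      return false;
    }
    // Stronger check when both sides expose comparable checksums.
    // getFileChecksum may return null, and checksums from different
    // filesystems are only comparable when the algorithms match.
    FileChecksum srcSum = srcFs.getFileChecksum(src);
    FileChecksum dstSum = dstFs.getFileChecksum(dst);
    if (srcSum != null && dstSum != null
        && srcSum.getAlgorithmName().equals(dstSum.getAlgorithmName())) {
      return srcSum.equals(dstSum);
    }
    return true;  // lengths matched; checksums not comparable here
  }
}
{code}
The copy command could run something like that after the output stream is 
closed and return -1 when the verification fails, instead of relying on the 
stream itself to throw.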

> hdfs dfs -put exits with zero on error
> --------------------------------------
>
>                 Key: HDFS-10400
>                 URL: https://issues.apache.org/jira/browse/HDFS-10400
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Jo Desmet
>            Assignee: Yiqun Lin
>         Attachments: HDFS-10400.001.patch, HDFS-10400.002.patch
>
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception, however the command terminates normally (exit code 0).
> Expectation is that any detectable failure generates an exit code different 
> than zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
>                 at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
>                 at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
>                 at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
>                 at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]


