[
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15283529#comment-15283529
]
Yiqun Lin commented on HDFS-10400:
----------------------------------
I have looked into the code. When the number of src local files is more than one,
it will invoke its parent method, and the potential IOException from
{{processArgument}} is caught in {{Command}} and not thrown again.
{code}
protected void processArguments(LinkedList<PathData> args)
throws IOException {
  for (PathData arg : args) {
    try {
      processArgument(arg);
    } catch (IOException e) {
      displayError(e);
    }
  }
}
{code}
A similar case also happens in {{Command#processPaths}}. These methods are
invoked from {{processRawArguments(args)}}, so their IOException will not be
thrown here either, and {{numErrors}} will also not be increased.
{code}
public int run(String...argv) {
  LinkedList<String> args = new LinkedList<String>(Arrays.asList(argv));
  try {
    if (isDeprecated()) {
      displayWarning(
          "DEPRECATED: Please use '"+ getReplacementCommand() + "' instead.");
    }
    processOptions(args);
    processRawArguments(args);
  } catch (CommandInterruptException e) {
    displayError("Interrupted");
    return 130;
  } catch (IOException e) {
    displayError(e);
  }
  return (numErrors == 0) ? exitCode : exitCodeForError();
}
{code}
So I think this is likely the reason.
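To make the scenario concrete, here is a minimal standalone sketch (simplified and hypothetical; {{FakeShellCommand}} and its file names are made up for illustration, not the real Hadoop classes) showing how an IOException that is only displayed, without increasing the error counter, still ends with exit code 0:
{code}
// Minimal, simplified sketch of the suspected flow. NOT the real Hadoop
// classes; FakeShellCommand and the file names are invented for illustration.
import java.io.IOException;
import java.util.Arrays;
import java.util.LinkedList;

public class FakeShellCommand {
  private int numErrors = 0;
  private int exitCode = 0;

  // Mirrors the parent processArguments: a per-argument IOException is
  // reported via displayError() and not re-thrown.
  private void processArguments(LinkedList<String> args) throws IOException {
    for (String arg : args) {
      try {
        processArgument(arg);
      } catch (IOException e) {
        displayError(e);
      }
    }
  }

  private void processArgument(String arg) throws IOException {
    // Simulate the copy of one source file failing (e.g. the cluster is full).
    throw new IOException("could not copy " + arg);
  }

  // In this hypothetical scenario the error path only prints the message and
  // does not bump numErrors, which is the behaviour suspected above.
  private void displayError(IOException e) {
    System.err.println("put: " + e.getMessage());
  }

  public int run(String... argv) {
    LinkedList<String> args = new LinkedList<String>(Arrays.asList(argv));
    try {
      processArguments(args);
    } catch (IOException e) {
      displayError(e);
    }
    // With numErrors still 0, the command reports success.
    return (numErrors == 0) ? exitCode : 1;
  }

  public static void main(String[] args) {
    int rc = new FakeShellCommand().run("file1", "file2");
    System.out.println("exit code = " + rc);  // prints 0 despite the failures
  }
}
{code}
Running {{main}} prints the two put errors but still ends with {{exit code = 0}}, which matches the behaviour reported in this JIRA.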
I'm glad to do further work on this; could someone assign this JIRA to me? It
seems that I can't assign the JIRA to myself right now, thanks.
> hdfs dfs -put exits with zero on error
> --------------------------------------
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Jo Desmet
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file
> that is big enough to go over the limit. As a result, the command fails with
> an exception, however the command terminates normally (exit code 0).
> Expectation is that any detectable failure generates an exit code different
> than zero.
> Documentation on
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error.
> Following is the exception generated:
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]