[
https://issues.apache.org/jira/browse/HDFS-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684006#comment-13684006
]
Hadoop QA commented on HDFS-4906:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12587936/HDFS-4906.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test files.
{color:red}-1 javac{color}. The patch appears to cause the build to
fail.
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4523//console
This message is automatically generated.
> HDFS Output streams should not accept writes after being closed
> ---------------------------------------------------------------
>
> Key: HDFS-4906
> URL: https://issues.apache.org/jira/browse/HDFS-4906
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Affects Versions: 2.0.5-alpha
> Reporter: Aaron T. Myers
> Assignee: Aaron T. Myers
> Attachments: HDFS-4906.patch
>
>
> Currently if one closes an OutputStream obtained from FileSystem#create and
> then calls write(...) on that closed stream, the write will appear to succeed
> without error though no data will be written to HDFS. A subsequent call to
> close will also silently appear to succeed. We should make it so that
> attempts to write to closed streams fail fast.
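The fail-fast behavior described above can be sketched with a minimal wrapper (a hypothetical illustration, not the actual HDFS-4906 patch): the stream tracks a closed flag and throws an IOException on any write after close, instead of silently appearing to succeed.

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of the proposed fail-fast behavior: a wrapper
// stream that rejects writes after close() instead of silently
// discarding them. Not the actual HDFS implementation.
class FailFastOutputStream extends OutputStream {
    private final OutputStream delegate;
    private boolean closed = false;

    FailFastOutputStream(OutputStream delegate) {
        this.delegate = delegate;
    }

    // Throw immediately if the stream has already been closed.
    private void checkOpen() throws IOException {
        if (closed) {
            throw new IOException("Stream is already closed");
        }
    }

    @Override
    public void write(int b) throws IOException {
        checkOpen();
        delegate.write(b);
    }

    @Override
    public void close() throws IOException {
        // close() stays idempotent: a second close is a no-op,
        // but writes after the first close fail fast.
        if (!closed) {
            closed = true;
            delegate.close();
        }
    }
}
```

With this guard, a write(...) on a closed stream raises an IOException at the call site, rather than returning normally while no data reaches the underlying store.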
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira