[ https://issues.apache.org/jira/browse/HBASE-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HBASE-5995:
---------------------------------

    Attachment: hbase-5995_v2.patch

Here is a second attempt. This patch also fixes a condition where the hdfs 
output stream is closed because of errors, but we keep trying to 
hflush() before closing the stream, even though it is already closed. 
FSDataOutputStream does not have an isClosed() kind of API, nor does it throw a 
special exception, so I had to parse the exception msg (which is ugly, I admit). 
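To illustrate the workaround described above, here is a minimal sketch of a helper that inspects the exception message to decide whether the stream is already closed. The class name, method name, and the matched message strings are all assumptions for illustration; the actual patch may match different strings, and the exact messages HDFS throws vary by version.

```java
import java.io.IOException;

public class StreamClosedCheck {
    // Hypothetical helper: since FSDataOutputStream exposes no isClosed(),
    // fall back to inspecting the exception message, as the comment above
    // describes. The matched substrings are illustrative assumptions, not
    // a confirmed list of what HDFS actually throws.
    static boolean isStreamClosedException(IOException e) {
        String msg = e.getMessage();
        if (msg == null) {
            return false;
        }
        // Match loosely on phrasings a closed DFS output stream might report.
        return msg.contains("is closed") || msg.contains("Stream closed");
    }

    public static void main(String[] args) {
        // A caller would skip further hflush() attempts when this returns true.
        System.out.println(
            isStreamClosedException(new IOException("DFSOutputStream is closed")));
        System.out.println(
            isStreamClosedException(new IOException("connection reset")));
    }
}
```

String matching like this is brittle across Hadoop versions, which is presumably why the comment calls it ugly; a real isClosed() API would make the check robust.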

In hadoop2, we check the GS's of the block and replica to reason about the file 
length, which is why we get the error. In hadoop1, it seems that we do not do 
that, and accept the replica length without checking the GS (I might be wrong on 
this though; I haven't confirmed it with the hdfs folks). 
                
> Fix and reenable TestLogRolling.testLogRollOnPipelineRestart
> ------------------------------------------------------------
>
>                 Key: HBASE-5995
>                 URL: https://issues.apache.org/jira/browse/HBASE-5995
>             Project: HBase
>          Issue Type: Sub-task
>          Components: test
>            Reporter: stack
>            Assignee: Enis Soztutar
>            Priority: Blocker
>             Fix For: 0.98.0, 0.95.1
>
>         Attachments: hbase-5995_v1.patch, hbase-5995_v2.patch
>
>
> HBASE-5984 disabled this flakey test (See the issue for more).  This issue is 
> about getting it enabled again.  Made a blocker on 0.96.0 so it gets 
> attention.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira