[
https://issues.apache.org/jira/browse/HBASE-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616060#comment-13616060
]
Jeffrey Zhong commented on HBASE-8207:
--------------------------------------
[[email protected]] It depends on whether log splitting can finish before
replication gives up. In our test setup, log splitting normally completes
within 1-2 secs while replication takes about 5 secs to give up, so in most
cases the test runs fine. In our local dev environments, machine names don't
contain "-", so we can't even reproduce the issue there.
Since we just changed Jenkins, I can't find more build history. From the
results of my "flaky test detector" tool (HBASE-8018), which I ran against
trunk on March 5, we can see that replication in trunk was flaky at that time:
_HBase-TRUNK (from last 10 runs)_
Failed Test Cases 3908 3909 3910 3912 3913 3914 3915 3916
========================================================
org.apache.hadoop.hbase.replication.testreplicationqueuefailover.queuefailover
1 1 1 -1 0 -1 0 1
org.apache.hadoop.hbase.replication.testreplicationqueuefailovercompressed.queuefailover
1 1 1 -1 0 -1 0 1
_HBase-0.95 (from last 10 runs configurable)_
Failed Test Cases 21 22 23 24 25 27
========================================================
org.apache.hadoop.hbase.replication.testreplicationqueuefailover.queuefailover
1 -1 0 1 -1 0
org.apache.hadoop.hbase.replication.testreplicationqueuefailovercompressed.queuefailover
0 1 -1 0 -1 0
> Replication could have data loss when machine name contains hyphen "-"
> ----------------------------------------------------------------------
>
> Key: HBASE-8207
> URL: https://issues.apache.org/jira/browse/HBASE-8207
> Project: HBase
> Issue Type: Bug
> Components: Replication
> Affects Versions: 0.95.0, 0.94.6
> Reporter: Jeffrey Zhong
> Assignee: Jeffrey Zhong
> Priority: Critical
> Fix For: 0.95.0, 0.98.0, 0.94.7
>
> Attachments: failed.txt
>
>
> In the recent TestReplication* test failures, I was finally able to find
> the cause (or one of the causes) of the intermittent failures.
> When a machine name contains "-", it breaks
> ReplicationSource.checkIfQueueRecovered, which causes the following issue:
> the deadRegionServers list is way off, so replication doesn't wait for log
> splitting to finish for a WAL file and moves on to the next one (data loss).
> You can see that replication uses these weird paths, constructed from
> deadRegionServers, to check for a file's existence:
> {code}
> 2013-03-26 21:26:51,385 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/1.compute.internal,52170,1364333181125/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,386 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/1.compute.internal,52170,1364333181125-splitting/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,387 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/west/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,389 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/west-splitting/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,391 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/156.us/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,394 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/156.us-splitting/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,396 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/0/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> 2013-03-26 21:26:51,398 INFO
> [ReplicationExecutor-0.replicationSource,2-ip-10-197-0-156.us-west-1.compute.internal,52170,1364333181125]
> regionserver.ReplicationSource(524): Possible location
> hdfs://localhost:52882/user/ec2-user/hbase/.logs/0-splitting/ip-10-197-0-156.us-west-1.compute.internal%252C52170%252C1364333181125.1364333199540
> {code}
> This happened in the recent test failure in
> http://54.241.6.143/job/HBase-0.94/org.apache.hbase$hbase/21/testReport/junit/org.apache.hadoop.hbase.replication/TestReplicationQueueFailover/queueFailover/?auto_refresh=false
> Search for
> {code}
> File does not exist:
> hdfs://localhost:52882/user/ec2-user/hbase/.oldlogs/ip-10-197-0-156.us-west-1.compute.internal%2C52170%2C1364333181125.1364333199540
> {code}
> After 10 retries, the replication source gave up and moved on to the next
> file, so data loss happens.
> Since many EC2 machine names contain "-", including our Jenkins servers,
> this is a high-impact issue.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira