[
https://issues.apache.org/jira/browse/HBASE-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14527918#comment-14527918
]
Hudson commented on HBASE-13382:
--------------------------------
FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #930 (See
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/930/])
HBASE-13382 IntegrationTestBigLinkedList should use SecureRandom (Dima Spivak)
(apurtell: rev 4e83f5781c2ce885a06f2956b803990ebadb3425)
* hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
> IntegrationTestBigLinkedList should use SecureRandom
> ----------------------------------------------------
>
> Key: HBASE-13382
> URL: https://issues.apache.org/jira/browse/HBASE-13382
> Project: HBase
> Issue Type: Bug
> Components: integration tests
> Reporter: Todd Lipcon
> Assignee: Dima Spivak
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.13
>
> Attachments: HBASE-13382_master_v1.patch
>
>
> IntegrationTestBigLinkedList currently uses java.util.Random to generate its
> random keys. The keys are 128 bits long, but we generate them using
> Random.nextBytes(). The Random implementation itself only has a 48-bit seed,
> so even though we have a very long key string, it doesn't have anywhere near
> that amount of entropy.
> This means that after a few billion rows, a collision becomes quite
> likely: filling in a 16-byte key takes four calls to rand.nextInt(),
> so 10B rows cycle through 40B internal generator states. With only
> 2^48 possible states, the birthday bound makes it probable that two
> mappers land on the same state, after which every subsequent row the
> colliding mappers generate is identical. This results in broken chains
> and a failed verification job.
> The fix is simple -- we should use SecureRandom to generate the random keys,
> instead.
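The swap described above can be sketched as follows. This is a minimal,
hedged illustration of the idea (not the actual HBASE-13382 patch; the
class name here is made up): java.util.Random carries only a 48-bit
internal state, so 16-byte keys drawn from it cannot have 128 bits of
entropy, while SecureRandom's much larger, entropy-seeded state avoids
the birthday collision across mappers.

```java
import java.security.SecureRandom;
import java.util.Random;

public class KeyGenSketch {
    public static void main(String[] args) {
        // java.util.Random: 48-bit internal state, so at most 2^48
        // distinct byte streams exist no matter how long the key is.
        Random weak = new Random();
        byte[] weakKey = new byte[16];
        weak.nextBytes(weakKey); // internally: four 32-bit draws

        // SecureRandom: seeded from an OS entropy source with a much
        // larger state, so a 16-byte key can carry a full 128 bits of
        // entropy and seed collisions are no longer a practical concern.
        SecureRandom strong = new SecureRandom();
        byte[] strongKey = new byte[16];
        strong.nextBytes(strongKey);

        System.out.println(weakKey.length + " " + strongKey.length); // 16 16
    }
}
```

Because nextBytes() has the same signature on both classes, the fix is a
one-line change at the call site that constructs the generator.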
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)