[ https://issues.apache.org/jira/browse/HBASE-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13471358#comment-13471358 ]
Phabricator commented on HBASE-6597:
------------------------------------
tedyu has commented on the revision "[jira] [HBASE-6597] [89-fb] Incremental
data block encoding".
I got some test failures in TestCacheOnWrite:
testStoreFileCacheOnWrite[2](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite):
  expected:<{ENCODED_DATA=9[65, LEAF_INDEX=121], BLOOM_CHUNK=9, INT...> but was:<{ENCODED_DATA=9[91, LEAF_INDEX=124], BLOOM_CHUNK=9, INT...>
testStoreFileCacheOnWrite[5](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite):
  expected:<{ENCODED_DATA=9[65, LEAF_INDEX=121], BLOOM_CHUNK=9, INT...> but was:<{ENCODED_DATA=9[91, LEAF_INDEX=124], BLOOM_CHUNK=9, INT...>
testStoreFileCacheOnWrite[8](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite):
  expected:<{ENCODED_DATA=9[65, LEAF_INDEX=121], BLOOM_CHUNK=9, INT...> but was:<{ENCODED_DATA=9[91, LEAF_INDEX=124], BLOOM_CHUNK=9, INT...>
testStoreFileCacheOnWrite[11](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite):
  expected:<{ENCODED_DATA=9[65, LEAF_INDEX=121], BLOOM_CHUNK=9, INT...> but was:<{ENCODED_DATA=9[91, LEAF_INDEX=124], BLOOM_CHUNK=9, INT...>
testStoreFileCacheOnWrite[14](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite):
  expected:<{ENCODED_DATA=9[65, LEAF_INDEX=121], BLOOM_CHUNK=9, INT...> but was:<{ENCODED_DATA=9[91, LEAF_INDEX=124], BLOOM_CHUNK=9, INT...>
testStoreFileCacheOnWrite[17](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite):
  expected:<{ENCODED_DATA=9[65, LEAF_INDEX=121], BLOOM_CHUNK=9, INT...> but was:<{ENCODED_DATA=9[91, LEAF_INDEX=124], BLOOM_CHUNK=9, INT...>
Here is one of the above:
testStoreFileCacheOnWrite[2](org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite)  Time elapsed: 0.295 sec <<< FAILURE!
org.junit.ComparisonFailure: expected:<{ENCODED_DATA=9[65, LEAF_INDEX=121], BLOOM_CHUNK=9, INT...> but was:<{ENCODED_DATA=9[91, LEAF_INDEX=124], BLOOM_CHUNK=9, INT...>
  at org.junit.Assert.assertEquals(Assert.java:123)
  at org.junit.Assert.assertEquals(Assert.java:145)
  at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.readStoreFile(TestCacheOnWrite.java:259)
  at org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testStoreFileCacheOnWrite(TestCacheOnWrite.java:203)
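For context on why the counts drift: the assertion at TestCacheOnWrite.readStoreFile appears to compare a per-block-type count map (rendered as a string, hence the org.junit.ComparisonFailure) against hard-coded expected values, so any change in where block boundaries fall shows up as a diff in the ENCODED_DATA and LEAF_INDEX counts. A minimal sketch of that style of check, with a hypothetical CachedBlockType enum and checkCachedBlockCounts helper standing in for the actual test code (the real test works with org.apache.hadoop.hbase.io.hfile.BlockType values):

import java.util.EnumMap;
import java.util.Map;

import static org.junit.Assert.assertEquals;

public class BlockCountCheck {

  // Hypothetical stand-in for the block types seen in the failure message above.
  enum CachedBlockType { ENCODED_DATA, LEAF_INDEX, BLOOM_CHUNK, INTERMEDIATE_INDEX, ROOT_INDEX }

  /**
   * Tally how many cached-on-write blocks of each type were seen and compare
   * the whole map against hard-coded expectations. Comparing the string forms
   * is what produces the bracketed expected:<...> but was:<...> diff.
   */
  static void checkCachedBlockCounts(Iterable<CachedBlockType> cachedBlocks,
                                     Map<CachedBlockType, Integer> expectedCounts) {
    Map<CachedBlockType, Integer> actualCounts =
        new EnumMap<CachedBlockType, Integer>(CachedBlockType.class);
    for (CachedBlockType blockType : cachedBlocks) {
      Integer previous = actualCounts.get(blockType);
      actualCounts.put(blockType, previous == null ? 1 : previous + 1);
    }
    assertEquals(expectedCounts.toString(), actualCounts.toString());
  }
}

With a check of this shape, the patch only needs the hard-coded expectations updated to the new block counts; the comparison logic itself is unaffected.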
REVISION DETAIL
https://reviews.facebook.net/D5895
To: Kannan, Karthik, Liyin, aaiyer, avf, JIRA, mbautin
Cc: tedyu
> Block Encoding Size Estimation
> ------------------------------
>
> Key: HBASE-6597
> URL: https://issues.apache.org/jira/browse/HBASE-6597
> Project: HBase
> Issue Type: Improvement
> Components: io
> Affects Versions: 0.89-fb
> Reporter: Brian Nixon
> Assignee: Mikhail Bautin
> Priority: Minor
> Attachments: D5895.1.patch, D5895.2.patch
>
>
> Block boundaries as created by the current writers are determined by the size
> of the unencoded data. However, blocks are kept encoded in memory. By using an
> estimate of the encoded size of a block, we can get greater consistency in
> block size.
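The estimation the quoted description calls for can be as simple as tracking the running ratio of encoded to unencoded bytes and closing a block once the projected encoded size reaches the configured block size. A minimal sketch under that assumption; EncodedSizeEstimator and its methods are illustrative names, not the API introduced by D5895:

/**
 * Illustrative encoded-size estimator: predicts the in-memory/on-disk size of
 * the current block from the running ratio of encoded to unencoded bytes, so a
 * writer can close blocks near a target encoded size instead of a target
 * unencoded size.
 */
public class EncodedSizeEstimator {

  private final long targetEncodedBlockSize;

  // Running totals over blocks already encoded, used to derive the ratio.
  private long totalUnencodedBytes;
  private long totalEncodedBytes;

  public EncodedSizeEstimator(long targetEncodedBlockSize) {
    this.targetEncodedBlockSize = targetEncodedBlockSize;
  }

  /** Record the actual sizes of a finished block to refine the ratio. */
  public void blockEncoded(long unencodedBytes, long encodedBytes) {
    totalUnencodedBytes += unencodedBytes;
    totalEncodedBytes += encodedBytes;
  }

  /** Estimate the encoded size of a block currently holding unencodedBytes. */
  public long estimateEncodedSize(long unencodedBytes) {
    if (totalUnencodedBytes == 0) {
      // No history yet: fall back to the unencoded size (ratio of 1.0).
      return unencodedBytes;
    }
    double ratio = (double) totalEncodedBytes / totalUnencodedBytes;
    return (long) (unencodedBytes * ratio);
  }

  /** Writer checks this instead of comparing the unencoded size to the limit. */
  public boolean shouldFinishBlock(long unencodedBytesInCurrentBlock) {
    return estimateEncodedSize(unencodedBytesInCurrentBlock) >= targetEncodedBlockSize;
  }
}

A production version would also have to decide how the ratio is seeded and refreshed per encoder, but the writer-side change reduces to calling shouldFinishBlock on the bytes accumulated so far rather than comparing them directly to the block size limit.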