[
https://issues.apache.org/jira/browse/HBASE-29667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18035805#comment-18035805
]
Hudson commented on HBASE-29667:
--------------------------------
Results for branch branch-2
[build #1344 on
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/]:
(x) *{color:red}-1 overall{color}*
----
details (if available):
(/) {color:green}+1 general checks{color}
-- For more information [see general
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2)
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3)
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk17 hadoop3 checks{color}
-- For more information [see jdk17
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk17 hadoop 3.3.5 backward compatibility checks{color}
-- For more information [see jdk17
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk17 hadoop 3.3.6 backward compatibility checks{color}
-- For more information [see jdk17
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk17 hadoop 3.4.0 backward compatibility checks{color}
-- For more information [see jdk17
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk17 hadoop 3.4.1 backward compatibility checks{color}
-- For more information [see jdk17
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/1344/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color}
-- See build output for details.
(/) {color:green}+1 client integration test for HBase 2 {color}
(/) {color:green}+1 client integration test for 3.3.5 {color}
(/) {color:green}+1 client integration test for 3.3.6 {color}
(/) {color:green}+1 client integration test for 3.4.0 {color}
(/) {color:green}+1 client integration test for 3.4.1 {color}
(/) {color:green}+1 client integration test for 3.4.2 {color}
> The block priority is initialized as MULTI when the data block is first
> written into the BucketCache
> ----------------------------------------------------------------------------------------------------
>
> Key: HBASE-29667
> URL: https://issues.apache.org/jira/browse/HBASE-29667
> Project: HBase
> Issue Type: Bug
> Components: BucketCache
> Affects Versions: 3.0.0-beta-1, 2.7.0, 2.6.4
> Reporter: huginn
> Assignee: huginn
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.0.0, 2.7.0, 2.6.5
>
>
> When a data block is first written into the BucketCache, the BucketCache
> allocates a bucket for it and generates the corresponding BucketEntry,
> which is later placed into the backingMap. I noticed that BucketEntry
> sets the block priority to MULTI during initialization. Is this a bug?
> It causes the BucketCache to contain only blocks with priorities MULTI
> and MEMORY, conflating single-access and multiple-access blocks. In fact,
> the BucketCache has logic to upgrade a block from SINGLE to MULTI when it
> is accessed again, as well as different eviction logic for blocks with
> different priorities.
> {code:java}
> public BucketEntry writeToCache(final IOEngine ioEngine, final BucketAllocator alloc,
>   final LongAdder realCacheSize, Function<BucketEntry, Recycler> createRecycler,
>   ByteBuffer metaBuff, final Long acceptableSize) throws IOException {
>   int len = data.getSerializedLength();
>   if (len == 0) {
>     return null;
>   }
>   if (isCachePersistent && data instanceof HFileBlock) {
>     len += Long.BYTES;
>   }
>   long offset = alloc.allocateBlock(len);
>   if (isPrefetch() && alloc.getUsedSize() > acceptableSize) {
>     alloc.freeBlock(offset, len);
>     return null;
>   }
>   boolean succ = false;
>   BucketEntry bucketEntry = null;
>   try {
>     int diskSizeWithHeader = (data instanceof HFileBlock)
>       ? ((HFileBlock) data).getOnDiskSizeWithHeader()
>       : data.getSerializedLength();
>     bucketEntry = new BucketEntry(offset, len, diskSizeWithHeader, accessCounter, inMemory,
>       createRecycler, getByteBuffAllocator());
>     bucketEntry.setDeserializerReference(data.getDeserializer());
>     ...
> }
> {code}
> {code:java}
> BucketEntry(long offset, int length, int onDiskSizeWithHeader, long accessCounter,
>   long cachedTime, boolean inMemory, Function<BucketEntry, Recycler> createRecycler,
>   ByteBuffAllocator allocator) {
>   if (createRecycler == null) {
>     throw new IllegalArgumentException("createRecycler could not be null!");
>   }
>   setOffset(offset);
>   this.length = length;
>   this.onDiskSizeWithHeader = onDiskSizeWithHeader;
>   this.accessCounter = accessCounter;
>   this.cachedTime = cachedTime;
>   this.priority = inMemory ? BlockPriority.MEMORY : BlockPriority.MULTI;
>   this.refCnt = RefCnt.create(createRecycler.apply(this));
>   this.markedAsEvicted = new AtomicBoolean(false);
>   this.allocator = allocator;
> }
> {code}
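The SINGLE-to-MULTI promotion the description refers to can be sketched in a minimal standalone model (hypothetical class names, not HBase's actual BucketEntry; assumes the reporter's expected behavior that a first write is SINGLE unless the block is in-memory):

{code:java}
// Hypothetical standalone sketch of the expected priority lifecycle.
enum BlockPriority { SINGLE, MULTI, MEMORY }

class CacheEntrySketch {
  BlockPriority priority;

  CacheEntrySketch(boolean inMemory) {
    // Expected: first write is SINGLE (not MULTI as in the quoted constructor),
    // unless the column family is marked in-memory.
    this.priority = inMemory ? BlockPriority.MEMORY : BlockPriority.SINGLE;
  }

  // On a repeat access, a SINGLE block is promoted to MULTI;
  // MEMORY blocks keep their priority.
  void access() {
    if (priority == BlockPriority.SINGLE) {
      priority = BlockPriority.MULTI;
    }
  }
}

public class Main {
  public static void main(String[] args) {
    CacheEntrySketch e = new CacheEntrySketch(false);
    System.out.println(e.priority);   // SINGLE on first write
    e.access();
    System.out.println(e.priority);   // MULTI after a second access
    CacheEntrySketch mem = new CacheEntrySketch(true);
    mem.access();
    System.out.println(mem.priority); // MEMORY stays MEMORY
  }
}
{code}

With priority initialized to MULTI in the constructor instead, every non-in-memory block starts in the MULTI bucket, so the eviction logic that treats SINGLE and MULTI differently never sees a SINGLE block.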
--
This message was sent by Atlassian Jira
(v8.20.10#820010)