Hello,

I have been trying for some time to figure out an issue where region splits
fail because two threads try to cache the same key with different values. At
this point I am making very slow progress, so I figured I would reach out
for help. I will try to explain everything I know and have found out so far:

What I know/have observed:
* This occurs when prefetch on open is true.
* Both daughter store openers will try to access and cache the first and
last values in the store (HalfStoreFile). [1] [2] [3]
* Both daughter store openers will access the same key (the split key, I
believe).
  * However, they will access it via two different paths and retrieve two
different values.
* When they try to cache these values, the second attempt fails because the
other thread has already cached its value and the two values aren't
equivalent. [4] (See the toy sketch right after this list.)
* This is past the point of no return for the region split, so the split
tries to roll forward, but fails. The affected regions are then corrupted
and cannot be recovered.
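
To make the failure concrete, here is a minimal standalone sketch of the
check I understand the cache to be doing at [4]. This is toy code, not the
HBase implementation; the class name, key format, and message are all mine:

import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Toy model (not HBase code) of a block cache that refuses to re-cache a
 * key when the new contents differ from what is already cached.
 */
class ToyBlockCache {
    private final ConcurrentHashMap<String, byte[]> map =
            new ConcurrentHashMap<>();

    void cacheBlock(String cacheKey, byte[] block) {
        byte[] existing = map.putIfAbsent(cacheKey, block);
        if (existing == null || Arrays.equals(existing, block)) {
            return; // first to cache wins, or a harmless duplicate
        }
        // Same key, different value: the real cache treats this as fatal,
        // which is the failure described above.
        throw new RuntimeException(
                "same key cached with different contents: " + cacheKey);
    }
}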

One thing I have noticed is that these blocks are all leaf index blocks. I
would expect these index blocks to differ only if they came from different
files. This is on HBase 1.3.1.

It is definitely a concurrency bug, as it only happens when both daughter
openers manage to read the block from disk before either one caches it. Why
would they access two different paths for the same key? Is that normal for a
reference file?
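
To illustrate the timing window, here is a toy race built on the
ToyBlockCache sketch above. The opener/latch names and the cache key are
mine, purely for illustration; my understanding is that the real cache key
is the underlying hfile name plus block offset, which is the same for both
halves since both reference files point at the same physical file:

import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;

/** Toy race: both daughter openers read from disk before either caches. */
public class SplitPrefetchRace {
    public static void main(String[] args) throws InterruptedException {
        ToyBlockCache cache = new ToyBlockCache(); // sketch above
        CountDownLatch bothReadFromDisk = new CountDownLatch(2);

        // Two hypothetical daughter openers; each resolves the split key
        // through its own path and comes back with different block bytes.
        Thread top = opener(cache, bothReadFromDisk, "bytes-via-top-path");
        Thread bottom = opener(cache, bothReadFromDisk, "bytes-via-bottom-path");
        top.start();
        bottom.start();
        top.join();
        bottom.join(); // one thread dies with the RuntimeException above
    }

    private static Thread opener(ToyBlockCache cache, CountDownLatch latch,
            String contents) {
        return new Thread(() -> {
            // 1. Check the cache: both openers miss, nothing is cached yet.
            // 2. Read "the" block from disk via this opener's own path.
            byte[] block = contents.getBytes(StandardCharsets.UTF_8);
            latch.countDown();
            try {
                latch.await(); // both reads finish before either caches
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            // 3. Cache under the same key: the loser of the race throws.
            cache.cacheBlock("hfile=abc123,offset=12345", block);
        });
    }
}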

Does anyone have any idea what might be going on?
If you need more info, I can show you the code path it runs through.

Thanks,
Zach

[1] https://github.com/apache/hbase/blob/rel/1.3.1/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java#L357
[2] https://github.com/apache/hbase/blob/rel/1.3.1/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java#L454
[3] https://github.com/apache/hbase/blob/rel/1.3.1/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java#L461
[4] https://github.com/apache/hbase/blob/rel/1.3.1/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java#L367
