wchevreuil commented on code in PR #5341:
URL: https://github.com/apache/hbase/pull/5341#discussion_r1294855630
##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java:
##########
@@ -1358,16 +1365,37 @@ private void verifyCapacityAndClasses(long capacitySize, String ioclass, String
   }

   private void parsePB(BucketCacheProtos.BucketCacheEntry proto) throws IOException {
+    backingMap = BucketProtoUtils.fromPB(proto.getDeserializersMap(), proto.getBackingMap(),
+      this::createRecycler);
+    prefetchCompleted.clear();
+    prefetchCompleted.putAll(proto.getPrefetchedFilesMap());
     if (proto.hasChecksum()) {
-      ((PersistentIOEngine) ioEngine).verifyFileIntegrity(proto.getChecksum().toByteArray(),
-        algorithm);
+      try {
+        ((PersistentIOEngine) ioEngine).verifyFileIntegrity(proto.getChecksum().toByteArray(),
+          algorithm);
+      } catch (IOException e) {
+        LOG.warn("Checksum for cache file failed. "
+          + "We need to validate each cache key in the backing map. This may take some time...");
+        long startTime = EnvironmentEdgeManager.currentTime();
+        int totalKeysOriginally = backingMap.size();
+        for (Map.Entry<BlockCacheKey, BucketEntry> keyEntry : backingMap.entrySet()) {
+          try {
+            ((FileIOEngine) ioEngine).checkCacheTime(keyEntry.getValue());
+          } catch (IOException e1) {
+            LOG.debug("Check for key {} failed. Removing it from map.", keyEntry.getKey());
+            backingMap.remove(keyEntry.getKey());
+            prefetchCompleted.remove(keyEntry.getKey().getHfileName());
Review Comment:
I believe option #1 is feasible. I'm testing a solution based on that
approach and will update this PR afterwards.
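For reference, the fallback in the diff above (when the whole-file checksum fails, validate each cached entry individually and drop the unreadable ones) can be sketched roughly as below. This is a minimal illustration, not the HBase code: the map types, the `isReadable` check standing in for `FileIOEngine.checkCacheTime(...)`, and the method names are all hypothetical. Note it uses an explicit `Iterator` so removal during traversal is safe even for non-concurrent maps; calling `map.remove(...)` inside a for-each loop would throw `ConcurrentModificationException` on a plain `HashMap` (the real `backingMap` is a `ConcurrentHashMap`, where in-loop removal is tolerated).

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class ValidateBackingMapSketch {

  // Stand-in for the per-entry readability check (hypothetical): pretend
  // negative offsets mark a stale cache entry that fails validation.
  static boolean isReadable(int entry) {
    return entry >= 0;
  }

  // Walk the map and evict every entry that fails validation, using
  // Iterator.remove() so mutation during iteration is always safe.
  static Map<String, Integer> pruneUnreadable(Map<String, Integer> backingMap) {
    Iterator<Map.Entry<String, Integer>> it = backingMap.entrySet().iterator();
    while (it.hasNext()) {
      Map.Entry<String, Integer> keyEntry = it.next();
      if (!isReadable(keyEntry.getValue())) {
        it.remove();
      }
    }
    return backingMap;
  }

  public static void main(String[] args) {
    Map<String, Integer> backingMap = new HashMap<>();
    backingMap.put("hfile-a", 100);
    backingMap.put("hfile-b", -1); // stale entry that should be evicted
    backingMap.put("hfile-c", 200);

    pruneUnreadable(backingMap);
    System.out.println(backingMap.size());                 // 2
    System.out.println(backingMap.containsKey("hfile-b")); // false
  }
}
```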
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]