[ https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14629949#comment-14629949 ]
Elliott Clark commented on HBASE-14098:
---------------------------------------
Yes, it's possible that we will read the file again. It's also likely that on
large compactions we will just blow the fs cache out of the water. Compacting
32 GB on a machine with 32 GB of free memory means that nothing else can stay
in the fs cache. No /bin/bash, no inodes, nothing.
My plan is most likely to set this only for large compactions, the thought
being that large compactions are much more likely to contain stale data. I'm
going to test the current patch out on a cluster that's doing really large
compactions right now. If I see any positive changes, then we can make this
smarter.
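
For illustration only, here is a minimal sketch of what enabling drop-behind
on the compaction streams could look like, using Hadoop's CanSetDropBehind
hook on FSDataInputStream/FSDataOutputStream. The size threshold, class name,
and fallback behavior are assumptions for the sketch, not what the attached
patch does.
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: open compaction input/output streams with drop-behind enabled so the
// OS page cache is released as data streams through, instead of evicting
// everything else on the box.
public class DropBehindCompactionSketch {

  // Hypothetical threshold: only drop caches for "large" compactions.
  private static final long LARGE_COMPACTION_BYTES = 1L << 30; // 1 GB, illustrative

  public static FSDataInputStream openForCompaction(FileSystem fs, Path hfile,
      long totalCompactionSize) throws IOException {
    FSDataInputStream in = fs.open(hfile);
    if (totalCompactionSize >= LARGE_COMPACTION_BYTES) {
      try {
        // CanSetDropBehind: ask the stream to drop page-cache pages behind reads.
        in.setDropBehind(Boolean.TRUE);
      } catch (UnsupportedOperationException e) {
        // The underlying stream may not support drop-behind; keep default caching.
      }
    }
    return in;
  }

  public static FSDataOutputStream createCompactionOutput(FileSystem fs, Path tmpFile,
      long totalCompactionSize) throws IOException {
    FSDataOutputStream out = fs.create(tmpFile);
    if (totalCompactionSize >= LARGE_COMPACTION_BYTES) {
      try {
        // Same hook on the write side: drop pages behind the compaction writer.
        out.setDropBehind(Boolean.TRUE);
      } catch (UnsupportedOperationException e) {
        // Not supported by this FileSystem implementation; nothing to do.
      }
    }
    return out;
  }
}
{code}
Gating on a size threshold matches the idea above: small compactions likely
touch hot data worth keeping cached, while large ones mostly stream cold data
once.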
> Allow dropping caches behind compactions
> ----------------------------------------
>
> Key: HBASE-14098
> URL: https://issues.apache.org/jira/browse/HBASE-14098
> Project: HBase
> Issue Type: Bug
> Components: Compaction, hadoop2, HFile
> Affects Versions: 2.0.0, 1.3.0
> Reporter: Elliott Clark
> Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14098.patch
>
>