Compaction can merge some very large files whose data may be completely
cold. So caching the whole output file just creates pressure to evict
useful pages. That's one theory.
The other theory is that the page cache is flush- and scan-resistant and
should absorb this without intervention. Sure, it might hurt a bit, but
only by a bounded amount before the cache stops discarding useful pages
in favor of new, unproven ones.
If there is a benchmark comparing this enabled vs. disabled, I haven't
seen it. That doesn't mean it doesn't exist, though.
On Tue, Oct 18, 2016, at 12:05 PM, Michael Kjellman wrote:
> Within a single SegmentedFile?
> On Oct 18, 2016, at 9:02 AM, Ariel Weisberg
> <ariel.weisb...@datastax.com<mailto:ariel.weisb...@datastax.com>> wrote:
> With compaction there can be hot and cold data mixed together.