[
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13969831#comment-13969831
]
T Jake Luciani commented on CASSANDRA-5863:
-------------------------------------------
I do think having a set of fast disks for hot data that doesn't fit into memory
is key, because in a large per-node deployment you want (see the sketch after
the list):
1. Memory (Really hot data)
2. SSD (Hot data that doesn't fit in memory)
3. Spinning disk (Historic cold data)
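A rough sketch of that tiered read path (all names here, e.g. TieredChunkCache,
SsdChunkStore, DiskChunkReader, are hypothetical and only illustrate the
hierarchy, not anything in the Cassandra codebase):
{code:java}
// Hypothetical sketch only: TieredChunkCache, SsdChunkStore and
// DiskChunkReader do not exist in Cassandra; they just illustrate the
// memory -> SSD -> spinning-disk fallthrough described above.
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TieredChunkCache
{
    private final Map<Long, ByteBuffer> memoryCache = new ConcurrentHashMap<>(); // tier 1
    private final SsdChunkStore ssdCache;     // tier 2: decompressed chunks on SSD
    private final DiskChunkReader diskReader; // tier 3: cold path off spinning disk

    TieredChunkCache(SsdChunkStore ssdCache, DiskChunkReader diskReader)
    {
        this.ssdCache = ssdCache;
        this.diskReader = diskReader;
    }

    ByteBuffer getChunk(long chunkOffset)
    {
        ByteBuffer chunk = memoryCache.get(chunkOffset); // 1. really hot data
        if (chunk == null)
        {
            chunk = ssdCache.read(chunkOffset);          // 2. hot, but evicted to SSD
            if (chunk == null)
            {
                chunk = diskReader.readAndDecompress(chunkOffset); // 3. cold data
                ssdCache.write(chunkOffset, chunk);
            }
            memoryCache.put(chunkOffset, chunk);
        }
        return chunk.duplicate(); // independent position/limit for each reader
    }
}

interface SsdChunkStore
{
    ByteBuffer read(long chunkOffset); // null on miss
    void write(long chunkOffset, ByteBuffer chunk);
}

interface DiskChunkReader
{
    ByteBuffer readAndDecompress(long chunkOffset); // read, checksum, decompress
}
{code}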
[~benedict] you are describing building a custom page cache implementation off
heap, which is pretty ambitious. Don't you think a baby step would be to rely
on the OS page cache to start with, and build a custom one as phase II?
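For the phase-I approach, leaning on the OS page cache could be as simple as
memory-mapping the data files and letting the kernel decide what stays
resident; a minimal sketch (the path is a placeholder, and a single mapping is
limited to 2 GiB, so a real file would need several):
{code:java}
// Minimal sketch: map the file read-only and let the kernel's page cache
// keep hot regions resident. The path below is a placeholder.
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

class MappedReadSketch
{
    public static void main(String[] args) throws IOException
    {
        try (FileChannel channel = FileChannel.open(Paths.get("/var/lib/cassandra/data/example-Data.db"),
                                                    StandardOpenOption.READ))
        {
            long length = Math.min(channel.size(), Integer.MAX_VALUE);
            MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_ONLY, 0, length);

            // Reading through the mapping faults pages in on demand; repeated
            // reads of the same region come straight from the OS page cache.
            byte[] chunk = new byte[(int) Math.min(64 * 1024, length)];
            map.get(chunk);
        }
    }
}
{code}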
What would the page size be for uncompressed data? For compressed data, the
chunk size (conceptually) fits nicely.
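To make the granularity question concrete, a hypothetical cache key might look
like the following, where a compressed file's "page" is its compression chunk
and an uncompressed file needs some fixed page size chosen up front (64 KiB
below is an arbitrary choice):
{code:java}
// Hypothetical cache key illustrating the granularity question above. For a
// compressed file, pageSize would be the chunk length from the compression
// metadata; for an uncompressed file it has to be an arbitrary fixed size.
final class PageKey
{
    static final int UNCOMPRESSED_PAGE_SIZE = 64 * 1024; // arbitrary choice

    final String file;     // which data file the page belongs to
    final long pageNumber; // offset / pageSize

    PageKey(String file, long offset, int pageSize)
    {
        this.file = file;
        this.pageNumber = offset / pageSize;
    }

    @Override
    public boolean equals(Object o)
    {
        if (!(o instanceof PageKey))
            return false;
        PageKey that = (PageKey) o;
        return pageNumber == that.pageNumber && file.equals(that.file);
    }

    @Override
    public int hashCode()
    {
        return 31 * file.hashCode() + Long.hashCode(pageNumber);
    }
}
{code}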
> In process (uncompressed) page cache
> ------------------------------------
>
> Key: CASSANDRA-5863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
> Project: Cassandra
> Issue Type: New Feature
> Components: Core
> Reporter: T Jake Luciani
> Assignee: Pavel Yaskevich
> Labels: performance
> Fix For: 2.1 beta2
>
>
> Currently, for every read, the CRAR reads each compressed chunk into a
> byte[], sends it to ICompressor, gets back another byte[], and verifies a
> checksum.
> This process is where the majority of the time in a read request is spent.
> Before compression was introduced, we had zero-copy reads and could respond
> directly from the page cache.
> It would be useful to have some kind of chunk cache that could speed up this
> process for hot data. Initially this could be an off-heap cache, but it would
> be great to put these decompressed chunks onto an SSD so the hot data lives
> on a fast disk, similar to https://github.com/facebook/flashcache.
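For reference, the per-read cost described above amounts to roughly the
following sequence (a simplified sketch mirroring the description, not the
actual CompressedRandomAccessReader source; CRC32 and the Decompressor
interface are stand-ins, not the actual checksum or ICompressor API):
{code:java}
// Simplified sketch of the read path described above: copy the compressed
// chunk off disk, verify a checksum, decompress into a second buffer.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.zip.CRC32;

class ChunkReadSketch
{
    interface Decompressor
    {
        byte[] decompress(byte[] compressed); // stand-in for ICompressor
    }

    static byte[] readChunk(RandomAccessFile file, long offset, int compressedLength,
                            long expectedChecksum, Decompressor decompressor) throws IOException
    {
        byte[] compressed = new byte[compressedLength];
        file.seek(offset);
        file.readFully(compressed); // copy #1: disk -> heap

        CRC32 crc = new CRC32();
        crc.update(compressed);
        if (crc.getValue() != expectedChecksum)
            throw new IOException("Corrupt chunk at offset " + offset);

        return decompressor.decompress(compressed); // copy #2: decompressed bytes
    }
}
{code}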
--
This message was sent by Atlassian JIRA
(v6.2#6252)