[ https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15255538#comment-15255538 ]

Benedict edited comment on CASSANDRA-5863 at 4/24/16 8:54 AM:
--------------------------------------------------------------

bq. I've double checked with multiple familiar people that most of the 
modern/popular filesystems (NTFS, ext*, xfs etc.) already have support for that

It's worth double checking what that support entails - in XFS (since I happen 
to have recently read the spec), such a gap would be represented by introducing 
a b+-tree, rather than a single contiguous allocation (on the assumption 
contiguous space was available on disk in the location of the first inode).  
This could result in multiple levels of indirection in the extent map, such 
that a random seek into the file (our usual modus operandi) could incur many 
more disk accesses than previously was the case.

(Obviously that extra metadata would more likely be cacheable by the OS, so it 
mightn't be a major problem; just worth considering.)
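
To make the access pattern concrete, here is a minimal sketch (illustration 
only - the file path, chunk size and spacing are made up) of a file written 
with gaps and then read at a random position, which is exactly where the extra 
extent-map lookups would be paid:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ThreadLocalRandom;

public class SparseSeekSketch
{
    public static void main(String[] args) throws IOException
    {
        Path path = Paths.get("/tmp/sparse-sketch.db"); // hypothetical path
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE))
        {
            ByteBuffer chunk = ByteBuffer.allocate(64 * 1024);
            // Write 64KiB chunks at widely spaced offsets, leaving holes between
            // them. On filesystems that support sparse files this produces many
            // separate extents rather than one contiguous allocation, so the
            // extent map can grow into a b+-tree rather than staying inline.
            for (long i = 0; i < 256; i++)
            {
                chunk.clear();
                ch.write(chunk, i * 16L * 1024 * 1024);
            }

            // A random positioned read - the usual access pattern for sstables -
            // now has to resolve the target offset through that extent map before
            // it can touch the data block itself.
            ByteBuffer dst = ByteBuffer.allocate(4096);
            long randomChunk = ThreadLocalRandom.current().nextLong(256);
            ch.read(dst, randomChunk * 16L * 1024 * 1024);
        }
    }
}
{code}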


> In process (uncompressed) page cache
> ------------------------------------
>
>                 Key: CASSANDRA-5863
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
>             Project: Cassandra
>          Issue Type: Sub-task
>            Reporter: T Jake Luciani
>            Assignee: Branimir Lambov
>              Labels: performance
>             Fix For: 3.x
>
>
> Currently, for every read, the CRAR (CompressedRandomAccessReader) reads each 
> compressed chunk into a byte[], sends it to ICompressor, gets back another 
> byte[] and verifies a checksum.  
> This process is where the majority of time is spent in a read request.  
> Before compression, reads were zero-copy and could be served directly from 
> the page cache.
> It would be useful to have some kind of chunk cache that could speed up this 
> process for hot data, possibly off heap.
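
To make the idea in the description concrete, here is a minimal sketch of such 
a cache (illustrative only - the class and field names are mine, not from any 
patch): an LRU map keyed by (file, chunk offset) holding already decompressed, 
checksummed buffers.

{code:java}
import java.nio.ByteBuffer;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

// Illustrative only: a tiny LRU cache of decompressed chunks keyed by
// (file, chunk offset). A real implementation would bound memory by bytes
// rather than entry count, handle concurrency properly, and manage the
// lifetime of off-heap (direct) buffers explicitly.
public class ChunkCacheSketch
{
    public static final class Key
    {
        final String file;      // path of the sstable data file
        final long chunkOffset; // offset of the compressed chunk on disk

        Key(String file, long chunkOffset)
        {
            this.file = file;
            this.chunkOffset = chunkOffset;
        }

        @Override
        public boolean equals(Object o)
        {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return chunkOffset == k.chunkOffset && file.equals(k.file);
        }

        @Override
        public int hashCode()
        {
            return Objects.hash(file, chunkOffset);
        }
    }

    private final int maxEntries;
    private final Map<Key, ByteBuffer> cache;

    public ChunkCacheSketch(int maxEntries)
    {
        this.maxEntries = maxEntries;
        // access-order LinkedHashMap gives simple LRU eviction
        this.cache = new LinkedHashMap<Key, ByteBuffer>(16, 0.75f, true)
        {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Key, ByteBuffer> eldest)
            {
                return size() > ChunkCacheSketch.this.maxEntries;
            }
        };
    }

    /** Returns a cached, already decompressed and checksummed chunk, or null. */
    public synchronized ByteBuffer get(String file, long chunkOffset)
    {
        ByteBuffer cached = cache.get(new Key(file, chunkOffset));
        // duplicate() so callers get an independent position/limit
        return cached == null ? null : cached.duplicate();
    }

    /** Stores a decompressed chunk; a direct buffer would keep it off-heap. */
    public synchronized void put(String file, long chunkOffset, ByteBuffer decompressed)
    {
        cache.put(new Key(file, chunkOffset), decompressed);
    }
}
{code}

On a miss the existing read path would decompress and verify the checksum as it 
does today, then put() the result; direct (off-heap) ByteBuffers could back the 
values to keep the cached data out of the Java heap, at the cost of explicit 
capacity accounting.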


