[ https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965066#comment-13965066 ]

Pavel Yaskevich commented on CASSANDRA-6694:
--------------------------------------------

[~jbellis] I will leave this alone if you and others are fine with maintaining 
the code as it is in the patch set. The discussion I'm trying to have, and that 
I presume others are interested in too, is centered around the question of 
whether there is a better (cleaner, if you will) way to organize Cell that 
avoids unnecessary field allocation and keeps us from introducing static Impl 
classes, containing only static methods, that extend each other. I still don't 
understand why we would extend one class that has only static methods from 
another with the same method layout (e.g. DeletedCell.Impl extends Cell.Impl); 
it results in a bigger constant pool per class and has the bytecode 
implications I have previously described. From my point of view, it looks like 
we are basically trying to re-build inside of Cassandra what the JVM already 
provides as a platform.
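
To make that concrete, here is a minimal sketch of the shape in question 
(simplified, illustrative names only, not the actual patch code):

    // Illustrative sketch: nested static Impl classes that hold only static
    // helpers, with one Impl extending another even though nothing
    // instance-level is inherited through that relationship.
    interface Cell
    {
        long timestamp();

        static class Impl
        {
            static long maxTimestamp(Cell cell)
            {
                return cell.timestamp();
            }
            // every Impl class repeats this static-helper layout and carries
            // its own constant pool, regardless of the extends clause below
        }
    }

    interface DeletedCell extends Cell
    {
        static class Impl extends Cell.Impl
        {
            static boolean isLive(DeletedCell cell, long now)
            {
                return false;
            }
        }
    }

The extends relationship between the Impl classes buys no instance behaviour; 
each class still ends up with its own constant pool, which is the overhead 
I was referring to.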

> Slightly More Off-Heap Memtables
> --------------------------------
>
>                 Key: CASSANDRA-6694
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Benedict
>            Assignee: Benedict
>              Labels: performance
>             Fix For: 2.1 beta2
>
>
> The off-heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
> the on-heap overhead is still very large. It should not be tremendously 
> difficult to extend these changes so that we allocate entire Cells off-heap, 
> instead of multiple BBs per Cell (with all their associated overhead).
> The goal (if possible) is to reach an overhead of 16 bytes per Cell (plus 4-6 
> bytes per cell on average for the btree overhead, for a total overhead of 
> around 20-22 bytes). This breaks down into an 8-byte object header; a 4-byte 
> address (we will do alignment tricks like the VM, sketched below, to let us 
> address a reasonably large memory space, although this trick is unlikely to 
> last us forever, at which point we will have to bite the bullet and accept a 
> 24-byte per-cell overhead); and a 4-byte object reference for maintaining our 
> internal list of allocations. That reference is unfortunately necessary, 
> since we cannot otherwise safely (and cheaply) walk the object graph we 
> allocate, which we need to do for (allocation-) compaction and pointer 
> rewriting.
> The ugliest thing here is going to be implementing the various CellName 
> instances so that they may be backed by native memory OR heap memory.
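
The 4-byte address mentioned in the description can be pictured like the JVM's 
compressed oops: a 32-bit offset that is scaled by an alignment shift when it 
is turned back into a native pointer. A minimal sketch, using illustrative 
constants rather than whatever the patch actually settles on:

    // Illustrative only: a 32-bit "address" covers a large native region when
    // every allocation is aligned. With a 4-bit shift (16-byte alignment,
    // an assumed value), 2^32 slots * 16 bytes = 64GB of addressable memory.
    final class CompressedAddress
    {
        static final int ALIGNMENT_SHIFT = 4; // assumed 16-byte aligned allocations

        static long decode(long regionBase, int compressed)
        {
            // widen to unsigned before scaling back into a native address
            return regionBase + ((compressed & 0xFFFFFFFFL) << ALIGNMENT_SHIFT);
        }

        static int encode(long regionBase, long nativeAddress)
        {
            return (int) ((nativeAddress - regionBase) >>> ALIGNMENT_SHIFT);
        }
    }

A larger shift buys more addressable space at the cost of more alignment 
padding per allocation, which is why the trick is unlikely to last forever.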



--
This message was sent by Atlassian JIRA
(v6.2#6252)
