[ 
https://issues.apache.org/jira/browse/CASSANDRA-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112681#comment-15112681
 ] 

Benedict commented on CASSANDRA-9472:
-------------------------------------

This is reintroducing the "Slightly More Offheap Memtables" approach, i.e.:

# Partially
# If you use on-heap only; with off-heap memtables the on-heap data is only 
the object graph needed to easily utilise the off-heap data
# Depends entirely on workload: there are fixed on-heap overheads per cell, per 
row/marker, and per partition. I haven't done the maths recently, but it works 
out roughly to 30-32 bytes per cell on-heap, 50-100 bytes per row, and 
something similar for a partition.  If you're storing many rows of few cells 
of small values, then it will still mostly be on-heap (see the worked example 
after this list).
# Depends entirely on what you mean by sufficient.  IMO there is no standard 
universal wisdom to provide for heap tuning; your workload again defines the 
best constraints.  Heap usage characteristics won't vary tremendously from 
current practice, really; you'll just be able to buffer more data in your 
memtables (or have less Java heap).  You will still accumulate longer-lived 
data that may survive multiple YG GCs, depending on your usage profile.
# It is always configurable, just as the on-heap limits are.  Currently it is 
set to the same heuristic as on-heap, i.e. 1/4 of the heap iirc (see the 
cassandra.yaml sketch after this list)
# As always, they are optional, and off by default
# I would assume soon.  Fully off-heap is still a ways off, most likely after 
TPC (Thread Per Core).
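
To make the arithmetic in (3) concrete, here is a rough back-of-envelope 
sketch.  The constants are just the approximate figures quoted above (not 
exact measurements), and the class/method names are illustrative only:

{code:java}
public class MemtableHeapEstimate {
    // Approximate fixed on-heap overheads quoted above; real numbers vary
    // by version and workload.
    static final long CELL_OVERHEAD = 32;       // ~30-32 bytes per cell
    static final long ROW_OVERHEAD = 75;        // 50-100 bytes per row/marker
    static final long PARTITION_OVERHEAD = 75;  // similar order per partition

    static long onHeapBytes(long partitions, long rowsPerPartition, long cellsPerRow) {
        long rows = partitions * rowsPerPartition;
        long cells = rows * cellsPerRow;
        return partitions * PARTITION_OVERHEAD
             + rows * ROW_OVERHEAD
             + cells * CELL_OVERHEAD;
    }

    public static void main(String[] args) {
        // 1k partitions x 10k rows x 2 small cells each: ~1.4GB of pure
        // on-heap overhead, so with small values the data stays mostly
        // on-heap even with off-heap memtables enabled.
        System.out.printf("many tiny cells: %,d bytes%n", onHeapBytes(1_000, 10_000, 2));
        // Fewer, fatter rows: the formula gives a smaller on-heap share,
        // while the large cell values themselves would live off-heap.
        System.out.printf("fewer fat rows:  %,d bytes%n", onHeapBytes(1_000, 100, 200));
    }
}
{code}

And for (5)/(6), the relevant knobs as they existed in cassandra.yaml before 
8099 removed the feature, assuming they come back under the same names:

{code}
# Off by default: heap_buffers keeps memtables entirely on-heap.
# The off-heap variants are offheap_buffers and offheap_objects.
memtable_allocation_type: offheap_objects

# Both limits default to 1/4 of the JVM heap when left unset.
# memtable_heap_space_in_mb: 2048
# memtable_offheap_space_in_mb: 2048
{code}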



> Reintroduce off heap memtables
> ------------------------------
>
>                 Key: CASSANDRA-9472
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9472
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Benedict
>            Assignee: Benedict
>             Fix For: 3.x
>
>
> CASSANDRA-8099 removes off heap memtables. We should reintroduce them ASAP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
