[ https://issues.apache.org/jira/browse/CASSANDRA-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202712#comment-15202712 ]

DOAN DuyHai edited comment on CASSANDRA-11383 at 3/19/16 10:44 AM:
-------------------------------------------------------------------

[~xedin]

bq. I've figured out what is going on and first of all period_end_month_int 
index is not sparse - at least first term in that index has ~11M tokens 
assigned to it

 You're right, {{period_end_month_int}} is not *sparse* in the usual English 
sense, but the SASI index mode {{SPARSE}} is the only one allowed for numeric 
fields; {{PREFIX}} and {{CONTAINS}} are reserved for text fields. So we have a 
fundamental issue here: how do we index *dense* numeric values?
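
For reference, this is roughly how the two flavours are declared. The keyspace, table and index names below are placeholders, not our real schema:

{code}
-- Numeric column: SPARSE is the only SASI mode accepted for it,
-- even though the values themselves are dense (~11M rows per term).
CREATE CUSTOM INDEX period_end_month_int_idx ON ks.my_table (period_end_month_int)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = { 'mode': 'SPARSE' };

-- Text column: PREFIX mode, non-tokenizing analyzer, case-insensitive.
CREATE CUSTOM INDEX my_text_col_idx ON ks.my_table (my_text_col)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {
  'mode': 'PREFIX',
  'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer',
  'case_sensitive': 'false'
};
{code}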

bq. Temporary fix for this situation is switching to LCS with fixed maximum 
sstable size, as I mentioned in my previous comment.

 Can you elaborate further? What is it about LCS that makes it work in the 
current situation, compared to STCS? Is it the total number of SSTables? 
(Currently with STCS there are fewer than 100 SSTables per node, so that is not 
really a big issue.) Is it the fact that with LCS a partition is guaranteed to 
be in a single SSTable? (Again, given our schema we have mostly tiny rows, but 
a lot of them.)

 For now I'm going to switch to LCS to see if we can finish building the index 
without OOM. In the long term, though, LCS is not the solution: this table will 
grow quickly over time, and tombstones in levels > L3 will rarely get compacted.
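
For the record, the switch itself would look something like the statement below; the 160 MB value is just LCS's default, mentioned only to illustrate the "fixed maximum sstable size" part (keyspace/table are placeholders):

{code}
-- Placeholder keyspace/table. sstable_size_in_mb caps each SSTable's size,
-- so an index build/flush never has to walk one huge STCS-style SSTable.
ALTER TABLE ks.my_table
WITH compaction = {
  'class': 'LeveledCompactionStrategy',
  'sstable_size_in_mb': '160'
};
{code}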




> SASI index build leads to massive OOM
> -------------------------------------
>
>                 Key: CASSANDRA-11383
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11383
>             Project: Cassandra
>          Issue Type: Bug
>          Components: CQL
>         Environment: C* 3.4
>            Reporter: DOAN DuyHai
>         Attachments: CASSANDRA-11383.patch, new_system_log_CMS_8GB_OOM.log, 
> system.log_sasi_build_oom
>
>
> 13 bare metal machines
> - 6-core CPU (12 HT)
> - 64 GB RAM
> - 4 SSD in RAID0
>  JVM settings:
> - G1 GC
> - Xms32G, Xmx32G
> Data set:
>  - ≈ 100 GB per node
>  - 1.3 TB cluster-wide
>  - ≈ 20 GB for all SASI indices
> C* settings:
> - concurrent_compactors: 1
> - compaction_throughput_mb_per_sec: 256
> - memtable_heap_space_in_mb: 2048
> - memtable_offheap_space_in_mb: 2048
> I created 9 SASI indices
>  - 8 indices on text fields, NonTokenizingAnalyzer, PREFIX mode, 
> case-insensitive
>  - 1 index with numeric field, SPARSE mode
>  After a while, the nodes just went OOM.
>  I attach the log files. You can see a lot of GC happening while index 
> segments are flushed to disk. At some point the node OOMs...
> /cc [~xedin]


