[
https://issues.apache.org/jira/browse/CASSANDRA-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15216186#comment-15216186
]
Jordan West commented on CASSANDRA-11383:
-----------------------------------------
bq. Was the conclusion that a SPARSE SASI index would work well even for low
cardinality data (as in the original reported case, for period_end_month_int),
or was there some application-level change required to adapt to a SASI change
as well?
{{period_end_month_int}} is still the incorrect use case for {{SPARSE}}. That
did not change. {{SPARSE}} is still intended for indexes where there are a
large number of terms and a low number of tokens/keys per term (the token
trees in the index are sparse). The {{period_end_month_int}} use case is a
dense index: there are few terms and each term has a large number of
tokens/keys (the token trees in the index are dense). The merged patch reduces
memory overhead in either case when building indexes from a large sstable.
What changed is that an index marked {{SPARSE}} will now fail to build, and an
exception will be logged, if any term in the index has more than 5 tokens.
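For illustration only (the {{events}} table and {{created_at}} column below are hypothetical, not from this ticket), a minimal CQL sketch of the distinction: {{SPARSE}} suits a column where each value maps to very few rows, while a low-cardinality column like {{period_end_month_int}} should use the default ({{PREFIX}}) mode:

{code}
-- SPARSE: many distinct terms, few rows per term (e.g. a per-event timestamp)
CREATE CUSTOM INDEX ON events (created_at)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = { 'mode': 'SPARSE' };

-- Dense data: few distinct terms, many rows per term. Do NOT mark this SPARSE;
-- the default (PREFIX) mode is the right choice here.
CREATE CUSTOM INDEX ON events (period_end_month_int)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = { 'mode': 'PREFIX' };
{code}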
bq. Is it now official that a non-SPARSE SASI index (e.g., PREFIX) can be used
for non-TEXT data (int in particular), at least for the case of exact match
lookup?
{{PREFIX}} mode has always been supported for numeric data and was, and
continues to be, the default mode if none is specified. {{PREFIX}} mode should
be considered "NOT SPARSE" for numerical data.
> Avoid index segment stitching in RAM which lead to OOM on big SSTable files
> ----------------------------------------------------------------------------
>
> Key: CASSANDRA-11383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11383
> Project: Cassandra
> Issue Type: Bug
> Components: CQL
> Environment: C* 3.4
> Reporter: DOAN DuyHai
> Assignee: Jordan West
> Labels: sasi
> Fix For: 3.5
>
> Attachments: CASSANDRA-11383.patch,
> SASI_Index_build_LCS_1G_Max_SSTable_Size_logs.tar.gz,
> new_system_log_CMS_8GB_OOM.log, system.log_sasi_build_oom
>
>
> 13 bare metal machines
> - 6-core CPU (12 HT)
> - 64 GB RAM
> - 4 SSDs in RAID0
> JVM settings:
> - G1 GC
> - Xms32G, Xmx32G
> Data set:
> - ≈ 100 GB per node
> - 1.3 TB cluster-wide
> - ≈ 20 GB for all SASI indices
> C* settings:
> - concurrent_compactors: 1
> - compaction_throughput_mb_per_sec: 256
> - memtable_heap_space_in_mb: 2048
> - memtable_offheap_space_in_mb: 2048
> I created 9 SASI indices
> - 8 indices on text fields, NonTokenizingAnalyzer, PREFIX mode,
> case-insensitive
> - 1 index on a numeric field, SPARSE mode
> After a while, the nodes just went OOM.
> I attach the log files. You can see a lot of GC happening while index segments
> are flushed to disk. At some point the node OOMs ...
> /cc [~xedin]