[
https://issues.apache.org/jira/browse/CASSANDRA-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202196#comment-15202196
]
Jack Krupansky commented on CASSANDRA-11383:
--------------------------------------------
Just to make sure I understand what's going on...
1. The first index is on the territory_code column, whose values are simple
2-character country codes drawn from allCountries, a list of 8 entries in which
'FR' appears 3 times.
2. How many rows are generated per machine - is it 100 * 40,000,000 = 4 billion?
3. That means the SASI index will have six unique index values, each with
roughly 4 billion / 8 = 500 million rows, correct? (Actually, 5 of the 6 unique
values will have 500 million rows and the 6th, 'FR', will have 1.5 billion rows,
i.e. 3 times 500 million.) Sounds like a great stress test for SASI!
4. That's just for the territory_code column.
5. Some of the columns, like commercial_offer_code, have only 2 unique values.
That would mean 2 billion rows per indexed unique value. An even more
excellent stress test for SASI!
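The arithmetic in points 2-3 can be sketched in a few lines. This is only an illustration of the estimate above; the concrete country codes other than 'FR' are hypothetical placeholders, since the actual allCountries list is not shown in the comment.

```python
from collections import Counter

# Hypothetical stand-in for the 8-entry allCountries list from the stress
# profile; only the 'FR'-repeated-3-times structure is taken from the comment.
all_countries = ["FR", "FR", "FR", "DE", "ES", "IT", "GB", "US"]

total_rows = 100 * 40_000_000                       # point 2: 4 billion rows per machine
rows_per_entry = total_rows // len(all_countries)   # 4 billion / 8 = 500 million per list entry

# point 3: rows per unique indexed value = (occurrences in list) * 500 million
rows_per_value = {code: n * rows_per_entry
                  for code, n in Counter(all_countries).items()}

print(len(rows_per_value))      # 6 unique index values
print(rows_per_value["FR"])     # 1,500,000,000 (3 x 500 million)
```

So 5 of the 6 unique values index 500 million rows each, while 'FR' indexes 1.5 billion, matching the estimate above.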
> SASI index build leads to massive OOM
> -------------------------------------
>
> Key: CASSANDRA-11383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11383
> Project: Cassandra
> Issue Type: Bug
> Components: CQL
> Environment: C* 3.4
> Reporter: DOAN DuyHai
> Attachments: CASSANDRA-11383.patch, new_system_log_CMS_8GB_OOM.log,
> system.log_sasi_build_oom
>
>
> 13 bare metal machines
> - 6 cores CPU (12 HT)
> - 64Gb RAM
> - 4 SSD in RAID0
> JVM settings:
> - G1 GC
> - Xms32G, Xmx32G
> Data set:
> - ≈ 100Gb/per node
> - 1.3 Tb cluster-wide
> - ≈ 20Gb for all SASI indices
> C* settings:
> - concurrent_compactors: 1
> - compaction_throughput_mb_per_sec: 256
> - memtable_heap_space_in_mb: 2048
> - memtable_offheap_space_in_mb: 2048
> I created 9 SASI indices
> - 8 indices with text field, NonTokenizingAnalyser, PREFIX mode,
> case-insensitive
> - 1 index with numeric field, SPARSE mode
> After a while, the nodes just went OOM.
> I attach the log files. You can see a lot of GC happening while index segments
> are flushed to disk. At some point the node OOMs ...
> /cc [~xedin]
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)