I recently tested 0.7.0beta2 with good results on a reasonably powerful machine: 8 Xeon cores (16 with hyperthreading), 64 GB of memory, and some nice HP RAID. So far so good. But when I moved the same config to a basically identical box with 128 GB of memory, Cassandra started responding to writes by creating a plethora of teeny tiny sstables, something it had not done before. For example:
INFO [FLUSH-WRITER-POOL:1] 2010-10-27 01:53:27,881 Memtable.java (line 150) Writing memtable-standa...@950886315(651 bytes, 18 operations)
INFO [MUTATION_STAGE:8] 2010-10-27 01:53:27,882 ColumnFamilyStore.java (line 459) switching in a fresh Memtable for Standard1 at CommitLogContext(file='/var/lib/cassandra/commitlog/_chip/CommitLog-1288169534132.log', position=27598)
INFO [MUTATION_STAGE:8] 2010-10-27 01:53:27,883 ColumnFamilyStore.java (line 771) Enqueuing flush of memtable-standa...@1383310803(384 bytes, 10 operations)
INFO [FLUSH-WRITER-POOL:1] 2010-10-27 01:53:27,902 Memtable.java (line 157) Completed flushing /var/lib/cassandra/data/_chip/Keyspace1/Standard1-e-7-Data.db
INFO [FLUSH-WRITER-POOL:1] 2010-10-27 01:53:27,903 Memtable.java (line 150) Writing memtable-standa...@996627145(360 bytes, 10 operations)
INFO [MUTATION_STAGE:3] 2010-10-27 01:53:27,927 ColumnFamilyStore.java (line 459) switching in a fresh Memtable for Standard1 at CommitLogContext(file='/var/lib/cassandra/commitlog/_chip/CommitLog-1288169534132.log', position=31446)
INFO [MUTATION_STAGE:3] 2010-10-27 01:53:27,927 ColumnFamilyStore.java (line 771) Enqueuing flush of memtable-standa...@1909350010(957 bytes, 26 operations)
INFO [FLUSH-WRITER-POOL:1] 2010-10-27 01:53:27,932 Memtable.java (line 157) Completed flushing /var/lib/cassandra/data/_chip/Keyspace1/Standard1-e-8-Data.db
INFO [FLUSH-WRITER-POOL:1] 2010-10-27 01:53:27,933 Memtable.java (line 150) Writing memtable-standa...@1383310803(384 bytes, 10 operations)
INFO [CompactionExecutor:1] 2010-10-27 01:53:27,934 CompactionManager.java (line 233) Compacting [org.apache.cassandra.io.sstable.SSTableReader(path='/var/lib/cassandra/data/_chip/Keyspace1/Standard1-e-5-Data.db'),org.apache.cassandra.io.sstable.SSTableReader(path='/var/lib/cassandra/data/_chip/Keyspace1/Standard1-e-6-Data.db'),org.apache.cassandra.io.sstable.SSTableReader(path='/var/lib/cassandra/data/_chip/Keyspace1/Standard1-e-7-Data.db'),org.apache.cassandra.io.sstable.SSTableReader(path='/var/lib/cassandra/data/_chip/Keyspace1/Standard1-e-8-Data.db')]
INFO [MUTATION_STAGE:17] 2010-10-27 01:53:27,936 ColumnFamilyStore.java (line 459) switching in a fresh Memtable for Standard1 at CommitLogContext(file='/var/lib/cassandra/commitlog/_chip/CommitLog-1288169534132.log', position=33074)
INFO [MUTATION_STAGE:17] 2010-10-27 01:53:27,936 ColumnFamilyStore.java (line 771) Enqueuing flush of memtable-standa...@595066677(396 bytes, 11 operations)
INFO [FLUSH-WRITER-POOL:1] 2010-10-27 01:53:28,037 Memtable.java (line 157) Completed flushing /var/lib/cassandra/data/_chip/Keyspace1/Standard1-e-9-Data.db

I tried lowering the available memory reported by bin/cassandra:

system_memory_in_mb=`free -m | awk '/Mem:/ {print $2}'`
[ "$system_memory_in_mb" -gt 65536 ] && system_memory_in_mb=65536  #<<<< new

Didn't help. I also tried manually setting the max memtable sizes:

binary_memtable_throughput_in_mb: 512
memtable_throughput_in_mb: 4096

Didn't help. Help?
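In case it helps anyone reproduce this, a quick way to quantify the flood of tiny files is to total up the sstable sizes on disk. This is just a diagnostic sketch; `sstable_summary` is a made-up helper name, and the example path is taken from the log above:

```shell
# Hypothetical helper: count the *-Data.db sstables in a data directory
# and total their sizes, to show how small the flushed files really are.
sstable_summary() {
    dir="$1"
    # ls -l prints the file size in column 5; count and sum with awk.
    ls -l "$dir"/*-Data.db 2>/dev/null | \
        awk '{ total += $5; n += 1 } END { printf "%d sstables, %d bytes total\n", n, total }'
}

# e.g. (path from the log above):
# sstable_summary /var/lib/cassandra/data/_chip/Keyspace1
```

On the 128G box this reports hundreds of sub-kilobyte files within minutes of starting the write load.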