Hi,

I keep seeing the error message below in my log files. Can someone explain what 
it means and how to prevent it?

 INFO [OptionalTasks:1] 2013-09-07 13:46:27,160 MeteredFlusher.java (line 58) flushing high-traffic column family CFS(Keyspace='products', ColumnFamily='product') (estimated 3145728 bytes)
ERROR [CompactionExecutor:2] 2013-09-07 13:46:27,163 CassandraDaemon.java (line 192) Exception in thread Thread[CompactionExecutor:2,1,main]
java.lang.AssertionError: incorrect row data size 132289 written to /var/lib/cassandra/data/products/product/products-product-tmp-ic-177-Data.db; correct is 132382
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:162)
....


I am doing high-frequency inserts into a 3-node cluster (2 GB RAM per node) and the C* processes keep crashing.
My write pattern is a series of 70-column rows, with 5-20 consecutive rows belonging to the same wide row. Maybe that is causing too-frequent compactions at high volume?
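To make the write pattern concrete, here is a minimal Python sketch that generates the kind of statements described above: for each wide row (partition), 5-20 consecutive ~70-column inserts. The keyspace/table come from the log; the column and key names (pkey, ckey, c0...c69) are hypothetical placeholders, and the script only builds the statement strings, it does not talk to a cluster.

```python
import random

def make_inserts(num_partitions=3, num_cols=70, seed=42):
    """Sketch of the write pattern: 5-20 consecutive rows per wide row,
    ~70 columns each. Column/key names are made up for illustration."""
    rng = random.Random(seed)
    col_names = ", ".join(f"c{i}" for i in range(num_cols))
    placeholders = ", ".join("?" for _ in range(num_cols))
    stmts = []
    for p in range(num_partitions):
        # 5-20 consecutive rows land in the same partition (wide row)
        for r in range(rng.randint(5, 20)):
            stmts.append(
                f"INSERT INTO products.product (pkey, ckey, {col_names}) "
                f"VALUES ({p}, {r}, {placeholders})"
            )
    return stmts

stmts = make_inserts()
print(len(stmts), "statements generated")
```

Bursts like this inflate memtables quickly on a 2 GB heap, which matches the MeteredFlusher "high-traffic column family" line in the log.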

Jan
