Another potential issue is when a failure hits some of the mutations.
Are atomic batches in 1.2 designed to resolve this?

http://www.datastax.com/dev/blog/atomic-batches-in-cassandra-1-2

-Wei

----- Original Message -----
From: "aaron morton" <aa...@thelastpickle.com>
To: user@cassandra.apache.org
Sent: Sunday, January 13, 2013 7:57:56 PM
Subject: Re: How many BATCH inserts in to many?

With regard to a large number of records in a batch mutation, there are some
potential issues.


Each row becomes a task in the write thread pool on each replica. If a single
client sends 1,000 rows in a mutation, it will take time for the (default) 32
threads in the write pool to work through the mutations. While they are doing
this, other clients / requests will appear to be starved / stalled.


There are also issues with the max message size in thrift and cql over thrift. 


IMHO, as a rule of thumb, don't go over a few hundred rows per batch if you have
a high number of concurrent writers.
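To stay under that limit, client code can split a large mutation list into smaller batches before sending each one. A minimal sketch in plain Python (the names `chunk_mutations`, `rows`, and the chunk size of 200 are illustrative, not part of any driver API):

```python
# Hypothetical helper: split a large list of row mutations into
# batches of at most `chunk_size` rows, per the rule of thumb above.
def chunk_mutations(mutations, chunk_size=200):
    """Yield successive slices of at most chunk_size mutations."""
    for i in range(0, len(mutations), chunk_size):
        yield mutations[i:i + chunk_size]

# Example: 1,000 rows become 5 batches of 200, each sent as its
# own (smaller) batch mutation instead of one huge one.
rows = [{"key": k, "value": "v%d" % k} for k in range(1000)]
batches = list(chunk_mutations(rows))
```

Each resulting batch would then be submitted separately, keeping the per-request load on the write thread pool bounded.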


Cheers 

----------------- 
Aaron Morton 
Freelance Cassandra Developer 
New Zealand 


@aaronmorton 
http://www.thelastpickle.com 


On 14/01/2013, at 12:56 AM, Radim Kolar < h...@filez.com > wrote: 


Do not use Cassandra for implementing a queueing system with high throughput. It
does not scale because of tombstone management. Use HornetQ; it's an amazingly
fast broker, but it has quite slow persistence if you want to create queues
significantly larger than your memory and use selectors for searching for
specific messages in them.

My point is: for implementing a queue, a message broker is what you want.

