Yes, that's correct - and that's a scaled number.  In practice:

On my local dev machine, inserting 10,000 columns (for 1 row) via a CQL3
BATCH took 1.5 minutes.  50,000 columns (the desired amount) in a BATCH
took 7.5 minutes.  The same functionality via Thrift took _235
milliseconds_.  That's almost 2,000 times faster (3 orders of magnitude)!
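
For the curious, the Thrift side is essentially one batch_mutate call
carrying all of the columns for a single row key.  Here's a minimal sketch
of that style of test - not my actual benchmark code; the host, keyspace,
row key, and column family names are made up for illustration:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.cassandra.thrift.Mutation;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class WideRowThriftSketch {

    public static void main(String[] args) throws Exception {
        // Illustrative host/port and keyspace, not the real test setup.
        TFramedTransport transport =
                new TFramedTransport(new TSocket("localhost", 9160));
        Cassandra.Client client =
                new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();
        client.set_keyspace("my_keyspace");

        long timestamp = System.currentTimeMillis() * 1000; // microseconds
        List<Mutation> mutations = new ArrayList<Mutation>(50000);
        for (int i = 0; i < 50000; i++) {
            Column col = new Column();
            col.setName(ByteBuffer.wrap(("col" + i).getBytes("UTF-8")));
            col.setValue(ByteBuffer.wrap(("val" + i).getBytes("UTF-8")));
            col.setTimestamp(timestamp);

            ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
            cosc.setColumn(col);

            Mutation mutation = new Mutation();
            mutation.setColumn_or_supercolumn(cosc);
            mutations.add(mutation);
        }

        // All 50,000 columns target one row key, in one round trip.
        ByteBuffer rowKey = ByteBuffer.wrap("row1".getBytes("UTF-8"));
        Map<String, List<Mutation>> cfMutations =
                new HashMap<String, List<Mutation>>();
        cfMutations.put("wide_cf", mutations);
        Map<ByteBuffer, Map<String, List<Mutation>>> mutationMap =
                new HashMap<ByteBuffer, Map<String, List<Mutation>>>();
        mutationMap.put(rowKey, cfMutations);

        client.batch_mutate(mutationMap, ConsistencyLevel.ONE);
        transport.close();
    }
}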

However, according to Aleksey Yeschenko, this performance problem has been
addressed in 2.0 beta 1 via
https://issues.apache.org/jira/browse/CASSANDRA-4693.

I'll reserve judgement until I can performance-test 2.0 beta 1 ;)
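
When I do, my understanding is the client-side shape of the test would be
something like the following, using the DataStax Java driver 2.0's
BatchStatement with a prepared statement (again just a sketch under that
assumption - the contact point, keyspace, and table names are made up):

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class WideRowNativeBatchSketch {

    public static void main(String[] args) {
        // Illustrative contact point and schema, for the sketch only.
        Cluster cluster =
                Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace");

        // Prepare the INSERT once; bind it 50,000 times into one batch.
        PreparedStatement insert = session.prepare(
                "INSERT INTO wide_row (key, column1, value) VALUES (?, ?, ?)");

        BatchStatement batch = new BatchStatement();
        for (int i = 0; i < 50000; i++) {
            batch.add(insert.bind("row1", "col" + i, "val" + i));
        }

        long start = System.currentTimeMillis();
        session.execute(batch);
        System.out.println("elapsed: "
                + (System.currentTimeMillis() - start) + " ms");

        cluster.close();
    }
}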

Cheers,

--
Les Hazlewood | @lhazlewood
CTO, Stormpath | http://stormpath.com | @goStormpath | 888.391.5282

On Fri, Aug 30, 2013 at 12:50 PM, Alex Popescu <al...@datastax.com> wrote:

> On Fri, Aug 30, 2013 at 11:56 AM, Vivek Mishra <mishra.v...@gmail.com> wrote:
>
>> @lhazlewood
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-5959
>>
>> BEGIN BATCH
>>
>>     -- multiple INSERT statements
>>
>> APPLY BATCH;
>>
>> Doesn't that work for you?
>>
>> -Vivek
>>
>>
> According to the OP, batching inserts is slow. The SO thread [1] mentions
> that in their environment a BATCH takes 1.5 minutes, while the Thrift-based
> approach takes around 235 milliseconds.
>
> [1]
> http://stackoverflow.com/questions/18522191/using-cassandra-and-cql3-how-do-you-insert-an-entire-wide-row-in-a-single-reque
> --
>
> :- a)
>
>
> Alex Popescu
> Sen. Product Manager @ DataStax
> @al3xandru
>
