can you reproduce using stress.py?
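
For the read side that would be something along the lines of the command
below; the flags here are from memory and have changed between versions, so
treat them as an assumption and check stress.py --help in your tree first:

    stress.py -o read -n 10000000 -t 50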

On Tue, Dec 8, 2009 at 10:16 AM, Timo Nentwig <[email protected]> wrote:
> On Dec 7, 2009, at 7:23 PM, Jonathan Ellis wrote:
>
>> same thing, you are going to need multiple threads to max it out
>
> I created up to 100 threads and read randomly. There was some speedup, but
> nothing worth mentioning, and the threads didn't load the CPU noticeably
> either. The thread dump was full of Thrift (TBinaryProtocol)
> InputStream.read() calls, with a constant 10 MiB/s of reads from the HDD.
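
A rough, self-contained sketch of that kind of threaded random-read loop;
read_one() here is only a placeholder for whatever Thrift client call the
test actually issues (it is not the Cassandra API), and the key format and
counts are assumed from the test described below:

    import random
    import threading
    import time

    NUM_KEYS = 10_000_000       # assumed to match the number of keys inserted
    NUM_THREADS = 100
    READS_PER_THREAD = 1_000

    def read_one(key):
        # placeholder: substitute the real single-key read (e.g. a Thrift get)
        pass

    def worker():
        for _ in range(READS_PER_THREAD):
            n = random.randrange(NUM_KEYS)
            read_one("testInsertAndGetAndRemove_%d" % n)

    threads = [threading.Thread(target=worker) for _ in range(NUM_THREADS)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    print("aggregate reads/s: %.0f" % (NUM_THREADS * READS_PER_THREAD / elapsed))

The point of the pattern is that with real network I/O each thread spends
most of its time waiting on the socket, so the client can keep many requests
in flight at once.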
>
>> but yes, reads are typically slower than writes in cassandra because
>> of how the log-based merge structures work
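
Purely as an illustration of that asymmetry (conceptual only, not Cassandra's
actual code): in a log-structured design a write just appends to the commit
log and updates the in-memory memtable, while a read may have to check the
memtable and then one or more on-disk SSTables:

    commitlog = []              # append-only, sequential writes
    memtable = {}               # in-memory copy of recent writes
    sstables = [{}, {}, {}]     # immutable files flushed from the memtable

    def write(key, value):
        commitlog.append((key, value))   # one sequential append
        memtable[key] = value            # one in-memory update

    def read(key):
        if key in memtable:              # fast path: recently written data
            return memtable[key]
        for sstable in sstables:         # may mean several files / disk seeks
            if key in sstable:
                return sstable[key]
        return None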
>>
>> On Mon, Dec 7, 2009 at 11:09 AM, Timo Nentwig <[email protected]> wrote:
>>>
>>> On Dec 7, 2009, at 5:59 PM, Jonathan Ellis wrote:
>>>
>>>> yes and no -- that's about 4200/s, which is typical for only a single
>>>
>>> When writing, yes. But I would expect reading to be much faster (?).
>>> Re-executing the read test doesn't speed things up either, even though the
>>> I/O caches should be warm by then.
>>>
>>>> thread but 1/3 to 1/5 of what you'd expect it to max out (on our
>>>> quad-core test boxes) when you add client threads
>>>>
>>>> On Mon, Dec 7, 2009 at 10:38 AM, Timo Nentwig <[email protected]> 
>>>> wrote:
>>>>> Hi!
>>>>>
>>>>> I just downloaded and installed Cassandra, started it, and ran a very
>>>>> simple "benchmark": insert n entries with
>>>>> key==value==testInsertAndGetAndRemove_n (one thread).
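
A minimal sketch of that single-threaded loop, with insert() and get() as
placeholders for the client calls actually used (not a real API):

    N = 10_000_000

    def insert(key, value):
        pass    # placeholder for the actual write call

    def get(key):
        pass    # placeholder for the actual read call

    for n in range(N):                              # write phase
        k = "testInsertAndGetAndRemove_%d" % n
        insert(k, k)                                # key == value

    for n in range(N):                              # read phase, one by one
        get("testInsertAndGetAndRemove_%d" % n)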
>>>>>
>>>>> For n==10 million on a 7200rpm HDD (4 GB RAM - there should have been a
>>>>> "reasonable" amount of free memory, though I didn't check) this took
>>>>> 40 min, insert()ing one row after another. Reading them back one by one
>>>>> in sequence delivers about 100 reads/s; reading in batches of 1,000
>>>>> (i.e. multigetColumn()) takes 5-10 s per batch (depending on n: the
>>>>> higher n, the slower it gets).
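
The arithmetic behind the ~4200/s figure quoted above: 10,000,000 inserts in
40 minutes is 10,000,000 / 2,400 s ≈ 4,167 inserts/s. On the read side, ~100
single-key reads/s and 5-10 s per 1,000-key multiget both come out to roughly
100-200 keys/s.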
>>>>>
>>>>> Are these typical numbers for Cassandra (0.5)? I left the configuration
>>>>> untouched.
>>>
>>>
>
>
