Hi,
Tried to use n=.. instead of duration=..., as one of the users suggested.
Indeed, there is more data on the disks now.
I managed to fill the data disks up to 4%, and then got an error.
Questions:
1) Maybe you can hint at what that error could mean?
2) When I see everything during the test, printed 3
I do not know if it can really help in your situation,
but from the NGCC notes I discovered the existence of GatlingCql
(https://github.com/gatling-cql/GatlingCql) as an alternative to
cassandra-stress.
In particular, you can tweak the data-generation part a bit.
giampaolo
2016-06-16 10:33
Thank you, guys.
I will try all the proposals.
The limitation mentioned by Benedict is a big one,
but there is still something that can be done to work around it.
From: Peter Kovgan
Sent: Wednesday, June 15, 2016 3:25 PM
To: 'user@cassandra.apache.org'
Subject: how to force cassandra-stress to actually generate enough
cassandra-stress has some (many) limitations that I had planned to
address now that it's seeing wider adoption, but since I no longer work
on the project for my day job I am unlikely to now... so, sorry, but
you'll have to tolerate them :)
In particular, the problem you encounter here is that a given
Are you running with n=[number ops] or duration=[xx]? I've found you need
to use n= when inserting data. When you use duration, cassandra-stress
defaults to 1,000,000 somethings (to be honest, I'm not entirely sure
whether the 1,000,000 refers to rows, partitions, or something else).
I usually do a write-only bench run first; doing 1B write iterations will
produce 200GB+ of data on disk. You can then do mixed tests.
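As a rough sanity check on the 200GB+ figure: assuming the default cassandra-stress schema writes somewhere around 250 bytes per row (an assumed ballpark, not a value measured in this thread), 1B rows works out to roughly 232GB per replica, before compression:

```shell
#!/bin/sh
# Back-of-envelope sizing for a 1B-row cassandra-stress write run.
# BYTES_PER_ROW is an assumed rough figure for the default schema,
# not something measured in this thread.
ROWS=1000000000
BYTES_PER_ROW=250
GB=$((ROWS * BYTES_PER_ROW / 1024 / 1024 / 1024))
echo "~${GB}GB per replica, before compression"
```

Actual on-disk size will of course vary with replication factor, compaction state, and compression.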
For instance, a write bench that would produce such a volume on a three-node cluster:
./tools/bin/cassandra-stress write cl=LOCAL_QUORUM n=10 -rate
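For reference, a complete pair of invocations along those lines might look like the following sketch; the node names, thread count, op count, and mixed ratio are illustrative placeholders, not values from this thread:

```shell
# Hypothetical example only: first populate with a fixed op count (n=),
# then run a mixed read/write benchmark against the populated data.
# node1,node2,node3 and threads=200 are placeholders.
./tools/bin/cassandra-stress write cl=LOCAL_QUORUM n=1000000000 \
    -rate threads=200 -node node1,node2,node3

./tools/bin/cassandra-stress mixed ratio\(write=1,read=3\) cl=LOCAL_QUORUM \
    duration=30m -rate threads=200 -node node1,node2,node3
```

Note the escaped parentheses in the mixed ratio: without the backslashes the shell would try to interpret them itself.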