If you’re after benchmarks that someone else has already run to help
estimate sizing, we regularly publish benchmark results for various cloud
provider instance types.
For example, see:
https://www.instaclustr.com/announcing-instaclustr-support-for-aws-i3en-instances/
Cassandra, being a scale-out database, can sustain essentially any
records-per-hour target if you add enough nodes.
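As a quick back-of-the-envelope check on the 600 million records/hour target from the question below, the arithmetic looks like this (the 30,000 writes/s per-node figure is an assumed placeholder for illustration, not a measured number; substitute whatever you measure on your own data model and hardware):

```python
import math

target_per_hour = 600_000_000
target_per_second = target_per_hour / 3600  # cluster-wide write rate needed

# Assumed sustained per-node write throughput -- measure your own!
assumed_node_writes_per_second = 30_000

nodes_needed = math.ceil(target_per_second / assumed_node_writes_per_second)
print(round(target_per_second), nodes_needed)
```

Under that (hypothetical) per-node figure, the target works out to roughly 166,667 writes/s cluster-wide, i.e. about 6 nodes, before accounting for replication and headroom.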
The best way to estimate this for your given data model is to find the
maximum throughput of a single node by scaling the number of clients until
you start seeing errors (or hit your latency SLA), then pull back slightly
and multiply out to the cluster size you need.
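A sketch of that approach with cassandra-stress (the row count, thread count, and node address here are placeholders, not recommendations; rerun with a higher threads= value each time until errors or SLA violations appear):

```shell
# Write 10M rows against a single node; ramp -rate threads= between runs
# until you see errors or blow your latency SLA, then back off slightly.
# <node-ip> is a placeholder for one of your Cassandra nodes.
cassandra-stress write n=10000000 cl=QUORUM \
    -rate threads=100 \
    -node <node-ip> \
    -log file=stress-run.log
```

Note that the built-in write workload uses a synthetic schema; for a realistic number you would want a user profile YAML matching your actual data model.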
Have you tried YCSB?
It is a tool from Yahoo! for stress-testing NoSQL databases.
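For reference, a typical YCSB load phase against Cassandra looks roughly like this (the host address, record count, and thread count are placeholders; this assumes the `usertable` table YCSB expects already exists in its keyspace):

```shell
# Load phase: insert recordcount rows as defined by workloada.
# hosts is a comma-separated list of Cassandra contact points (placeholder IP).
bin/ycsb load cassandra-cql -P workloads/workloada \
    -p hosts="10.0.0.1" \
    -p recordcount=1000000 \
    -threads 64
```

A matching `run` phase with the same workload file then exercises the read/write mix and reports throughput and latency percentiles.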
On Tue, Aug 20, 2019 at 3:34 AM wrote:
Hi Everyone,
Has anyone used cassandra-stress before? I want to test whether it’s possible
to load 600 million records per hour into Cassandra, or
find a better way to optimize Cassandra for this use case.
Any help will be highly appreciated.
Hi,
Not yet fixed, probably Q4 2019.
Please find more information in this thread:
https://lists.apache.org/thread.html/246a5d79240b7701455360d650de7acb11c66e53d007babe206fe0a7@%3Cdev.cassandra.apache.org%3E