Is anyone here on the list interested in helping work on the next version of
the benchmark?
I'd love some assistance, and you can potentially get your name on the
document as an author :)
Feel free to reach out; we're always looking for new contributors, and you can
check them out
From: onmstester onmstester [mailto:onmstes...@zoho.com]
Sent: Monday, March 12, 2018 12:50 PM
To: user <user@cassandra.apache.org>
Subject: RE: yet another benchmark bottleneck
No luck even with 320 threads for write. As I mentioned, I already tested
increasing client threads, many stress-client instances on one node, and two
stress clients on two separate nodes; in all of them the sum of throughputs is
less than 130K. I've been tuning all aspects of the OS
and Cassandra (whatever I've seen
What happens if you increase the number of client threads?
Can you add another instance of cassandra-stress on another host?
--
Jacques-Henri Berthemet
From: onmstester onmstester [mailto:onmstes...@zoho.com]
Sent: Monday, March 12, 2018 12:50 PM
To: user <user@cassandra.apache.org>
Subject: RE: yet another benchmark bottleneck
From: onmstester onmstester [mailto:onmstes...@zoho.com]
Sent: Monday, March 12, 2018 12:08 PM
To: user <user@cassandra.apache.org>
Subject: RE: yet another benchmark bottleneck
RF=1
No errors or warnings.
Actually it's 300 Mbit/second and 130K op/second; I missed a 'K' in the first
mail. But anyway, the point is: more than half of the node's resources (CPU, memory,
disk
From: onmstester onmstester [mailto:onmstes...@zoho.com]
Sent: Monday, March 12, 2018 11:38 AM
To: user <user@cassandra.apache.org>
Subject: RE: yet another benchmark bottleneck
1.2 TB, 15K RPM disks.
Latency reported by the stress tool is 7.6 ms; disk latency is 2.6 ms.
Sent using Zoho Mail
Any errors/warnings in the Cassandra logs? What's your RF?
Using 300MB/s of network bandwidth for only 130 op/s looks very high.
--
Jacques-Henri Berthemet
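For a rough sanity check of the numbers in this thread (my arithmetic, not from the original mails): dividing the observed network bandwidth by the operation rate gives the average wire cost per operation, which shows why 300 Mbit/s is plausible once the rate is read as 130K op/s rather than 130 op/s. A minimal sketch:

```python
# Per-operation bandwidth check for the figures quoted in this thread
# (300 Mbit/s of network traffic at ~130K writes/s).
MBIT = 1_000_000 / 8  # bytes per megabit

bandwidth_bytes_per_s = 300 * MBIT  # 300 Mbit/s -> 37.5 MB/s
ops_per_s = 130_000                 # corrected rate (130K, not 130)

bytes_per_op = bandwidth_bytes_per_s / ops_per_s
print(f"{bytes_per_op:.0f} bytes on the wire per operation")  # -> 288 bytes
```

At roughly 288 bytes per write, the bandwidth is consistent with small stress-tool payloads plus protocol overhead, so the corrected rate makes the 300 Mbit/s figure look reasonable rather than "very high".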
--
Jacques-Henri Berthemet
From: onmstester onmstester [mailto:onmstes...@zoho.com]
Sent: Monday, March 12, 2018 10:48 AM
To: user <user@cassandra.apache.org>
Subject: Re: yet another benchmark bottleneck
Running two instances of Apache Cassandra on the same server, each having their own
What's your disk latency? What kind of disk is it?
Would help to know your version. 130 ops/second sounds like a ridiculously low
rate. Are you doing a single-host test?
On Sun, Mar 11, 2018 at 10:44 PM, onmstester onmstester
<onmstes...@zoho.com> wrote:
> I'm going to benchmark Cassandra's write throughput on a node with the
> following spec:
I'm going to benchmark Cassandra's write throughput on a node with the following
spec:
CPU: 20 cores
Memory: 128 GB (32 GB as Cassandra heap)
Disk: 3 separate disks for OS, data and commitlog
Network: 10 Gb (tested with iperf)
OS: Ubuntu 16
Running cassandra-stress:
cassandra-stress
…is invaluable for really understanding any future production deployment.
On Sun, Sep 13, 2015 at 9:25 PM, Kevin Burton wrote:
> I’m trying to benchmark two scenarios…
I’m trying to benchmark two scenarios…
10 columns with 150 bytes each
vs
150 columns with 10 bytes each.
The total row “size” would be 1500 bytes (ignoring overhead).
Our app uses 150 columns so I’m trying to see if packing it into a JSON
structure using one column would improve performance
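To make the two scenarios concrete, here is a back-of-the-envelope sketch. The 15-byte per-cell overhead is an assumed figure for illustration only, not a measured Cassandra constant (real overhead depends on version and schema); the point is that 150 small cells pay per-cell costs fifteen times as often as 10 large ones:

```python
# Back-of-the-envelope comparison of the two row layouts from the thread.
# PER_CELL_OVERHEAD is an ASSUMED illustrative value, not a Cassandra
# constant; actual per-cell overhead varies by version and schema.
PER_CELL_OVERHEAD = 15  # assumed bytes of name/timestamp/flags per cell

def row_size(n_columns: int, bytes_per_column: int) -> int:
    """Payload plus assumed per-cell overhead for one row."""
    return n_columns * (bytes_per_column + PER_CELL_OVERHEAD)

narrow = row_size(10, 150)  # 10 columns x 150 bytes each
wide = row_size(150, 10)    # 150 columns x 10 bytes each

# Same 1500-byte payload, very different stored totals:
print(narrow, wide)  # -> 1650 3750
```

Under this (assumed) overhead the 150-column layout stores more than twice as many bytes for the same payload, which is the intuition behind packing the columns into a single JSON blob.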
…implementation can get the best performance out of
Cassandra in future benchmark rounds.
Any review comments and pull requests would be welcome. The code can be
found on Github:
https://github.com/TechEmpower/FrameworkBenchmarks
https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frame
Today I've also seen this benchmark on Chinese websites. "SequoiaDB" seems
to come from a Chinese startup company, and in the db-engines ranking
<http://db-engines.com/en/ranking> its score is 0.00. So IMO I have to say
I think this benchmark is a "soft sell". They
Hi,
I'm always interested in such benchmark experiments, because the
databases evolve so fast that the race is always open and there is a
lot of motion in the field.
And of course I asked myself the same question. And I think that this
publication is unreliable, for 4 reasons (from reading very
I just read this benchmark PDF; does anyone have an opinion about it?
I think it's not fair to Cassandra.
url: http://www.bankmark.de/wp-content/uploads/2014/12/bankmark-20141201-WP-NoSQLBenchmark.pdf
http://msrg.utoronto.ca/papers/NoSQLBenchmark
to benchmark it
yourself.
From: Sai Kumar Ganji <saikumarganj...@gmail.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Wednesday, February 20, 2013 6:20 PM
To: "user@cassandra.apache.org"
#mysql, #benchmark
--
Thanks & Regards
Venkata Sai Ganji
Graduate Student
Dept of Computer Science
Montana State University - Bzn
If your data fits into memory you probably do not need NoSQL.
You may also notice the company that produced the benchmark is a Cloudera
partner, so they "forgot" to show how much faster couchdb is than hbase in
this scenario, but were more than happy to show you how much "faster
If the dataset fits into memory, and the data used in the test almost fits into
memory, then Cassandra is slow compared to other leading NoSQL databases;
it can go up to a 10:1 ratio. Check the Infinispan benchmarks. A common use
pattern is to use memcached on top of Cassandra.
Cassandra is good if you have way mor
> -Original Message-
> From: Radim Kolar [mailto:h...@filez.com]
> Sent: Tuesday, December 11, 2012 17:42
> To: user@cassandra.apache.org
> Subject: cassandra vs couchbase benchmark
>
> http://www.slideshare.net/Couchbase/benchmarking-couchbase#btnNext
Hi, I posted this message last month and I promised to put up a public
repository with all of our configuration details.
You can find it at https://github.com/vCider/BenchmarksCassandra
We've built a completely automated system with Puppet that configures EC2
instances with Cassandra as well as
longer the process is running.
The throughput difference between 10 and 50 is less than 1%.
All seems fine.
Aaron
On 2 Sep 2010, at 18:59, ChingShen wrote:
> Hi Daniel,
>
> I have 4 nodes in my cluster, and run a benchmark on node A in Java.
> P.S. Replication =
1000 and 1 records take too short a time to really benchmark anything. You
will use 2 seconds just for stuff like TCP window sizes adjusting to the
level where you get full throughput.
The difference between 100k and 500k is less than 10%. Could be anything:
filesystem caches, sizes of memtables
Batch-mutate insert? It could be the package size that differs, if not the
number of threads sending data to the Cassandra nodes.
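The warm-up effect described above (one-time costs like connection setup and TCP window growth skewing very short runs) can be sketched with a tiny timing harness that discards the first iterations before averaging. This is a generic illustration, not the code used in the thread:

```python
import time

def bench(fn, iterations=1000, warmup=100):
    """Time fn() per call, discarding warm-up iterations so one-time
    costs (connection setup, TCP window growth, cache fill) don't
    skew the average."""
    for _ in range(warmup):
        fn()  # results intentionally discarded
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    elapsed = time.perf_counter() - start
    return elapsed / iterations  # mean seconds per call

# Example with a trivial in-process operation; a real client benchmark
# would call the database insert here instead.
per_call = bench(lambda: sum(range(100)))
print(f"{per_call * 1e6:.2f} microseconds per call")
```

With only 1000 records, the warm-up portion dominates, which is why the numbers in this thread are hard to interpret.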
From: ChingShen [mailto:chingshenc...@gmail.com]
Sent: 2 September 2010 08:59
To: user@cassandra.apache.org
Subject: Re: about insert benchmark
Hi Daniel,
I have 4 nodes in my
Sorry, my Cassandra version is 0.6.4.
Hi Daniel,
I have 4 nodes in my cluster, and run a benchmark on node A in Java.
P.S. Replication = 3
Shen
On Thu, Sep 2, 2010 at 2:49 PM, vineet daniel wrote:
> Hi Ching
>
> You are inserting using PHP, Perl, Python, Java, or...? And is Cassandra
> installed locally or on a networ
ts on getting better results :-) .
___
Regards
Vineet Daniel
+918106217121
___
Let your email find you
On Thu, Sep 2, 2010 at 11:39 AM, ChingShen wrote:
> Hi all,
>
> I run a benchmark with my own code and found t
Hi all,
I ran a benchmark with my own code and found that the 10 inserts
performance is better than the others. Why?
Can anyone explain it?
Thanks.
Partitioner = OPP
CL = ONE
==
1000 records
insert one: 201 ms
insert per: 0.201 ms
insert thput
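As a quick cross-check of the figures above, reading "insert one: 201 ms" as the total run time for the 1000 records (which matches the 0.201 ms per-insert figure), the implied throughput is about 5K inserts/s:

```python
# Derive throughput from the figures reported above, assuming
# "insert one: 201 ms" is the total run time for all 1000 records.
records = 1000
total_ms = 201.0
per_insert_ms = total_ms / records   # 0.201 ms, matching "insert per"

throughput = 1000.0 / per_insert_ms  # inserts per second
print(f"{per_insert_ms} ms/insert -> {throughput:.0f} inserts/s")
```

About 4975 inserts/s for a single-threaded client on a 4-node cluster; as noted elsewhere in the thread, a run this short says more about warm-up than about Cassandra.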
…tuning parameters to improve any of the metrics reported."
On Fri, May 7, 2010 at 8:09 PM, Kristian Eide wrote:
> There is a benchmark comparing Cassandra to Voldemort performance here:
>
> http://blog.medallia.com/2010/05/choosing_a_keyvalue_storage_sy.html
>
> --
> Kristian
>
To: user@cassandra.apache.org
Subject: Cassandra vs. Voldemort benchmark
There is a benchmark comparing Cassandra to Voldemort performance here:
http://blog.medallia.com/2010/05/choosing_a_keyvalue_storage_sy.html
--
Kristian
Hell yeah!
--
Jeff
On Fri, Apr 23, 2010 at 10:59 AM, Brian Frank Cooper
wrote:
> Yahoo! Research is pleased to announce the release of the Yahoo! Cloud
> Serving Benchmark, YCSB v. 0.1.0, as an open source package. YCSB is a
> common benchmarking framework for cloud database, storage and serving systems.
Yahoo! Research is pleased to announce the release of the Yahoo! Cloud Serving
Benchmark, YCSB v. 0.1.0, as an open source package. YCSB is a common
benchmarking framework for cloud database, storage and serving systems. Results
for benchmarking HBase, Cassandra, PNUTS and MySQL will be