> 1. Changing consistency level configurations from Write.ALL + Read.ONE
> to Write.ALL + Read.ALL increases write latency (expected) and
> decreases read latency (unexpected).

When you tested at CL.ONE, was read repair turned on?

The two ways I can think of right now by which read latency might
improve are:

* You're benchmarking at saturation (rather than at reasonable
capacity), and the decreased overall throughput (and thus load) causes
better latencies (see the sketch after this list).
* No (or low) read repair could lead to a different selection of read
endpoints by the co-ordinator, such as a node never getting the chance
to be snitched "close" due to lack of traffic.
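
To make the saturation point testable: here's a rough sketch, using
today's DataStax Python driver (which postdates this thread), of
sampling read latency at both consistency levels with exactly one
request in flight, so the cluster stays well below saturation. The
table bench.tbl and the key are placeholders, not from your setup.

import time

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

def sample_latencies(cl, n=1000):
    # One synchronous request at a time: we measure latency at low
    # load, not throughput at saturation.
    stmt = SimpleStatement("SELECT * FROM bench.tbl WHERE id = 42",
                           consistency_level=cl)
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        session.execute(stmt)
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2], samples[int(n * 0.99)]

for name, cl in (("ONE", ConsistencyLevel.ONE),
                 ("ALL", ConsistencyLevel.ALL)):
    p50, p99 = sample_latencies(cl)
    print("CL.%s: p50=%.2fms p99=%.2fms" % (name, p50 * 1e3, p99 * 1e3))

cluster.shutdown()

If ALL still beats ONE under those conditions, the saturation theory
is out and a snitch/endpoint-selection effect becomes more likely.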

In addition, I should mention that it *would* be highly expected to
see better latencies at ALL if
https://issues.apache.org/jira/browse/CASSANDRA-2540 were done (same
with QUORUM).

> 2. Changing from a single-region Cassandra cluster to a multi-region
> Cassandra cluster on EC2 significantly increases write latency (approx
> 2x, again expected) but slightly decreases read latency (approx -10%,
> again unexpected).

Are all benchmarking clients in a single region? I could imagine some
kind of snitch effect here, possibly. But not if your benchmarking
clients are connecting to random co-ordinators (assuming the
inter-region latency is in fact worse ;)).

Something is highly likely bogus IMO. If you have significantly higher
latencies across regions, there's just no way CL.ALL would get you
lower latencies than CL.ONE: at ALL the co-ordinator has to wait for
the slowest replica, so the worst inter-region round trip puts a hard
floor on read latency. Even if the clients selected hosts randomly
across the entire cluster, all requests that happen to go to a
region-local co-ordinator should see better latencies at ONE. That's
ASSUMING that you've set up the multi-region cluster as multi-DC in
Cassandra (a sketch of what I mean by that follows).
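
Expressed with today's Python driver and CQL; the snitch, the DC names
(us-east, us-west), and the keyspace name are my assumptions, so adapt
to your cluster:

from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

# Assumes every node's cassandra.yaml sets
#   endpoint_snitch: Ec2MultiRegionSnitch
# so that EC2 regions show up as Cassandra data centers.
cluster = Cluster(
    ["10.0.0.1"],
    load_balancing_policy=TokenAwarePolicy(
        DCAwareRoundRobinPolicy(local_dc="us-east")),
)
session = cluster.connect()

# NetworkTopologyStrategy is what gives Cassandra per-DC replica
# placement; SimpleStrategy would ignore the region boundaries.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS bench
    WITH replication = {'class': 'NetworkTopologyStrategy',
                        'us-east': 3, 'us-west': 3}
""")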

If it's not set up as multi-DC, Cassandra would have no knowledge of
topology and relative latency beyond what is driven by traffic, and I
could imagine this happening if read repair were turned completely off.
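
One way to test the read repair theory, again sketched with the modern
driver; this assumes a pre-4.0 cluster (read_repair_chance was removed
in Cassandra 4.0), and the table name is a placeholder:

from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1"])
session = cluster.connect()

# A nonzero chance means a fraction of reads touch all replicas in the
# background, which both repairs data and feeds the dynamic snitch
# latency samples for otherwise-idle endpoints.
session.execute("ALTER TABLE bench.tbl WITH read_repair_chance = 0.1")
cluster.shutdown()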

-- 
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)
