Aren't you using the Mesos Cassandra framework to manage your multiple
clusters? (I saw a presentation at the Cassandra Summit.)
What's wrong with your current Mesos approach?
I also think it's better to split a large cluster into smaller ones,
except if you also manage the client layer that queries Cassandra and you can…
On 2017-02-20 05:52 (-0800), Edward Capriolo wrote:
> Not directly. Consider proxying requests through an application server and
> logging at that level.
As is often the case, Ed probably has the most straightforward solution given
the current feature set. The other option is to turn on tracing with probabil…
Older versions had a request scheduler API.
On Monday, February 20, 2017, Ben Slater wrote:
> We’ve actually had several customers where we’ve done the opposite: split
> large clusters apart to separate use cases. We found that this allowed us
> to better align hardware with use case requirements…
Hi,
When the C* coordinator writes to replicas, does it write them in the same
order or a different order? In other words, does replication happen
synchronously or asynchronously? Also, does this depend on whether the client
is sync or async? What happens in the case of concurrent writes to a
coordinator?
Thanks,
kant
Hi,
1. Are Cassandra triggers thread-safe? What happens if two writes invoke
the trigger and the trigger tries to modify the same row in a partition?
2. Has anyone used them successfully in production? If so, any issues? (I am
using the latest version of C*, 3.10.)
3. I have partitions that are abou…
We’ve actually had several customers where we’ve done the opposite: split
large clusters apart to separate use cases. We found that this allowed us
to better align hardware with use case requirements (for example, using AWS
c3.2xlarge for very hot data at low latency, m4.xlarge for more general
pu…
Hah! Found the problem!
After setting read_ahead to 0 and the compression chunk size to 4 KB on all
CFs, the situation was PERFECT (nearly; please see below)! I scrubbed some
CFs, but not the whole dataset yet. I knew the problem was not too little RAM.
Some stats:
- Latency of a quite large CF: https://cl.ly/1r3
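For reference, a sketch of the tuning described above. The device, keyspace, and table names are placeholders (the thread does not name them), and the exact compressor class is an assumption:

```shell
# Disable read-ahead on the data disk (device name is an assumption).
sudo blockdev --setra 0 /dev/sda

# Shrink the compression chunk size to 4 KB; keyspace/table are placeholders.
cqlsh -e "ALTER TABLE my_ks.my_cf WITH compression =
  {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};"

# Rewrite existing SSTables so the new chunk size takes effect
# (a scrub, as mentioned above, also rewrites SSTables).
nodetool upgradesstables -a my_ks my_cf
```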
I guess I misspoke, sorry. It is true that count(), like any other query, is
still governed by the read timeout, and any count that has to process a lot
of data will take a long time and will require a high timeout setting to
avoid timing out (true of every aggregation query, as it happens).
I guess I responded…
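One common workaround for long-running counts is to break the aggregation into token sub-ranges, so each query stays well under the read timeout. A minimal sketch; the table and partition-key names are placeholders, not from the thread:

```python
# Sketch: split the full Murmur3 token range into sub-ranges and emit one
# count() per range, so no single query runs long enough to hit the read
# timeout. Table/partition-key names below are placeholders.
MIN_TOKEN, MAX_TOKEN = -2**63, 2**63 - 1  # Murmur3Partitioner token bounds

def token_range_queries(table, pk, splits):
    span = (MAX_TOKEN - MIN_TOKEN) // splits
    queries = []
    for i in range(splits):
        lo = MIN_TOKEN + i * span
        if i == splits - 1:  # last range is inclusive of MAX_TOKEN
            cond = f"token({pk}) >= {lo} AND token({pk}) <= {MAX_TOKEN}"
        else:
            cond = f"token({pk}) >= {lo} AND token({pk}) < {lo + span}"
        queries.append(f"SELECT count(*) FROM {table} WHERE {cond};")
    return queries

for q in token_range_queries("ks.tbl", "pk", 4):
    print(q)
```

Summing the per-range counts client-side gives the total; the ranges are half-open, so no row is counted twice.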
Not directly. Consider proxying requests through an application server and
logging at that level.
On Friday, February 10, 2017, Benjamin Roth wrote:
> If you want to audit write operations only, you could maybe use CDC, this
> is a quite new feature in 3.x (I think it was introduced in 3.9 or 3.10)
>
+1 I also encountered timeouts many, many times (using DS DevCenter).
Roughly, this occurred when count(*) > 1,000,000.
2017-02-20 14:42 GMT+01:00 Edward Capriolo :
> Seems worth it to file a bug, since some here are under the impression it
> almost always works and others are under the impression it almost never
> works.
Seems worth it to file a bug since some here are under the impression it
almost always works and others are under the impression it almost never
works.
On Friday, February 17, 2017, kurt greaves wrote:
> really... well that's good to know. it still almost never works though. i
> guess every time
You could save space by storing your data base64-decoded as blobs.
2017-02-20 13:38 GMT+01:00 Oskar Kjellin :
> We currently have some cases where we store base64 as a text field instead
> of a blob (running version 2.0.17).
> I would like to move these to blob but wondering what benefits and
We currently have some cases where we store base64 as a text field instead
of a blob (running version 2.0.17).
I would like to move these to blobs but am wondering what benefits and
optimizations there are. The possible ones I can think of are (but there are
probably more):
* blob is stored as off-heap B…
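The space saving from moving base64 text to blobs is roughly base64's 4/3 expansion factor; a quick sketch (the payload size is arbitrary):

```python
import base64
import os

raw = os.urandom(300)            # arbitrary 300-byte binary payload
encoded = base64.b64encode(raw)  # what a base64 text column would store

# base64 encodes every 3 bytes as 4 characters, inflating size by ~33%
print(len(raw), len(encoded))
```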
On Sat, Feb 18, 2017 at 3:12 AM, Abhishek Verma wrote:
> Cassandra is being used on a large scale at Uber. We usually create
> dedicated clusters for each of our internal use cases, however that is
> difficult to scale and manage.
>
> We are investigating the approach of using a single shared cluster…
Hi Benjamin,
Yes, a read-ahead of 8 would imply a higher IO count from disk, but it should
not cause more data to be read off the disk, as is happening in your case.
One probable reason for the high disk IO would be that the 512-vnode node has
a lower page-to-RAM ratio of 22% (100G buff / 437G data) as compared to 46%
(…
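The ratio above is just the page-cache buffer size divided by the on-disk data size; a quick check with the figures quoted in the message (rounding differs slightly from the 22% quoted):

```python
# Page-cache buffer vs. on-disk data size, per the figures quoted above.
buff_gb, data_gb = 100, 437
ratio = buff_gb / data_gb
print(f"{ratio:.0%}")  # ~23%, quoted as 22% in the thread
```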