We have a read- and update-heavy access pattern. E.g. each request to
Cassandra goes like:
1. read all columns of the row
2. do something with the row
3. write all columns of the row
The columns we use are always the same, e.g. always (c1, c2, c3). c2 and
c3 have a TTL.
Since we always read c1,c2,c3 and after
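The three-step cycle above can be sketched with an in-memory dict standing in for Cassandra. All names here (`read_row`, `write_row`, the column values) are hypothetical illustrations, not the actual Pelops API; a real client would pass the TTL on the write for c2 and c3:

```python
# In-memory sketch of the read-modify-write cycle described above.
# The dict stands in for Cassandra; names are hypothetical.

COLUMNS = ("c1", "c2", "c3")

store = {}  # row_key -> {column: value}

def read_row(key):
    """Step 1: read all columns of the row."""
    return dict(store.get(key, {}))

def write_row(key, row, ttl_seconds=86400):
    """Step 3: write all columns back. In a real client, c2 and c3
    would be written with ttl_seconds so they expire automatically."""
    store[key] = {col: row[col] for col in COLUMNS if col in row}

def handle_request(key):
    row = read_row(key)                # 1. read all columns
    row["c1"] = row.get("c1", 0) + 1   # 2. do something with the row
    row.setdefault("c2", "session")
    row.setdefault("c3", "token")
    write_row(key, row)                # 3. write all columns

handle_request("row1")
handle_request("row1")
print(store["row1"]["c1"])  # → 2
```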
Check this out:
http://www.datastax.com/docs/1.0/install/upgrading#upgrading-between-minor-releases-of-cassandra-1-0-x
Cheers
On 11.03.2012 at 07:42, Tamar Fraenkel ta...@tok-media.com wrote:
Hi!
I want to experiment with upgrading. Does anyone have a good link on how to
upgrade Cassandra?
Either you do that, or you could think about using a secondary index on the
fb user name in your primary CF.
See http://www.datastax.com/docs/1.0/ddl/indexes
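For illustration, a secondary index on the Facebook user name could look roughly like this in CQL. The CF and column names (`users`, `fb_username`) are assumptions based on the thread, not the actual schema:

```sql
-- Hypothetical schema: a 'users' CF keyed by UUID with an fb_username column.
CREATE INDEX users_fb_username_idx ON users (fb_username);

-- Then look up a user by Facebook name instead of by UUID key:
SELECT * FROM users WHERE fb_username = 'some_fb_name';
```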
Cheers
On 11.03.2012 at 09:51, Tamar Fraenkel ta...@tok-media.com wrote:
Hi!
I need some advice:
I have a user CF, which has a UUID key
We're running an 8-node cluster with different CFs for different applications.
One of the applications uses 1.5 TB out of 1.8 TB in total, but only because we
started out without a deletion mechanism and implemented one later on. So there is
probably a high amount of old data in there that we don't
- 50% of your actual on-disk capacity. Let me know if anyone in
the community disagrees, but I'd say you're about 600 GB past the point at
which you have a lot of easy outs -- but I hope you find one anyway!
On Sat, Jan 21, 2012 at 2:45 AM, Marcel Steinbach marcel.steinb...@chors.de
might be out of bounds.
Cheers
Marcel
On 20.01.2012, at 16:28, Marcel Steinbach wrote:
Thanks for all the responses!
I found our problem:
Using the Random Partitioner, the key range is from 0..2**127. When we added
nodes, we generated the tokens and, out of convenience, added an offset
also use CFs with a date (mmdd) as key, as well as CFs with
UUIDs as keys. And those CFs are not balanced internally either. E.g. node 5 has
12 GB live space used in the CF with the UUID as key, and node 8 only 428 MB.
Cheers,
Marcel
On Thu, Jan 19, 2012 at 3:22 AM, Marcel Steinbach
**127 for the last two tokens, so they
were outside the RP's key range.
Moving the last two tokens to their values mod 2**127 will resolve the problem.
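The fix described above can be checked in a few lines of Python: any token pushed past the RandomPartitioner's 0..2**127 range by an offset wraps back into range via mod 2**127. The concrete offset and node count here are made up for illustration:

```python
# RandomPartitioner tokens must lie in [0, 2**127). A constant offset
# added to evenly spaced tokens can push the last ones past the range;
# taking each token mod 2**127 wraps them back. Values are hypothetical.

RP_RANGE = 2 ** 127
NODES = 8

offset = RP_RANGE // 4  # hypothetical offset, large enough to overflow
tokens = [i * RP_RANGE // NODES + offset for i in range(NODES)]

out_of_range = [t for t in tokens if t >= RP_RANGE]
print(len(out_of_range))  # → 2  (the last two tokens, as in the thread)

fixed = [t % RP_RANGE for t in tokens]
assert all(0 <= t < RP_RANGE for t in fixed)
```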
Cheers,
Marcel
On 20.01.2012, at 10:32, Marcel Steinbach wrote:
On 19.01.2012, at 20:15, Narendra Sharma wrote:
I believe you need to move the nodes
did compactions and cleanups and didn't have a balanced cluster. So that
should have removed outdated data, right?
2012/1/18 Marcel Steinbach marcel.steinb...@chors.de:
We are running regular repairs, so I don't think that's the problem.
And the data dir sizes match approx. the load from
http://www.thelastpickle.com
On 18/01/2012, at 2:19 PM, Maki Watanabe wrote:
Is there any significant difference in the number of sstables on each node?
Hi,
we're running an 8-node cassandra-0.7.6 cluster, with avg. throughput of 5k
reads/s and almost as many writes/s. The client API is Pelops 1.1-0.7.x.
Latencies in the CFs (RecentReadLatencyHistogramMicros) look fine, with the 99th
percentile at 61ms. However, on the client side, p99 latency is at
Hi,
we're using RP and have each node assigned the same amount of the token space.
The cluster looks like this:
Address    Status    State    Load    Owns    Token
doubt that it would generate
'hotspots' for those kinds of keys, right?
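That intuition can be sanity-checked: RandomPartitioner places rows by the MD5 hash of the key, so even sequential date keys scatter over the ring. The token function below is a rough sketch of that idea, not Cassandra's exact byte-level computation:

```python
import hashlib

# Sketch: RandomPartitioner derives tokens from MD5 of the row key,
# so consecutive date keys (yyyymmdd) do not cluster on one node.

RP_RANGE = 2 ** 127

def rp_token(key: bytes) -> int:
    # Approximation of RP's token; Cassandra's exact handling of the
    # signed 128-bit MD5 value differs in detail.
    return int.from_bytes(hashlib.md5(key).digest(), "big") % RP_RANGE

days = [("201201%02d" % d).encode() for d in range(1, 11)]
tokens = [rp_token(k) for k in days]

assert all(0 <= t < RP_RANGE for t in tokens)
print(len(set(tokens)))  # → 10 distinct, scattered tokens
```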
On 17.01.2012, at 17:34, Mohit Anchlia wrote:
Have you tried running repair first on each node? Also, verify using
df -h on the data dirs
On Tue, Jan 17, 2012 at 7:34 AM, Marcel Steinbach
marcel.steinb...@chors.de wrote