Hello,
I am having some very strange issues with a Cassandra setup. I recognize that
this is not the ideal cluster configuration, but I'd still like to try to
understand what is going wrong.
The cluster has 3 machines (A, B, C) running Cassandra 1.0.9 with JNA. A and B
are in datacenter1 while C is in
I am going to have a supercolumn family where some rows can be quite large
(10-100 MB). I'd like to be able to pull a subset of this data without having
to pull the whole row into memory and send it over the wire.
Each query will be for only one row. The supercolumn key and the child column
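Conceptually, a column-range read (get_slice with a SliceRange) only has to materialize the requested span of a row, because columns are kept sorted by the comparator. A rough Python sketch of that slicing semantics (the function name and in-memory dict are illustrative, not the Thrift API):

```python
from bisect import bisect_left, bisect_right

def slice_columns(row, start, finish, count=100):
    """Return up to `count` (name, value) pairs with start <= name <= finish,
    without materializing the whole row -- mimicking a get_slice with a
    SliceRange over a single row's sorted columns."""
    names = sorted(row)  # Cassandra keeps columns sorted by the comparator
    lo = bisect_left(names, start)
    hi = bisect_right(names, finish)
    return [(n, row[n]) for n in names[lo:hi][:count]]

# A wide row with 1000 columns; we pull only a 5-column window of it.
row = {f"col{i:04d}": f"v{i}" for i in range(1000)}
subset = slice_columns(row, "col0100", "col0104")
```

The point is that the cost of the read scales with the size of the slice you ask for, not with the total row size, provided you page with `count` and a moving `start`.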
I am considering building a system as follows:
1. Data stored in Cassandra
2. A stateless webservice cluster will pull data from Cassandra and perform
business operations plus security enforcement
3. Clients will hit the webservice cluster
I'm trying to maintain a low read latency and am worried
I'm trying to figure out the best way to achieve single row modification
isolation for readers.
As an example, I have 2 rows (1, 2) with 2 columns (a, b). If I modify both
rows, I don't care if the user sees the write operations completed on 1 and
not on 2 for a short time period (seconds). I
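The behavior being asked for can be modeled as: each row mutation applied atomically, with no isolation guarantee across rows in a batch. A toy Python model of that semantics (this is an illustration of the desired contract, not Cassandra's implementation):

```python
import copy

class Store:
    """Toy model: each row mutation is applied atomically (a single
    reference swap), but a batch over several rows is NOT isolated --
    a reader between the two applications sees row 1 updated while
    row 2 is still old, which the poster says is acceptable."""
    def __init__(self):
        self.rows = {}

    def mutate_row(self, key, columns):
        new = dict(self.rows.get(key, {}))
        new.update(columns)
        self.rows[key] = new  # swap whole row at once = per-row atomicity

    def batch_mutate(self, mutations, observer=None):
        for key, columns in mutations:
            self.mutate_row(key, columns)
            if observer:
                observer(copy.deepcopy(self.rows))  # a reader peeking mid-batch

s = Store()
s.mutate_row("1", {"a": 0, "b": 0})
s.mutate_row("2", {"a": 0, "b": 0})
snapshots = []
s.batch_mutate([("1", {"a": 1, "b": 1}), ("2", {"a": 1, "b": 1})],
               observer=snapshots.append)
```

After the first mutation in the batch, a reader observes row 1 fully updated and row 2 fully stale; it never observes a row with only column a updated and column b not.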
Subject: Re: Achieving isolation on single row modifications with batch_mutate
On Sat, Nov 27, 2010 at 10:12 AM, E S tr1skl...@yahoo.com wrote:
I'm trying to figure out the best way to achieve single row modification
isolation for readers.
I have a lot of No's for you
I've gotten myself really confused by
http://wiki.apache.org/cassandra/ArchitectureInternals and am hoping someone
can help me understand what the I/O behavior of this operation would be.
When I do a get_slice for a column range, will it seek into every SSTable? I
had thought that it would use
I am trying to minimize my SSTable count to help cut down my read latency. I
have some very beefy boxes for my Cassandra nodes (96 GB of memory each). I
think this gives me a lot of flexibility to cut down the SSTable count by
using a very large memtable throughput setting.
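For reference, in the 1.0 line the global memtable ceiling lives in cassandra.yaml; a fragment with purely illustrative values (not tuning advice for these boxes):

```yaml
# cassandra.yaml (Cassandra 1.0.x) -- values are illustrative only.
# Total space allowed for all memtables before the largest is flushed.
# Defaults to one third of the JVM heap; raising it trades memory for
# fewer, larger SSTables.
memtable_total_space_in_mb: 8192

# Extra flush writers let large memtables drain to disk without
# blocking incoming writes.
memtable_flush_writers: 2
```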
While
Already submitted and fixed! Thanks Jonathan for your help on this. I really appreciate it!
https://issues.apache.org/jira/browse/CASSANDRA-2158
I am trying to understand the best procedure for adding new nodes. The one
that I see most often online seems to have a hole where there is a small
probability of permanently losing data. I want to understand what I am
missing.
Let's say I have a 3-node cluster (nodes A, B, C)
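For context on growing a ring: with RandomPartitioner the usual recommendation is evenly spaced initial tokens over the 0..2**127 token space, which means adding a node generally involves recomputing and moving tokens. A small helper (the function is illustrative, but the spacing formula is the standard one):

```python
# Evenly spaced initial tokens for RandomPartitioner (token space 0..2**127).
def initial_tokens(node_count):
    """Token for node i out of node_count, evenly spaced around the ring."""
    step = 2 ** 127 // node_count
    return [i * step for i in range(node_count)]

three = initial_tokens(3)  # balanced 3-node ring
four = initial_tokens(4)   # targets after growing to 4 nodes
```

Comparing the two lists shows why adding a fourth node to a balanced 3-node ring leaves it unbalanced unless existing nodes are also moved (nodetool move) and then cleaned up (nodetool cleanup).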