Even if your query contains multiple columns that each have a secondary
index, current Cassandra uses only one of them as a hash lookup. The
other columns are used to filter the matched results. If one part of your
secondary index query has a lot of matches in the data, Cassandra has to
iterate over
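Since Cassandra itself is not needed to illustrate the point, here is a minimal pure-Python sketch of the lookup-then-filter behaviour described above (all names and data are invented for the example, this is not Cassandra code): one indexed column drives the hash lookup, and every other clause is applied by iterating over the candidates.

```python
# Sketch only: one secondary index drives the lookup; the remaining
# predicates are applied by scanning the candidate rows.
def query(index, rows, lookup_col, lookup_val, other_predicates):
    # Hash lookup on a single indexed column.
    candidate_keys = index.get((lookup_col, lookup_val), [])
    results = []
    for key in candidate_keys:
        row = rows[key]
        # A low-selectivity index term means many candidates to iterate
        # over here, even if the other clauses are highly selective.
        if all(row.get(col) == val for col, val in other_predicates):
            results.append(row)
    return results

rows = {
    "k1": {"state": "NY", "age": 30},
    "k2": {"state": "NY", "age": 40},
    "k3": {"state": "CA", "age": 30},
}
index = {("state", "NY"): ["k1", "k2"]}
matches = query(index, rows, "state", "NY", [("age", 30)])
```

The cost visible in the sketch is the loop over `candidate_keys`: picking the most selective indexed column keeps that loop short.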
, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
--
It's always darkest just before you are eaten by a grue.
--
Shotaro Kamio
in the SSTable which can be purged until the rows are read.
Perhaps the file could hold the earliest deleted-at time somewhere (the
same for TTL), but I do not think we do that now.
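A rough sketch of that idea (purely hypothetical, not the actual SSTable format or Cassandra API): keep the earliest tombstone timestamp in per-file metadata, so a file can be skipped without reading any rows when nothing in it can be purgeable yet.

```python
# Hypothetical sketch: track the earliest deletion time per file so a
# compaction-like process can tell, from metadata alone, whether the
# file can contain anything purgeable.
class SSTableMeta:
    def __init__(self):
        self.min_deleted_at = None  # earliest tombstone timestamp seen

    def record_tombstone(self, deleted_at):
        if self.min_deleted_at is None or deleted_at < self.min_deleted_at:
            self.min_deleted_at = deleted_at

def may_contain_purgeable(meta, gc_grace_seconds, now):
    # Nothing in the file is purgeable before its earliest tombstone
    # has aged past gc_grace_seconds.
    if meta.min_deleted_at is None:
        return False
    return now >= meta.min_deleted_at + gc_grace_seconds

meta = SSTableMeta()
meta.record_tombstone(1000)
meta.record_tombstone(500)
```

The same trick would work for TTLed columns by tracking the earliest expiry time instead.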
Hope that helps.
Aaron
On 20 Apr 2011, at 21:25, Shotaro Kamio wrote:
Hi,
I found that our cluster repeats
Hi,
Our cluster uses Cassandra 0.7.4 (upgraded from 0.7.3) with
replication = 3. I found that an error occurs on one node during hinted
handoff (log #1 below).
When I tried running scrub on the system HintsColumnFamily, I saw an
ERROR in the log (log #2 below).
Do you think these errors are
Hi,
When looking at countPendingHints in HintedHandoffManager via JMX,
I found that pending hints increase even when my cluster handles only
reads with quorum from clients.
The count decreases when observed over a long period (e.g., an hour),
but it can increase by several thousand in a short
[start] + 'z', False)
On Thu, Feb 17, 2011 at 11:09 PM, Shotaro Kamio kamios...@gmail.com wrote:
Hi Aaron,
A range slice means get_range_slices() in the Thrift API,
createSuperSliceQuery in Hector, or get_range() in pycassa. The example
code in pycassa is attached below.
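As an illustration of what such code does, here is a self-contained sketch of the usual range-slice paging pattern behind get_range_slices()/get_range(): fetch a page of rows, then restart from the last key returned and drop the duplicate first row. All helper names here are invented, and this is not the attached pycassa code.

```python
# Sketch of range-slice paging over rows sorted by key
# (as with an order-preserving partitioner).
def range_slice(sorted_rows, start_key, count):
    # Return up to `count` (key, row) pairs with key >= start_key.
    return [(k, v) for k, v in sorted_rows if k >= start_key][:count]

def iterate_all(sorted_rows, page_size=2):
    start = ""
    first_page = True
    while True:
        page = range_slice(sorted_rows, start, page_size)
        if not first_page:
            page = page[1:]  # the start key was already yielded last time
        if not page:
            break
        for k, v in page:
            yield k, v
        start = page[-1][0]
        first_page = False

rows = sorted({"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}.items())
keys = [k for k, _ in iterate_all(rows)]
```

Client libraries such as pycassa do this paging internally; the sketch only shows why each page's start key overlaps the previous page.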
The problem is a little
Hi,
We are seeing strange behavior in Cassandra 0.7.2 (it also
happened in 0.7.0). Could someone help us?
The problem happens on a column family of the super column type, named Order.
The data structure is something like:
Order[ a_key ][ date + '/' + order_id + '/' (+ suffix) ][ attribute ] = value
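For illustration, that nesting can be modelled with plain dictionaries: row key, then a super column whose name joins the date, order id, and optional suffix with '/', then attribute columns. The key and attribute values below are invented for the example.

```python
# Sketch of the super column layout described above:
# row key -> super column name -> sub column -> value
def put(cf, key, date, order_id, attribute, value, suffix=None):
    parts = [date, order_id] + ([suffix] if suffix else [])
    super_col = "/".join(parts)  # e.g. "20110217/42" or "20110217/42/x"
    cf.setdefault(key, {}).setdefault(super_col, {})[attribute] = value

order = {}
put(order, "a_key", "20110217", "42", "status", "shipped")
put(order, "a_key", "20110217", "42", "price", "9.99")
put(order, "a_key", "20110217", "43", "status", "pending", suffix="x")
```

Because the super column name starts with the date, a column slice on one row returns orders in date order.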
be fixable as long as
one knows *why* (it should be due to heap usage, although I don't see
anything in your numbers that would indicate to me why the heap would
have so much live data as to cause problems given your 16 gig heap
size).
--
/ Peter Schuller
--
Shotaro Kamio