...@gmail.com]
Sent: Friday, January 13, 2012 9:14 AM
To: user@cassandra.apache.org
Subject: RE: cassandra 1.0.6 rpm
I believe the RPM location has changed. Look at the DataStax documentation.
On 12 Jan 2012, at 23:00, Shu Zhang <szh...@mediosystems.com> wrote:
Hello? Does anyone know why an rpm is still not available for 1.0.6?
Thanks,
Shu
From: Shu Zhang
Sent: Wednesday, December 28, 2011 1:16 PM
To: user@cassandra.apache.org
Subject: cassandra 1.0.6 rpm
Hi, it looks like cassandra 1.0.6 was released a while ago.
Hi, just wondering if this is intentional:
[default@test] create column family index;
Syntax error at position 21: mismatched input 'index' expecting set null
[default@test] create column family idx;
b9aae960-1bb2-11e1--bf27a177f2f6
Waiting for schema agreement...
... schemas agree across the cluster
At first, I was also thinking that one or more nodes in the cluster were broken
or not responding. But through nodetool cfstats, it looks like all the nodes
are working as expected, and ping gives me the expected inter-node latencies.
Also, the scores calculated by the dynamic snitch in the steady state look
normal. Any pointers on how to investigate further would be greatly appreciated.
From: Shu Zhang [szh...@mediosystems.com]
Sent: Monday, November 07, 2011 6:07 PM
To: user@cassandra.apache.org
Subject: propertyfilesnitch problem
Hi,
We have a 2 DC setup on version 0.7.9 and have observed the following:
1. Using a property file snitch, with dynamic snitch turned on. The performance
of LOCAL_QUORUM operations is poor for a while (around a minute) after a
cluster restart before drastically improving.
2. With the same
Hi all, I just stumbled on to what looks like issue CASSANDRA-2653. Here's my
stack trace:
ERROR [ReadStage:10] 2011-06-27 15:22:36,087 AbstractCassandraDaemon.java (line
114) Fatal exception in thread Thread[ReadStage:10,5,main]
java.lang.AssertionError: No data found for
Late reply, but I just got the same error restarting after upgrading from
0.7.2 to 0.7.5.
I did a drain using nodetool on each node before I killed them and did the
upgrade. Should all commitlogs have been cleaned up after a drain? I would
think so, but they were not. Maybe there is a bug around
How large are your rows? binary_memtable_throughput_in_mb only tracks the size
of the data, but there is an overhead associated with each row on the order of
a few KB. If your row data sizes are really small, then the overhead dominates
the memory usage and binary_memtable_throughput_in_mb won't reflect actual
memory usage. To understand your memtable row overhead size, you can do the
above exercise with very different data sizes.
Also, you probably know this, but when setting your memory usage ceiling or
heap size, make sure to leave a few hundred MBs for GC.
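The back-of-envelope measurement described above can be sketched in Python; the function name and the numbers are illustrative, not actual Cassandra internals:

```python
def per_row_overhead(rows, data_bytes, heap_bytes):
    """Average heap overhead per row beyond the raw column data,
    assuming heap_usage ~= data_size + rows * overhead."""
    return (heap_bytes - data_bytes) / rows

# Illustrative numbers: 1M rows of 100 B data occupying ~2 GB of heap
# imply ~1.9 KB of per-row bookkeeping -- the overhead dominates.
print(per_row_overhead(1_000_000, 100 * 1_000_000, 2 * 10**9))  # 1900.0
```

Repeating the measurement with much larger row data and comparing the two estimates is the "exercise" mentioned above.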
From: Shu
Hi, a node in my cassandra cluster will not accept keyspace additions applied
to other nodes. In its logs, it says:
DEBUG [MigrationStage:1] 2011-02-15 15:39:57,995
DefinitionsUpdateResponseVerbHandler.java (line 71) Applying AddKeyspace from
{X}
DEBUG [MigrationStage:1] 2011-02-15
So if Key A is supposed to go to Nodes 1, 2, 3, then the commit log for Key A
will be on each of these nodes?
There isn't a commit log per key, just one for each node tracking what's been
written to that node. If node1 determines node2 or node3 should handle a
request it received, it'll route the request to them.
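A minimal sketch of that idea, with a toy hash ring and made-up node names (not Cassandra's actual partitioner code): each replica appends the write to its own commit log, so there is one log per node, never one per key.

```python
import hashlib
from bisect import bisect

class Ring:
    """Toy consistent-hash ring: keys map to RF consecutive nodes."""

    def __init__(self, nodes, replication_factor=3):
        self.nodes = sorted(nodes, key=self._token)
        self.tokens = [self._token(n) for n in self.nodes]
        self.rf = replication_factor
        self.commit_logs = {n: [] for n in nodes}  # one log per NODE

    @staticmethod
    def _token(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def replicas(self, key):
        i = bisect(self.tokens, self._token(key)) % len(self.nodes)
        return [self.nodes[(i + j) % len(self.nodes)] for j in range(self.rf)]

    def write(self, key, value):
        # The coordinator routes the write to every replica; each replica
        # records it in its own commit log.
        for node in self.replicas(key):
            self.commit_logs[node].append((key, value))

ring = Ring(["node1", "node2", "node3", "node4"])
ring.write("KeyA", "v1")
```

After the write, exactly three of the four per-node logs contain an entry for KeyA.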
when a request comes in, which node handles the request first?
You (i.e. the Cassandra client) always specify the exact node to send requests to.
While most higher-level clients let you specify configurations for a whole
cluster, that's usually for their own basic load balancing. Each request any
Each row can have a maximum of 2 billion columns, which a logging system will
probably hit eventually.
More importantly, you'll only have one row per set of system logs. Every row is
stored on the same machine(s), which means you'll definitely not be able to
distribute your load very well.
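One common workaround (sketched below with an illustrative key format, not a fixed convention) is to bucket log rows by source and time, so writes spread across many rows and hence many nodes:

```python
from datetime import datetime, timezone

def log_row_key(source, ts):
    """One row per source per day, e.g. 'webserver1:2011-01-17'.

    Bucketing by day keeps each row bounded in size and lets different
    days land on different replicas.
    """
    return f"{source}:{ts.strftime('%Y-%m-%d')}"

key = log_row_key("webserver1", datetime(2011, 1, 17, tzinfo=timezone.utc))
```

Reading a time range then means slicing columns from a handful of known day-rows instead of one ever-growing row.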
for that?
From: Brandon Williams [dri...@gmail.com]
Sent: Monday, January 17, 2011 5:09 PM
To: user@cassandra.apache.org
Cc: hector-us...@googlegroups.com
Subject: Re: please help with multiget
On Mon, Jan 17, 2011 at 6:53 PM, Shu Zhang
szh...@mediosystems.com wrote:
the need for
complicated semantics when reading.
Aaron
On 19/01/2011, at 7:57 AM, Shu Zhang szh...@mediosystems.com wrote:
Well, maybe making a batch get is not any more efficient on the server side,
but without it you can get bottlenecked on client-server connections and
client resources.
Here's the method declaration for quick reference:
map<string,list<ColumnOrSuperColumn>> multiget_slice(string keyspace,
list<string> keys, ColumnParent column_parent, SlicePredicate predicate,
ConsistencyLevel consistency_level)
It looks like you must have the same SlicePredicate for every key in
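That constraint means a client needing different predicates for different keys has to group keys by predicate and issue one multiget_slice per group. A hedged Python sketch, with `client` standing in for a Thrift connection and `predicate_for_key` an illustrative input shape:

```python
from collections import defaultdict

def grouped_multiget(client, predicate_for_key, column_parent, consistency_level):
    """Issue one multiget_slice call per distinct SlicePredicate.

    predicate_for_key maps each row key to the (hashable) predicate it needs.
    """
    groups = defaultdict(list)
    for key, predicate in predicate_for_key.items():
        groups[predicate].append(key)
    results = {}
    for predicate, keys in groups.items():
        results.update(
            client.multiget_slice(keys, column_parent, predicate,
                                  consistency_level))
    return results
```

With many keys sharing a few predicates this keeps the round-trip count at one per predicate rather than one per key.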
Sent: Monday, December 27, 2010 6:59 PM
To: user
Subject: Re: read repair across datacenters?
https://issues.apache.org/jira/browse/CASSANDRA-982
On Mon, Dec 27, 2010 at 5:55 PM, Shu Zhang szh...@mediosystems.com wrote:
Brandon, for a read with quorum CL, a response is returned to the client
after half the replicas respond.
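For reference, the quorum arithmetic: a QUORUM operation actually waits for a strict majority of replicas, floor(RF/2) + 1, not exactly half. A quick sketch:

```python
def quorum(replication_factor):
    """Replicas that must respond for a QUORUM read or write."""
    return replication_factor // 2 + 1

# RF=3 -> 2 replicas, RF=4 -> 3, RF=5 -> 3
```

So with RF=3, two of the three replicas must answer before the coordinator replies to the client.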