Is there a particular reason why a timestamp is required to do a deletion? If
I'm reading the API docs correctly, this would require a read of the column
first, correct?
I know there is an issue filed to have a better way to delete via range
slices, but I wanted to make sure this was the only way to
Just an update. I rolled the memtable size back to 128MB. I am still
seeing that the daemon runs for a while with reasonable heap usage, but then
the heap climbs up to the max (6GB in this case, which should be plenty) and it
starts GCing without much getting cleared. The client catches lots of
On Fri, May 21, 2010 at 8:55 AM, Mark Greene green...@gmail.com wrote:
Is there a particular reason why timestamp is required to do a deletion?
Because a delete is just a write with a tombstone flag, and the write with
the highest timestamp wins.
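To illustrate (a hedged sketch, not Cassandra's actual internals — the names here are made up): a delete is stored as a tombstone write carrying its own timestamp, and reconciliation simply picks the version with the highest timestamp:

```python
TOMBSTONE = object()  # stand-in for Cassandra's tombstone marker

def reconcile(versions):
    """Pick the winning (timestamp, value) pair: highest timestamp wins,
    whether the value is real data or a tombstone."""
    return max(versions, key=lambda tv: tv[0])

# Two clients race on the same column: an update at ts=1000 and a
# delete at ts=1001 (the delete's own timestamp, not the column's).
update = (1000, "new-value")
delete = (1001, TOMBSTONE)
winner = reconcile([update, delete])
# The delete wins because its timestamp is higher; the column reads as deleted.
```

This is why no read is needed before a delete: the client just supplies the current time as the delete's timestamp.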
If I'm reading the API docs correctly, this
There is no other way to make the cluster forget a node without
decommission / removetoken.
You could do everything up to stopping the entire cluster and doing a
rolling restart instead: kill the 2 nodes you want to remove, and then
do removetoken, which would still do extra I/O, but at least the slow
nodes
ts is when the delete is performed, not the ts of the column you're deleting.
You need to provide a ts for every operation so that if there are
multiple clients updating the same column at the same time, Cassandra
can decide who wins.
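In practice, most Thrift-era clients generate that ts as microseconds since the epoch, so timestamps from different clients are comparable. A sketch, assuming reasonably synchronized clocks:

```python
import time

def make_timestamp():
    """Client-supplied timestamp: microseconds since the epoch, the usual
    convention so concurrent writers' timestamps compare sensibly."""
    return int(time.time() * 1_000_000)

ts_delete = make_timestamp()  # passed along with the remove() call
```

Note that clock skew between clients can make the "wrong" write win, so keeping client clocks in sync (e.g. via NTP) matters.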
On Fri, May 21, 2010 at 6:55 AM, Mark Greene
On the to-do list for today. Is there a tool to aggregate all the JMX
stats from all nodes? I mean, something a little more complete than Nagios.
Ian
On Fri, May 21, 2010 at 10:23 AM, Jonathan Ellis jbel...@gmail.com wrote:
You should check the JMX stages I posted about
On Fri, May 21,
Thanks, I'll try that next time.
On May 21, 2010 5:23 PM, Jonathan Ellis jbel...@gmail.com wrote:
There is no other way to make the cluster forget a node without
decommission / removetoken.
You could do everything up to stopping the entire cluster and doing a
rolling restart instead: kill the 2 nodes you
Thanks.
You resolved my problem.
My error: I didn't see transport.open, so a new socket was opened for each
call; I thought it reused the same one.
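The fix can be sketched like this (toy stand-in classes, not the real generated Thrift stubs): open the transport once, run many calls over it, and close it at the end.

```python
class Transport:
    """Toy stand-in for a Thrift transport, counting socket opens."""
    def __init__(self):
        self.opens = 0
        self.is_open = False
    def open(self):
        self.opens += 1
        self.is_open = True
    def close(self):
        self.is_open = False

class Client:
    """Toy stand-in for a generated Thrift client."""
    def __init__(self, transport):
        self.transport = transport
    def get(self, key):
        assert self.transport.is_open, "call transport.open() first"
        return "value-for-" + key

transport = Transport()
transport.open()                 # open the socket once, up front
client = Client(transport)
results = [client.get(k) for k in ("a", "b", "c")]  # reuse the same socket
transport.close()
```

Opening a socket per request pays the connection setup cost on every call; reusing one open transport (or a pool of them) avoids it.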
2010/5/20 Jonathan Ellis jbel...@gmail.com
Disseminating load info is not related to your problem.
Certainly you should be using connection
So at the moment, I'm not running my loader, and I'm looking at one node
which is slow to respond to nodetool requests. At this point, it has a pile
of hinted-handoffs pending which don't seem to be draining out. The
system.log shows that it's GCing pretty much constantly.
Ian
Hi all,
Over the past few weeks I have developed a plugin for the Java persistence
platform DataNucleus, similar to the one presented by Google for App Engine and
to the HBase one already present in the platform. DataNucleus:
http://www.datanucleus.org/project/download.html
For now it allows the
Curious how things turned out?
On Tue, May 18, 2010 at 1:38 PM, Curt Bererton c...@zipzapplay.com wrote:
We only have a few CFs (6 or 7). I've increased MemtableThroughputInMB
and MemtableOperationsInMillions as per your suggestions. Do we really
need a swap file, though? I suppose it
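For reference, the knobs mentioned above live in storage-conf.xml in 0.6-era Cassandra; a hedged fragment with illustrative values, not recommendations:

```xml
<!-- Larger memtables flush less often but hold more heap before flushing. -->
<MemtableThroughputInMB>128</MemtableThroughputInMB>
<MemtableOperationsInMillions>0.6</MemtableOperationsInMillions>
```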
We can get Cassandra to run great for a few hours now. Writing to and
reading from Cassandra work well, and the read/write times are good, etc. We
also changed our config to enable row caching (we're hoping to ditch our
memcache server layer entirely).
Unfortunately, running on an EC2 High Memory