We do pretty much the same thing here: a dynamic column with a timestamp for
the column name and a different value type for each row. We use the
serialization/deserialization classes provided with Hector and store the
type of the value in the row key. Example of a row key:
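The concrete key is cut off above; as a rough, stdlib-only sketch of the pattern (all names here are hypothetical, and inline encoders stand in for Hector's Serializer classes):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch: the row key carries the value type, so readers know which
// (de)serializer to apply. In real code Hector's LongSerializer /
// StringSerializer would do the byte-level work.
public class TypedRowKey {

    // Encode a row key as "<id>:<valueType>", e.g. "sensor42:Long".
    static String rowKey(String id, Class<?> valueType) {
        return id + ":" + valueType.getSimpleName();
    }

    static byte[] serialize(Object value) {
        if (value instanceof Long) {
            return ByteBuffer.allocate(8).putLong((Long) value).array();
        }
        if (value instanceof String) {
            return ((String) value).getBytes(StandardCharsets.UTF_8);
        }
        throw new IllegalArgumentException("unsupported type");
    }

    // Pick the decoder based on the type recorded in the key.
    static Object deserialize(String key, byte[] bytes) {
        String type = key.substring(key.lastIndexOf(':') + 1);
        if (type.equals("Long")) {
            return ByteBuffer.wrap(bytes).getLong();
        }
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String key = rowKey("sensor42", Long.class);
        System.out.println(key + " -> " + deserialize(key, serialize(123L)));
    }
}
```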
Hi,
The row cache capacity is 0.
After reading a row, the Caches.KeySpace.CFKeyCache.Requests attribute
gets incremented, but the ColumnFamilies.KeySpace.CF.ReadCount attribute
remains zero, and the Caches.KeySpace.CFRowCache.Size and Requests
attributes remain zero as well.
It looks like the
Hi there,
I have read a lot about Cassandra's high-scalability features: seamless
addition of nodes, no downtime, etc.
But I wonder how one will do this in practice in an operational system.
In the system we're going to implement we're expecting a huge number of writes
with uniformly
Hi,
I know, I might be missing something here.
I am currently facing one issue.
I have two Cassandra clients (1. using CassandraServer, 2. using Cassandra.Client)
running and connecting to the same host.
I have created keyspaces K1 and K2 using client 1 (e.g. CassandraServer), but somehow
those keyspaces are not
Just to make sure:
The yaml doesn't matter. The cache config is stored in the system tables. It's
the CREATE ... WITH ... stuff you did via cassandra-cli to create the CF.
In Jconsole you see that the cache capacity is 0?
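For reference, the row cache is sized per column family rather than in cassandra.yaml; a cassandra-cli example (keyspace and CF names hypothetical):

```
use MyKeyspace;
update column family MyCF with rows_cached=10000;
describe keyspace MyKeyspace;
```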
On Jul 4, 2011, at 11:18 AM, Shay Assulin wrote:
Hi,
The row cache
mmap'd data will be attributed to res, but the OS can page it out
instead of killing the process.
On Mon, Jul 4, 2011 at 5:52 AM, Daniel Doubleday
daniel.double...@gmx.net wrote:
Hi all,
we have a mem problem with cassandra. res goes up without bounds (well until
the os kills the process
We had an issue like that a short while ago here. This was mainly happening
under heavy load and we managed to stabilize it by tweaking the Young/Old
space ratio of the JVM and by also tweaking the tenuring thresholds/survivor
ratios. What kind of load do you have on your systems? Mostly reads,
Moving nodes does not result in downtime provided you use proper replication
factors and read/write consistencies. The typical recommendation is RF=3 and
QUORUM reads/writes.
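The arithmetic behind that recommendation, as a quick sketch: a quorum is floor(RF/2) + 1 replicas, so with RF=3 both reads and writes need 2 replicas, any read quorum overlaps any write quorum (2 + 2 > 3), and one replica can be down or moving without blocking operations.

```java
// Quorum arithmetic for Cassandra consistency levels: quorum(rf)
// replicas must respond, and overlapping read/write quorums
// (q + q > rf) are what make QUORUM reads see QUORUM writes.
public class Quorum {
    static int quorum(int rf) {
        return rf / 2 + 1; // integer division = floor(rf/2) + 1
    }

    public static void main(String[] args) {
        int rf = 3;
        int q = quorum(rf);                // 2
        boolean overlaps = q + q > rf;     // 2 + 2 > 3 -> true
        int tolerableFailures = rf - q;    // 1 replica may be unavailable
        System.out.println(q + " " + overlaps + " " + tolerableFailures);
    }
}
```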
Dan
From: Paul Loy [mailto:ketera...@gmail.com]
Sent: July-04-11 5:59
To: user@cassandra.apache.org
Subject: Re:
Well, by issuing a nodetool move when a node is under high load, you
basically make that node unresponsive. That's fine, but a nodetool move on
one node also means that that node's replica data needs to move around the
ring and possibly some replica data from the next (or previous) node in the
Just to make sure:
You were seeing that res mem was more than twice the max Java heap, and that
changed after you tweaked the GC settings?
Note that I am not having a heap / gc problem. The VM itself thinks everything
is golden.
On Jul 4, 2011, at 3:41 PM, Sebastien Coutu wrote:
We had an
Hi,
I am using Cassandra 0.7.5 on Linux machines.
I am trying to back up data from a multi-node cluster (3 nodes) and restore
it into a single-node cluster that has a different name (for development
testing).
The multi-node cluster is backed up using clustertool global_snapshot, and
then I copy
On Mon, Jul 4, 2011 at 10:21 AM, Paul Loy ketera...@gmail.com wrote:
Well, by issuing a nodetool move when a node is under high load, you
basically make that node unresponsive. That's fine, but a nodetool move on
one node also means that that node's replica data needs to move around the
ring
It was one of the issues we had. One of our hosts was using OpenJDK,
and we switched it to Sun and this part of the issue stabilized. The
other issues we had were the heap going through the roof and then OOM under
load.
On Mon, Jul 4, 2011 at 11:01 AM, Daniel Doubleday
Yes thank you.
I have read about the OpenJDK issue but unfortunately we are already on Sun JDK.
On Jul 4, 2011, at 6:04 PM, Sebastien Coutu wrote:
It was one of the issues we had. One of our hosts was using OpenJDK and
we switched it to Sun and this part of the issue stabilized.
Hi Sebastien,
one question: do you use jna.jar, and do you see "JNA mlockall successful" in your
logs?
There's that wild theory here that our problem might be related to mlockall and
no swap.
Maybe the JVM does some realloc stuff and the pinned pages are not cleared ...
but that's really only
Hi Daniel,
Yes, we do see it. Since I added the JNA libraries, it takes a bit more
time at that step and locks all the memory. We're using JNA 3.3.0, which we
downloaded from here:
https://github.com/twall/jna#readme
Our servers currently have 32GB of
memory
Hi Udo,
I didn't read the whole thread, but can you define the type of workload
you're looking at? Do you have jobs that require reading the whole data
stored in your database? For example one big column family that needs to be
read entirely by a job? Because the amount of time required to read a
Hello!
Since we installed cassandra 0.8, the RowKeys are displayed in hexadecimal
in the CLI.
Any idea why and how to fix that?
Thanks in advance
Sebastien
Because you haven't declared a key_validation_class.
On Mon, Jul 4, 2011 at 4:19 PM, Sébastien Druon sdr...@spotuse.com wrote:
Hello!
Since we installed cassandra 0.8, the RowKeys are displayed in hexadecimal
in the CLI.
Any idea why and how to fix that?
Thanks in advance
Sebastien
When you say using CassandraServer, do you mean an embedded Cassandra server?
What process did you use to add the keyspaces? Adding a KS via the Thrift API
should take care of everything.
The simple test is to stop the server and the clients, start the server again, and
see if the KS is defined.
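One quick way to run that check from cassandra-cli, assuming the schema reached the server:

```
show keyspaces;
```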
How do you change the name of a cluster? The FAQ instructions do not seem to
work for me - are they still valid for 0.7.5?
Is the backup / restore mechanism going to work, or is there a better/simpler
way to copy data from multi-node to single-node?
Bug fixed in 0.7.6
On Tue, Jul 5, 2011 at 8:58 AM, aaron morton aa...@thelastpickle.comwrote:
How do you change the name of a cluster? The FAQ instructions do not seem
to work for me - are they still valid for 0.7.5?
Is the backup / restore mechanism going to work, or is there a
better/simpler to copy data
Hi,
When I am using multithreading with Cassandra Query Language, I have to make
a connection for each thread.
A single connection object for the whole thread pool is not working. I am
using JDBC for connectivity.
I know I may be missing something.
Any help/suggestions?
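One common way to get a connection per thread without passing one around by hand is a ThreadLocal; a minimal sketch follows, where a String stands in for the real java.sql.Connection (in actual code the supplier would call DriverManager.getConnection with the cassandra-jdbc URL, e.g. "jdbc:cassandra://host:9160/K1" — names and URL here are assumptions):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: one connection per thread via ThreadLocal. Each thread gets
// its own lazily created connection; threads never share one object.
public class PerThreadConnection {
    static final AtomicInteger opened = new AtomicInteger();

    // Stand-in factory; a real version would open a JDBC connection here.
    static final ThreadLocal<String> CONNECTION =
        ThreadLocal.withInitial(() -> "conn-" + opened.incrementAndGet());

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () ->
            System.out.println(Thread.currentThread().getName()
                + " uses " + CONNECTION.get());
        Thread a = new Thread(task, "worker-a");
        Thread b = new Thread(task, "worker-b");
        a.start(); b.start();
        a.join(); b.join();
        // Two worker threads -> two distinct connections, none shared.
    }
}
```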
As of Cassandra 0.8, we need to declare a key_validation_class for the column
family:
For example:
update column family User with key_validation_class=UTF8Type;
From: Sébastien Druon [mailto:sdr...@spotuse.com]
Sent: 05 July 2011 02:50
To: user@cassandra.apache.org
Subject: RowKey in