There must be at least one = (equals) comparison in the WHERE clause on a
key or secondary index column; this is a Cassandra limitation.
On Sun, Jul 29, 2012 at 9:30 AM, Abhijit Chanda
abhijit.chan...@gmail.com wrote:
> There should be at least one = (equals) in the WHERE case on key or
> secondary index column, this is the Cassandra limitation.
Yep, it's still there (see validateFilterClauses from line 531):
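To illustrate the restriction being discussed, a minimal CQL sketch (the table, column names, and values are hypothetical, not from the thread):

```sql
-- Hypothetical table: partition key user_id, secondary index on email
CREATE TABLE users (
    user_id uuid PRIMARY KEY,
    email text,
    age int
);
CREATE INDEX ON users (email);

-- Accepted: equality on the partition key
SELECT * FROM users WHERE user_id = 62c36092-82a1-3a00-93d1-46196ee77204;

-- Accepted: equality on an indexed column
SELECT * FROM users WHERE email = 'a@example.com';

-- Rejected: no = comparison on a key or indexed column
SELECT * FROM users WHERE age > 30;
```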
On Sat, Jul 28, 2012 at 4:21 PM, Ertio Lew ertio...@gmail.com wrote:
I heard that it is *not highly recommended* to create more than a single
keyspace for an application or on a single cluster!?
The only possible issue there is with connections and connection pooling,
where you need to set a
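To make the connection-pooling cost concrete: in Thrift-era Cassandra clients (pycassa, for example) a connection is bound to a single keyspace, so each keyspace needs its own pool. The sketch below is a toy model of that behavior, not any real client's API; all class and method names are illustrative.

```python
# Toy model of per-keyspace connection pooling: total open sockets grow
# linearly with the number of keyspaces. Illustrative only, not a real
# Cassandra client API.

class KeyspacePool:
    def __init__(self, keyspace, size):
        self.keyspace = keyspace
        # Each slot stands in for one open socket to the cluster.
        self.connections = ["conn-%s-%d" % (keyspace, i) for i in range(size)]

class Client:
    def __init__(self, pool_size=5):
        self.pool_size = pool_size
        self.pools = {}

    def pool_for(self, keyspace):
        # One pool per keyspace: total sockets = keyspaces * pool_size.
        if keyspace not in self.pools:
            self.pools[keyspace] = KeyspacePool(keyspace, self.pool_size)
        return self.pools[keyspace]

    def total_connections(self):
        return sum(len(p.connections) for p in self.pools.values())

client = Client(pool_size=5)
client.pool_for("app_data")
client.pool_for("app_metrics")
print(client.total_connections())  # 10: two keyspaces double the sockets
```

So the overhead of an extra keyspace is mostly another set of pooled connections to size and monitor, rather than anything on the storage side.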
Hi
I got the exception below in the Cassandra log while doing a *drain* via
*nodetool* before shutting down one node in a 3-node development
Cassandra 1.1.2 cluster.
2012-07-30 09:37:45,347 ERROR [CustomTThreadPoolServer] Thrift error
occurred during processing of message.
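For context, the drain-then-stop sequence being described, as a hedged shell sketch (the host, service name, and install layout vary by environment; packaged-install commands shown):

```shell
# Flush memtables and stop the node accepting new writes, so the
# commit log is empty before shutdown.
nodetool -h 127.0.0.1 drain

# Then stop the Cassandra process (service name varies by install).
sudo service cassandra stop
```

Clients still routed to the draining node can see Thrift errors like the one above, which is why traffic is usually moved off the node first.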
I'm trying to determine if there are any practical limits on the amount of data
that a single node can handle efficiently, and if so, whether I've hit that
limit or not.
We've just set up a new 7-node cluster with Cassandra 1.1.2 running under
OpenJDK6. Each node is a 12-core Xeon with 24GB of
Yikes. You should read:
http://wiki.apache.org/cassandra/LargeDataSetConsiderations
Essentially, what it sounds like you are now running into is this:
The BloomFilters for each SSTable must exist in main memory. Repair
tends to create some extra data which normally gets compacted away
later.
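As a rough illustration of that memory cost, the textbook Bloom filter sizing formula (this is the standard formula, not Cassandra's exact implementation, and the row count and false-positive rate below are made-up numbers):

```python
import math

def bloom_filter_bytes(n_keys, fp_rate):
    """Optimal Bloom filter size in bytes for n_keys at a target
    false-positive rate, per the standard formula m = -n*ln(p)/(ln 2)^2."""
    bits = -n_keys * math.log(fp_rate) / (math.log(2) ** 2)
    return bits / 8

# A hypothetical node holding one billion row keys at a 1% false-positive
# rate needs on the order of a gigabyte of heap just for Bloom filters.
size = bloom_filter_bytes(1_000_000_000, 0.01)
print(round(size / 1024**3, 2))  # ~1.12 GiB
```

Since the filters scale with key count and must stay resident in the heap, this is one of the practical per-node data limits the original question was asking about.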
Hi
As part of the Cassandra upgrade from 1.0.6 to 1.1.2, I am running
*nodetool drain* node by node to empty the commit logs. While draining a
particular node, that node is still accepting READ+WRITE requests from the
clients and throwing the exceptions below.
2012-07-30 23:08:18,169 ERROR