I'm facing the following issue with a Cassandra 1.0 setup. The same works on
0.8.7:
# cassandra-cli -h x.x.x.x -f RTSCFs.sch
Connected to: Real Time Stats on x.x.x.x/9160
Authenticated to keyspace: Stats
39c3e120-fa24-11e0--61d449114eff
Waiting for schema agreement...
The schema has not
Hi,
I have a 4-node cluster of Cassandra 0.8.7 (upgraded just recently from
0.7.8). The upgrade went smoothly, with no problems with the data.
The problem is my MapReduce tasks. They all report:
java.io.IOException: InvalidRequestException(why:Column timestamp is required)
at
As usually happens, I found the problem just after I sent the
question. I have to use setters to set values on thrift.*
classes.
So instead of:
Deletion d = new Deletion();
d.timestamp = 1;
use:
Deletion d = new Deletion();
d.setTimestamp(1);
etc.
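The distinction matters because Thrift-generated classes track which fields have been set; assigning the public field directly skips that bookkeeping, so the serialized request omits the timestamp and the server rejects it. A minimal self-contained sketch of the pattern (this `Deletion` is a mock that only mimics the isset bookkeeping of the real Thrift-generated `org.apache.cassandra.thrift.Deletion`):

```java
// Mock of a Thrift-generated struct, reduced to the isset-tracking behaviour.
class Deletion {
    public long timestamp;          // public field, as Thrift generates it
    private boolean timestampIsSet; // generated code tracks this flag

    public Deletion setTimestamp(long timestamp) {
        this.timestamp = timestamp;
        this.timestampIsSet = true; // the setter flips the flag; direct assignment does not
        return this;
    }

    public boolean isSetTimestamp() {
        return timestampIsSet;
    }
}

public class ThriftSetterDemo {
    public static void main(String[] args) {
        Deletion direct = new Deletion();
        direct.timestamp = 1; // field is assigned, but the isset flag stays false
        System.out.println("direct assignment, isSetTimestamp = " + direct.isSetTimestamp());

        Deletion viaSetter = new Deletion();
        viaSetter.setTimestamp(1); // flag is flipped, so serialization includes the field
        System.out.println("via setter, isSetTimestamp = " + viaSetter.isSetTimestamp());
    }
}
```

Only the field with its flag set is written on the wire, which is why the server sees "Column timestamp is required" when the public field is assigned directly.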
Regards,
Patrik
Hello,
SimpleAuthenticator and SimpleAuthorization just disappeared in release 1.0.0...
Will it stay like this, or is it a release bug?
Thanks,
- Pierre
See:
https://issues.apache.org/jira/browse/CASSANDRA-2922
On Thu, Oct 20, 2011 at 4:08 AM, Pierre Chalamet pie...@chalamet.net wrote:
Hello,
SimpleAuthenticator and SimpleAuthorization just disappeared in release
1.0.0...
Will it stay like this, or is it a release bug?
Thanks,
- Pierre
Thanks for the answer.
- Pierre
-Original Message-
From: Yi Yang i...@iyyang.com
Date: Thu, 20 Oct 2011 04:20:25
To: user@cassandra.apache.org; pie...@chalamet.net
Subject: Re: SimpleAuthenticator / SimpleAuthorization missing
See:
https://issues.apache.org/jira/browse/CASSANDRA-2922
Hello Aaron,
I happen to have 48GB on each machine I use in the cluster. Can I
assume that I can't really use all of this memory productively? Do you
have any suggestions related to that? Can I run more than one instance of
Cassandra on the same box (using different ports) to take advantage
I have noticed this too. Apparently it is a Thrift code-generation thing.
On Thu, Oct 20, 2011 at 5:33 AM, Patrik Modesto patrik.mode...@gmail.com wrote:
As usually happens, I found the problem just after I sent the
question. I have to use setters to set values on thrift.*
classes.
So
Hi all.
I am using Cassandra 1.0.0.
I created a keyspace with all the column family definitions at runtime, and
it works fine until I stop and then restart the Cassandra server.
During its startup I see this error in the Cassandra log:
ERROR 16:22:16,977 Exception encountered during startup
Hi,
I installed Cassandra 0.8.7 on a single machine with 16GB of RAM.
I'm testing performance using the stress tool. I noticed it creates 2 standard
column families and 2 super column families with the default settings.
I want to test multiple column families. Is there any way to test it
using
I have been playing around with Cassandra 1.0.0 in our test environment, and it
seems pretty sweet so far. I have, however, come across what appears to be a
bug in tracking node load. I have enabled compression and leveled compaction
on all CFs (scrub + snapshot deletion), and the nodes have been operating
Are you using Cassandra's caching? If you are, then you will need to play
around with the RAM setting to find a sweet spot. A low hit rate on the
cache (which is counterproductive anyway) will cause more GC; a high hit
rate, less GC.
If you are not caching, there is no need to use a large heap as the
On 10/20/2011 09:38 AM, Maxim Potekhin wrote:
I happen to have 48GB on each machine I use in the cluster. Can I
assume that I can't really use all of this memory productively? Do you
have any suggestions related to that? Can I run more than one instance of
Cassandra on the same box (using
On Thu, Oct 20, 2011 at 12:53 PM, Dan Hendry dan.hendry.j...@gmail.com wrote:
I have been playing around with Cassandra 1.0.0 in our test environment, and it
seems pretty sweet so far. I have, however, come across what appears to be a
bug in tracking node load. I have enabled compression and leveled
We are running an 8-node Cassandra 1.0 cluster. We are seeing this
exception quite often. Any idea how to debug this issue?
java.lang.IllegalArgumentException: Illegal Capacity: -2
at java.util.ArrayList.<init>(ArrayList.java:110)
at
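For reference, the "Illegal Capacity: -2" message comes straight from the `java.util.ArrayList` constructor, which rejects any negative initial capacity; something in the read path is computing a negative collection size and passing it through. A trivial reproduction of the JDK side of the error:

```java
import java.util.ArrayList;

public class NegativeCapacityDemo {
    public static void main(String[] args) {
        try {
            // ArrayList rejects a negative initial capacity up front
            new ArrayList<String>(-2);
        } catch (IllegalArgumentException e) {
            // prints: Illegal Capacity: -2
            System.out.println(e.getMessage());
        }
    }
}
```

So the bug to chase is whatever computed -2 as a size before the list was allocated, not the ArrayList itself.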
I have a vague memory of there being a bug about this in the past.
A
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 19/10/2011, at 10:58 PM, Radim Kolar wrote:
On 18.10.2011 22:35, aaron morton wrote:
Looks like the column meta for the CF
Solr can use a dynamic schema…
https://github.com/apache/lucene-solr/blob/trunk/solr/example/solr/conf/schema.xml#L538
But you may still want to define a schema so you can adjust the index and query
time processing/typing of the field values.
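A dynamic-field declaration in Solr's schema.xml looks roughly like the line linked above; the names and types here are illustrative, not taken from the thread:

```xml
<!-- Illustrative dynamicField declarations: any field whose name matches the
     pattern is typed and indexed without being declared explicitly. -->
<dynamicField name="*_i" type="int"    indexed="true" stored="true"/>
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
```

Explicitly declared fields can still be added alongside these to pin down index- and query-time analysis for the values you care about.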
Cheers
-
Aaron Morton
Freelance
It's unlikely that HH (hinted handoff) is the issue. (Disclaimer: I am not familiar with HH in
1.0; I know it's changed a bit.)
Take a look at the TP stats; what's happening?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 20/10/2011, at 10:10 AM, Jérémy
Found it: https://issues.apache.org/jira/browse/CASSANDRA-3387
On Thu, Oct 20, 2011 at 1:37 PM, aaron morton aa...@thelastpickle.com wrote:
It's unlikely that HH (hinted handoff) is the issue. (Disclaimer: I am not familiar with HH in
1.0; I know it's changed a bit.)
Take a look at the TP stats; what's happening
Looks like a bug, patch is here
https://issues.apache.org/jira/browse/CASSANDRA-3391
Until it is fixed, avoid using CompositeType in the key_validator_class, and blow
away the Schema and Migrations SSTables.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
See http://www.mail-archive.com/user@cassandra.apache.org/msg18132.html
https://issues.apache.org/jira/browse/CASSANDRA-3391
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 21/10/2011, at 5:31 AM, Vitaly Vengrov wrote:
Hi all.
I am
2) If a single key, would adding file/block/record-level encryption to
Cassandra solve this problem? If not, why not? Is there something
special about your encryption methods?
There is nothing special about our encryption methods, but we will never be
able to encrypt or decrypt data on our
That looks to me like it's reporting the uncompressed size as the load.
This should be fixed in the 1.0 branch for 1.0.1.
(https://issues.apache.org/jira/browse/CASSANDRA-3338)
On Thu, Oct 20, 2011 at 11:53 AM, Dan Hendry dan.hendry.j...@gmail.com wrote:
I have been playing around with Cassandra 1.0.0 in
The stress tool doesn't support multi-CF batches out of the box. Of
course, you're free to extend it any way you like to more accurately simulate
your workload. (That's one reason we've kept stress.py around: it's less
code to customize than the Java version.)
On Thu, Oct 20, 2011 at