RE: Unavailable exception with CL.ANY

2010-12-14 Thread Rajat Chopra
Okay, got that. Thanks. Bringing back just one of the nodes solved it. What I was keen to know was whether there is a way to tell which keys reside on which node, something like a 'nodetool column.path' that prints the list of nodes the column path resides on :). I have HH disabled for some other

Re: Dual NIC server problems

2010-12-14 Thread aaron morton
The code for nodetool appears to just pass the host value through to the NodeProbe. Was there anything else in the stack trace? If you use the host name of the machine rather than the IP, what happens? cassandra-env.sh includes a link to this page about getting JMX running with firewalls
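For reference, the JMX settings that nodetool talks to are assembled in cassandra-env.sh; a minimal sketch of the relevant block (the port and flags shown are illustrative 0.6-era defaults, not values from this thread):

```shell
# cassandra-env.sh (sketch): the JMX endpoint nodetool connects to.
# 8080 was the old default JMX port; adjust to match your install.
JMX_PORT="8080"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
```

With firewalls in the picture, the complication is that RMI picks a second, random port for the actual connection, which is what the linked page discusses.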

Re: Dynamic Snitch / Read Path Questions

2010-12-14 Thread Daniel Doubleday
On Dec 14, 2010, at 2:29 AM, Brandon Williams wrote: On Mon, Dec 13, 2010 at 6:43 PM, Daniel Doubleday daniel.double...@gmx.net wrote: Oh - well but I see that the coordinator is actually using its own score for ordering. I was only concerned that dropped messages are ignored when

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Jedd Rashbrooke
Peter, Jonathan - thank you for your replies. I should probably have repeated myself in the body, but as I mentioned in the subject line, we're running Sun Java 1.6. On 10 December 2010 18:37, Peter Schuller peter.schul...@infidyne.com wrote: Memory-mapped files will account for both

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 12, 2010, at 17:21, Jonathan Ellis wrote: http://www.riptano.com/docs/0.6/troubleshooting/index#nodes-are-dying-with-oom-errors I can rule out the first 3. I was running cassandra with default settings, i.e. 1GB heap and 256M memtable. So, with 3 memtables+1GB the JVM should run with

Cassandra-Pig keyspace not found issue

2010-12-14 Thread Peter Davies
Pig seems to think my keyspace doesn't exist. I'm connecting to a remote Cassandra instance configured via the environment variables PIG_RPC_PORT and PIG_INITIAL_ADDRESS (an IP address). I get the following backend logged output...
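A sketch of how those two environment variables might be set before invoking Pig; the port, address, and script name here are placeholders, not values from the thread:

```shell
# Point the Pig/Cassandra loadfunc at a remote cluster (example values).
export PIG_RPC_PORT=9160              # Thrift port of the Cassandra node
export PIG_INITIAL_ADDRESS=10.0.0.1   # IP of a reachable node
pig my_script.pig
```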

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Clint Byrum
On Tue, 2010-12-14 at 11:06 +, Jedd Rashbrooke wrote: JNA is something I'd read briefly about a while back, but now it might be something I need to explore further. We're using Cassandra 0.6.6, and our Ubuntu version offers a packaged release of libjna 3.2.3-1 .. rumours on the

Fauna Questions

2010-12-14 Thread Alberto Velandia
Hi, has anyone noticed that the documentation for the Cassandra class is gone from the website? http://blog.evanweaver.com/2010/12/06/cassandra-0-8/ I was wondering if there's a way for me to count how many rows exist inside a Column Family, and a way to erase the contents of that Column Family

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 14, 2010, at 15:31, Timo Nentwig wrote: On Dec 14, 2010, at 14:41, Jonathan Ellis wrote: This is the 'A row has grown too large' section from that troubleshooting guide. Why? This is what a typical row (?) looks like: [defa...@test] list tracking limit 1; --- RowKey:

Re: Consistency question caused by Read_all and Write_one

2010-12-14 Thread Alvin UW
Thanks, that is very helpful. I think I'd like to write to the same column. Would you please give me more details about your last sentence? For example, why can't I use a locking mechanism inside of Cassandra? Thanks. Alvin 2010/12/13 Aaron Morton aa...@thelastpickle.com In your example is a

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
Memory-mapped files will account for both virtual and, to the extent that they are resident in memory, to the resident size of the process. This bears further investigation. Would you consider a 3GB overhead on a 4GB heap a possibility? (From a position of some naivety, this seems a bit

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
I can rule out the first 3. I was running cassandra with default settings, i.e. a 1GB heap and 256M memtables. So, with 3 memtables + 1GB the JVM should run with 1.75G (although http://wiki.apache.org/cassandra/MemtableThresholds suggests increasing heap size only gently). Did so. 4GB machine
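As a sanity check on the arithmetic in the quoted text, assuming up to three full memtables in flight at the 256 MB default on top of a 1 GB heap:

```shell
# Back-of-envelope footprint from the thread's defaults.
memtables=3        # memtables that may be resident at once (assumption)
memtable_mb=256    # default memtable threshold from the thread
heap_mb=1024       # 1 GB heap
total_mb=$(( memtables * memtable_mb + heap_mb ))
echo "${total_mb} MB"   # 1792 MB, i.e. the ~1.75G the poster expects
```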

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
java.lang.OutOfMemoryError: Java heap space
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
        at org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtilities.java:261)
        at

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Timo Nentwig
On Dec 14, 2010, at 19:38, Peter Schuller wrote: For debugging purposes you may want to switch Cassandra to standard IO mode instead of mmap. This will have a performance-penalty, but the virtual/resident sizes won't be polluted with mmap():ed data. Already did so. It *seems* to run more

org.apache.cassandra.service.ReadResponseResolver question

2010-12-14 Thread Daniel Doubleday
Hi, I'm sorry - I don't want to be a pain in the neck with source questions, so please just ignore me if this is stupid: Isn't org.apache.cassandra.service.ReadResponseResolver supposed to throw a DigestMismatchException if it receives a digest which does not match the digest of a read message?

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
The stack trace doesn't make sense relative to what I get checking out 0.6.6. Are you *sure* this is 0.6.6, without patches or other changes? Oh, sorry, the original poster of this thread was/is actually using 0.6, I am (as mentioned in other posts) actually on 0.7rc2. Sorry that I didn't

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
I just uncommented the GC JVMOPTS from the shipped cassandra start script and use Sun JVM 1.6.0_23. Hmm, but these GC tuning options are also uncommented. I'll comment them again and try again. Maybe I was just too quick trying to mentally parse it and given the jumbled line endings. You're

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
For debugging purposes you may want to switch Cassandra to standard IO mode instead of mmap. This will have a performance-penalty, but the virtual/resident sizes won't be polluted with mmap():ed data. Already did so. It *seems* to run more stable, but it's still far off from being stable.

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
I posted mostly as a heads up for others using similar profiles (4GB heap on ~8GB boxes) to keep an eye out for. I expect a few people, particularly if they're on Amazon EC2, are running this type of setup. On the other hand, mum always said I was unique. ;) So, now that I get that we

Re: org.apache.cassandra.service.ReadResponseResolver question

2010-12-14 Thread Jonathan Ellis
Correct. https://issues.apache.org/jira/browse/CASSANDRA-1830 is open to fix that. If you'd like to review the patch there, that would be very helpful. :) On Tue, Dec 14, 2010 at 1:55 PM, Daniel Doubleday daniel.double...@gmx.net wrote: Hi I'm sorry - don't want to be a pain in the neck with

Re: cassandra database viewer

2010-12-14 Thread Brandon Williams
On Tue, Dec 14, 2010 at 7:11 AM, Amin Sakka, Novapost amin.sa...@novapost.fr wrote: Thanks for your answers. I have checked out the 0.7 branch but am still having trouble: *__init__() takes at least 3 arguments (2 given)* Are you using the 0.7 branch of telephus too? -Brandon

Re: Insertion batch stopping for some reason at 100 records

2010-12-14 Thread Alberto Velandia
Makes perfect sense, thanks. How can I set the count limit for a specific Column Family? On Dec 14, 2010, at 3:47 PM, Peter Schuller wrote: Hi, I'm using Cassandra 0.6.8 and Fauna. I'm running a batch to populate my db and for some reason every time it gets to 100 records it stops, no error

Re: Insertion batch stopping for some reason at 100 records

2010-12-14 Thread Peter Schuller
(Btw I said row count in my response; that was a poor choice of words given that row has a specific meaning in Cassandra. I meant column count.) Makes perfect sense, thanks. How can I set the count limit for a specific Column Family? Looks like you can pass a :count option to get() (I just

Re: Memory leak with Sun Java 1.6 ?

2010-12-14 Thread Peter Schuller
If it helps, I also found quite a few of these in the logs org.apache.cassandra.db.UnserializableColumnFamilyException: Couldn't find cfId=224101 However a single cassandra instance locally (OSX, 1.6.0_22, mmap) runs just perfect for hours. No exceptions, no OOM. Given that these

Re: Fauna Questions

2010-12-14 Thread Aaron Morton
There is a truncate() function in the ruby api; if you require 'cassandra/0.7' this can truncate all the data in a CF. It will call the truncate function on the thrift api. I do not know of a precise way to get a count of rows. There is a function to count the number of columns, see

Re: Fauna Questions

2010-12-14 Thread Tyler Hobbs
There's an estimateKeys() function exposed via JMX that will give you an approximate row count for the node. In jconsole this shows up under o.a.c.db - ColumnFamilies - Keyspace - CF - Operations. There's not a precise way to count rows other than to do a get_range_slices() over the entire CF,

How to get columns in a super column in cassandra-cli ?

2010-12-14 Thread Hayarobi Park
Hello, I'm using cassandra 0.7.0-rc2. When I try to get column contents in a super column of a Super CF like below: ] get myCF['key']['scName']; the client replies 'supercolumn parameter is not optional for super CF'. It seemed to work in cassandra-0.7.0-beta2, if my memory serves. The
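A sketch of the fuller path the 0.7 cli appears to want, using the thread's own key and supercolumn names; 'colName' is a hypothetical subcolumn name, not one from the thread:

```
] get myCF['key']['scName']['colName'];
```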

Re: Dual NIC server problems

2010-12-14 Thread Oleg Anastasyev
This is probably because the RMI code that JMX uses to listen detected the wrong address. To fix this, add the following to the startup script of the Cassandra node instances: -Djava.rmi.server.hostname=127.0.0.1 (change 127.0.0.1 to the actual internal address of the Cassandra node)
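In cassandra-env.sh terms (assuming that is where JVM_OPTS is assembled on your install), the suggestion looks roughly like:

```shell
# Pin the address RMI advertises for JMX callbacks. 127.0.0.1 is the
# example from the post; use the node's actual internal IP instead.
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=127.0.0.1"
```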