Re: Cassandra C client implementation

2011-12-16 Thread Vlad Paiu
Hello, Sorry, wrong link in the previous email. The proper link is http://svn.apache.org/viewvc/thrift/trunk/lib/c_glib/test/ Regards, Vlad Paiu, OpenSIPS Developer. On 12/15/2011 08:35 PM, Vlad Paiu wrote: Hello, while digging further into this I've found these:

Re: Counters != Counts

2011-12-16 Thread Alain RODRIGUEZ
Can we hope that counters will someday be replayed as safely as classical data? Is anyone still working on JIRAs like issues.apache.org/jira/browse/CASSANDRA-2495? I thought that replaying a write from the client didn't lead to over-counts, contrary to the internal Cassandra replay from

Re: commit log size

2011-12-16 Thread Alexandru Dan Sicoe
Hi Maxim, Sorry for the late reply, but I was away at a course. Lower memtable_flush_after_mins for your low-traffic CFs. If in the meantime you upgraded to 1.0 (which, by the way, ended up not working for me at 1.0.3, after I had converted a lot of data to it), I think there was a discussion you sent me
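Context for that advice: a commit log segment can only be recycled once every memtable holding data from it has been flushed, so a rarely-written CF with a long flush interval keeps old segments pinned and the log keeps growing. From 1.0 onwards the total commit log size can also be capped directly; a minimal sketch, assuming a 1.0-era cassandra.yaml (the value is only illustrative):

    # cassandra.yaml
    # force-flush the oldest dirty memtables once the commit log grows past this
    commitlog_total_space_in_mb: 4096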

RE: [RELEASE] Apache Cassandra 1.0.6 released

2011-12-16 Thread Viktor Jevdokimov
Created https://issues.apache.org/jira/browse/CASSANDRA-3642. -----Original Message----- From: Viktor Jevdokimov [mailto:viktor.jevdoki...@adform.com] Sent: Thursday, December 15, 2011 18:26 To: user@cassandra.apache.org Subject: RE: [RELEASE] Apache Cassandra 1.0.6 released Cassandra 1.0.6

Re: cassandra as an email store ...

2011-12-16 Thread Rustam Aliyev
Hi Sasha, Replying to the old thread just for reference. We've released the code we use to store emails in Cassandra as an open source project: http://elasticinbox.com/ Hope you find it helpful. Regards, Rustam. On Fri Apr 29 15:20:07 2011, Sasha Dolgy wrote: Great read, thanks. On

Some problems with stress testing

2011-12-16 Thread Chi Shin Hsu
Hi all, I am confused by my stress-testing results. The test environment: one Cassandra node, one client. The size of each row is 1 MB, and the client writes 10 rows continually. The total data size is 100 GB. First, my client connected to the server over 100 Mbps Ethernet. The result was 7.3 MB/s. I
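A rough sanity check on that first number (not from the thread): 100 Mbit/s Ethernet carries at most about 12.5 MB/s of payload, so 7.3 MB/s is a bit under 60% of the theoretical line rate, i.e. the network is a significant factor but not fully saturated.

    100 Mbit/s / 8 bits per byte  ≈ 12.5 MB/s theoretical maximum
    7.3 MB/s / 12.5 MB/s          ≈ 58% of line rate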

Re: cassandra as an email store ...

2011-12-16 Thread Sasha Dolgy
Hi Rustam, Thanks for posting that. Interesting to see that you opted to use SuperColumns: https://github.com/elasticinbox/elasticinbox/wiki/Data-Model .. Wondering, for the sake of argument/discussion, if anyone can come up with an alternative data model that doesn't use SCs. -sd On Fri,
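One common SuperColumn-free alternative at the time, offered here only as a sketch (the layout is illustrative, not from the thread), was to flatten the two levels into composite column names via CompositeType:

    row key      : mailbox id
    column name  : CompositeType(message id, metadata field name)
    column value : the metadata field's value

A column slice on the message-id prefix then returns all metadata for one message, which keeps the "read everything for a message at once" access pattern without SCs.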

Re: cassandra as an email store ...

2011-12-16 Thread Rustam Aliyev
Hi Sasha, There's been a lot of FUD regarding SuperColumns, but in our case we actually found them quite useful. The main argument for using SCs in this case is that message metadata is immutable and in most cases read and written all together (i.e. you fetch all message headers

Re: [RELEASE] Apache Cassandra 1.0.6 released

2011-12-16 Thread Terje Marthinussen
Does it work if you turn off mmap? We run without mmap and see hardly any difference in performance, but with the huge benefit that memory consumption can actually be monitored easily, and things just seem more stable this way in general. Just turn it off and see how that
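In that era mmap was controlled by disk_access_mode in cassandra.yaml; a minimal sketch, assuming a 1.0-era config file:

    # cassandra.yaml
    # 'auto' uses mmap where possible, 'mmap_index_only' maps only the index files,
    # 'standard' disables mmap'd I/O entirely
    disk_access_mode: standard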

Re: Cassandra C client implementation

2011-12-16 Thread Vlad Paiu
Hi, I've also decided to give the C++ Thrift client a try, but I can't seem to compile the simple examples from http://wiki.apache.org/cassandra/ThriftExamples . I get lots of errors like: /usr/local/include/thrift/transport/TTransport.h:34:1: error: ‘uint32_t’ does not name a type
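For reference, this particular error usually meant the Thrift C++ headers of that era were being parsed before the fixed-width integer types had been defined. A hedged workaround sketch (the exact header set depends on the Thrift version):

    // Pull in <stdint.h> (and the socket headers) before any Thrift headers,
    // so uint32_t and friends exist when TTransport.h is parsed.
    #include <stdint.h>
    #include <netinet/in.h>

    #include <thrift/transport/TSocket.h>
    #include <thrift/transport/TBufferTransports.h>
    #include <thrift/protocol/TBinaryProtocol.h>

Some Thrift releases were instead satisfied by adding -DHAVE_INTTYPES_H -DHAVE_NETINET_IN_H to the compile flags; both workarounds were circulating at the time.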

gracefully recover from data file corruptions

2011-12-16 Thread Ramesh Natarajan
We are running a 30-node 1.0.5 Cassandra cluster on RHEL 5.6 x86_64, virtualized on ESXi 5.0. We are seeing a DecoratedKey assertion error during compactions, and at this point we suspect anything from the OS/ESXi/HBA/iSCSI RAID. Please correct me if I am wrong: once a node gets into this

Re: gracefully recover from data file corruptions

2011-12-16 Thread Jeremiah Jordan
You need to run repair on the node once it is back up (to get back the data you just deleted). If this is happening on more than one node, you could have data loss... -Jeremiah On 12/16/2011 07:46 AM, Ramesh Natarajan wrote: We are running a 30-node 1.0.5 Cassandra cluster on RHEL 5.6
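For reference, the repair Jeremiah mentions is the standard anti-entropy repair run against the node once it is back up (host and keyspace names are placeholders):

    nodetool -h <affected-node> repair <keyspace>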

Re: Using Cassandra in Rails App

2011-12-16 Thread Jeremy Hanna
Traditionally there are two places to go: Twitter's Ruby client at https://github.com/twitter/cassandra or the newer CQL driver at http://code.google.com/a/apache-extras.org/p/cassandra-ruby/. The latter might be nice for greenfield applications, but CQL is still gaining features. Some

Re: Using Cassandra in Rails App

2011-12-16 Thread Aaron Turner
On Thu, Dec 15, 2011 at 3:13 AM, Wolfgang Vogl aon.912508...@aon.at wrote: Hi, I have a couple of questions about working with Ruby on Rails and Cassandra. What is the recommended way to integrate Cassandra into a Rails app? active_column, cassandra-cql, or some other gem? Is there

Re: gracefully recover from data file corruptions

2011-12-16 Thread Ben Coverston
Hi Ramesh, Every time I have seen this in the last year it has been caused by bad hardware or bad memory; usually we find errors in the syslog. Jeremiah is right about running repair when you get your nodes back up. Fortunately, with the addition of checksums in 1.0, I don't think that the

how to debug/trace

2011-12-16 Thread S Ahmed
How can you possibly trace a read/write through Cassandra's codebase when it uses so many thread pools/executors? I'm just getting into threads, so I'm not too familiar with how one can trace things in debug mode in IntelliJ when various thread pools are processing things, etc.

Re: how to debug/trace

2011-12-16 Thread Yang
Normally I'd just fire up debug in Eclipse and set a breakpoint on the CassandraServer methods. On Fri, Dec 16, 2011 at 2:19 PM, S Ahmed sahmed1...@gmail.com wrote: How can you possibly trace a read/write through Cassandra's codebase when it uses so many thread pools/executors? I'm just getting
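A short, hedged list of classes that made useful breakpoints in the 1.0-era source (a request is handed between them via the stage thread pools):

    org.apache.cassandra.thrift.CassandraServer  -- Thrift entry points (get_slice, insert, batch_mutate, ...)
    org.apache.cassandra.service.StorageProxy    -- coordinator-side read and write paths
    org.apache.cassandra.db.ColumnFamilyStore    -- local reads and writes on each replica
    org.apache.cassandra.concurrent.StageManager -- where the read/mutation stages are defined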

Re: gracefully recover from data file corruptions

2011-12-16 Thread Ramesh Natarajan
Thanks Ben and Jeremiah. We are actively working with our third-party vendors to determine the root cause of this issue. Hopefully we will figure something out. This repair procedure is more of a last resort that I really don't want to use, but something to keep in mind if the necessity arises.

memory estimate for each key in the key cache

2011-12-16 Thread Kent Tong
Hi, From the source code I can see that for each key, the hash (token), the key itself (a ByteBuffer) and the position (a long offset into the SSTable) are stored in the key cache. The hash is an MD5 hash, so it is 16 bytes. So the total size required is at least 16 + sizeof(key) + 4, which is 20

Re: memory estimate for each key in the key cache

2011-12-16 Thread Brandon Williams
On Fri, Dec 16, 2011 at 8:52 PM, Kent Tong freemant2...@yahoo.com wrote: Hi, From the source code I can see that for each key, the hash (token), the key itself (a ByteBuffer) and the position (a long offset into the SSTable) are stored in the key cache. The hash is an MD5 hash, so it is 16

Re: memory estimate for each key in the key cache

2011-12-16 Thread Dave Brosius
On 12/16/2011 10:13 PM, Brandon Williams wrote: On Fri, Dec 16, 2011 at 8:52 PM, Kent Tong freemant2...@yahoo.com wrote: Hi, From the source code I can see that for each key, the hash (token), the key itself (a ByteBuffer) and the position (a long offset into the SSTable) are stored in the key

Re: memory estimate for each key in the key cache

2011-12-16 Thread Brandon Williams
On Fri, Dec 16, 2011 at 9:31 PM, Dave Brosius dbros...@mebigfatguy.com wrote: Wow, Java is a lot better than I thought if it can perform that kind of magic. I'm guessing the wiki information is just old and out of date. It's probably more like 60 + sizeof(key). With jamm and MAT it's fairly
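A rough, hedged breakdown of why the naive estimate is so far off (numbers are approximate, JVM-dependent, and not from the thread):

    raw data      : 16 (MD5 token) + 8 (long position) + sizeof(key)  = 24 + sizeof(key)
    JVM overhead  : object headers, the ByteBuffer plus its backing byte[],
                    and the cache's map entry and references add a few dozen bytes more
    in practice   : roughly 60 + sizeof(key) per entry, as Brandon estimates above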