Unable to connect to cassandra server on EC2?

2014-08-10 Thread Subodh Nijsure
Hello, I have set up a new EC2 instance to run Cassandra on EC2 and have gone through a bunch of questions that don't seem to help. I am running apache-cassandra-2.1.0-rc3. I have opened ports 9160 and 9042 on my EC2 instance; say its IP address is 1.2.3.4. Since this is a single-node system I haven't opened

Re: Unable to connect to cassandra server on EC2?

2014-08-10 Thread Mark Reddy
Hi, While I have no direct experience with the Python driver itself, I took a quick look and it uses Cassandra's native transport protocol, so setting the port to 9160 (the Thrift port) won't work. You will need to set it to the native transport port, which is 9042. Also make sure that you have the
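
Since Mark's reply is cut off above, here is a minimal sketch of the kind of connection he is describing, assuming the DataStax Python driver (cassandra-driver); the address 1.2.3.4 is just the placeholder from the original question.

    from cassandra.cluster import Cluster

    # Connect over the native transport (CQL binary) protocol on port 9042,
    # not the Thrift port 9160.
    cluster = Cluster(contact_points=['1.2.3.4'], port=9042)
    session = cluster.connect()

    for row in session.execute("SELECT release_version FROM system.local"):
        print(row.release_version)

    cluster.shutdown()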

Re: Strange slow schema agreement on 2.0.9 ... anyone seen this? - knowsVersion may get stuck as false?

2014-08-10 Thread graham sanderson
We saw this problem again today, so it certainly seems reasonable that it was introduced by the upgrade from 2.0.5 to 2.0.9 (we had never seen it before that). I think this must be related to https://issues.apache.org/jira/browse/CASSANDRA-6695 or

clarification on 100k tombstone limit in indexes

2014-08-10 Thread Ian Rose
Hi - On this page (http://www.datastax.com/documentation/cql/3.0/cql/ddl/ddl_when_use_index_c.html), the docs state: "Do not use an index [...] on a frequently updated or deleted column" and "Problems using an index on a frequently updated or deleted column"

Re: clarification on 100k tombstone limit in indexes

2014-08-10 Thread Mark Reddy
Hi Ian, The issue here, which relates to both normal and index column families, is that scanning over a large number of tombstones can cause Cassandra to fall over due to increased GC pressure. This pressure is caused because tombstones create DeletedColumn objects, which consume heap. Also these
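
To make the failure mode concrete, here is an illustrative sketch with the Python driver; the keyspace, table, and column names are hypothetical, and the thresholds in the comments are the cassandra.yaml defaults (tombstone_warn_threshold = 1000, tombstone_failure_threshold = 100000).

    from cassandra.cluster import Cluster

    session = Cluster(['127.0.0.1']).connect('demo_ks')  # hypothetical keyspace

    session.execute("""
        CREATE TABLE IF NOT EXISTS users (
            id int PRIMARY KEY,
            status text
        )
    """)
    session.execute("CREATE INDEX IF NOT EXISTS ON users (status)")

    # Each overwrite or delete of the indexed value leaves an obsolete entry
    # behind in the index that later reads have to scan past.
    for i in range(1000):
        session.execute("UPDATE users SET status = %s WHERE id = %s",
                        ("state-%d" % i, 42))

    # A lookup through the index walks those stale/tombstoned entries; once a
    # single read scans past tombstone_failure_threshold it is aborted.
    rows = session.execute("SELECT * FROM users WHERE status = %s",
                           ("state-999",))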

Re: clarification on 100k tombstone limit in indexes

2014-08-10 Thread Ian Rose
Hi Mark - Thanks for the clarification, but as I'm not too familiar with the nuts and bolts of Cassandra I'm not sure how to apply that info to my current situation. It sounds like this 100k limit is, indeed, a global limit as opposed to a per-row limit. Are these tombstones ever GCed out of the

Re: clarification on 100k tombstone limit in indexes

2014-08-10 Thread Mark Reddy
Hi Ian, "Are these tombstones ever GCed out of the index? How frequently?" Yes, tombstones are removed after the time specified by gc_grace_seconds has elapsed, which by default is 10 days and is configurable. Knowing and understanding how Cassandra handles distributed deletes is key to designing
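
For reference, gc_grace_seconds is set per table, so it can be tuned where heavy deletes are expected. A minimal sketch via the Python driver, with hypothetical keyspace/table names (the default is 864000 seconds, i.e. 10 days):

    from cassandra.cluster import Cluster

    session = Cluster(['127.0.0.1']).connect()

    # Shorten the tombstone grace period to one day for this table; tombstones
    # only become eligible for removal at compaction after this window.
    session.execute("ALTER TABLE demo_ks.users WITH gc_grace_seconds = 86400")

Lowering gc_grace_seconds trades tombstone buildup against the risk of deleted data reappearing if a node is down for longer than the grace period, which is why understanding distributed deletes matters here.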

Re: clarification on 100k tombstone limit in indexes

2014-08-10 Thread DuyHai Doan
Hello Ian, "It sounds like this 100k limit is, indeed, a global limit as opposed to a per-row limit" -- The threshold applies to each REQUEST, not to a partition or globally. It does not apply to a partition (physical row) simply because in one request you can fetch data from many partitions
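
A small sketch of what "per request" means in practice, again with the Python driver and hypothetical names: a single query can read from several partitions, and the tombstones skipped across all of them count toward the same per-request threshold.

    from cassandra.cluster import Cluster

    session = Cluster(['127.0.0.1']).connect('demo_ks')

    # One request, many partitions: an IN query (or an index scan) reads from
    # several physical rows, and every tombstone encountered along the way is
    # counted against the single request's tombstone threshold.
    rows = session.execute("SELECT * FROM users WHERE id IN (1, 2, 3, 4, 5)")
    for row in rows:
        print(row)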