Re: Exception in cassandra logs while processing the message

2014-02-17 Thread ankit tyagi
Hello, does anyone have an idea about this exception? Regards, Ankit Tyagi On Fri, Feb 14, 2014 at 7:02 PM, ankit tyagi ankittyagi.mn...@gmail.com wrote: Hello, I am seeing the below exception in my Cassandra logs (/var/log/cassandra/system.log). INFO [ScheduledTasks:1] 2014-02-13 13:13:57,641

Re: Exception in cassandra logs while processing the message

2014-02-17 Thread Vivek Mishra
Looks like a Thrift interoperability issue. It seems the column family or data was created via CQL3 and a Thrift-based API is being used to read it. Else, recreate your schema and try. -Vivek On Mon, Feb 17, 2014 at 1:50 PM, ankit tyagi ankittyagi.mn...@gmail.com wrote: Hello, does anyone have an idea regarding this
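A minimal sketch of the kind of schema this suggestion points at: a column family that must stay readable through Thrift-based clients such as Hector is typically declared WITH COMPACT STORAGE. The keyspace and table names below are placeholders, not from the thread.

    -- placeholder names; a CQL3 table kept Thrift-readable via COMPACT STORAGE
    CREATE TABLE ks.messages (
        key text,
        column1 text,
        value blob,
        PRIMARY KEY (key, column1)
    ) WITH COMPACT STORAGE;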

Re: Exception in cassandra logs while processing the message

2014-02-17 Thread ankit tyagi
Hi, I am using Hector-client 1.0-2 to insert the data. The problem is, I am not seeing any exception in my application logs where I insert data through Hector; it looks like something internal to Cassandra. Regards, Ankit Tyagi On Mon, Feb 17, 2014 at 1:56 PM, Vivek Mishra mishra.v...@gmail.com wrote:

Re: Exception in cassandra logs while processing the message

2014-02-17 Thread Sylvain Lebresne
That looks like a Thrift error. My best bet would be that you have a version incompatibility between the Thrift lib used by your Hector version and the one used by your Cassandra version. All I can tell you is that Cassandra 1.2 uses libthrift 0.7.0; I am not sure what Hector-client 1.0-2 uses
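One rough way to check the suspected mismatch, assuming the client is a Maven project and a package-installed Cassandra (the lib path varies by install method):

    # list the libthrift version the client build pulls in
    mvn dependency:tree | grep -i libthrift
    # compare with the jar shipped alongside Cassandra
    ls /usr/share/cassandra/lib | grep -i libthrift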

Re: Intermittent long application pauses on nodes

2014-02-17 Thread Ondřej Černoš
Hi all, we are seeing the same kind of long pauses in Cassandra. We tried to switch from CMS to G1 without a positive result. The stress test is read heavy: 2 datacenters, 6 nodes, 400 reqs/sec on one datacenter. We see spikes in latency at the 99.99th percentile and higher, caused by threads being stopped in

Re: Intermittent long application pauses on nodes

2014-02-17 Thread Benedict Elliott Smith
Ondrej, It seems like your issue is much less difficult to diagnose: your collection times are long. At least, the pause you printed the time for is all attributable to the G1 pause. Note that G1 has not generally performed well with Cassandra in our testing. There are a number of changes going

Re: Exception in cassandra logs while processing the message

2014-02-17 Thread ankit tyagi
Hi Sylvain, hector core 1.0-2 uses libthrift 0.6.1,but this exception is not consistent, getting intermittently. if there would be any issue related to compatibility of thrift jar, then this error should be consistent ryt? Regards, Ankit Tyagi On Mon, Feb 17, 2014 at 2:30 PM, Sylvain

Re: Intermittent long application pauses on nodes

2014-02-17 Thread Ondřej Černoš
Hi, we tried to switch to G1 because we observed this behaviour on CMS too (a 27-second pause in G1 is a pretty strong argument against using it). Pauses with CMS were not easily traceable: the JVM stopped even without a stop-the-world pause scheduled (defragmentation, remarking). We thought the go-to-safepoint

Re: Intermittent long application pauses on nodes

2014-02-17 Thread Benedict Elliott Smith
Hi Ondrej, It's possible you were hit by the problems in this thread before, but it looks potentially like you may have other issues. Of course it may be that on G1 you have one issue and CMS another, but 27s is extreme even for G1, so it seems unlikely. If you're hitting these pause times in CMS
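A diagnostic sketch, not a tuning recommendation, for separating GC pauses from other safepoint or time-to-safepoint stalls: standard HotSpot flags that could be appended to JVM_OPTS in cassandra-env.sh (exact placement in that file is an assumption about your setup).

    # log total application-stopped time and per-safepoint statistics
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
    JVM_OPTS="$JVM_OPTS -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"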

Re: Cass 1.2.11 : java.lang.AssertionError: originally calculated column size

2014-02-17 Thread Oleg Dulin
Bumping this up: anything? Anyone? On 2014-02-13 16:01:50 +, Oleg Dulin said: I am getting these exceptions on one of the nodes, quite often, during compactions: java.lang.AssertionError: originally calculated column size of 84562492 but now it is 84562600 Usually this is on the

Failed to decode value

2014-02-17 Thread PARASHAR, BHASKARJYA JAY
Hi, when I do a CQL3 SELECT * on TABLE1 (described below), I see rows displayed but get this error at the end: Failed to decode value '9131146' (for column 'accountId') as bigint: unpack requires a string argument of length 8. Cassandra version is 1.2.5 and accountId is of type text. Why would it
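One thing worth checking here (an assumption, not a confirmed diagnosis) is whether the validator recorded in the schema tables for 'accountId' matches what DESCRIBE shows; 'mykeyspace' and 'table1' are placeholders.

    SELECT column_name, validator
    FROM system.schema_columns
    WHERE keyspace_name = 'mykeyspace' AND columnfamily_name = 'table1';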

Re: Expired column showing up

2014-02-17 Thread mahesh rajamani
Christian, yes. Is it a problem? Can you explain what happens in this scenario? Thanks, Mahesh On Fri, Feb 14, 2014 at 3:07 PM, horschi hors...@gmail.com wrote: Hi Mahesh, is it possible you are creating columns with a long TTL and then updating these columns with a smaller TTL? kind

Re: Expired column showing up

2014-02-17 Thread horschi
Hi Mahesh, the problem is that every column is only tombstoned for as long as the original column was valid. So if the last update was only valid for 1 second, then the tombstone will also be valid for 1 second! If the previous value was valid for a longer time, then this old value might reappear. Maybe
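A minimal sketch of that scenario in CQL, with placeholder keyspace, table and column names; per the explanation above, the second write's 1-second TTL also bounds how long its tombstone shadows the earlier, longer-lived value.

    -- original value, valid for a day
    INSERT INTO ks.idx (row_key, col, val) VALUES ('r1', 'c1', 'v1') USING TTL 86400;
    -- later "removal" done as an overwrite with a 1-second TTL
    INSERT INTO ks.idx (row_key, col, val) VALUES ('r1', 'c1', 'x') USING TTL 1;
    -- once the short-lived column and its tombstone are gone, the still-valid
    -- 'v1' sitting in an older sstable can become visible again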

cold_reads_to_omit and tombstones

2014-02-17 Thread David Chia
Hi, the docs are not clear to me about cold_reads_to_omit. When cold_reads_to_omit causes compaction to skip low-read-rate sstables, what would GC the tombstones in those cold sstables? Do we rely on tombstone compactions, which I may have misconfigured? I've never seen tombstone compactions get
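For reference, the tombstone-compaction knobs being asked about are sub-options of the compaction strategy; a sketch with placeholder names, showing the usual defaults explicitly (ratio 0.2, interval of one day in seconds):

    ALTER TABLE ks.events WITH compaction = {
        'class': 'SizeTieredCompactionStrategy',
        'tombstone_threshold': '0.2',
        'tombstone_compaction_interval': '86400'
    };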

Re: Expired column showing up

2014-02-17 Thread mahesh rajamani
Christian, there are 2 use cases that are failing, and both look to be the same issue; it basically happens in column families set with a TTL. Case 1) I manage an index for specific data as a single row in a column family. I set the TTL to 1 second if the data needs to be removed from the index row. Under some

Cassandra DSC installation fails due to some python dependencies. How to rectify?

2014-02-17 Thread Ertio Lew
I am trying to install cassandra dsc20 but the installation fails due to some python dependencies. How could I make this work? root@server1:~# sudo apt-get install dsc20 Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be

Re: Cassandra DSC installation fails due to some python dependencies. How to rectify?

2014-02-17 Thread Al Tobey
This is the root cause: IOError: [Errno 2] No such file or directory: '/usr/lib/python2.7/sitecustomize.py' Off the top of my head, you may be able to work around this packaging issue in Ubuntu with: sudo touch /usr/lib/python2.7/sitecustomize.py sudo apt-get -f install Then re-run your
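The suggested workaround as a copy-pasteable sequence (paths assume the stock Ubuntu python2.7 layout):

    sudo touch /usr/lib/python2.7/sitecustomize.py
    sudo apt-get -f install
    sudo apt-get install dsc20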

Re: Cass 1.2.11 : java.lang.AssertionError: originally calculated column size

2014-02-17 Thread Peter Sanford
The issue you should look at is CASSANDRA-4206. This is apparently fixed in 2.0, so upgrading is one option. If you are not ready to upgrade to 2.0, you can try increasing in_memory_compaction. We were hitting this exception on one of our nodes and increasing in_memory_compaction did fix it.
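The setting being referred to lives in cassandra.yaml as in_memory_compaction_limit_in_mb; the value below is only an illustration of raising it above the 1.2 default of 64.

    # cassandra.yaml
    in_memory_compaction_limit_in_mb: 128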

GCInspector GC for ConcurrentMarkSweep running every 15 seconds

2014-02-17 Thread John Pyeatt
I have a 6-node cluster running on AWS. We are using m1.large instances with the heap size set to 3G. Five of the six nodes seem quite healthy. The sixth one, however, is running GCInspector GC for ConcurrentMarkSweep every 15 seconds or so. There is nothing going on on this box: no repairs and almost no
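A first-pass comparison between the unhealthy node and the others can be done with standard nodetool commands (heap usage, pending tasks, per-column-family memory); this is a diagnostic sketch, not a fix.

    nodetool info
    nodetool tpstats
    nodetool cfstats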

Turn off compression (1.2.11)

2014-02-17 Thread Plotnik, Alexey
Each compressed SSTable uses an additional transfer buffer in its CompressedRandomAccessReader instance. After analyzing the heap I saw this buffer is about 70KB per SSTable, and I have more than 30K SSTables per node. I want to turn off compression for this column family to save some heap. How can
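A sketch of one way to do this in 1.2 CQL3 (keyspace and table names are placeholders); note that already-written sstables stay compressed until they are rewritten, e.g. by nodetool upgradesstables.

    ALTER TABLE ks.mycf WITH compression = {'sstable_compression': ''};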

Getting different results each time after inserting data into cassandra using lightweight transaction

2014-02-17 Thread Jim Xu
Hi all, the cluster has 5 nodes, one keyspace (RF=3), a table (named t1) and a column family (named cf1, with two fixed columns per row). Single thread, two test programs to insert data using lightweight transactions. The pseudocode is as follows: program 1: ... for (int i=1;i1;i++){ ...
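For context, a minimal conditional insert of the kind such test programs presumably issue (column names are placeholders); the [applied] column in the result set reports whether the Paxos round actually wrote the row.

    INSERT INTO ks.t1 (id, col1, col2) VALUES (1, 'a', 'b') IF NOT EXISTS;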