Re: Maintaining counter column consistency

2013-10-02 Thread Haithem Jarraya
Hi Ben, If you make sure R + W > N you should be fine. Have a read of this http://www.slideshare.net/benjaminblack/introduction-to-cassandra-replication-and-consistency Thanks, H On 1 Oct 2013, at 18:29, Ben Hood 0x6e6...@gmail.com wrote: Hi, We're maintaining a bunch of
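The R + W > N rule can be sketched in a few lines. Here N is the replication factor and R and W are the number of replicas contacted at the read and write consistency levels; the RF=3 / QUORUM values are illustrative assumptions, not taken from the thread:

```shell
# Strong consistency holds when the read set and write set overlap on at
# least one replica, i.e. R + W > N. With RF=3 and QUORUM on both sides,
# R = W = floor(N/2) + 1 = 2.
N=3
R=2   # replicas contacted per read (QUORUM)
W=2   # replicas acknowledged per write (QUORUM)
if [ $((R + W)) -gt "$N" ]; then
  echo "R + W > N: reads see the latest acknowledged write"
else
  echo "R + W <= N: stale reads are possible"
fi
```

With ONE/ONE on the same cluster, 1 + 1 = 2 is not greater than 3, so a read may hit a replica the write never reached.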

Re: Maintaining counter column consistency

2013-10-02 Thread Ben Hood
Hi Haithem, I might have phrased my question wrongly - I wasn't referring to considerations of consistency level or replication factor - I was referring to the fact that I want to insert a row and increment a counter in the same operation. I was concerned about the inconsistency that could

Re: Cassandra Heap Size for data more than 1 TB

2013-10-02 Thread srmore
The version of Cassandra I am using is 1.0.11, we are migrating to 1.2.X though. We had tuned bloom filters (0.1) and AFAIK making it lower than this won't matter. Thanks! On Tue, Oct 1, 2013 at 11:54 PM, Mohit Anchlia mohitanch...@gmail.com wrote: Which Cassandra version are you on?

RE: Rollback question regarding system metadata change

2013-10-02 Thread Christopher Wirt
I went with deleting the extra rows created in schema_columns and I've now successfully bootstrapped three nodes back on 1.2.10. No sour side effects to report yet. Thanks for your help From: Robert Coli [mailto:rc...@eventbrite.com] Sent: 02 October 2013 01:00 To:

Re: Cassandra Heap Size for data more than 1 TB

2013-10-02 Thread cem
Have a look at index_interval. Cem. On Wed, Oct 2, 2013 at 2:25 PM, srmore comom...@gmail.com wrote: The version of Cassandra I am using is 1.0.11, we are migrating to 1.2.X though. We had tuned bloom filters (0.1) and AFAIK making it lower than this won't matter. Thanks! On Tue, Oct

Re: Cassandra Heap Size for data more than 1 TB

2013-10-02 Thread srmore
I changed my index_interval from 128 to 512, does it make sense to increase it more than this? On Wed, Oct 2, 2013 at 9:30 AM, cem cayiro...@gmail.com wrote: Have a look at index_interval. Cem. On Wed, Oct 2, 2013 at 2:25 PM, srmore comom...@gmail.com wrote: The

Issue with source command and utf8 file

2013-10-02 Thread Paolo Crosato
Hi, I'm trying to load some data into Cassandra using the source command in cqlsh. The file is utf8 encoded, however Cassandra seems unable to detect the utf8 encoded characters. Here is a sample: insert into positions8(iddevice,timestampevent,idunit,idevent,status,value)
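Since the full message is truncated here, one common cause worth ruling out is the input file not actually being well-formed UTF-8, or cqlsh inheriting a non-UTF-8 locale from the shell. A hedged sanity check (the file name and sample row are hypothetical):

```shell
# Write a sample line containing a multi-byte UTF-8 character ("è", bytes 0xC3 0xA8)
printf 'INSERT INTO positions8 (iddevice) VALUES (1); -- caff\303\250\n' > /tmp/positions.cql

# iconv exits 0 only if the byte stream is well-formed UTF-8
if iconv -f UTF-8 -t UTF-8 /tmp/positions.cql > /dev/null; then
  echo "file is valid UTF-8"
fi

# Then run cqlsh under an explicit UTF-8 locale, e.g.:
#   LANG=en_US.UTF-8 cqlsh -f /tmp/positions.cql
```

If iconv rejects the file, the dump was likely written in a different encoding (latin-1 is a frequent culprit) and should be converted before sourcing.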

Unable to bootstrap new node

2013-10-02 Thread Keith Wright
Hi all, We are running C* 1.2.8 with Vnodes enabled and are attempting to bootstrap a new node and are having issues. When we add the node we see it bootstrap and we see data start to stream over from other nodes however we are seeing one of the other nodes get stuck in full GCs to the

Re: Cassandra Heap Size for data more than 1 TB

2013-10-02 Thread cem
I think 512 is fine. Could you tell us more about your traffic characteristics? Cem On Wed, Oct 2, 2013 at 4:32 PM, srmore comom...@gmail.com wrote: I changed my index_interval from 128 to 512, does it make sense to increase it more than this? On Wed, Oct 2, 2013 at 9:30
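The change being discussed is a one-line edit to cassandra.yaml. A minimal sketch, using a scratch copy for illustration (the real file lives under the install's conf/ directory); index_interval controls how many keys each in-memory index sample spans, so raising it from the default 128 to 512 shrinks the heap held by index samples at the cost of slightly slower key lookups:

```shell
# Illustration only: operate on a scratch copy, not the live config
printf 'index_interval: 128\n' > /tmp/cassandra.yaml

# Bump the sampling interval from the default 128 to 512
sed -i 's/^index_interval:.*/index_interval: 512/' /tmp/cassandra.yaml

cat /tmp/cassandra.yaml
```

A restart is needed for the node to pick up the new value.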

Problem with sstableloader from text data

2013-10-02 Thread Paolo Crosato
Hi, following the article at http://www.datastax.com/dev/blog/bulk-loading , I developed a custom builder app to serialize a text file with rows in json format to an sstable. I managed to get the tool running and building the tables, however when I try to load them I get this error:

Re: Cassandra Heap Size for data more than 1 TB

2013-10-02 Thread srmore
Sure, I was testing with high traffic, about 6K - 7K req/sec reads and writes combined. I added a node and ran repair; at that time the traffic was stopped and the heap was 8G. I saw a lot of flushing and GC activity and finally it died with an out-of-memory error. So I gave it more memory, 12G, and

Re: Cassandra Heap Size for data more than 1 TB

2013-10-02 Thread Mohit Anchlia
Did you upgrade your existing sstables after lowering the value? BTW: If you have tried all other avenues then my suggestion is to increase your heap to 12GB and ParNew to 3GB. Test it out. On Wed, Oct 2, 2013 at 5:25 AM, srmore comom...@gmail.com wrote: The version of Cassandra I am using is
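Mohit's suggestion of a 12 GB heap with a 3 GB ParNew young generation maps onto the two standard overrides in cassandra-env.sh. A sketch of what those settings expand to (the exact file location varies by install, e.g. conf/cassandra-env.sh):

```shell
# cassandra-env.sh overrides matching the suggestion above:
# 12 GB total heap, 3 GB young generation for the ParNew collector.
MAX_HEAP_SIZE="12G"
HEAP_NEWSIZE="3G"

# cassandra-env.sh turns these into JVM flags along these lines:
echo "-Xms${MAX_HEAP_SIZE} -Xmx${MAX_HEAP_SIZE} -Xmn${HEAP_NEWSIZE}"
```

Setting -Xms equal to -Xmx avoids heap resizing pauses; the young-generation size mainly trades ParNew pause length against promotion pressure on the old generation.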

Re: Unable to bootstrap new node

2013-10-02 Thread Robert Coli
On Wed, Oct 2, 2013 at 8:12 AM, Keith Wright kwri...@nanigans.com wrote: We are running C* 1.2.8 with Vnodes enabled and are attempting to bootstrap a new node and are having issues. When we add the node we see it bootstrap and we see data start to stream over from other nodes however we

Re: Best version to upgrade from 1.1.10 to 1.2.X

2013-10-02 Thread Paulo Motta
Hello, I just started the rolling upgrade procedure from 1.1.10 to 1.2.10. Our strategy is to simultaneously upgrade one server from each replication group. So, if we have 6 nodes with RF=2, we will upgrade 3 nodes at a time (from distinct replication groups). My question is: do the newly

Re: Best version to upgrade from 1.1.10 to 1.2.X

2013-10-02 Thread Paulo Motta
Never mind the question. It was a firewall problem. Now the nodes running different versions are able to see each other! =) Cheers, Paulo 2013/10/2 Paulo Motta pauloricard...@gmail.com Hello, I just started the rolling upgrade procedure from 1.1.10 to 1.2.10. Our strategy is to