Re: change cluster name retaining keyspace

2013-01-10 Thread Alain RODRIGUEZ
By the way, the cluster name is only cosmetic. It may not be worth changing, since it induces downtime and carries a mishandling risk. Alain 2013/1/10 Michael Kjellman mkjell...@barracuda.com I think Arron meant /var/lib/cassandra (by default) Check there (unless you changed you

Re: How long does it take for a write to actually happen?

2013-01-10 Thread Vitaly Sourikov
But Cassandra 1.1.7 is not fully CQL3, yet. Is it? I did not have any explicit timestamp columns in that CF, but synchronizing the clocks on Cassandra and its clients actually seems to solve the latency problem I had. So, I just want to make sure that it makes sense. And thanks for your

Collecting of tombstones columns during read query fills up heap

2013-01-10 Thread André Cruz
Hello. I have a schema to represent a filesystem for my users. In this schema one of the CFs stores a directory listing this way: CF DirList Dir1: File1:NOVAL File2:NOVAL ... So, one column represents a file in that directory and it has no value. The file metadata is stored
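A roughly equivalent CQL3 shape for such a directory listing (names are illustrative, and it ignores the SuperColumn metadata mentioned later in the thread):

    CREATE TABLE dir_list (
        dir_name  text,
        file_name text,
        PRIMARY KEY (dir_name, file_name)   -- one partition per directory, one row per file
    );
    -- deleting a file leaves a tombstone behind in that directory's partition:
    DELETE FROM dir_list WHERE dir_name = 'Dir1' AND file_name = 'File1';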

Re: Wide rows in CQL 3

2013-01-10 Thread Vegard Berget
Thanks for explaining, Sylvain. You say that it is not a mandatory one; how long could we expect it to remain non-mandatory? I think the new CQL stuff is great and I will probably use it heavily. I understand the upgrade path, but my question is if I should start planning for an all-CQL future, or if I

VNodes, Replication and Minimum cluster size

2013-01-10 Thread Ryan Lowe
I have heard before that the recommended minimum cluster size is 4 (with a replication factor of 3). I am curious to know if vnodes would change that, or if that statement was valid to begin with! The use case I am working on is one where we see a tremendous amount of load for just 2 days out of the

Re: VNodes, Replication and Minimum cluster size

2013-01-10 Thread Alain RODRIGUEZ
I am curious to know if vnodes would change that or if that statement was valid to begin with! This question was answered yesterday by Jonathan Ellis during the Datastax C*ollege Webinar: http://www.datastax.com/resources/webinars/whatsnewincassandra12 (about the end of the video). The answer is

Re: VNodes, Replication and Minimum cluster size

2013-01-10 Thread Sam Overton
On 10 January 2013 13:07, Ryan Lowe ryanjl...@gmail.com wrote: I have heard before that the recommended minimum cluster size is 4 (with replication factor of 3). I am curious to know if vnodes would change that or if that statement was valid to begin with! The reason that RF=3 is recommended
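The standard quorum arithmetic behind the RF=3 recommendation (a general note, not necessarily Sam's full argument):

    quorum(RF) = floor(RF/2) + 1
    quorum(2)  = 2   -- a single replica down blocks QUORUM operations
    quorum(3)  = 2   -- one replica can be down and QUORUM still succeeds

With only 3 nodes and RF=3, every node holds a full copy of the data; at 4 nodes each node holds roughly 3/4 of the data, so capacity starts to scale with node count.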

Re: VNodes, Replication and Minimum cluster size

2013-01-10 Thread Alain RODRIGUEZ
The key advantage of vnodes in this case is that you do not need to manually rebalance the cluster when adding or removing nodes. Well, I think that a bigger key advantage of vnodes would rather be the performance improvement due to the evenly distributed load while streaming data. But it indeed

Starting Cassandra

2013-01-10 Thread Sloot, Hans-Peter
Hello, Can someone help me out? I have installed Cassandra Enterprise and followed the cookbook - Configured the cassandra.yaml file - Configured the cassandra-topology.properties file But when I try to start the cluster with 'service dse start' nothing starts. With cassandra -f

RE: Starting Cassandra

2013-01-10 Thread Sloot, Hans-Peter
The Java version is 1.6_24. The manual said that 1.7 was not the best choice, but I will try it. -----Original message----- From: adeel.ak...@panasiangroup.com Sent: 10-01-2013, 16:08 To: user@cassandra.apache.org; Sloot, Hans-Peter CC: user@cassandra.apache.org Subject: Re: Starting

Re: Starting Cassandra

2013-01-10 Thread Andrea Gazzarini
Hi, I'm running Cassandra with 1.6_24 and it's all working, so the problem is probably elsewhere. What about your hardware / OS configuration? On 01/10/2013 04:19 PM, Sloot, Hans-Peter wrote: The Java version is 1.6_24. The manual said that 1.7 was not the best choice, but I will try it.

RE: Starting Cassandra

2013-01-10 Thread Sloot, Hans-Peter
I have 4 VMs, each with 1024 MB of memory and 1 CPU. -----Original message----- From: Andrea Gazzarini Sent: 10-01-2013, 16:24 To: user@cassandra.apache.org Subject: Re: Starting Cassandra Hi, I'm running Cassandra with 1.6_24 and it's all working, so the problem is probably elsewhere. What about your

Re: Starting Cassandra

2013-01-10 Thread Edward Capriolo
I think 1.6.0_24 is too low and 1.7.0 is too high. Try a more recent 1.6. I just had problems with 1.6.0_23; see here: https://issues.apache.org/jira/browse/CASSANDRA-4944 On Thu, Jan 10, 2013 at 10:29 AM, Sloot, Hans-Peter hans-peter.sl...@atos.net wrote: I have 4 VMs, each with 1024 MB of memory and 1

Re: Starting Cassandra

2013-01-10 Thread Alain RODRIGUEZ
If I remember correctly, the default minimum heap size is 1 GB. That may cause you a problem. You have to run with more RAM and CPU. Maybe you can try with 2 VMs with twice the CPU and RAM for your test. You will need at least 4 GB of RAM to run in production (8 GB is better) and I would say at least 2 CPUs (4 would

Re: Starting Cassandra

2013-01-10 Thread Michael Kjellman
I've seen this with OpenJDK 7. Grab Java 7 u10 from Oracle and you should be good to go. From: Alain RODRIGUEZ arodr...@gmail.com Reply-To: user@cassandra.apache.org Date:

Re: Starting Cassandra

2013-01-10 Thread Vladi Feigin
Hi, I had this problem with OpenJDK; moving to the Oracle JDK solved the problem. On Jan 10, 2013 5:23 PM, Andrea Gazzarini andrea.gazzar...@gmail.com wrote: Hi, I'm running Cassandra with 1.6_24 and it's all working, so the problem is probably elsewhere. What about your hardware / OS configuration? On

RE: Starting Cassandra

2013-01-10 Thread Sloot, Hans-Peter
I have increased the memory to 4096 MB. It did not help. It is OpenJDK indeed: java-1.6.0-openjdk.x86_64 1:1.6.0.0-1.49.1.11.4.el6_3 (installed). I will try JDK 1.6.0_38 from oracle.com. Regards, Hans-Peter From: Vladi Feigin

Re: Pagination over row Keys in Cassandra using Kundera/CQL queries

2013-01-10 Thread Snehal Nagmote
Thank you Aaron, that link helps. However, in my application I am using JPA (Kundera) to query Cassandra. Is there a way to achieve this in CQL or the JPA query language? Thanks, Snehal On 9 January 2013 16:28, aaron morton aa...@thelastpickle.com wrote: Try this
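The linked approach is presumably token()-based paging over partition keys; a minimal CQL sketch (table and column names are assumptions) that a native/CQL query through Kundera could mirror:

    SELECT id, value FROM my_table LIMIT 100;
    -- next page: restart from the token of the last key seen on the previous page
    SELECT id, value FROM my_table WHERE token(id) > token('last_id_from_previous_page') LIMIT 100;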

Re: Starting Cassandra

2013-01-10 Thread Yang Song
Could you also let us know if switching from OpenJDK to the Oracle JDK indeed solves the problem? Thanks! Yang 2013/1/10 Sloot, Hans-Peter hans-peter.sl...@atos.net I have increased the memory to 4096 MB. It did not help. It is OpenJDK indeed: java-1.6.0-openjdk.x86_64

Re: change cluster name retaining keyspace

2013-01-10 Thread aaron morton
I think Arron meant /var/lib/cassandra (by default) Yup, sorry. A - Aaron Morton Freelance Cassandra Developer New Zealand @aaronmorton http://www.thelastpickle.com On 10/01/2013, at 4:38 PM, Michael Kjellman mkjell...@barracuda.com wrote: I think Arron meant

Re: How long does it take for a write to actually happen?

2013-01-10 Thread aaron morton
But Cassandra 1.1.7 is not fully CQL3, yet. Is it? By default 1.1 uses CQL 2; CQL 3 in 1.1 is a beta and is not compatible with CQL 3 in 1.2. I do not think there are plans to bring CQL 3 in 1.1 up to the 1.2 spec, though I may be wrong there. If you are using a higher level client it will be using

Re: change cluster name retaining keyspace

2013-01-10 Thread Tim Dunphy
Cool guys.. and thanks. I'll give this a shot. And I do understand it's a cosmetic issue. It's just an OCD little detail I want to correct before my cluster starts growing more nodes. :) On Thu, Jan 10, 2013 at 2:39 PM, aaron morton aa...@thelastpickle.com wrote: I think Arron meant

Re: Collecting of tombstones columns during read query fills up heap

2013-01-10 Thread aaron morton
So, one column represents a file in that directory and it has no value. Just so I understand, the file contents are *not* stored in the column value? Basically the heap fills up and if several queries happen simultaneously, the heap is exhausted and the node stops. Are you seeing the

Re: Collecting of tombstones columns during read query fills up heap

2013-01-10 Thread André Cruz
On Jan 10, 2013, at 8:01 PM, aaron morton aa...@thelastpickle.com wrote: So, one column represents a file in that directory and it has no value. Just so I understand, the file contents are *not* stored in the column value? No, on that particular CF the columns are SuperColumns with 5 sub

Re: change cluster name retaining keyspace

2013-01-10 Thread Tim Dunphy
Hey Aaron, That worked beautifully. Thank you sir! Tim On Thu, Jan 10, 2013 at 2:59 PM, Tim Dunphy bluethu...@gmail.com wrote: Cool guys.. and thanks. I'll give this a shot. And I do understand it's a cosmetic issue. It's just an OCD little detail I want to correct before my cluster starts
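On Cassandra 1.2+ there is also a CQL-based way to rename a cluster in place (a general sketch, not necessarily the steps used in this thread, and the downtime/mishandling caveats from earlier still apply):

    -- the locally stored cluster name lives in the system keyspace
    UPDATE system.local SET cluster_name = 'NewClusterName' WHERE key = 'local';
    -- then flush the system keyspace (nodetool flush system), set cluster_name in
    -- cassandra.yaml to the same value, and restart the node.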

CQL3 Blob Value Literal?

2013-01-10 Thread Mike Sample
Does CQL3 support blob/BytesType literals for INSERT, UPDATE, etc.? I looked at the CQL3 syntax (http://cassandra.apache.org/doc/cql3/CQL.html) and at the DataStax 1.2 docs. As for why I'd want such a thing, I just wanted to initialize some test values for a blob column with cqlsh.

Re: CQL3 Blob Value Literal?

2013-01-10 Thread Derek Williams
Yes, but you need to encode the value as hex if you want to use literals. On Thu, Jan 10, 2013 at 7:15 PM, Mike Sample mike.sam...@gmail.com wrote: Does CQL3 support blob/BytesType literals for INSERT, UPDATE, etc.? I looked at the CQL3 syntax (http://cassandra.apache.org/doc/cql3/CQL.html) and at
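CQL3 writes blob constants as hex with a 0x prefix; a quick cqlsh-style example (table and column names are made up for illustration):

    CREATE TABLE blob_test (
        id   int PRIMARY KEY,
        data blob
    );
    INSERT INTO blob_test (id, data) VALUES (1, 0xcafebabe);       -- hex literal, no quotes
    UPDATE blob_test SET data = 0x0123456789abcdef WHERE id = 1;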

Re: inconsistent hadoop/cassandra results

2013-01-10 Thread aaron morton
But this is the first time I've tried to use the wide-row support, which makes me a little suspicious. The wide-row support is not very well documented, so maybe I'm doing something wrong there in ignorance. This was the area I was thinking about. Can you drill in and see a pattern? Are

Re: Wide rows in CQL 3

2013-01-10 Thread aaron morton
Is this possible without using multiple rows in CQL3 non-compact tables? Depending on the number of (log record) keys you *could* do this as a map type in your CQL table: create table log_row ( sequence timestamp, props map<text, text> ) Cheers - Aaron Morton Freelance
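A completed version of Aaron's sketch (the primary key choice and values are assumptions), using the map collection type that arrived with CQL3 in Cassandra 1.2:

    CREATE TABLE log_row (
        sequence timestamp PRIMARY KEY,
        props    map<text, text>
    );
    INSERT INTO log_row (sequence, props)
        VALUES ('2013-01-10 12:00:00', { 'level': 'INFO', 'source': 'example' });
    -- individual entries can be added without rewriting the whole map:
    UPDATE log_row SET props['host'] = 'node1' WHERE sequence = '2013-01-10 12:00:00';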

Re: inconsistent hadoop/cassandra results

2013-01-10 Thread Michael Kjellman
I found that overall Hadoop input/output from Cassandra could use a little more QA and input from the community, especially with large datasets. There were some serious BOF bugs in 1.1 that have been resolved in 1.2 (yay!), but the problems in 1.1 weren't immediately apparent. Testing in my