[VOTE] Release Mojo's Cassandra Maven Plugin 1.2.0-1

2013-02-04 Thread Stephen Connolly
Hi,

I'd like to release version 1.2.0-1 of Mojo's Cassandra Maven Plugin
to sync up with the 1.2.0 release of Apache Cassandra. (A 1.2.1-1 will
follow shortly after this release, but it should be possible to use the
xpath://project/build/plugins/plugin/dependencies/dependency override of
cassandra-server to use C* releases from the 1.2.x stream now that the link
errors have been resolved, so that is less urgent.)

We solved 1 issue:
http://jira.codehaus.org/secure/ReleaseNote.jspa?projectId=12121&version=18467

Staging Repository:
https://nexus.codehaus.org/content/repositories/orgcodehausmojo-013/

Site:
http://mojo.codehaus.org/cassandra-maven-plugin/index.html

SCM Tag:
https://svn.codehaus.org/mojo/tags/cassandra-maven-plugin-1.2.0-1@17921

 [ ] +1 Yeah! fire ahead oh and the blind man on the galloping horse
says it looks fine too.
 [ ] 0 Mehhh! like I care, I don't have any opinions either, I'd
follow somebody else if only I could decide who
 [ ] -1 No! wait up there I have issues (in general like, ya know,
and being a trouble-maker is only one of them)

The vote is open for 72h and will succeed by lazy consensus.

Guide to testing staged releases:
http://maven.apache.org/guides/development/guide-testing-releases.html

Cheers

-Stephen

P.S.
 In the interest of ensuring (more is) better testing, and as is now
tradition for Mojo's Cassandra Maven Plugin, this vote is
also open to any subscribers of the dev and user@cassandra.apache.org
mailing lists that want to test or use this plugin.


Re:

2013-02-04 Thread Víctor Hugo Oliveira Molinar
How do you establish the connection?
Are you closing and reopening it?
It's normal for Cassandra to slow down after many insertions, but that would
only make your writes take more time, nothing more than that.

On Fri, Feb 1, 2013 at 5:53 PM, Marcelo Elias Del Valle
mvall...@gmail.com wrote:

 Hello,

  I am trying to figure out why the following behavior happened. Any
 help would be highly appreciated.
  This graph shows the server resource allocation of my single
 cassandra machine (running at Amazon EC2):
 http://mvalle.com/downloads/cassandra_host1.png
  I ran a hadoop process that reads a CSV file and writes data to
 Cassandra. For about 1 h, the process ran fine, but took about 100% of
 CPU. After 1 h, my hadoop process started to have its connection attempts
 refused by cassandra, as shown below.
  Since then, it has been taking 100% of the machine IO. The IO has been at
 100% for 2 h already on the machine running Cassandra.
  I am running Cassandra on Amazon EBS, which is slow, but I didn't
 think it would be that slow. Just wondering, is it normal for Cassandra to
 use a high amount of CPU? I am guessing all the writes were going to the
 memtables and when it was time to flush, the server went down.
  Does that make sense? I am still learning Cassandra as it's the first
 time I am using it in production, so I am not sure if I am missing
 something really basic here.


 2013-02-01 16:44:43,741 ERROR com.s1mbi0se.dmp.input.service.InputService 
 (Thread-18): EXCEPTION:PoolTimeoutException: [host=(10.84.65.108):9160, 
 latency=5005(5005), attempts=1] Timed out waiting for connection
 com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException: 
 PoolTimeoutException: [host=nosql1.s1mbi0se.com.br(10.84.65.108):9160, 
 latency=5005(5005), attempts=1] Timed out waiting for connection
   at 
 com.netflix.astyanax.connectionpool.impl.SimpleHostConnectionPool.waitForConnection(SimpleHostConnectionPool.java:201)
   at 
 com.netflix.astyanax.connectionpool.impl.SimpleHostConnectionPool.borrowConnection(SimpleHostConnectionPool.java:158)
   at 
 com.netflix.astyanax.connectionpool.impl.RoundRobinExecuteWithFailover.borrowConnection(RoundRobinExecuteWithFailover.java:60)
   at 
 com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:50)
   at 
 com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:229)
   at 
 com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$1.execute(ThriftColumnFamilyQueryImpl.java:186)
   at 
 com.s1mbi0se.dmp.input.service.InputService.searchUserByKey(InputService.java:700)

 ...
   at 
 com.s1mbi0se.dmp.importer.map.ImporterMapper.map(ImporterMapper.java:20)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
   at 
 org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper$MapRunner.run(MultithreadedMapper.java:268)
 2013-02-01 16:44:43,743 ERROR com.s1mbi0se.dmp.input.service.InputService 
 (Thread-15): EXCEPTION:PoolTimeoutException:


 Best regards,
 --
 Marcelo Elias Del Valle
 http://mvalle.com - @mvallebr



RE: Not enough replicas???

2013-02-04 Thread Stephen.M.Thompson
Hi Edward - thanks for responding.  The keyspace could not have been created
more simply:

create keyspace KEYSPACE_NAME;

According to the help, this should have created a replication factor of 1:

Keyspace Attributes (all are optional):
- placement_strategy: Class used to determine how replicas
  are distributed among nodes. Defaults to NetworkTopologyStrategy with
  one datacenter defined with a replication factor of 1 ([datacenter1:1]).

Steve



-Original Message-
From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: Friday, February 01, 2013 5:49 PM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???

Please include the information on how your keyspace was created. This may
indicate you set the replication factor to 3, when you only have 1 node, or
some similar condition.

On Fri, Feb 1, 2013 at 4:57 PM, stephen.m.thomp...@wellsfargo.com wrote:

 I need to offer my profound thanks to this community which has been so
 helpful in trying to figure this system out.

 I've setup a simple ring with two nodes and I'm trying to insert data
 to them.  I get failures 100% with this error:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 I'm not doing anything fancy - this is just from setting up the
 cluster following the basic instructions from datastax for a simple
 one data center cluster.  My config is basically the default except
 for the changes they discuss (except that I have configured for my IP
 addresses... my two boxes are .126 and .127)

 cluster_name: 'MyDemoCluster'
 num_tokens: 256
 seed_provider:
   - class_name: org.apache.cassandra.locator.SimpleSeedProvider
     parameters:
       - seeds: 10.28.205.126
 listen_address: 10.28.205.126
 rpc_address: 0.0.0.0
 endpoint_snitch: RackInferringSnitch

 Nodetool shows both nodes active in the ring, status = up, state = normal.

 For the CF:

 ColumnFamily: SystemEvent
   Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   Built indexes: [SystemEvent.IdxName]
   Column Metadata:
     Column Name: eventTimeStamp
       Validation Class: org.apache.cassandra.db.marshal.DateType
     Column Name: name
       Validation Class: org.apache.cassandra.db.marshal.UTF8Type
       Index Name: IdxName
       Index Type: KEYS
   Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
   Compression Options:
     sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor

 Any ideas?


cassandra cqlsh error

2013-02-04 Thread Kumar, Anjani

I am facing a problem while trying to run cqlsh. Here is what I did:

1.   I downloaded the tarball files for both the 1.1.7 and 1.2.0 versions.
2.   Unzipped and untarred them.
3.   Started Cassandra.
4.   And then tried starting cqlsh, but I am getting the following error in
both versions:

Connection error: Invalid method name: 'set_cql_version'

Before installing Datastax 1.1.7 and 1.2.0 Cassandra, I had installed Cassandra
through sudo apt-get install cassandra on my Ubuntu box. Since it doesn't have
CQL support (at least I can't find it), I thought of installing the Datastax
version of Cassandra, but still no luck starting cqlsh so far. Any suggestions?

Thanks,
Anjani



Re: Not enough replicas???

2013-02-04 Thread Tyler Hobbs
RackInferringSnitch determines each node's DC and rack by looking at the
second and third octets in its IP address (
http://www.datastax.com/docs/1.0/cluster_architecture/replication#rackinferringsnitch),
so your nodes are in DC 28.

Your replication strategy says to put one replica in DC datacenter1, but
doesn't mention DC 28 at all, so you don't have any replicas for your
keyspace.
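
For reference, the octet rule Tyler describes is simple enough to sketch in a
few lines of Python (a rough sketch of the documented behavior, not
Cassandra's actual Java implementation):

```python
def infer_dc_and_rack(ip):
    """Mimic RackInferringSnitch: the second octet of a node's IP is taken
    as its datacenter name, the third octet as its rack (per the docs)."""
    octets = ip.split(".")
    return octets[1], octets[2]

# Both of the nodes in this thread land in DC "28", rack "205":
dc, rack = infer_dc_and_rack("10.28.205.126")
```

That is why the strategy's "datacenter1" entry never matches anything here.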


On Mon, Feb 4, 2013 at 7:55 AM, stephen.m.thomp...@wellsfargo.com wrote:

 Hi Edward - thanks for responding.  The keyspace could not have been
 created more simply:

 create keyspace KEYSPACE_NAME;

 According to the help, this should have created a replication factor of 1:

 Keyspace Attributes (all are optional):
 - placement_strategy: Class used to determine how replicas
   are distributed among nodes. Defaults to NetworkTopologyStrategy with
   one datacenter defined with a replication factor of 1 ([datacenter1:1]).

 Steve

 -Original Message-
 From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
 Sent: Friday, February 01, 2013 5:49 PM
 To: user@cassandra.apache.org
 Subject: Re: Not enough replicas???

 Please include the information on how your keyspace was created. This may
 indicate you set the replication factor to 3, when you only have 1 node, or
 some similar condition.

 On Fri, Feb 1, 2013 at 4:57 PM, stephen.m.thomp...@wellsfargo.com wrote:

  I need to offer my profound thanks to this community which has been so
  helpful in trying to figure this system out.

  I've setup a simple ring with two nodes and I'm trying to insert data
  to them.  I get failures 100% with this error:

  me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
  be enough replicas present to handle consistency level.

  I'm not doing anything fancy - this is just from setting up the
  cluster following the basic instructions from datastax for a simple
  one data center cluster.  My config is basically the default except
  for the changes they discuss (except that I have configured for my IP
  addresses... my two boxes are .126 and .127)

  cluster_name: 'MyDemoCluster'
  num_tokens: 256
  seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
        - seeds: 10.28.205.126
  listen_address: 10.28.205.126
  rpc_address: 0.0.0.0
  endpoint_snitch: RackInferringSnitch

  Nodetool shows both nodes active in the ring, status = up, state = normal.

  For the CF:

  ColumnFamily: SystemEvent
    Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
    Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
    Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
    GC grace seconds: 864000
    Compaction min/max thresholds: 4/32
    Read repair chance: 0.1
    DC Local Read repair chance: 0.0
    Replicate on write: true
    Caching: KEYS_ONLY
    Bloom Filter FP chance: default
    Built indexes: [SystemEvent.IdxName]
    Column Metadata:
      Column Name: eventTimeStamp
        Validation Class: org.apache.cassandra.db.marshal.DateType
      Column Name: name
        Validation Class: org.apache.cassandra.db.marshal.UTF8Type
        Index Name: IdxName
        Index Type: KEYS
    Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
    Compression Options:
      sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor

  Any ideas?




-- 
Tyler Hobbs
DataStax http://datastax.com/


Re: BloomFilter

2013-02-04 Thread aaron morton
 1) What is the ratio of the sstable file size to bloom filter size ? If i 
 have a sstable of 1 GB, what is the approximate bloom filter size ? Assuming
 0.000744 default val configured.
The size of the bloom filter varies with the number of rows in the CF, not the
on-disk size. More correctly, it's the number of rows in each SSTable, as a row
can be stored in multiple sstables.

nodetool cfstats reports the total bloom filter size for each cf. 
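
As a rough cross-check of how that total relates to row count, the textbook
bloom filter formula gives an estimate (an approximation only; Cassandra's
actual filter implementation differs in its details):

```python
import math

def bloom_filter_bytes(n_rows, fp_chance):
    """Textbook estimate of bloom filter size: m = -n * ln(p) / (ln 2)^2 bits."""
    bits = -n_rows * math.log(fp_chance) / (math.log(2) ** 2)
    return bits / 8

# roughly 1.9 MB for a million rows at the 0.000744 fp chance mentioned above
est = bloom_filter_bytes(1_000_000, 0.000744)
```

So, consistent with the answer above, the size tracks row count rather than
on-disk bytes.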

 2) The bloom filters are stored in RAM but not on heap from 1.2 onwards?
They are always in RAM. Pre 1.2 they were stored in the JVM heap, from 1.2 
onwards they are stored off heap. 

 3) What is the ratio of the RAM/Disk per node ?  What is the max disk size 
 recommended for 1 node ? If I have 10 TB of data per node, how much RAM will 
 the bloomfilter consume ?
If you are using spinning disks (HDD) and 1Gb networking, I would
consider 300GB to 500GB per node a good rule of thumb for a small 6 node cluster.

These issues have to do with the time it takes to run nodetool repair, and the
time it takes to replace a failed node. Once you have a feel for how long this
takes you may want to put more data on each node.

In 1.2 there are things that make replacing a node faster, but they tend to 
kick in at higher node counts.

Cheers

  
-
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 3/02/2013, at 6:45 AM, Kanwar Sangha kan...@mavenir.com wrote:

 Hi - Couple of questions -
  
 1) What is the ratio of the sstable file size to bloom filter size ? If i 
 have a sstable of 1 GB, what is the approximate bloom filter size ? Assuming
 0.000744 default val configured.
  
 2) The bloom filters are stored in RAM but not on heap from 1.2 onwards?
  
 3) What is the ratio of the RAM/Disk per node ?  What is the max disk size 
 recommended for 1 node ? If I have 10 TB of data per node, how much RAM will 
 the bloomfilter consume ?
  
 Thanks,
 kanwar
  



Re: Index file

2013-02-04 Thread aaron morton
-Index.db components only contain the index.

In v1.2+ -Summary.db contains a sampling of the index read at startup. 
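
The sampling can be pictured as keeping every index_interval-th entry of the
full index in memory (a simplified sketch, not the actual on-disk format; 128
is the default interval in cassandra.yaml):

```python
def sample_index(keys, index_interval=128):
    """Keep every index_interval-th key, as the in-memory index summary does."""
    return keys[::index_interval]

# at the default interval of 128, 1024 index entries yield an 8-entry summary
summary = sample_index(["key%04d" % i for i in range(1024)])
```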

Cheers

-
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 3/02/2013, at 11:03 AM, Kanwar Sangha kan...@mavenir.com wrote:

 Hi – The index files created for the SSTables: do they contain a sampling or
 the complete index? On startup, Cassandra loads these files based on the
 sampling rate in cassandra.yaml ... right?
  
  



Re: CPU hotspot at BloomFilterSerializer#deserialize

2013-02-04 Thread aaron morton
 Yes, it contains a big row that goes up to 2GB with more than a million of 
 columns.

I've run tests with 10 million small columns and seen reasonable performance.
I've not looked at 1 million large columns.

 - BloomFilterSerializer#deserialize does readLong iteratively at each page
 of size 4K for a given row, which means it could be 500,000 loops(calls
 readLong) for a 2G row(from 1.0.7 source).
There is only one Bloom filter per row in an SSTable, not one per column 
index/page. 

It could take a while if there are a lot of sstables in the read. 
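
As a quick sanity check, the quoted 500,000 figure does match the arithmetic
of walking a 2 GB row in 4 KB column-index pages (whether or not the per-page
premise holds; per the correction above there is one bloom filter per row per
SSTable):

```python
row_bytes = 2 * 1024**3   # a 2 GB row
page_bytes = 4 * 1024     # a 4 KB column index page
pages = row_bytes // page_bytes
# pages == 524288, in the ballpark of the reported 500,000 readLong calls
```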

nodetool cfhistograms will let you know; run it once to reset the counts,
then do your test, then run it again.

Cheers

-
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 4/02/2013, at 4:13 AM, Edward Capriolo edlinuxg...@gmail.com wrote:

 It is interesting the press c* got about having 2 billion columns in a
 row. You *can* do it but it brings to light some realities of what
 that means.
 
 On Sun, Feb 3, 2013 at 8:09 AM, Takenori Sato ts...@cloudian.com wrote:
 Hi Aaron,
 
 Thanks for your answers. That helped me get a big picture.
 
 Yes, it contains a big row that goes up to 2GB with more than a million of
 columns.
 
 Let me confirm if I correctly understand.
 
 - The stack trace is from Slice By Names query. And the deserialization is
 at the step 3, Read the row level Bloom Filter, on your blog.
 
 - BloomFilterSerializer#deserialize does readLong iteratively at each page
 of size 4K for a given row, which means it could be 500,000 loops(calls
 readLong) for a 2G row(from 1.0.7 source).
 
 Correct?
 
 That makes sense: Slice By Names queries against such a wide row could be a
 CPU bottleneck. In fact, in our test environment, a
 BloomFilterSerializer#deserialize of such a case takes more than 10ms, up to
 100ms.
 
 Get a single named column.
 Get the first 10 columns using the natural column order.
 Get the last 10 columns using the reversed order.
 
 Interesting. A query pattern could make a difference?
 
 We thought the only solution is to change the data structure (don't use such
 a wide row if it is retrieved by Slice By Names queries).
 
 Anyway, will give it a try!
 
 Best,
 Takenori
 
 On Sat, Feb 2, 2013 at 2:55 AM, aaron morton aa...@thelastpickle.com
 wrote:
 
 5. the problematic Data file contains only 5 to 10 keys data but
 large(2.4G)
 
 So very large rows ?
 What does nodetool cfstats or cfhistograms say about the row sizes ?
 
 
 1. what is happening?
 
 I think this is partially large rows and partially the query pattern; this
 is only roughly correct:
 http://thelastpickle.com/2011/07/04/Cassandra-Query-Plans/ and my talk here
 http://www.datastax.com/events/cassandrasummit2012/presentations
 
 3. any more info required to proceed?
 
 Do some tests with different query techniques…
 
 Get a single named column.
 Get the first 10 columns using the natural column order.
 Get the last 10 columns using the reversed order.
 
 Hope that helps.
 
 -
 Aaron Morton
 Freelance Cassandra Developer
 New Zealand
 
 @aaronmorton
 http://www.thelastpickle.com
 
 On 31/01/2013, at 7:20 PM, Takenori Sato ts...@cloudian.com wrote:
 
 Hi all,
 
 We have a situation that CPU loads on some of our nodes in a cluster has
 spiked occasionally since the last November, which is triggered by requests
 for rows that reside on two specific sstables.
 
 We confirmed the followings(when spiked):
 
 version: 1.0.7(current) - 0.8.6 - 0.8.5 - 0.7.8
 jdk: Oracle 1.6.0
 
 1. a profiling showed that BloomFilterSerializer#deserialize was the
 hotspot(70% of the total load by running threads)
 
 * the stack trace looked like this(simplified)
 90.4% - org.apache.cassandra.db.ReadVerbHandler.doVerb
 90.4% - org.apache.cassandra.db.SliceByNamesReadCommand.getRow
 ...
 90.4% - org.apache.cassandra.db.CollationController.collectTimeOrderedData
 ...
 89.5% - org.apache.cassandra.db.columniterator.SSTableNamesIterator.read
 ...
 79.9% - org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter
 68.9% - org.apache.cassandra.io.sstable.BloomFilterSerializer.deserialize
 66.7% - java.io.DataInputStream.readLong
 
 2. Usually, 1 should be so fast that profiling by sampling cannot
 detect it
 
 3. no pressure on Cassandra's VM heap nor on the machine overall
 
 4. a little I/O traffic for our 8 disks/node(up to 100tps/disk by iostat
 1 1000)
 
 5. the problematic Data file contains only 5 to 10 keys data but
 large(2.4G)
 
 6. the problematic Filter file size is only 256B(could be normal)
 
 
 So now, I am trying to read the Filter file in the same way
 BloomFilterSerializer#deserialize does, as closely as I can, in order to see
 if there is something wrong with the file.
 
 Could you give me some advise on:
 
 1. what is happening?
 2. the best way to simulate the BloomFilterSerializer#deserialize
 3. any more info required to proceed?
 
 Thanks,
 Takenori
 
 
 



Re: cassandra cqlsh error

2013-02-04 Thread aaron morton
Grab 1.2.1, it's fixed there http://cassandra.apache.org/download/

Cheers

-
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 5/02/2013, at 4:37 AM, Kumar, Anjani anjani.ku...@infogroup.com wrote:

  
 I am facing problem while trying to run cqlsh. Here is what I did:
  
 1.   I brought the tar ball files for both 1.1.7 and 1.2.0 version.
 2.   Unzipped and untarred it
 3.   Started Cassandra
 4.   And then tried starting cqlsh but I am getting the following error 
 in both the versions:
 Connection error: Invalid method name: ‘set_cql_version’
  
 Before installing Datastax 1.1.7 and 1.2.0 cassandra, I had installed 
 Cassandra through “sudo apt-get install Cassandra” on my ubuntu. Since it 
 doesn’t have CQL support(at least I cant find it) so I thought of installing 
 Datastax version of Cassandra but still no luck starting cqlsh so far. Any 
 suggestion?
  
 Thanks,
 Anjani
  



RE: cassandra cqlsh error

2013-02-04 Thread Kumar, Anjani
Thank you Aaron! I uninstalled the older version of Cassandra and downloaded
the 1.2.1 version of Apache Cassandra as per your mail below. However, I am
receiving the following error while starting Cassandra.
anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$ ./cassandra
xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms1005M -Xmx1005M -Xmn200M 
-XX:+HeapDumpOnOutOfMemoryError -Xss180k


Thanks,
Anjani

From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Monday, February 04, 2013 11:34 AM
To: user@cassandra.apache.org
Subject: Re: cassandra cqlsh error

Grab 1.2.1, it's fixed there http://cassandra.apache.org/download/

Cheers

-
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 5/02/2013, at 4:37 AM, Kumar, Anjani 
anjani.ku...@infogroup.commailto:anjani.ku...@infogroup.com wrote:



I am facing problem while trying to run cqlsh. Here is what I did:

1.   I brought the tar ball files for both 1.1.7 and 1.2.0 version.
2.   Unzipped and untarred it
3.   Started Cassandra
4.   And then tried starting cqlsh but I am getting the following error in 
both the versions:
Connection error: Invalid method name: 'set_cql_version'

Before installing Datastax 1.1.7 and 1.2.0 cassandra, I had installed Cassandra 
through sudo apt-get install Cassandra on my ubuntu. Since it doesn't have 
CQL support(at least I cant find it) so I thought of installing Datastax 
version of Cassandra but still no luck starting cqlsh so far. Any suggestion?

Thanks,
Anjani




RE: Not enough replicas???

2013-02-04 Thread Stephen.M.Thompson
Thanks Tyler ... so I created my keyspace to explicitly indicate the datacenter 
and replication, as follows:

create keyspace KEYSPACE_NAME
  with placement_strategy = 
'org.apache.cassandra.locator.NetworkTopologyStrategy'
  and strategy_options={DC28:2};

And yet I still get the exact same error message:

me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough 
replicas present to handle consistency level.

It certainly is showing that it took my change:

[default@KEYSPACE_NAME] describe;
Keyspace: KEYSPACE_NAME:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
Options: [DC28:2]

Looking at the ring ...

[root@Config3482VM1 apache-cassandra-1.2.0]# bin/nodetool -h localhost ring

Datacenter: 28
==
Replicas: 0

Address         Rack  Status  State   Load      Owns    Token
                                                        9187343239835811839
10.28.205.126   205   Up      Normal  95.89 KB  0.00%   -9187343239835811840
10.28.205.126   205   Up      Normal  95.89 KB  0.00%   -9151314442816847872
10.28.205.126   205   Up      Normal  95.89 KB  0.00%   -9115285645797883904

( HUGE SNIP )

10.28.205.127   205   Up      Normal  84.63 KB  0.00%   9115285645797883903
10.28.205.127   205   Up      Normal  84.63 KB  0.00%   9151314442816847871
10.28.205.127   205   Up      Normal  84.63 KB  0.00%   9187343239835811839

So both boxes are showing up in the ring.

Thank you guys SO MUCH for helping me figure this stuff out.


From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Monday, February 04, 2013 11:17 AM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???

RackInferringSnitch determines each node's DC and rack by looking at the second 
and third octets in its IP address 
(http://www.datastax.com/docs/1.0/cluster_architecture/replication#rackinferringsnitch),
 so your nodes are in DC 28.

Your replication strategy says to put one replica in DC datacenter1, but 
doesn't mention DC 28 at all, so you don't have any replicas for your 
keyspace.

On Mon, Feb 4, 2013 at 7:55 AM, stephen.m.thomp...@wellsfargo.com wrote:

Hi Edward - thanks for responding.  The keyspace could not have been created
more simply:

create keyspace KEYSPACE_NAME;

According to the help, this should have created a replication factor of 1:

Keyspace Attributes (all are optional):
- placement_strategy: Class used to determine how replicas
  are distributed among nodes. Defaults to NetworkTopologyStrategy with
  one datacenter defined with a replication factor of 1 ([datacenter1:1]).

Steve

-Original Message-
From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: Friday, February 01, 2013 5:49 PM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???

Please include the information on how your keyspace was created. This may
indicate you set the replication factor to 3, when you only have 1 node, or
some similar condition.

On Fri, Feb 1, 2013 at 4:57 PM, stephen.m.thomp...@wellsfargo.com wrote:

 I need to offer my profound thanks to this community which has been so
 helpful in trying to figure this system out.

 I've setup a simple ring with two nodes and I'm trying to insert data
 to them.  I get failures 100% with this error:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 I'm not doing anything fancy - this is just from setting up the
 cluster following the basic instructions from datastax for a simple
 one data center cluster.  My config is basically the default except
 for the changes they discuss (except that I have configured for my IP
 addresses... my two boxes are .126 and .127)

 cluster_name: 'MyDemoCluster'
 num_tokens: 256
 seed_provider:
   - class_name: org.apache.cassandra.locator.SimpleSeedProvider
     parameters:
       - seeds: 10.28.205.126
 listen_address: 10.28.205.126
 rpc_address: 0.0.0.0
 endpoint_snitch: RackInferringSnitch

 Nodetool shows both nodes active in the ring, status = up, state = normal.

 For the CF:

 ColumnFamily: SystemEvent
   Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
   Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
   Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Replicate on write: true
   Caching: KEYS_ONLY




Re: cassandra cqlsh error

2013-02-04 Thread Brian Jeltema
I had this problem using a rather old version of OpenJDK. I downloaded the Sun
JDK and it's working now.

Brian

On Feb 4, 2013, at 1:04 PM, Kumar, Anjani wrote:

 Thank you Aaron! I uninstalled older version of Cassandra and then brought 
 1.2.1 version of apache Cassandra as per your mail below. However, I am 
 receiving the following error while starting Cassandra.
 anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$ ./cassandra
 xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
 -XX:ThreadPriorityPolicy=42 -Xms1005M -Xmx1005M -Xmn200M 
 -XX:+HeapDumpOnOutOfMemoryError -Xss180k
  
  
 Thanks,
 Anjani
  
 From: aaron morton [mailto:aa...@thelastpickle.com] 
 Sent: Monday, February 04, 2013 11:34 AM
 To: user@cassandra.apache.org
 Subject: Re: cassandra cqlsh error
  
 Grab 1.2.1, it's fixed there http://cassandra.apache.org/download/
  
 Cheers
  
 -
 Aaron Morton
 Freelance Cassandra Developer
 New Zealand
  
 @aaronmorton
 http://www.thelastpickle.com
  
 On 5/02/2013, at 4:37 AM, Kumar, Anjani anjani.ku...@infogroup.com wrote:
 
 
  
 I am facing problem while trying to run cqlsh. Here is what I did:
  
 1.   I brought the tar ball files for both 1.1.7 and 1.2.0 version.
 2.   Unzipped and untarred it
 3.   Started Cassandra
 4.   And then tried starting cqlsh but I am getting the following error 
 in both the versions:
 Connection error: Invalid method name: ‘set_cql_version’
  
 Before installing Datastax 1.1.7 and 1.2.0 cassandra, I had installed 
 Cassandra through “sudo apt-get install Cassandra” on my ubuntu. Since it 
 doesn’t have CQL support(at least I cant find it) so I thought of installing 
 Datastax version of Cassandra but still no luck starting cqlsh so far. Any 
 suggestion?
  
 Thanks,
 Anjani
  
  



Re: Pycassa vs YCSB results.

2013-02-04 Thread Pradeep Kumar Mantha
Hi,

Could someone please give me any hints on why the pycassa
client (attached) is much slower than YCSB?
Is it something to attribute to a performance difference between Python and
Java, or does the pycassa API have some performance limitations?

I don't see any client statements affecting the pycassa performance. Please
have a look at the simple python script attached and let me know
your suggestions.

thanks
pradeep
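
For reference, the client described in the quoted thread below boils down to
something like this sketch (scaled down, with `fetch_row` as a hypothetical
stub for the real pycassa `ColumnFamily.get` call). One factor worth noting:
CPython threads share a single interpreter lock, so CPU-bound client work in
the 4 threads does not run in parallel the way Java threads in YCSB do, which
could plausibly account for part of the gap.

```python
import threading

NUM_THREADS = 4           # the real run pins 4 threads per core via taskset
QUERIES_PER_THREAD = 100  # 76896 in the real benchmark

completed = []
lock = threading.Lock()

def fetch_row(key):
    # hypothetical stub standing in for a pycassa ColumnFamily.get(key)
    return {"key": key}

def worker(thread_id):
    for i in range(QUERIES_PER_THREAD):
        fetch_row("key_%d_%d" % (thread_id, i))
    with lock:
        completed.append(thread_id)

threads = [threading.Thread(target=worker, args=(t,)) for t in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```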

On Thu, Jan 31, 2013 at 4:53 PM, Pradeep Kumar Mantha
pradeep...@gmail.com wrote:



 On Thu, Jan 31, 2013 at 4:49 PM, Pradeep Kumar Mantha 
 pradeep...@gmail.com wrote:

 Thanks. Please find the script as an attachment.

 Just re-iterating:
 It's just a simple Python script which submits 4 threads.
 This script has been scheduled on 8 cores using the taskset unix command,
 thus running 32 threads/node,
 and then scaling to 16 nodes.

 thanks
 pradeep


 On Thu, Jan 31, 2013 at 4:38 PM, Tyler Hobbs ty...@datastax.com wrote:

 Can you provide the python script that you're using?

 (I'm moving this thread to the pycassa mailing list (
 pycassa-disc...@googlegroups.com), which is a better place for this
 discussion.)


 On Thu, Jan 31, 2013 at 6:25 PM, Pradeep Kumar Mantha 
 pradeep...@gmail.com wrote:

 Hi,

 I am trying to benchmark cassandra on a 12 Data Node cluster using 16
 clients (each client uses 32 threads), with a custom pycassa client and
 YCSB.

 I found the maximum number of operations/second achieved using the pycassa
 client is nearly 70k+ reads/second,
 whereas with YCSB it is ~120k reads/second.

 Any thoughts, why I see this huge difference in performance?


 Here is the description of setup.

 Pycassa client (a simple Python script):
 1. Each pycassa client starts 4 threads, where each thread issues
 76896 queries.
 2. A shell script is used to submit 4 threads per core using the taskset
 unix command on an 8-core single node (8 * 4 * 76896 queries).
 3. Another shell script is used to scale the single-node shell script
 to 16 nodes (total queries now: 16 * 8 * 4 * 76896 queries).

 I tried to keep YCSB configuration as much as similar to my custom
 pycassa benchmarking setup.

 YCSB -

 Launched 16 YCSB clients on 16 nodes, where each client uses 32 threads
 for execution and needs to query (32 * 76896 keys), i.e. 100% reads.

 The dataset is different in each case, but has

 1. same number of total records.
 2. same number of fields.
 3. the field length is almost the same.

 Could you please let me know why I see this huge performance
 difference, and is there any way I can improve the operations/second using
 the pycassa client?

 thanks
 pradeep





 --
 Tyler Hobbs
 DataStax http://datastax.com/






pycassa_client.py
Description: Binary data


Re: Not enough replicas???

2013-02-04 Thread Tyler Hobbs
Sorry, to be more precise, the name of the datacenter is just the string
"28", not "DC28".


On Mon, Feb 4, 2013 at 12:07 PM, stephen.m.thomp...@wellsfargo.com wrote:

 Thanks Tyler … so I created my keyspace to explicitly indicate the
 datacenter and replication, as follows:

 create keyspace KEYSPACE_NAME
   with placement_strategy =
 'org.apache.cassandra.locator.NetworkTopologyStrategy'
   and strategy_options={DC28:2};

 And yet I still get the exact same error message:

 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
 be enough replicas present to handle consistency level.

 It certainly is showing that it took my change:

 [default@KEYSPACE_NAME] describe;
 Keyspace: KEYSPACE_NAME:
   Replication Strategy:
 org.apache.cassandra.locator.NetworkTopologyStrategy
   Durable Writes: true
 Options: [DC28:2]

 Looking at the ring …

 [root@Config3482VM1 apache-cassandra-1.2.0]# bin/nodetool -h localhost
 ring

 Datacenter: 28
 ==
 Replicas: 0

 Address         Rack    Status  State   Load        Owns    Token
                                                             9187343239835811839
 10.28.205.126   205     Up      Normal  95.89 KB    0.00%   -9187343239835811840
 10.28.205.126   205     Up      Normal  95.89 KB    0.00%   -9151314442816847872
 10.28.205.126   205     Up      Normal  95.89 KB    0.00%   -9115285645797883904

 ( HUGE SNIP )

 10.28.205.127   205     Up      Normal  84.63 KB    0.00%   9115285645797883903
 10.28.205.127   205     Up      Normal  84.63 KB    0.00%   9151314442816847871
 10.28.205.127   205     Up      Normal  84.63 KB    0.00%   9187343239835811839

 So both boxes are showing up in the ring.

 Thank you guys SO MUCH for helping me figure this stuff out.

 From: Tyler Hobbs [mailto:ty...@datastax.com]
 Sent: Monday, February 04, 2013 11:17 AM

 To: user@cassandra.apache.org
 Subject: Re: Not enough replicas???

 RackInferringSnitch determines each node's DC and rack by looking at the
 second and third octets in its IP address (
 http://www.datastax.com/docs/1.0/cluster_architecture/replication#rackinferringsnitch),
 so your nodes are in DC 28.

 Your replication strategy says to put one replica in DC datacenter1, but
 doesn't mention DC 28 at all, so you don't have any replicas for your
 keyspace.
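 The octet rule described above can be sketched as a small helper
 (hypothetical code for illustration, not part of Cassandra's API):

```python
def infer_dc_rack(ip):
    # RackInferringSnitch: datacenter = 2nd octet, rack = 3rd octet of the IP.
    octets = ip.split('.')
    return octets[1], octets[2]

# For the nodes in this thread (10.28.205.126 / 10.28.205.127):
dc, rack = infer_dc_rack('10.28.205.126')
print(dc, rack)  # datacenter "28", rack "205"
```

 This is why the keyspace's strategy_options must name the datacenter "28"
 for any replicas to be placed.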

 On Mon, Feb 4, 2013 at 7:55 AM, stephen.m.thomp...@wellsfargo.com wrote:

 Hi Edward - thanks for responding.   The keyspace could not have been
 created more simply:

 create keyspace KEYSPACE_NAME;

 According to the help, this should have created a replication factor of 1:

 Keyspace Attributes (all are optional):
 - placement_strategy: Class used to determine how replicas
   are distributed among nodes. Defaults to NetworkTopologyStrategy with
   one datacenter defined with a replication factor of 1 ([datacenter1:1]).

 Steve

 -Original Message-
 From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
 Sent: Friday, February 01, 2013 5:49 PM
 To: user@cassandra.apache.org
 Subject: Re: Not enough replicas???

 Please include the information on how your keyspace was created. This may
 indicate you set the replication factor to 3, when you only have 1 node, or
 some similar condition.

 On Fri, Feb 1, 2013 at 4:57 PM,  stephen.m.thomp...@wellsfargo.com
 wrote:

  I need to offer my profound thanks to this community which has been so
  helpful in trying to figure this system out.

  I’ve setup a simple ring with two nodes and I’m trying to insert data
  to them.  I get failures 100% with this error:

  me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
  be enough replicas present to handle consistency level.

  I’m not doing anything fancy – this is just from setting up the
  cluster following the basic instructions from datastax for a simple
  one data center cluster.  My config is basically the default except
  for the changes they discuss (except that I have configured for my IP
  addresses… my two boxes are
  .126 and .127)

  cluster_name: 'MyDemoCluster'
  num_tokens: 256
  seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: 10.28.205.126
  listen_address: 10.28.205.126
  rpc_address: 0.0.0.0
  endpoint_snitch: RackInferringSnitch
RE: Not enough replicas???

2013-02-04 Thread Stephen.M.Thompson
Sweet!  That worked!  THANK YOU!

Stephen Thompson
Wells Fargo Corporation
Internet Authentication & Fraud Prevention
704.427.3137 (W) | 704.807.3431 (C)

This message may contain confidential and/or privileged information, and is 
intended for the use of the addressee only. If you are not the addressee or 
authorized to receive this for the addressee, you must not use, copy, disclose, 
or take any action based on this message or any information herein. If you have 
received this message in error, please advise the sender immediately by reply 
e-mail and delete this message. Thank you for your cooperation.

RE: cassandra cqlsh error

2013-02-04 Thread Dave Brosius
xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities
-XX:ThreadPriorityPolicy=42 -Xms1005M -Xmx1005M -Xmn200M
-XX:+HeapDumpOnOutOfMemoryError -Xss180k

That is not an error, that is just 'debugging' information output to the
command line.

- Original Message -
From: "Kumar, Anjani" anjani.ku...@infogroup.com

RE: cassandra cqlsh error

2013-02-04 Thread Kumar, Anjani
I installed Sun JDK 6 but I am getting the following error. Please see below. 
Thanks!

anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$ ./cassandra
xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms1005M -Xmx1005M -Xmn200M 
-XX:+HeapDumpOnOutOfMemoryError -Xss180k
anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$
 INFO 13:39:23,259 Logging initialized
 INFO 13:39:23,276 JVM vendor/version: Java HotSpot(TM) Server VM/1.6.0_38
 INFO 13:39:23,277 Heap size: 1032847360/1033895936
 INFO 13:39:23,277 Classpath: 
./../conf:./../build/classes/main:./../build/classes/thrift:./../lib/antlr-3.2.jar:./../lib/apache-cassandra-1.2.1.jar:./../lib/apache-cassandra-clientutil-1.2.1.jar:./../lib/apache-cassandra-thrift-1.2.1.jar:./../lib/avro-1.4.0-fixes.jar:./../lib/avro-1.4.0-sources-fixes.jar:./../lib/commons-cli-1.1.jar:./../lib/commons-codec-1.2.jar:./../lib/commons-lang-2.6.jar:./../lib/compress-lzf-0.8.4.jar:./../lib/concurrentlinkedhashmap-lru-1.3.jar:./../lib/guava-13.0.1.jar:./../lib/high-scale-lib-1.1.2.jar:./../lib/jackson-core-asl-1.9.2.jar:./../lib/jackson-mapper-asl-1.9.2.jar:./../lib/jamm-0.2.5.jar:./../lib/jline-1.0.jar:./../lib/json-simple-1.1.jar:./../lib/libthrift-0.7.0.jar:./../lib/log4j-1.2.16.jar:./../lib/metrics-core-2.0.3.jar:./../lib/netty-3.5.9.Final.jar:./../lib/servlet-api-2.5-20081211.jar:./../lib/slf4j-api-1.7.2.jar:./../lib/slf4j-log4j12-1.7.2.jar:./../lib/snakeyaml-1.6.jar:./../lib/snappy-java-1.0.4.1.jar:./../lib/snaptree-0.1.jar:./../lib/jamm-0.2.5.jar
 INFO 13:39:23,279 JNA not found. Native methods will be disabled.
 INFO 13:39:23,293 Loading settings from 
file:/home/anjani/apache-cassandra-1.2.1/conf/cassandra.yaml
 INFO 13:39:23,665 32bit JVM detected.  It is recommended to run Cassandra on a 
64bit JVM for better performance.
 INFO 13:39:23,665 DiskAccessMode 'auto' determined to be standard, 
indexAccessMode is standard
 INFO 13:39:23,665 disk_failure_policy is stop
 INFO 13:39:23,670 Global memtable threshold is enabled at 328MB
 INFO 13:39:24,306 Initializing key cache with capacity of 49 MBs.
 INFO 13:39:24,324 Scheduling key cache save to each 14400 seconds (going to 
save all keys).
 INFO 13:39:24,325 Initializing row cache with capacity of 0 MBs and provider 
org.apache.cassandra.cache.SerializingCacheProvider
 INFO 13:39:24,332 Scheduling row cache save to each 0 seconds (going to save 
all keys).
 INFO 13:39:24,450 Opening 
/var/lib/cassandra/data/system/Schema/system-Schema-hd-5 (14976 bytes)
ERROR 13:39:24,456 Cannot open 
/var/lib/cassandra/data/system/Schema/system-Schema-hd-5; partitioner 
org.apache.cassandra.dht.RandomPartitioner does not match system partitioner 
org.apache.cassandra.dht.Murmur3Partitioner.  Note that the default partitioner 
starting with Cassandra 1.2 is Murmur3Partitioner, so you will need to edit 
that to match your old partitioner if upgrading.
 INFO 13:39:24,459 Opening 
/var/lib/cassandra/data/system/Schema/system-Schema-hd-6 (4236 bytes)
ERROR 13:39:24,460 Cannot open 
/var/lib/cassandra/data/system/Schema/system-Schema-hd-6; partitioner 
org.apache.cassandra.dht.RandomPartitioner does not match system partitioner 
org.apache.cassandra.dht.Murmur3Partitioner.  Note that the default partitioner 
starting with Cassandra 1.2 is Murmur3Partitioner, so you will need to edit 
that to match your old partitioner if upgrading.

anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$ java -version
java version 1.6.0_38
Java(TM) SE Runtime Environment (build 1.6.0_38-b05)
Java HotSpot(TM) Server VM (build 20.13-b02, mixed mode)
anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$


On Mon, Feb 4, 2013 at 12:02 PM, Anjani Kumar anjani...@gmail.com wrote:
anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$ ./cassandra
xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms1005M -Xmx1005M -Xmn200M 
-XX:+HeapDumpOnOutOfMemoryError -Xss180k




From: Brian Jeltema [mailto:brian.jelt...@digitalenvoy.net]
Sent: Monday, February 04, 2013 12:14 PM
To: user@cassandra.apache.org
Subject: Re: cassandra cqlsh error

I had this problem using a rather old version of Open JDK. I downloaded the Sun 
JDK and its working now.

Brian

On Feb 4, 2013, at 1:04 PM, Kumar, Anjani wrote:


Thank you Aaron! I uninstalled older version of Cassandra and then brought 
1.2.1 version of apache Cassandra as per your mail below. However, I am 
receiving the following error while starting Cassandra.
anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$ ./cassandra
xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar 

RE: cassandra cqlsh error

2013-02-04 Thread Kumar, Anjani
It looks like an error, since I checked for "cassandra" in ps -ef and didn't
see anything.

Anjani Kumar
Sr. Software Engineer

Infogroup
office: 402.836.3337
www.infogroup.com

Powering Business Growth

Find us here:  Twitter (http://twitter.com/infogroup)  |
Facebook (http://www.facebook.com/Infogroup)

From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Monday, February 04, 2013 1:05 PM
To: user@cassandra.apache.org; user@cassandra.apache.org
Subject: RE: cassandra cqlsh error

xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms1005M -Xmx1005M -Xmn200M 
-XX:+HeapDumpOnOutOfMemoryError -Xss180k


That is not an error, that is just 'debugging' information output to the 
command line.


- Original Message -
From: "Kumar, Anjani"
anjani.ku...@infogroup.com
Sent: Mon, February 4, 2013 13:04
Subject: RE: cassandra cqlsh error
Thank you Aaron! I uninstalled older version of Cassandra and then brought 
1.2.1 version of apache Cassandra as per your mail below. However, I am 
receiving the following error while starting Cassandra.
anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$ ./cassandra
xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms1005M -Xmx1005M -Xmn200M 
-XX:+HeapDumpOnOutOfMemoryError -Xss180k


Thanks,
Anjani

From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Monday, February 04, 2013 11:34 AM
To: user@cassandra.apache.org
Subject: Re: cassandra cqlsh error


Grab 1.2.1, it's fixed there http://cassandra.apache.org/download/

Cheers

-
Aaron Morton
Freelance Cassandra Developer
New Zealand


@aaronmorton
http://www.thelastpickle.com

On 5/02/2013, at 4:37 AM, Kumar, Anjani anjani.ku...@infogroup.com wrote:


I am facing problem while trying to run cqlsh. Here is what I did:

1.   I brought the tar ball files for both 1.1.7 and 1.2.0 version.
2.   Unzipped and untarred it
3.   Started Cassandra
4.   And then tried starting cqlsh but I am getting the following error in
both the versions:
Connection error: Invalid method name: 'set_cql_version'

Before installing Datastax 1.1.7 and 1.2.0 cassandra, I had installed Cassandra
through "sudo apt-get install Cassandra" on my ubuntu. Since it doesn't have
CQL support (at least I can't find it), I thought of installing the Datastax
version of Cassandra, but still no luck starting cqlsh so far. Any suggestion?

Thanks,
Anjani






RE: cassandra cqlsh error

2013-02-04 Thread Kumar, Anjani
Update:

After removing old /var/log/Cassandra and /var/lib/Cassandra, now I am seeing 
the below error:
anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$ ./cassandra
xss =  -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -Xms1005M -Xmx1005M -Xmn200M 
-XX:+HeapDumpOnOutOfMemoryError -Xss180k
anjani@anjani-laptop:~/apache-cassandra-1.2.1/bin$
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /var/log/cassandra/system.log (Permission denied)
at java.io.FileOutputStream.openAppend(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:192)
at java.io.FileOutputStream.<init>(FileOutputStream.java:116)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at 
org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
at 
org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at 
org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at 
org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at 
org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at 
org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809)
at 
org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)
at 
org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615)
at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502)
at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:395)
at 
org.apache.log4j.PropertyWatchdog.doOnChange(PropertyConfigurator.java:922)
at 
org.apache.log4j.helpers.FileWatchdog.checkAndConfigure(FileWatchdog.java:89)
at 
org.apache.log4j.helpers.FileWatchdog.<init>(FileWatchdog.java:58)
at 
org.apache.log4j.PropertyWatchdog.<init>(PropertyConfigurator.java:914)
at 
org.apache.log4j.PropertyConfigurator.configureAndWatch(PropertyConfigurator.java:461)
at 
org.apache.cassandra.service.CassandraDaemon.initLog4j(CassandraDaemon.java:100)
at 
org.apache.cassandra.service.CassandraDaemon.<clinit>(CassandraDaemon.java:58)
 INFO 14:11:21,923 Logging initialized
 INFO 14:11:21,937 JVM vendor/version: Java HotSpot(TM) Server VM/1.6.0_38
 INFO 14:11:21,938 Heap size: 1032847360/1033895936
 INFO 14:11:21,938 Classpath: 
./../conf:./../build/classes/main:./../build/classes/thrift:./../lib/antlr-3.2.jar:./../lib/apache-cassandra-1.2.1.jar:./../lib/apache-cassandra-clientutil-1.2.1.jar:./../lib/apache-cassandra-thrift-1.2.1.jar:./../lib/avro-1.4.0-fixes.jar:./../lib/avro-1.4.0-sources-fixes.jar:./../lib/commons-cli-1.1.jar:./../lib/commons-codec-1.2.jar:./../lib/commons-lang-2.6.jar:./../lib/compress-lzf-0.8.4.jar:./../lib/concurrentlinkedhashmap-lru-1.3.jar:./../lib/guava-13.0.1.jar:./../lib/high-scale-lib-1.1.2.jar:./../lib/jackson-core-asl-1.9.2.jar:./../lib/jackson-mapper-asl-1.9.2.jar:./../lib/jamm-0.2.5.jar:./../lib/jline-1.0.jar:./../lib/json-simple-1.1.jar:./../lib/libthrift-0.7.0.jar:./../lib/log4j-1.2.16.jar:./../lib/metrics-core-2.0.3.jar:./../lib/netty-3.5.9.Final.jar:./../lib/servlet-api-2.5-20081211.jar:./../lib/slf4j-api-1.7.2.jar:./../lib/slf4j-log4j12-1.7.2.jar:./../lib/snakeyaml-1.6.jar:./../lib/snappy-java-1.0.4.1.jar:./../lib/snaptree-0.1.jar:./../lib/jamm-0.2.5.jar
 INFO 14:11:21,939 JNA not found. Native methods will be disabled.
 INFO 14:11:21,959 Loading settings from 
file:/home/anjani/apache-cassandra-1.2.1/conf/cassandra.yaml
 INFO 14:11:22,342 32bit JVM detected.  It is recommended to run Cassandra on a 
64bit JVM for better performance.
 INFO 14:11:22,342 DiskAccessMode 'auto' determined to be standard, 
indexAccessMode is standard
 INFO 14:11:22,342 disk_failure_policy is stop
 INFO 14:11:22,347 Global memtable threshold is enabled at 328MB
 INFO 14:11:22,979 Initializing key cache with capacity of 49 MBs.
 INFO 14:11:22,989 Scheduling key cache save to each 14400 seconds (going to 
save all keys).
 INFO 14:11:22,990 Initializing row cache with capacity of 0 MBs and provider 
org.apache.cassandra.cache.SerializingCacheProvider
 INFO 14:11:22,996 Scheduling row cache save to each 0 seconds (going to save 
all keys).
ERROR 14:11:23,026 Stopping the gossiper and the RPC server
ERROR 14:11:23,026 Exception encountered during startup
java.lang.IllegalStateException: No configured daemon
at 
org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:314)
at 
org.apache.cassandra.io.util.FileUtils.handleFSError(FileUtils.java:375)
at 

RE: cassandra cqlsh error

2013-02-04 Thread Dave Brosius
 This part,

ERROR 13:39:24,456 Cannot open
/var/lib/cassandra/data/system/Schema/system-Schema-hd-5; partitioner
org.apache.cassandra.dht.RandomPartitioner does not match system partitioner
org.apache.cassandra.dht.Murmur3Partitioner.  Note that the default partitioner
starting with Cassandra 1.2 is Murmur3Partitioner, so you will need to edit
that to match your old partitioner if upgrading.

is a problem. In 1.2 the default partitioner was changed, so if you are using
1.2 against old files, you will need to edit the cassandra.yaml to have
org.apache.cassandra.dht.RandomPartitioner as the specified partitioner.

- Original Message -
From: "Kumar, Anjani" anjani.ku...@infogroup.com

RE: cassandra cqlsh error

2013-02-04 Thread Kumar, Anjani
This is fixed.

Thanks,


From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Monday, February 04, 2013 2:27 PM
To: user@cassandra.apache.org; user@cassandra.apache.org
Subject: RE: cassandra cqlsh error


This part,

ERROR 13:39:24,456 Cannot open 
/var/lib/cassandra/data/system/Schema/system-Schema-hd-5; partitioner 
org.apache.cassandra.dht.RandomPartitioner does not match system partitioner 
org.apache.cassandra.dht.Murmur3Partitioner.  Note that the default partitioner 
starting with Cassandra 1.2 is Murmur3Partitioner, so you will need to edit 
that to match your old partitioner if upgrading.

is a problem.

In 1.2 the default partitioner was changed, so if you are using 1.2 against old 
files, you will need to edit the cassandra.yaml to have

org.apache.cassandra.dht.RandomPartitioner

as the specified partitioner.
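
The fix described above amounts to one line in cassandra.yaml (a sketch;
the file's location depends on your install, e.g. conf/cassandra.yaml in a
tarball install):

```yaml
# cassandra.yaml -- keep the pre-1.2 default partitioner when reusing
# data files created before 1.2; the partitioner cannot be changed for
# existing data.
partitioner: org.apache.cassandra.dht.RandomPartitioner
```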

