Re: simple data movement ?

2014-12-19 Thread Langston, Jim
Thanks, this looks uglier. I double checked my production cluster (I have a
staging and a development cluster as well) and production is on 1.2.8. A copy
of the data resulted in this message:

Exception encountered during startup: Incompatible SSTable found. Current 
version ka is unable to read file: 
/cassandra/apache-cassandra-2.1.2/bin/../data/data/system/schema_keyspaces/system-schema_keyspaces-ic-150.
 Please run upgradesstables.
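The mismatch is visible in the filename itself: 1.2.x writes sstables in the "ic" on-disk format, while 2.1 reads and writes "ka". A small illustrative snippet, assuming the 1.2-era `keyspace-table-version-generation` naming shown in the error above:

```shell
# Pull the sstable format version out of a 1.2-era filename.
# "ic" is the 1.2.x on-disk format; 2.1 uses "ka" -- exactly the mismatch above.
fname='system-schema_keyspaces-ic-150'
version=$(echo "$fname" | awk -F- '{ print $(NF-1) }')
echo "$version"   # prints: ic
```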

Is the move going to be 1.2.8 -- 1.2.9 -- 2.0.x -- 2.1.2 ??

Can I just dump the data and import it into 2.1.2 ??


Jim

From: Ryan Svihla rsvi...@datastax.com
Reply-To: user@cassandra.apache.org
Date: Thu, 18 Dec 2014 06:00:09 -0600
To: user@cassandra.apache.org
Subject: Re: simple data movement ?

I'm not sure that'll work with that many version moves in the middle. Upgrades
are, to my knowledge, only tested between specific steps, namely from 1.2.9 to
the latest 2.0.x.

http://www.datastax.com/documentation/upgrade/doc/upgrade/cassandra/upgradeC_c.html
Specifically:

Cassandra 2.0.x restrictions:
http://www.datastax.com/documentation/upgrade/doc/upgrade/cassandra/upgradeC_c.html?scroll=concept_ds_yqj_5xr_ck__section_ubt_nwr_54

After downloading DataStax Community (http://planetcassandra.org/cassandra/),
upgrade to Cassandra directly from Cassandra 1.2.9 or later. Cassandra 2.0 is
not network- or SSTable-compatible with versions older than 1.2.9. If your
version of Cassandra is earlier than 1.2.9 and you want to perform a rolling
restart (http://www.datastax.com/documentation/cassandra/1.2/cassandra/glossary/gloss_rolling_restart.html),
first upgrade the entire cluster to 1.2.9, and then to Cassandra 2.0.

Cassandra 2.1.x restrictions:
http://www.datastax.com/documentation/upgrade/doc/upgrade/cassandra/upgradeC_c.html?scroll=concept_ds_yqj_5xr_ck__section_qzx_pwr_54

Upgrade to Cassandra 2.1 from Cassandra 2.0.7 or later.

Cassandra 2.1 is not compatible with Cassandra 1.x SSTables. First upgrade the 
nodes to Cassandra 2.0.7 or later, start the cluster, upgrade the SSTables, 
stop the cluster, and then upgrade to Cassandra 2.1.
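Taken together, the per-node procedure implied by those restrictions looks roughly like this (a hedged sketch only: the service names and install steps are placeholders and vary by platform; the nodetool commands are the standard ones):

```shell
# Hop 1: 1.2.8 -> latest 1.2.x (at least 1.2.9)
nodetool drain                      # flush memtables before stopping
sudo service cassandra stop
# ... install Cassandra 1.2.9+ ...
sudo service cassandra start && nodetool status

# Hop 2: 1.2.9+ -> 2.0.7+, then rewrite sstables into the 2.0 format
nodetool drain && sudo service cassandra stop
# ... install Cassandra 2.0.7+ ...
sudo service cassandra start
nodetool upgradesstables

# Hop 3: 2.0.7+ -> 2.1.x, then upgrade the sstables again once the node is up
nodetool drain && sudo service cassandra stop
# ... install Cassandra 2.1.x ...
sudo service cassandra start
nodetool upgradesstables
```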

On Wed, Dec 17, 2014 at 10:55 PM, Ben Bromhead b...@instaclustr.com wrote:
Just copy the data directory from each prod node to your test node (and 
relevant configuration files etc).

If your IP addresses are different between test and prod, follow 
https://engineering.eventbrite.com/changing-the-ip-address-of-a-cassandra-node-with-auto_bootstrapfalse/


On 18 December 2014 at 09:10, Langston, Jim jim.langs...@dynatrace.com wrote:
Hi all,

I have set up a test environment with C* 2.1.2, wanting to test our
applications against it. I currently have C* 1.2.9 in production and want
to use that data for testing. What would be a good approach for simply
taking a copy of the production data and moving it into the test env and
having the test env C* use that data ?

The test env. is identical in size, with the difference being the versions
of C*.

Thanks,

Jim
The contents of this e-mail are intended for the named addressee only. It 
contains information that may be confidential. Unless you are the named 
addressee or an authorized designee, you may not copy or use it, or disclose it 
to anyone else. If you received it in error please notify us immediately and 
then destroy it


--

Ben Bromhead

Instaclustr | www.instaclustr.com | @instaclustr | +61 415 936 359


--


Ryan Svihla

Solution Architect






simple data movement ?

2014-12-17 Thread Langston, Jim
Hi all,

I have set up a test environment with C* 2.1.2, wanting to test our
applications against it. I currently have C* 1.2.9 in production and want
to use that data for testing. What would be a good approach for simply
taking a copy of the production data and moving it into the test env and
having the test env C* use that data ?

The test env. is identical in size, with the difference being the versions
of C*.

Thanks,

Jim


Re: Cassandra 2 Upgrade

2014-07-30 Thread Langston, Jim
Hi Rob,

Did you ever create this blog post?

Jim

From: Robert Coli rc...@eventbrite.com
Reply-To: user@cassandra.apache.org
Date: Wed, 11 Sep 2013 10:38:58 -0700
To: user@cassandra.apache.org
Subject: Re: Cassandra 2 Upgrade

On Wed, Sep 11, 2013 at 2:59 AM, Christopher Wirt chris.w...@struq.com wrote:

I’m keen on moving to 2.0. The new thrift server implementation and other 
performance improvements are getting me excited.

I'm currently running 1.2.8 in 3 DCs with 3-3-9 nodes: 64GB RAM, 3x200GB SSDs,
thrift, LCS, Snappy, Vnodes.

History indicates that you should not run a Cassandra version x.y.z where z < 5
in production. Unless your bosses have tasked you with finding potentially
serious bugs in your database software in production.

Is anyone using 2.0 in production yet? Had any issues? I haven't seen anything
pop up here or on JIRA, so either there are none/few, or nobody is using it yet.

I'm sure some brave/foolish souls must be...

How important is it to move to 1.2.9 before 2.0? To me it looks like 1.2.8 to 
2.0 will be fine.

All upgrades to 2.0 must pass through 1.2.9. I'll be doing a blog post on the 
upgrade path from 1.2.x to 2.0.x soon, but for now you can refer to this thread 
:

http://mail-archives.apache.org/mod_mbox/cassandra-user/201308.mbox/%3ccalehuf-wjuuoe_7ytqkxdd+bvxhukluccjnnec4kpbmoxs8...@mail.gmail.com%3E

=Rob


Re: cassandra woes ----- why cant cassandra work with the outside world with port 9160

2014-04-17 Thread Langston, Jim
The port isn't open to the outside, only to the localhost

tcp        0      0 ip6-localhost:9160      *:*                     LISTEN

you should see something like:

tcp        0      0 0.0.0.0:9160            0.0.0.0:*               LISTEN      20480/jsvc.exec
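One way to check this without eyeballing the output (an illustrative snippet, run here against a sample netstat line; the column layout is assumed to match the output above):

```shell
# Pull the bind address for port 9160 out of netstat-style output.
# Anything other than 0.0.0.0 / * means only that interface can connect.
line='tcp        0      0 ip6-localhost:9160      *:*             LISTEN'
bind_addr=$(echo "$line" | awk '$NF == "LISTEN" { sub(/:9160$/, "", $4); print $4 }')
if [ "$bind_addr" = "0.0.0.0" ] || [ "$bind_addr" = "*" ]; then
  echo "9160 is reachable from other hosts"
else
  echo "9160 is bound to $bind_addr only; fix rpc_address in cassandra.yaml"
fi
```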


Jim

From: David Montgomery davidmontgom...@gmail.com
Reply-To: user@cassandra.apache.org
Date: Thu, 17 Apr 2014 19:32:43 +0800
To: user@cassandra.apache.org
Subject: cassandra woes - why cant cassandra work with the outside world 
with port 9160



Why can no server connect to 9160? I don't understand. Yes, I have the port
open. I am using the IP address as listen_address in cassandra.yaml. I can
telnet to 7199, but 9160 is a total disaster. I can telnet from localhost, but
from remote? No way.

How do I open up 9160? I have both broadcast_address and listen_address set to
the server IP address.

E.g. this is the error I get when trying to use cassandra from titan

Caused by: com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException: 
PoolTimeoutException: 
[host=cassandra.do.development.sf..com(cassandra.do.development.sf..com):9160, 
latency=10001(10001), attempts=1]Timed out waiting for connection



tcp        0      0 *:7199                  *:*                     LISTEN
tcp        0      0 *:4772                  *:*                     LISTEN
tcp        0      0 ip6-localhost:9160      *:*                     LISTEN
tcp        0      0 ip6-localhost:9042      *:*                     LISTEN
tcp        0      0 *:12662                 *:*                     LISTEN
tcp        0      0 *:ssh                   *:*                     LISTEN
tcp        0      0 107.170:afs3-fileserver *:*                     LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
udp        0      0 107.170.236.189:ntp     *:*
udp        0      0 ip6-localhost:ntp       *:*
udp        0      0 *:ntp                   *:*
udp6       0      0 fe80::601:16ff:fea8:ntp [::]:*
udp6       0      0 ip6-localhost:ntp       [::]:*
udp6       0      0 [::]:ntp                [::]:*


db file missing error

2013-11-14 Thread Langston, Jim
Hi all,

When I run nodetool repair, I'm getting an error that indicates
that several of the Data.db files are missing. Is there a way to
correct this error ? The files that the error message is referencing
are indeed missing, I'm not sure why it is looking for them to begin
with. AFAIK nothing has been deleted, but there are several apps
that run against Cass.

Caused by: java.io.FileNotFoundException: 
/raid0/cassandra/data/OTester/OTester_one/OTester-OTester_one-ic-46-Data.db (No 
such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:67)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:75)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:42)
... 20 more


Thanks,

Jim


Re: db file missing error

2013-11-14 Thread Langston, Jim
Found it, had a second repair running which was generating the
error.

Jim



cassandra error on restart

2013-09-10 Thread Langston, Jim
Hi all,

I restarted my cassandra ring this morning, but it is refusing to
start. Everything was fine, but now I get this error in the log:

….
 INFO 14:05:14,420 Compacting 
[SSTableReader(path='/raid0/cassandra/data/system/local/system-local-ic-20-Data.db'),
 
SSTableReader(path='/raid0/cassandra/data/system/local/system-local-ic-21-Data.db'),
 
SSTableReader(path='/raid0/cassandra/data/system/local/system-local-ic-23-Data.db'),
 
SSTableReader(path='/raid0/cassandra/data/system/local/system-local-ic-22-Data.db')]
 INFO 14:05:14,493 Compacted 4 sstables to 
[/raid0/cassandra/data/system/local/system-local-ic-24,].  1,086 bytes to 486 
(~44% of original) in 66ms = 0.007023MB/s.  4 total rows, 1 unique.  Row merge 
counts were {1:0, 2:0, 3:0, 4:1, }
 INFO 14:05:14,543 Starting Messaging Service on port 7000
java.lang.NullPointerException
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:745)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:554)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:451)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:348)
at org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:381)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:212)
Cannot load daemon


and cassandra will not start. I get the same error on all the nodes in the ring.

Thoughts?

Thanks,

Jim


Re: cassandra error on restart

2013-09-10 Thread Langston, Jim
Thanks Mina,

That was it exactly …

Jim

From: Mina Naguib mina.nag...@adgear.com
Reply-To: user@cassandra.apache.org
Date: Tue, 10 Sep 2013 10:16:17 -0400
To: user@cassandra.apache.org
Subject: Re: cassandra error on restart


There was mention of a similar crash on the mailing list.  Does this apply to 
your case ?

http://mail-archives.apache.org/mod_mbox/cassandra-user/201306.mbox/%3ccdecfcfa.11e95%25agundabatt...@threatmetrix.com%3E


--
Mina Naguib
AdGear Technologies Inc.
http://adgear.com/




cluster rename ?

2013-09-10 Thread Langston, Jim
Hi all,

Following these instructions:

http://comments.gmane.org/gmane.comp.db.cassandra.user/29753


I am trying to change the name of the cluster, but I'm getting an error:

ERROR [main] 2013-09-10 17:52:43,250 CassandraDaemon.java (line 247) Fatal 
exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name 
tmpCassandra != configured name cassandra
at org.apache.cassandra.db.SystemTable.checkHealth(SystemTable.java:450)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:243)
at org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:381)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:212)


For step 3 in the instructions, I moved LocationInfo, located in the system
keyspace, to another directory, and when I try to restart the node the
directory is re-created, but I still get the error.

I'm running 1.2.8; is this still the correct system keyspace directory to move?

Also, although not indicated as a must-do, in step 2 I did a drain, but did not
remove the commitlogs.

If I name the cluster back to its original name, the cluster will come back up 
without any problems.


Jim
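For reference, in 1.2.x the saved cluster name lives in the system.local table rather than in LocationInfo, which is why moving that directory alone doesn't clear it. A commonly cited approach, hedged here and worth verifying on a non-production node first, is to rewrite the saved name before editing cassandra.yaml:

```shell
# On each node, while Cassandra is still running under the OLD name:
echo "UPDATE system.local SET cluster_name = 'NewName' WHERE key = 'local';" | cqlsh
nodetool flush system        # persist the rewritten system.local row to disk
# then stop the node, set "cluster_name: NewName" in cassandra.yaml, and restart
```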


moving all data to new cluster ?

2013-09-04 Thread Langston, Jim
Hi all,

I have built a new 4 node cluster and would like to move the data
from the current 2 node cluster to the new cluster. What would be
the best way to move the data and utilize it on the new cluster. I
have looked at snapshot and also just copying the entire tree from
the old cluster to the new cluster. Not sure what the best practice
would be. I'm testing that process now in preparation for moving the
current data (production) to the new larger ring (systems are bigger
and more memory) and decommission the older ring (smaller systems,
less memory).

Thanks,

Jim


Re: moving all data to new cluster ?

2013-09-04 Thread Langston, Jim
Thanks for the link Rob, but I did try earlier to
copy the SSTables over and then refresh them.
This is a brand new cluster, though, and the error I got
back indicated that the keyspace didn't exist, so I
then figured I needed to copy everything over in
the data directory.


Jim

From: Robert Coli rc...@eventbrite.com
Reply-To: user@cassandra.apache.org
Date: Wed, 4 Sep 2013 12:44:12 -0700
To: user@cassandra.apache.org
Subject: Re: moving all data to new cluster ?

On Wed, Sep 4, 2013 at 12:38 PM, Langston, Jim jim.langs...@compuware.com wrote:
I have built a new 4 node cluster and would like to move the data
from the current 2 node cluster to the new cluster. What would be
the best way to move the data and utilize it on the new cluster. I
have looked at snapshot and also just copying the entire tree from
the old cluster to the new cluster. Not sure what the best practice
would be. I'm testing that process now in preparation for moving the
current data (production) to the new larger ring (systems are bigger
and more memory) and decommission the older ring (smaller systems,
less memory).

http://www.palominodb.com/blog/2012/09/25/bulk-loading-options-cassandra

In your case, I would just copy all sstables to all target nodes and run 
cleanup.

=Rob
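Rob's suggestion can be sketched as below (hostnames, keyspace, and paths are placeholders; the schema must already exist on the target cluster, and the copied files are picked up on restart or via nodetool refresh):

```shell
# Push every source node's sstables to all four target nodes, then have
# each target discard the rows it no longer owns.
for target in target1 target2 target3 target4; do
  rsync -av /var/lib/cassandra/data/MyKeyspace/ \
        "$target:/var/lib/cassandra/data/MyKeyspace/"
done
# on each target node afterwards:
nodetool cleanup
```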


read ?

2013-09-03 Thread Langston, Jim
Hi all,

Quick question

I am currently looking at a 4 node cluster and I have stopped all writing to
Cassandra, with the reads continuing. I'm trying to understand the utilization
of memory within the JVM. nodetool info on each of the nodes shows them all
growing in footprint, two of them at a greater rate. On the restart of
Cassandra each was at about 100MB; after 2 days, they are at:

Heap Memory (MB) : 798.41 / 3052.00

Heap Memory (MB) : 370.44 / 3052.00

Heap Memory (MB) : 549.73 / 3052.00

Heap Memory (MB) : 481.89 / 3052.00

Ring configuration:

Address  Rack  Status  State   Load     Owns    Token
                                                127605887595351923798765477786913079296
x        1d    Up      Normal  4.38 GB  25.00%  0
x        1d    Up      Normal  4.17 GB  25.00%  42535295865117307932921825928971026432
x        1d    Up      Normal  4.19 GB  25.00%  85070591730234615865843651857942052864
x        1d    Up      Normal  4.14 GB  25.00%  127605887595351923798765477786913079296


What I'm not sure of is why the growth differs between the nodes, and why
that growth is happening at all under activity that is read only.

Is Cassandra caching and holding the read data?

I currently have caching turned off for the key/row. Also as part of the info 
command

Key Cache: size 0 (bytes), capacity 0 (bytes), 0 hits, 0 requests, NaN 
recent hit rate, 14400 save period in seconds
Row Cache: size 0 (bytes), capacity 0 (bytes), 0 hits, 0 requests, NaN 
recent hit rate, 0 save period in seconds



Thanks,

Jim


Re: read ?

2013-09-03 Thread Langston, Jim
Thanks Chris,

I have about 8 heap dumps that I have been looking at. I have been trying to
isolate why I have been dumping heap; I've started by removing the apps that
write to cassandra, eliminating the work that would entail. I am left with just
the apps that read the data, and from the heap dumps it looks like Cassandra
Column methods are being called. Because there are so many objects, it is
difficult to ascertain exactly what the problem may be. That prompted my query:
trying to quickly determine if Cassandra holds objects that have been used for
reading, and if so, why, and more importantly whether something can be done.

Jim

From: Lohfink, Chris chris.lohf...@digi.com
Reply-To: user@cassandra.apache.org
Date: Tue, 3 Sep 2013 11:12:19 -0500
To: user@cassandra.apache.org
Subject: RE: read ?

To get an accurate picture you should force a full GC on each node; the heap
utilization can be misleading since there can be a lot of things in the heap
with no strong references.

There are a number of factors that can lead to this. For a true comparison I
would recommend using jconsole and calling dumpHeap on
com.sun.management:type=HotSpotDiagnostic with the 2nd param true (force GC).
Then open the heap dump up in a tool like YourKit and you will get a better
comparison; it will also tell you what it is that's taking the space.

Chris
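If jconsole isn't handy, the same force-GC-then-dump step can be approximated from the shell (a sketch; jmap ships with the JDK, and its "live" option triggers a full GC so only strongly-reachable objects land in the dump):

```shell
# Dump only live (strongly-reachable) objects from the Cassandra JVM.
pid=$(pgrep -f CassandraDaemon | head -n 1)
jmap -dump:live,format=b,file=/tmp/cassandra-heap.hprof "$pid"
# open the .hprof in YourKit / Eclipse MAT to see what is taking the space
```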



nodetool cfstats write count ?

2013-07-29 Thread Langston, Jim
Hi all,

Running nodetool and looking at the cfstats output, for the
counters such as write count and read count, do those numbers
reflect any replication ?

For instance, if write count shows 3000 and the replication factor
is 3, is that really 1000 writes ?

Thanks,

Jim
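Assuming the counters are per-node local counts, so that each replica's write bumps its own node's counter (an assumption to verify for your version, not something stated in this thread), the arithmetic would be:

```shell
# If the counters summed across the cluster already include replication,
# dividing by the replication factor recovers the client-issued writes.
summed_write_count=3000
replication_factor=3
client_writes=$(( summed_write_count / replication_factor ))
echo "$client_writes"   # prints: 1000
```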


Re: sstable size ?

2013-07-18 Thread Langston, Jim
I saw that msg in the thread. I pulled the git files and it looks
like a suite of tools; do I install them on their own? Do I replace the
current ones? It's production data, but I can copy the data to where
I want and experiment.

Jim

From: aaron morton aa...@thelastpickle.com
Reply-To: user@cassandra.apache.org
Date: Thu, 18 Jul 2013 21:41:24 +1200
To: user@cassandra.apache.org
Subject: Re: sstable size ?

Does this help ? 
http://www.mail-archive.com/user@cassandra.apache.org/msg30973.html

Can you pull the data off the node so you can test it somewhere safe ?

Cheers

-
Aaron Morton
Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 18/07/2013, at 2:20 PM, Langston, Jim jim.langs...@compuware.com wrote:

Thanks, this does look like what I'm experiencing. Can someone
post a walkthrough? The README and the sstablesplit script
don't seem to cover its use in any detail.

Jim

From: Colin Blower cblo...@barracuda.com
Reply-To: user@cassandra.apache.org
Date: Wed, 17 Jul 2013 16:49:59 -0700
To: user@cassandra.apache.org
Subject: Re: sstable size ?

Take a look at the very recent thread called 'Alternate major compaction'. 
There are some ideas in there about splitting up a large SSTable.

http://www.mail-archive.com/user@cassandra.apache.org/msg30956.html


On 07/17/2013 04:17 PM, Langston, Jim wrote:
Hi all,

Is there a way to get an SSTable to a smaller size? By this I mean that I
currently have an SSTable that is nearly 1.2G, so that subsequent SSTables
when they compact are trying to grow to that size. The result is that when
the min_compaction_threshold reaches its value and a compaction is needed,
the compaction takes a long time as the file grows (it is currently at 52MB
and takes ~22s to compact).

I'm not sure how the SSTable initially grew to its current size of 1.2G, since
the servers have been up for a couple of years. I hadn't noticed until I just
upgraded to 1.2.6, but now I see it affects everything.


Jim


--
Colin Blower
Software Engineer
Barracuda Networks Inc.
+1 408-342-5576 (o)



Re: sstable size ?

2013-07-18 Thread Langston, Jim
I have been looking at the stuff in the zip file, and also the
sstablesplit command script. This script is looking for a java
class StandaloneSplitter located in the package org.apache.cassandra.tools.

Where is this package located ? I looked in the lib directory but nothing 
contains
the class. Is this something I need to get as well ?

Thanks,

Jim




Re: sstable size ?

2013-07-18 Thread Langston, Jim
Thanks, I was heading down that path ... after the build it
creates a 1.1.6 cassandra snapshot; I'm currently on 1.2.6. Will I
be able to use the tool?

Jim

On 7/18/13 3:45 PM, Nate McCall zznat...@gmail.com wrote:

https://github.com/pcmanus/cassandra/tree/sstable_split/src/java/org/apache/cassandra/tools

You'll have to clone Sylvain's 'sstable_split' branch and build from
there.

(Committer folks: this is helpful. @Sylvain - can you commit a patch
under this ticket (or wherever):
https://issues.apache.org/jira/browse/CASSANDRA-4766 - happy to
review).
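Nate's build-from-branch suggestion amounts to roughly the following (a hedged sketch: the repo and branch names come from his message, while the ant target and the location of the generated sstablesplit script are assumptions about a stock source tree):

```shell
# Clone Sylvain's branch containing StandaloneSplitter and build it.
git clone https://github.com/pcmanus/cassandra.git
cd cassandra
git checkout sstable_split
ant                    # default target builds the jars under build/
# run the split tool against a COPY of the oversized sstable, never the live one
bin/sstablesplit /path/to/copy/of/MyKeyspace-MyCF-ic-1-Data.db
```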

On Thu, Jul 18, 2013 at 1:59 PM, Langston, Jim
jim.langs...@compuware.com wrote:
 I have been looking at the stuff in the zip file, and also the
 sstablesplit command script. This script is looking for a java
 class StandaloneSplitter located in the package
org.apache.cassandra.tools.

 Where is this package located ? I looked in the lib directory but
nothing
 contains
 the class. Is this something I need to get as well ?

 Thanks,

 Jim

 From: Langston, Jim jim.langs...@compuware.com
 Reply-To: user@cassandra.apache.org
 Date: Thu, 18 Jul 2013 10:28:39 +

 To: user@cassandra.apache.org user@cassandra.apache.org
 Subject: Re: sstable size ?

 I saw that msg in the thread, I pulled the git files and it looks
 like a suite of tools, do I install them on their own ? do I replace the
 current ones ? It's production data but I can copy the data to where
 I want and experiment.

 Jim

 From: aaron morton aa...@thelastpickle.com
 Reply-To: user@cassandra.apache.org
 Date: Thu, 18 Jul 2013 21:41:24 +1200
 To: user@cassandra.apache.org
 Subject: Re: sstable size ?

 Does this help ?
 http://www.mail-archive.com/user@cassandra.apache.org/msg30973.html

 Can you pull the data off the node so you can test it somewhere safe ?

 Cheers

 -
 Aaron Morton
 Cassandra Consultant
 New Zealand

 @aaronmorton
 http://www.thelastpickle.com

 On 18/07/2013, at 2:20 PM, Langston, Jim jim.langs...@compuware.com
 wrote:

 Thanks, this does look like what I'm experiencing. Can someone
 post a walkthrough ? The README and the sstablesplit script
 don't seem to cover its use in any detail.

 Jim

 From: Colin Blower cblo...@barracuda.com
 Reply-To: user@cassandra.apache.org
 Date: Wed, 17 Jul 2013 16:49:59 -0700
 To: user@cassandra.apache.org user@cassandra.apache.org
 Subject: Re: sstable size ?

 Take a look at the very recent thread called 'Alternate major compaction'.
 There are some ideas in there about splitting up a large SSTable.

 http://www.mail-archive.com/user@cassandra.apache.org/msg30956.html


 On 07/17/2013 04:17 PM, Langston, Jim wrote:

 Hi all,

 Is there a way to get an SSTable to a smaller size ? By this I mean that I
 currently have an SSTable that is nearly 1.2G, so that subsequent SSTables
 when they compact are trying to grow to that size. The result is that when
 the min_compaction_threshold reaches its value and a compaction is needed,
 the compaction takes a long time as the file grows (it is currently
 at 52MB and takes ~22s to compact).

 I'm not sure how the SSTable initially grew to its current size of 1.2G,
 since the servers have been up for a couple of years. I hadn't noticed
 until I just upgraded to 1.2.6, but now I see it affects everything.


 Jim



 --
 Colin Blower
 Software Engineer
 Barracuda Networks Inc.
 +1 408-342-5576 (o)







sstable size ?

2013-07-17 Thread Langston, Jim
Hi all,

Is there a way to get an SSTable to a smaller size ? By this I mean that I
currently have an SSTable that is nearly 1.2G, so that subsequent SSTables
when they compact are trying to grow to that size. The result is that when
the min_compaction_threshold reaches its value and a compaction is needed,
the compaction takes a long time as the file grows (it is currently at 52MB
and takes ~22s to compact).

I'm not sure how the SSTable initially grew to its current size of 1.2G,
since the servers have been up for a couple of years. I hadn't noticed
until I just upgraded to 1.2.6, but now I see it affects everything.


Jim


Re: sstable size ?

2013-07-17 Thread Langston, Jim
Thanks, this does look like what I'm experiencing. Can someone
post a walkthrough ? The README and the sstablesplit script
don't seem to cover its use in any detail.

Jim

From: Colin Blower cblo...@barracuda.com
Reply-To: user@cassandra.apache.org
Date: Wed, 17 Jul 2013 16:49:59 -0700
To: user@cassandra.apache.org
Subject: Re: sstable size ?

Take a look at the very recent thread called 'Alternate major compaction'. 
There are some ideas in there about splitting up a large SSTable.

http://www.mail-archive.com/user@cassandra.apache.org/msg30956.html


On 07/17/2013 04:17 PM, Langston, Jim wrote:
Hi all,

Is there a way to get an SSTable to a smaller size ? By this I mean that I
currently have an SSTable that is nearly 1.2G, so that subsequent SSTables
when they compact are trying to grow to that size. The result is that when
the min_compaction_threshold reaches its value and a compaction is needed,
the compaction takes a long time as the file grows (it is currently at 52MB
and takes ~22s to compact).

I'm not sure how the SSTable initially grew to its current size of 1.2G,
since the servers have been up for a couple of years. I hadn't noticed
until I just upgraded to 1.2.6, but now I see it affects everything.


Jim


--
Colin Blower
Software Engineer
Barracuda Networks Inc.
+1 408-342-5576 (o)
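
[Editor's note] For later readers of this thread: once the splitter is built
from Sylvain's sstable_split branch, a session looks roughly like the sketch
below. This is an untested sketch against that branch; the SSTable filename,
keyspace/CF names, and script location are placeholders, and it should only
ever be run against a copy of the data with Cassandra stopped.

```shell
# Sketch only: paths, flags, and filenames are placeholders for whatever
# the sstable_split branch build produces. Never run against live data.
sudo service cassandra stop                       # the node must be down
mkdir -p /tmp/split-test
cp /var/lib/cassandra/data/MyKeyspace/MyCF-hd-1234-Data.db /tmp/split-test/
bin/sstablesplit /tmp/split-test/MyCF-hd-1234-Data.db
```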


Re: alter column family ?

2013-07-17 Thread Langston, Jim
As a follow up – I did upgrade the cluster to 1.2.6 and that
did take care of the issue. The upgrade went very smoothly;
the longest part was being thorough with the configuration
files, but I was able to quickly update the schemas
after restarting the cluster.


Jim

From: Langston, Jim jim.langs...@compuware.com
Reply-To: user@cassandra.apache.org
Date: Thu, 11 Jul 2013 18:25:42 +
To: user@cassandra.apache.org
Subject: Re: alter column family ?

Was just looking at a bug with uppercase, could that be the error ?

And, yes, definitely saved off the original system keyspaces.

I'm tailing the logs when running the cassandra-cli, but I do not
see anything in the logs ..

Jim

From: Robert Coli rc...@eventbrite.com
Reply-To: user@cassandra.apache.org
Date: Thu, 11 Jul 2013 11:07:55 -0700
To: user@cassandra.apache.org
Subject: Re: alter column family ?

On Thu, Jul 11, 2013 at 11:00 AM, Langston, Jim jim.langs...@compuware.com wrote:
I went through the whole sequence again and now have gotten to the point of
being able to try and pull in the schema, but now getting this error from
the one node I'm executing on.
[default@unknown] create keyspace OTracker
9209ec36-3b3f-3e24-9dfb-8a45a5b29a2a
Waiting for schema agreement...
... schemas agree across the cluster
NotFoundException()

This is pretty unusual.

All the nodes see each other and are available; all contain only a system
schema, and none have an OTracker schema.

If you look in the logs for schema related stuff when you try to create 
OTracker, what do you see?

Do you see the above UUID schema version in the logs?

At this point I am unable to suggest anything other than upgrading to the head
of the 1.1 line and trying to create your keyspace there. There should be no
chance of old state being implicated in your now-stuck schema, so it seems
likely that the problem has recurred due to the version of Cassandra you are
running.

Sorry I am unable to be of more assistance and that my advice appears to have 
resulted in your cluster being in worse condition than when you started. I 
probably mentioned but will do so again that if you have the old system 
keyspace directories, you can stop cassandra on all nodes and then revert to 
them.

=Rob



Re: alter column family ?

2013-07-11 Thread Langston, Jim
Hi Rob,

Are the schemas held somewhere else ? Going through the
process that you sent, when I restart the nodes, the original
schemas show up (btw, you were correct in your assessment:
even though the gossipinfo command shows the schemas are the
same, they are not the same when looking at them with
cassandra-cli, not even close on 2 of the nodes).
So, I went through the process of clearing out the system CFs
in steps 4 and 5; when the Cassandra instances restarted, two of them
(the ones with the incorrect schemas) complained about the schema
and loaded what looks like a generic one. But all of them have
schemas, and 2 are correct and one is not.

This means I cannot execute step 7, since the schema now exists
with the name on all the nodes. For example, the incorrect schema is
called MySchema; after the restart and the messages complaining
about CFs not existing, there is a schema called MySchema, on 2 nodes
they are correct, on 2 nodes they are not.

I have also tried to force the node with the incorrect schema to come
up on its own by shutting down the cluster except for a node with a
correct schema. I went through the same steps and brought that
node down and back up, same results.

Thoughts ? ideas ?

Jim

From: Robert Coli rc...@eventbrite.com
Reply-To: user@cassandra.apache.org
Date: Tue, 9 Jul 2013 17:10:53 -0700
To: user@cassandra.apache.org
Subject: Re: alter column family ?

On Tue, Jul 9, 2013 at 11:52 AM, Langston, Jim jim.langs...@compuware.com wrote:

 On the command (4 node cluster):

 nodetool gossipinfo -h localhost |grep SCHEMA |sort | uniq -c | sort -n
   4   SCHEMA:60edeaa8-70a4-3825-90a5-d7746ffa8e4d

If your schemas actually agree (and given that you're on 1.1.2), you are
probably encountering:

https://issues.apache.org/jira/browse/CASSANDRA-4432

Which is one of the 1.1.2 era schema stuck issues I was referring to earlier.

 On the second part, I have the same Cassandra version in staging and
 production, with staging being a smaller cluster. Not sure what you mean
 by nuking schemas (i.e. delete directories ?)

I like when googling things returns related threads in which I have previously 
advised people to do a detailed list of things, heh :

http://mail-archives.apache.org/mod_mbox/cassandra-user/201208.mbox/%3CCAN1VBD-01aD7wT2w1eyY2KpHwcj+CoMjvE4=j5zaswybmw_...@mail.gmail.com%3E

Here's a slightly clarified version of these steps...

0. Dump your existing schema to schema_definition_file
1. Take all nodes out of service;
2. Run nodetool drain on each and verify that they have drained (grep -i 
DRAINED system.log)
3. Stop cassandra on each node;
4. Move /var/lib/cassandra/data/system out of the way
5. Move /var/lib/cassandra/saved_caches/system-* out of the way
6. Start all nodes;
7. cassandra-cli < schema_definition_file on one node only (includes
create keyspace and create column family entries)

Note: you should not literally do this, you should break your 
schema_definition_file into individual statements and wait until schema 
agreement between each DDL statement.

8. Put the nodes back in service.
9. Done.
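
[Editor's note] Rob's steps above, expressed as a hedged shell sketch. The
paths and service commands are assumptions for a default package install
under /var/lib/cassandra and /var/log/cassandra; adjust for your layout. The
drain/stop/move/start steps run on every node, the schema reload on exactly
one.

```shell
#!/bin/sh
# Sketch of the schema-reset steps above; NOT a drop-in script.
set -e
BACKUP=/var/lib/cassandra/schema-reset-backup
mkdir -p "$BACKUP"

# 0. Dump the existing schema (one node is enough)
echo "show schema;" | cassandra-cli -h localhost > schema_definition_file

# 1-3. On each node: drain, verify the drain in the log, then stop
nodetool -h localhost drain
grep -i DRAINED /var/log/cassandra/system.log | tail -1
sudo service cassandra stop

# 4-5. On each node: move system schema state aside (keep it as a backup)
mv /var/lib/cassandra/data/system "$BACKUP/system"
mv /var/lib/cassandra/saved_caches/system-* "$BACKUP/" 2>/dev/null || true

# 6. Start every node and wait for the cluster to fully coalesce
sudo service cassandra start

# 7. On ONE node only, replay the schema -- ideally one statement at a
# time, waiting for schema agreement between statements (see note above).
cassandra-cli -h localhost -f schema_definition_file
```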

=Rob


Re: alter column family ?

2013-07-11 Thread Langston, Jim
Yes, I got the gist of what you were after, even making sure I broke
out the schema dump and load them in individually, but I haven't
gotten that far. It feels like the 2 nodes that are not coming up with
the right schema are not seeing the nodes with the correct ones.

And yes, I hear the beat of the upgrade drum, I was hoping to
do one step at a time so I don't carry my problem over.

Jim

From: Robert Coli rc...@eventbrite.com
Reply-To: user@cassandra.apache.org
Date: Thu, 11 Jul 2013 09:43:43 -0700
To: user@cassandra.apache.org
Subject: Re: alter column family ?

On Thu, Jul 11, 2013 at 9:17 AM, Langston, Jim jim.langs...@compuware.com wrote:
Are the schema's held somewhere else ? Going through the
process that you sent, when I restart the nodes, the original
schema's show up

If you do not stop all nodes at once and then remove the system CFs, the
existing schema will re-propagate via Gossip.

To be clear, I was suggesting that you dump the schema with cassandra-cli, 
erase the current schema with the cluster down, bring the cluster back up (NOW 
WITH NO SCHEMA) and then load the schema from the dump via cassandra-cli.

Also, in case I didn't mention it before, you should upgrade your version of 
Cassandra ASAP. :)

=Rob




Re: alter column family ?

2013-07-11 Thread Langston, Jim
Thanks Rob,

I went through the whole sequence again and now have gotten to the point of
being able to try and pull in the schema, but now getting this error from
the one node I'm executing on.

[default@unknown] create keyspace OTracker
...  with placement_strategy = 'SimpleStrategy'
...  and strategy_options = {replication_factor : 3}
...  and durable_writes = true;
9209ec36-3b3f-3e24-9dfb-8a45a5b29a2a
Waiting for schema agreement...
... schemas agree across the cluster
NotFoundException()
[default@unknown]


All the nodes see each other and are available; all contain only a system
schema, and none have an OTracker schema.


Jim

From: Robert Coli rc...@eventbrite.com
Reply-To: user@cassandra.apache.org
Date: Thu, 11 Jul 2013 10:35:43 -0700
To: user@cassandra.apache.org
Subject: Re: alter column family ?

On Thu, Jul 11, 2013 at 10:16 AM, Langston, Jim jim.langs...@compuware.com wrote:
It feels like the 2 nodes that are not coming up with
the right schema are not seeing the nodes with the correct ones.

At the time that the nodes come up, they should have no schema other than the 
system columnfamilies. Only once all 3 nodes see each other should you be 
re-creating the schema. I'm not understanding your above sentence in light of 
this?

=Rob



Re: alter column family ?

2013-07-11 Thread Langston, Jim
Was just looking at a bug with uppercase, could that be the error ?

And, yes, definitely saved off the original system keyspaces.

I'm tailing the logs when running the cassandra-cli, but I do not
see anything in the logs ..

Jim

From: Robert Coli rc...@eventbrite.com
Reply-To: user@cassandra.apache.org
Date: Thu, 11 Jul 2013 11:07:55 -0700
To: user@cassandra.apache.org
Subject: Re: alter column family ?

On Thu, Jul 11, 2013 at 11:00 AM, Langston, Jim jim.langs...@compuware.com wrote:
I went through the whole sequence again and now have gotten to the point of
being able to try and pull in the schema, but now getting this error from
the one node I'm executing on.
[default@unknown] create keyspace OTracker
9209ec36-3b3f-3e24-9dfb-8a45a5b29a2a
Waiting for schema agreement...
... schemas agree across the cluster
NotFoundException()

This is pretty unusual.

All the nodes see each other and are available; all contain only a system
schema, and none have an OTracker schema.

If you look in the logs for schema related stuff when you try to create 
OTracker, what do you see?

Do you see the above UUID schema version in the logs?

At this point I am unable to suggest anything other than upgrading to the head
of the 1.1 line and trying to create your keyspace there. There should be no
chance of old state being implicated in your now-stuck schema, so it seems
likely that the problem has recurred due to the version of Cassandra you are
running.

Sorry I am unable to be of more assistance and that my advice appears to have 
resulted in your cluster being in worse condition than when you started. I 
probably mentioned but will do so again that if you have the old system 
keyspace directories, you can stop cassandra on all nodes and then revert to 
them.

=Rob



alter column family ?

2013-07-09 Thread Langston, Jim
Hi all,

I am trying to alter a column family to change gc_grace_seconds, and in fact
cannot change any of the properties.

The sequence:

use ks ;
alter table CF with gc_grace_seconds=864000 ;

When listing the CF, gc_grace_seconds is set to 0; after
running the CLI, gc_grace_seconds is still set to 0.

I tried changing the comment property, but this did not
change either.

Using the same keyspace, I created another table
and executed both the gc_grace_seconds change and
the comments change. Both of these successfully changed.

I do not know why I cannot change the value on a table that
was created earlier. I have tried shutting down the other nodes
in the cluster and attempted to change just one of the nodes,
also, I tried to stop/start Cassandra on that node as well to
see if the change would take effect that way.

Is there a permission ?

Where are the property values kept?

Thanks,

Jim


Re: alter column family ?

2013-07-09 Thread Langston, Jim
I'm on version 1.1.2

The nodetool command by itself

# nodetool netstats -h localhost
Mode: NORMAL
Not sending any streams.
Not receiving any streams.
Pool Name            Active   Pending   Completed
Commands                n/a         0        5909
Responses               n/a         0        1203

Jim

From: Robert Coli rc...@eventbrite.com
Reply-To: user@cassandra.apache.org
Date: Tue, 9 Jul 2013 10:26:45 -0700
To: user@cassandra.apache.org
Subject: Re: alter column family ?

On Tue, Jul 9, 2013 at 10:08 AM, Langston, Jim jim.langs...@compuware.com wrote:
I am trying to alter a column family to change gc_grace_seconds, and in fact
cannot change any of the properties.

The sequence:

use ks ;
alter table CF with gc_grace_seconds=864000 ;

When listing the CF, gc_grace_seconds is set to 0; after
running the CLI, gc_grace_seconds is still set to 0.

I tried changing the comment property, but this did not
change either.

Using the same keyspace, I created another table
and executed both the gc_grace_seconds change and
the comments change. Both of these successfully changed.

I'm surprised to hear that other columnfamilies work in the same keyspace. In 
general when people end up in this case, no schema migrations work.

If you do :

nodetool -h localhost netstats |grep SCHEMA |sort | uniq -c | sort -n

Do you see that all nodes in the cluster have the same schema version?

What version of Cassandra?

=Rob


Re: alter column family ?

2013-07-09 Thread Langston, Jim
On the command (4 node cluster):

nodetool gossipinfo -h localhost |grep SCHEMA |sort | uniq -c | sort -n
  4   SCHEMA:60edeaa8-70a4-3825-90a5-d7746ffa8e4d

On the second part, I have the same Cassandra version in staging and
production, with staging being a smaller cluster. Not sure what you mean
by nuking schemas (i.e. delete directories ?)

Jim

From: Robert Coli rc...@eventbrite.com
Reply-To: user@cassandra.apache.org
Date: Tue, 9 Jul 2013 11:35:35 -0700
To: user@cassandra.apache.org
Subject: Re: alter column family ?

On Tue, Jul 9, 2013 at 10:26 AM, Robert Coli rc...@eventbrite.com wrote:
nodetool -h localhost netstats |grep SCHEMA |sort | uniq -c | sort -n

Sorry, I meant gossipinfo and not netstats.

With the right command, do you see that all nodes in the cluster have the same 
schema version?

I'm on version 1.1.2

1) Hinted Handoff is broken in 1.1.2, upgrade ASAP.
2) I believe the particular case you are encountering may be a more specific 
bug from the 1.1.2 timeframe.
3) Desynced schema like you seem to be encountering is very common in the
1.1.2 timeframe. In most cases the best/only solution is:
   a) drain all nodes, and stop them
   b) nuke schema on all nodes (optionally nuke/move aside entire system 
keyspace)
   c) start nodes, waiting for cluster to completely coalesce
   d) re-load schema one statement at a time, BEING SURE TO WAIT FOR SCHEMA 
AGREEMENT ON ***ALL NODES*** before running the next schema altering statement
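
[Editor's note] A minimal sketch of the "wait for schema agreement" check in
step d, reusing the nodetool gossipinfo | grep SCHEMA pipeline from this
thread. gossipinfo on one node reports a SCHEMA line for every node, so
agreement means exactly one distinct version; the sample lines below are
fabricated for illustration.

```shell
#!/bin/sh
# Count distinct schema versions visible in gossip; agreement == 1.
count_schema_versions() {
  grep SCHEMA | sort -u | wc -l
}

# Fabricated sample: three nodes report two distinct schema UUIDs,
# i.e. the cluster has NOT yet reached agreement.
sample='SCHEMA:60edeaa8-70a4-3825-90a5-d7746ffa8e4d
SCHEMA:60edeaa8-70a4-3825-90a5-d7746ffa8e4d
SCHEMA:9209ec36-3b3f-3e24-9dfb-8a45a5b29a2a'

versions=$(printf '%s\n' "$sample" | count_schema_versions)
if [ "$versions" -eq 1 ]; then
  echo "schemas agree; safe to run the next DDL statement"
else
  echo "schemas still disagree; keep waiting"
fi
```

In practice you would pipe live output instead of the sample, e.g.
`nodetool -h localhost gossipinfo | count_schema_versions`, looping with a
short sleep until it returns 1 before issuing the next statement.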

If you are unable to take an outage on this cluster, there are other ways to 
resolve issues like this but generally they will both be complex and error 
prone and will take much more time and effort than doing the above.

=Rob