Re: column with TTL of 10 seconds lives very long...

2013-05-26 Thread Tamar Fraenkel
Hi!
Just an update: the weekly repair seemed to solve it. The column is no
longer there.
Still strange...
Tamar

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956




On Sat, May 25, 2013 at 10:19 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 Yes.. still there.
 Tamar


Re: column with TTL of 10 seconds lives very long...

2013-05-25 Thread Tamar Fraenkel
Yes.. still there.
Tamar

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956




On Sat, May 25, 2013 at 8:09 PM, Jeremiah Jordan jerem...@datastax.com wrote:

 If you do that same get again, is the column still being returned? (days
 later)

 -Jeremiah



Re: column with TTL of 10 seconds lives very long...

2013-05-24 Thread Tamar Fraenkel
By "it is still there" I mean that when I do a get request in the Cassandra
CLI I get the column, and the same when I try to read the column using Hector.
I don't think it is a matter of tombstones: I have the default
gc_grace_seconds, and I run repair weekly (it will run on Sunday).
Other columns in the same CF expire as expected after their 10 seconds pass.

Thanks,
Tamar

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956




On Thu, May 23, 2013 at 11:20 PM, Robert Coli rc...@eventbrite.com wrote:

 On Wed, May 22, 2013 at 11:32 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 I am using Hector HLockManagerImpl, which creates a keyspace named
 HLockManagerImpl and CF HLocks.
 For some reason I have a row with single column that should have expired
 yesterday who is still there.
 I tried deleting it using cli, but it is stuck...
 Any ideas how to delete it?


 "is still there" is sorta ambiguous. Do you mean that clients see it, or
 that it is still in the (immutable) data file it was previously in?

 If the latter, what is gc_grace_seconds set to? Make sure it's set to a
 low value and then make sure that your TTL-expired key is compacted?

 =Rob
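
 For reference, both halves of that suggestion look roughly like this with
 the 1.0.x tooling (a sketch assuming the keyspace and CF names from this
 thread; lowering gc_grace_seconds is only safe if repair runs more often
 than the new value):

 # in cassandra-cli ('describe;' prints the current GC grace seconds per CF)
 [default@HLockManagerImpl] update column family HLocks with gc_grace = 3600;

 # then force a compaction of just that CF
 nodetool -h localhost compact HLockManagerImpl HLocks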


column with TTL of 10 seconds lives very long...

2013-05-23 Thread Tamar Fraenkel
Hi!
I have a Cassandra cluster with 3 nodes running version 1.0.11.

I am using Hector's HLockManagerImpl, which creates a keyspace named
HLockManagerImpl and a CF HLocks.
For some reason I have a row with a single column that should have expired
yesterday but is still there.
I tried deleting it using the CLI, but it is stuck...
Any ideas how to delete it?

Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: column with TTL of 10 seconds lives very long...

2013-05-23 Thread Tamar Fraenkel
Thanks for the response.
Running date simultaneously on all nodes (using parallel ssh) shows that
they are synced.
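
A minimal sketch of that kind of check (assuming pssh is installed, with
node1..node3 standing in for the real hosts):

# print epoch seconds on every node at once; the outputs should agree to
# within NTP skew
pssh -i -H "node1 node2 node3" date -u +%s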
Tamar

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956




On Thu, May 23, 2013 at 12:29 PM, Nikolay Mihaylov n...@nmmm.nu wrote:

 Did you synchronize the clocks between the servers?



Re: column with TTL of 10 seconds lives very long...

2013-05-23 Thread Tamar Fraenkel
Hi!

TTL was set:

[default@HLockingManager] get
HLocks['/LockedTopic/31a30c12-652d-45b3-9ac2-0401cce85517'];
=> (column=69b057d4-3578-4326-a9d9-c975cb8316d2,
value=36396230353764342d333537382d343332362d613964392d633937356362383331366432,
timestamp=1369307815049000, ttl=10)


Also, all other lock columns expire as expected.

Thanks,
Tamar

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956




On Thu, May 23, 2013 at 1:58 PM, moshe.kr...@barclays.com wrote:

 Maybe you didn’t set the TTL correctly.

 Check the TTL of the column using CQL, e.g.:

 SELECT TTL (colName) from colFamilyName WHERE condition;
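
 Spelled out with hypothetical names (a sketch in CQL 2 syntax, where KEY
 names the row key), for a CF users with a column session_token:

 SELECT TTL(session_token) FROM users WHERE KEY = 'user42';

 A live column written with a 10-second TTL should report 10 or less; if no
 TTL comes back at all, the write path never set one.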


 *From:* Felipe Sere felipe.s...@1und1.de
 *Sent:* Thursday, May 23, 2013 1:28 PM
 *To:* user@cassandra.apache.org
 *Subject:* AW: column with TTL of 10 seconds lives very long...


 This is interesting as it might affect me too :)
 I have been observing deadlocks with HLockManagerImpl which don't get
 resolved for a long time, even though the columns holding the locks should
 only live for about 5-10 secs.

 Any ideas how to investigate this further from the Cassandra-side?

Re: column with TTL of 10 seconds lives very long...

2013-05-23 Thread Tamar Fraenkel
good point!

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956




On Thu, May 23, 2013 at 2:25 PM, moshe.kr...@barclays.com wrote:

 (Probably will not solve your problem, but worth mentioning): it's not
 enough to check that the clocks of all the servers are synchronized – I
 believe that the client node sets the timestamp for a record being written.
 So, you should also check the clocks on your Hector client nodes.


Re: High CPU usage during repair

2013-02-11 Thread Tamar Fraenkel
Thank you very much! Due to monetary limitations I will keep the m1.large
for now, but try the throughput modification.
Tamar

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956




On Mon, Feb 11, 2013 at 11:30 AM, aaron morton aa...@thelastpickle.com wrote:

  What machine size?

 m1.large

 If you are seeing high CPU, move to an m1.xlarge; that's the sweet spot.

 That's normally ok. How many are waiting?

 I have seen 4 this morning

 That's not really abnormal.
 The pending task count goes up when a file *may* be eligible for
 compaction, not when there is a compaction task waiting.

 If you suddenly create a number of new SSTables for a CF, the pending count
 will rise; however, one of the tasks may compact all the sstables waiting
 for compaction, so the count will suddenly drop as well.

 Just to make sure I understand you correctly: you suggest that I change the
 throughput to 12 regardless of whether repair is ongoing or not? I will do
 it using nodetool, and change the yaml file in case a restart occurs in
 the future?

 Yes.
 If you are seeing performance degrade during compaction or repair, try
 reducing the throughput.
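
 In commands, that is (a sketch; 12 is the MB/s figure discussed in this
 thread):

 # apply at runtime on each node
 nodetool -h localhost setcompactionthroughput 12

 # and persist it across restarts in cassandra.yaml
 compaction_throughput_mb_per_sec: 12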

 I would attribute most of the problems you have described to using
 m1.large.

 Cheers


-
 Aaron Morton
 Freelance Cassandra Developer
 New Zealand

 @aaronmorton
 http://www.thelastpickle.com


High CPU usage during repair

2013-02-10 Thread Tamar Fraenkel
Hi!
I run repair weekly, using a scheduled cron job.
During repair I see high CPU consumption, and messages in the log file
INFO [ScheduledTasks:1] 2013-02-10 11:48:06,396 GCInspector.java (line
122) GC for ParNew: 208 ms for 1 collections, 1704786200 used; max is
3894411264
From time to time, there are also messages of the form
INFO [ScheduledTasks:1] 2012-12-04 13:34:52,406 MessagingService.java
(line 607) 1 READ messages dropped in last 5000ms

Using OpsCenter, JMX and nodetool compactionstats I can see that while
the CPU consumption is high, there are compactions waiting.

I run Cassandra version 1.0.11 on a 3-node setup on EC2 instances.
I have the default settings:
compaction_throughput_mb_per_sec: 16
in_memory_compaction_limit_in_mb: 64
multithreaded_compaction: false
compaction_preheat_key_cache: true

I am thinking of the following solution, and wanted to ask if I am on the
right track:
I thought of adding a call to my repair script, before repair starts, to do:
nodetool setcompactionthroughput 0
and then when repair finishes call
nodetool setcompactionthroughput 16
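
A sketch of the whole wrapper (the keyspace name tok is taken from elsewhere
in this archive; the replies earlier in this archive suggest throttling down
rather than the unthrottled 0 setting):

#!/bin/sh
# unthrottle compaction for the duration of the weekly repair
nodetool -h localhost setcompactionthroughput 0
nodetool -h localhost repair tok
# restore the default throttle
nodetool -h localhost setcompactionthroughput 16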

Is this the right solution?
Thanks,
Tamar

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: High CPU usage during repair

2013-02-10 Thread Tamar Fraenkel
Hi!
Thanks for the response.
See my answers and questions below.
Thanks!
Tamar

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956




On Sun, Feb 10, 2013 at 10:04 PM, aaron morton aa...@thelastpickle.com wrote:

 During repair I see high CPU consumption,

 Repair reads the data and computes a hash; this is a CPU-intensive
 operation.
 Is the CPU overloaded or just under load?

 Usually just load, but in the past two weeks I have seen CPU of over 90%!

 I run Cassandra version 1.0.11 on a 3-node setup on EC2 instances.

 What machine size?

m1.large


 there are compactions waiting.

 That's normally ok. How many are waiting?

 I have seen 4 this morning

 I thought of adding a call to my repair script, before repair starts to do:
 nodetool setcompactionthroughput 0
 and then when repair finishes call
 nodetool setcompactionthroughput 16

 That will remove throttling on compaction and the validation compaction
 used for the repair, which may in turn add additional IO load, CPU load and
 GC pressure. You probably do not want to do this.

 Try reducing the compaction throughput to say 12 normally and see the
 effect.

 Just to make sure I understand you correctly: you suggest that I change the
throughput to 12 regardless of whether repair is ongoing or not? I will do
it using nodetool, and change the yaml file in case a restart occurs in
the future?

 Cheers


-
 Aaron Morton
 Freelance Cassandra Developer
 New Zealand

 @aaronmorton
 http://www.thelastpickle.com


Re: DataModel Question

2013-02-05 Thread Tamar Fraenkel
Hi!
I have couple of questions regarding your model:

 1. What Cassandra version are you using? I am still working with 1.0 and
this seems to make sense, but 1.2 gives you much more power I think.
 2. Maybe I don't understand your model, but I think you need
DynamicComposite columns, as the user columns differ in their number of
components and maybe in type.
 3. How do you associate the SMS or MMS with the user you are chatting
with? Is it done by a separate CF?

Thanks,
Tamar


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956




On Wed, Feb 6, 2013 at 8:23 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Avoid super columns. If you need sorted, wide rows, then go for composite
 columns.

 -Vivek


 On Wed, Feb 6, 2013 at 7:09 AM, Kanwar Sangha kan...@mavenir.com wrote:

  Hi – We are designing Cassandra-based storage for the following use
  cases:

  · Store SMS messages
  · Store MMS messages
  · Store chat history

  What would be the ideal way to design the data model for this kind of
  application? I am thinking along these lines:

  Row key: composite key [PhoneNum : Day]

  · Example: 19876543456:05022013

  Dynamic Column Families

  · Composite column key for SMS [SMS:MessageId:TimeUUID]
  · Composite column key for MMS [MMS:MessageId:TimeUUID]
  · Composite column key for the user I am chatting with
  [UserId:198765432345] – this can have multiple values, since each chat
  conversation can have many messages. Should this be a super column?

  Example rows (row key on the left, columns to the right; values as in the
  original post):

  198:05022013       SMS::ttt   SMS:xxx12:ttt   MMS::ttt   :19
  1987888:05022013

 Thanks,

 Kanwar
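
  One way to declare such a CF with composite columns in cassandra-cli (a
  sketch with hypothetical names; a DynamicCompositeType comparator, as
  suggested above, would relax the fixed component types):

  create column family Messages
    with key_validation_class = UTF8Type
    and comparator = 'CompositeType(UTF8Type, UTF8Type, TimeUUIDType)'
    and default_validation_class = UTF8Type;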





Re: Upgrade 1.1.2 - 1.1.6

2012-11-20 Thread Tamar Fraenkel
Hi!
I had the same problem (over-counting due to replay of the commit log, which
ignored drain) after upgrading my cluster from 1.0.9 to 1.0.11.
I updated the Cassandra tickets mentioned in this thread.
Regards,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Nov 20, 2012 at 11:03 PM, Mike Heffner m...@librato.com wrote:


 On Tue, Nov 20, 2012 at 2:49 PM, Rob Coli rc...@palominodb.com wrote:

 On Mon, Nov 19, 2012 at 7:18 PM, Mike Heffner m...@librato.com wrote:
  We performed a 1.1.3 -> 1.1.6 upgrade and found that all the logs replayed
  regardless of the drain.

 Your experience and desire for different (expected) behavior is welcome on:

 https://issues.apache.org/jira/browse/CASSANDRA-4446

 nodetool drain sometimes doesn't mark commitlog fully flushed

 If every production operator who experiences this issue shares their
 experience on this bug, perhaps the project will acknowledge and
 address it.


 Well, in this case I think our issue was that upgrading from nanotime to
 epoch seconds, by definition, replays all commit logs. That's not due to any
 specific problem with nodetool drain not marking commitlogs flushed, but a
 safety to ensure data is not lost due to buggy nanotime implementations.

 For us, it was that the upgrade instructions pre-1.1.5-1.1.6 didn't
 mention that CLs should be removed if successfully drained. On the other
 hand, we do not use counters, so replaying them was merely a much longer
 MTT-Return after restarting with 1.1.6.

 Mike
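
 The sequence Mike describes, spelled out (a sketch; the Debian-style service
 commands and default commitlog path are assumptions, and segments should
 only be removed after the log confirms the drain flushed everything):

 nodetool -h localhost drain
 sudo service cassandra stop
 sudo rm /var/lib/cassandra/commitlog/CommitLog-*.log
 # upgrade the package, then
 sudo service cassandra start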

 --

   Mike Heffner m...@librato.com
   Librato, Inc.




hector - cassandra versions compatibility

2012-11-18 Thread Tamar Fraenkel
Hi!
I posted this question on the hector users list but no one answered, so I am
trying here as well.

I have a production cluster running Cassandra 1.0.8 and a test cluster with
Cassandra 1.1.6.
In my Java app I do not use maven, but rather have my lib directory with
the jar files I use.

When I ran my client code, currently using
cassandra-all-1.0.8.jar
cassandra-clientutil-1.0.8.jar
cassandra-thrift-1.0.9.jar
hector-core-1.0-5.jar
*it worked fine with both Cassandra 1.0.8 and 1.1.6.*

When I changed only hector to be hector-core-1.1-2.jar, *it also worked
fine with both Cassandra 1.0.8 and 1.1.6.*
When I switched to
cassandra-all-1.1.5.jar
cassandra-clientutil-1.1.5.jar
cassandra-thrift-1.1.5.jar
hector-core-1.1-2.jar
*it didn't work, WITH EITHER Cassandra version...*

I got the exceptions below.
Can anyone help, or does anyone have an idea?

Thanks,
Tamar

java.lang.IncompatibleClassChangeError:
org/apache/cassandra/thrift/Cassandra$Client
at
me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:88)
at
me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:97)
at
me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:251)
at
me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:132)
at
me.prettyprint.cassandra.service.KeyspaceServiceImpl.getColumn(KeyspaceServiceImpl.java:858)
at
me.prettyprint.cassandra.model.thrift.ThriftColumnQuery$1.doInKeyspace(ThriftColumnQuery.java:57)
at
me.prettyprint.cassandra.model.thrift.ThriftColumnQuery$1.doInKeyspace(ThriftColumnQuery.java:52)
at
me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
at
me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:101)
at
me.prettyprint.cassandra.model.thrift.ThriftColumnQuery.execute(ThriftColumnQuery.java:51)



*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: can't start cqlsh on new Amazon node

2012-11-08 Thread Tamar Fraenkel
Hi
A bit more info on that. I have one working setup with:

python-cql     1.0.9-1
python-thrift  0.6.0-2~riptano1
cassandra      1.0.8

The setup where cqlsh is not working has:

python-cql     1.0.10-1
python-thrift  0.6.0-2~riptano1
cassandra      1.0.11
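
The same listing can be pulled on any node with (a sketch; Debian packaging
as on the DataStax AMI):

dpkg -l | grep -E 'cassandra|python-cql|python-thrift'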

Maybe this will give someone a hint of what the problem may be and how to
solve it.
Thanks!

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Nov 8, 2012 at 9:38 AM, Tamar Fraenkel ta...@tok-media.com wrote:

 Nope...
 Same error:

 *cqlsh --debug --cql3 localhost 9160*

 Using CQL driver: module 'cql' from
 '/usr/lib/pymodules/python2.6/cql/__init__.pyc'
 Using thrift lib: module 'thrift' from
 '/usr/lib/pymodules/python2.6/thrift/__init__.pyc'
 Connection error: Invalid method name: 'set_cql_version'

 I believe it is some version mismatch. But this was the DataStax AMI; I
 thought everything would be coordinated, and I am not sure what to check for.


 Thanks,


can't start cqlsh on new Amazon node

2012-11-07 Thread Tamar Fraenkel
Hi!
I installed a new cluster using the DataStax AMI with --release 1.0.11, so I
have Cassandra 1.0.11 installed.
The nodes have python-cql 1.0.10-1 and python 2.6.

The cluster works well, BUT when I try to connect with cqlsh I get:
*cqlsh --debug --cqlversion=2 localhost 9160*
Using CQL driver: module 'cql' from
'/usr/lib/pymodules/python2.6/cql/__init__.pyc'
Using thrift lib: module 'thrift' from
'/usr/lib/pymodules/python2.6/thrift/__init__.pyc'
Connection error: Invalid method name: 'set_cql_version'
*This is the same if I choose cqlversion=3.*

*Any idea how to solve?*

*Thanks,*

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: can't start cqlsh on new Amazon node

2012-11-07 Thread Tamar Fraenkel
Nope...
Same error:

*cqlsh --debug --cql3 localhost 9160*
Using CQL driver: module 'cql' from
'/usr/lib/pymodules/python2.6/cql/__init__.pyc'
Using thrift lib: module 'thrift' from
'/usr/lib/pymodules/python2.6/thrift/__init__.pyc'
Connection error: Invalid method name: 'set_cql_version'

I believe it is some version mismatch. But this was the DataStax AMI; I
thought everything would be coordinated, and I am not sure what to check for.

Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Nov 8, 2012 at 4:56 AM, Jason Wee peich...@gmail.com wrote:

 should it be --cql3 ?
 http://www.datastax.com/docs/1.1/dml/using_cql#start-cql3




Re: compression

2012-10-29 Thread Tamar Fraenkel
Hi!
Thanks Aaron!
Today I restarted Cassandra on that node and ran scrub again; now it is
fine.

I am worried though that if I decide to change another CF to use
compression I will have that issue again. Any clue how to avoid it?

Thanks.

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Wed, Sep 26, 2012 at 3:40 AM, aaron morton aa...@thelastpickle.com wrote:

 Check the logs on nodes 2 and 3 to see if the scrub started. The logs on
 node 1 will be a good help with that.
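
 That check is a one-liner per node (a sketch; the default log path of the
 packaged install is an assumption):

 grep -i scrub /var/log/cassandra/system.log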

 Cheers

   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 24/09/2012, at 10:31 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!
 I ran
 UPDATE COLUMN FAMILY cf_name WITH
 compression_options={sstable_compression:SnappyCompressor,
 chunk_length_kb:64};

 I then ran on all my nodes (3)
 sudo nodetool -h localhost scrub tok cf_name

 I have replication factor 3. The size of the data on disk was cut in half
 on the first node, and in JMX I can see that indeed the compression
 ratio is 0.46. But on nodes 2 and 3 nothing happened: in JMX I can see that
 the compression ratio is 0, and the size of the files on disk stayed the
 same.

 In cli

 ColumnFamily: cf_name
   Key Validation Class: org.apache.cassandra.db.marshal.UUIDType
   Default column value validator:
 org.apache.cassandra.db.marshal.UTF8Type
   Columns sorted by:
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
   Row cache size / save period in seconds / keys to save : 0.0/0/all
   Row Cache Provider:
 org.apache.cassandra.cache.SerializingCacheProvider
   Key cache size / save period in seconds: 20.0/14400
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 1.0
   Replicate on write: true
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy:
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
   Compression Options:
 chunk_length_kb: 64
 sstable_compression:
 org.apache.cassandra.io.compress.SnappyCompressor

 Can anyone help?
 Thanks

  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Mon, Sep 24, 2012 at 8:37 AM, Tamar Fraenkel ta...@tok-media.com wrote:

 Thanks all, that helps. Will start with one or two CFs and let you know
 the effect.


 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Sun, Sep 23, 2012 at 8:21 PM, Hiller, Dean dean.hil...@nrel.gov wrote:

 As well, your unlimited column names may all have the same prefix, right?
 Like accounts.rowkey56, accounts.rowkey78, etc., so the accounts prefix
 gets a ton of compression then.

 Later,
 Dean

 From: Tyler Hobbs ty...@datastax.com
 Reply-To: user@cassandra.apache.org
 Date: Sunday, September 23, 2012 11:46 AM
 To: user@cassandra.apache.org
 Subject: Re: compression

 [...] column metadata, you're still likely to get a reasonable amount of
 compression.  This is especially true if there is some amount of repetition
 in the column names, values, or TTLs in wide rows.  Compression will almost
 always be beneficial unless you're already somehow CPU bound or are using
 large column values that are high in entropy, such as pre-compressed or
 encrypted data.






Re: how to stop hinted handoff

2012-10-25 Thread Tamar Fraenkel
Thanks, that did the trick!

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Oct 11, 2012 at 3:42 AM, Roshan codeva...@gmail.com wrote:

 Hello

 You can delete the hints from JConsole by using the HintedHandOffManager
 MBean.

 Thanks.
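
 A scripted route to the same MBean (a sketch using the third-party jmxterm
 jar against the default JMX port 7199; the bean and operation names are the
 ones exposed by the 1.0.x HintedHandOffManager, and 10.0.0.1 is a
 placeholder endpoint):

 echo 'run -b org.apache.cassandra.db:type=HintedHandoffManager deleteHintsForEndpoint 10.0.0.1' | \
   java -jar jmxterm.jar -l localhost:7199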








Re: Hinted Handoff runs every ten minutes

2012-10-24 Thread Tamar Fraenkel
Is there a workaround other than upgrading?
Thanks,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Wed, Oct 24, 2012 at 1:56 PM, Brandon Williams dri...@gmail.com wrote:

 On Sun, Oct 21, 2012 at 6:44 PM, aaron morton aa...@thelastpickle.com
 wrote:
  I *think* this may be ghost rows which have not been compacted.

 You would be correct in the case of 1.0.8:
 https://issues.apache.org/jira/browse/CASSANDRA-3955

 -Brandon


Re: Hinted Handoff runs every ten minutes

2012-10-22 Thread Tamar Fraenkel
Hi!
I am having the same issue on 1.0.8.
I checked the number of SSTables: two nodes have 1 (each), and one node
has none.
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, Oct 22, 2012 at 1:44 AM, aaron morton aa...@thelastpickle.com wrote:

 I *think* this may be ghost rows which have not been compacted.

 How many SSTables are on disk for the HintedHandoff CF?
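
 That count can be read straight off the data directory (a sketch; the
 default data path is an assumption, and in 1.0.x hints live in the system
 keyspace's HintsColumnFamily):

 ls /var/lib/cassandra/data/system/ | grep -c 'HintsColumnFamily.*-Data\.db'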

 Cheers

   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 19/10/2012, at 7:16 AM, David Daeschler david.daesch...@gmail.com
 wrote:

 Hi Steve,

 Also confirming this. After having a node go down on Cassandra 1.0.8
 there seems to be hinted handoff between two of our 4 nodes every 10
 minutes. Our setup also shows 0 rows. It does not appear to have any
 effect on the operation of the ring, just fills up the log files.

 - David



 On Thu, Oct 18, 2012 at 2:10 PM, Stephen Pierce spie...@verifyle.com
 wrote:

 I installed Cassandra on three nodes. I then ran a test suite against them
 to generate load. The test suite is designed to generate the same type of
 load that we plan to have in production. As one of many tests, I reset one
 of the nodes to check the failure/recovery modes.  Cassandra worked just
 fine.



 I stopped the load generation, and got distracted with some other
 project/problem. A few days later, I noticed something strange on one of
 the
 nodes. On this node hinted handoff starts every ten minutes, and while it
 seems to finish without any errors, it will be started again in ten
 minutes.
 None of the nodes has any traffic, and hasn’t for several days. I checked
 the logs, and this goes back to the initial failure/recovery testing:



 INFO [HintedHandoff:1] 2012-10-18 10:19:26,618 HintedHandOffManager.java
 (line 294) Started hinted handoff for token:
 113427455640312821154458202477256070484 with IP: /192.168.128.136

 INFO [HintedHandoff:1] 2012-10-18 10:19:26,779 HintedHandOffManager.java
 (line 390) Finished hinted handoff of 0 rows to endpoint /192.168.128.136

 INFO [HintedHandoff:1] 2012-10-18 10:29:26,622 HintedHandOffManager.java
 (line 294) Started hinted handoff for token:
 113427455640312821154458202477256070484 with IP: /192.168.128.136

 INFO [HintedHandoff:1] 2012-10-18 10:29:26,735 HintedHandOffManager.java
 (line 390) Finished hinted handoff of 0 rows to endpoint /192.168.128.136

 INFO [HintedHandoff:1] 2012-10-18 10:39:26,624 HintedHandOffManager.java
 (line 294) Started hinted handoff for token:
 113427455640312821154458202477256070484 with IP: /192.168.128.136

 INFO [HintedHandoff:1] 2012-10-18 10:39:26,751 HintedHandOffManager.java
 (line 390) Finished hinted handoff of 0 rows to endpoint /192.168.128.136



 The other nodes are happy and don’t show this behavior. All the test data
 is
 readable, and everything is fine, but I’m curious why hinted handoff is
 running on one node all the time.



 I searched the bug database, and I found a bug that seems to have the same
 symptoms:

 https://issues.apache.org/jira/browse/CASSANDRA-3733

 Although it’s been marked fixed in 0.6, this describes my problem exactly.



 I’m running Cassandra 1.1.5 from Datastax on Centos 6.0:


 http://rpm.datastax.com/community/noarch/apache-cassandra11-1.1.5-1.noarch.rpm



 Is anyone else seeing this behavior? What can I do to provide more
 information?



 Steve





 --
 David Daeschler




Performance problems Unbalanced disk io on my cassandra cluster

2012-10-16 Thread Tamar Fraenkel
Hi!

*Problem*
I have one node which seems to be in a bad situation, with lots of dropped
reads for a long time.

*My cluster*
I have a 3-node cluster of Amazon m1.large instances, built from the DataStax
AMI, running Cassandra 1.0.8.
RF=3, RCL=WCL=QUORUM
I use Hector, which should be doing round-robin of the requests between the
nodes.
The cluster is not under much load.

*Info*
Using OpsCenter I can see that:

The number of read/write requests is distributed evenly between the nodes.
Disk latency (both read and write) and disk throughput are much worse
on one of the nodes.

*This is also visible in iostats*
Good node
Device:  rrqm/s  wrqm/s      r/s   w/s    rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvdb       0.58    0.03    42.81  1.31   2710.90   104.62     63.82      0.02    5.96   0.48   2.14
xvdc       0.57    0.00    42.85  1.30   2712.72   104.83     63.81      0.20    4.60   0.48   2.12

Device:  rrqm/s  wrqm/s      r/s   w/s    rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvdb       5.60    0.10   456.50  0.40  32729.60    28.00     71.70     19.65   43.00   0.36  16.50
xvdc       4.10    0.00   460.00  0.80  33342.40    60.80     72.49     17.55   38.09   0.35  16.00

Device:  rrqm/s  wrqm/s      r/s   w/s    rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvdb       4.70    0.10   608.20  1.10  39217.60    77.70     64.49     26.04   42.73   0.39  23.50
xvdc       5.70    0.00   606.80  0.60  38645.60    24.00     63.66     22.89   37.69   0.38  23.10


Bad Node
Device:  rrqm/s  wrqm/s      r/s   w/s    rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvdb       0.67    0.03    51.72  1.02   3330.21    80.62     64.67      0.06    1.19   0.60   3.16
xvdc       0.67    0.00    51.66  1.02   3329.23    80.85     64.73      0.15    2.84   0.60   3.17

Device:  rrqm/s  wrqm/s      r/s   w/s    rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvdb      16.50    0.10  1484.70  0.80  88937.60    52.90     59.91    115.07   77.11   0.58  86.00
xvdc      16.20    0.00  1492.80  0.60  89701.60    43.20     60.09    102.80   69.06   0.58  86.10

Device:  rrqm/s  wrqm/s      r/s   w/s    rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvdb      14.00    0.10  1260.00  0.70  81632.00    33.70     64.78     76.96   61.56   0.54  68.10
xvdc      15.50    0.10  1257.60  0.90  80932.00    63.20     64.36     88.94   70.90   0.53  67.10


*Question*
This does not make sense to me: why would one node do many more reads and
writes, reading more sectors with higher utilization and wait times?
Could it be an Amazon issue? I don't think so.
This could of course be the result of flushing and compactions, but it
persists for a long time, even when no compaction is happening.
What would you do to further explore or fix the problem?


Thank you very much!!
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: READ messages dropped

2012-10-12 Thread Tamar Fraenkel
Hi!
Thanks for the response. My cluster has been in a bad state these past few days.

I have 29 CFs, and my disk is 5% full... So I guess the VMs still have more
space to go, and I am not sure this is considered many CFs.

But maybe I have memory issues. I enlarged Cassandra's memory from ~2G
to ~4G (out of ~8G). This was done because at that stage I had large key
caches. I have since reduced them to almost 0 on all CFs. I guess now I can
reduce the memory back to ~2 or ~3 G. Will that help?
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Oct 11, 2012 at 10:46 PM, Tyler Hobbs ty...@datastax.com wrote:

 On Wed, Oct 10, 2012 at 3:10 PM, Tamar Fraenkel ta...@tok-media.com wrote:


  What I did notice while looking at the logs (the nodes are also running
  OpsCenter) is that there is some correlation between the dropped reads and
  flushes of OpsCenter column families to disk and/or compactions. What are
  the rollups CFs? Why is there so much traffic in them?


 The rollups CFs hold the performance metric data that OpsCenter stores
 about your cluster.  Typically these aren't actually very high traffic
 column families, but that depends on how many column families you have
 (more CFs require more metrics to be stored).  If you have a lot of column
 families, you have a couple of options for reducing the amount of metric
 data that's stored:
 http://www.datastax.com/docs/opscenter/trouble_shooting_opsc#limiting-the-metrics-collected-by-opscenter

 Assuming you don't have a large number of CFs, your nodes may legitimately
 be nearing capacity.

 --
 Tyler Hobbs
 DataStax http://datastax.com/



Re: unbalanced ring

2012-10-10 Thread Tamar Fraenkel
Hi!
I am re-posting this, now that I have more data and still *unbalanced ring*:

3 nodes,
RF=3, RCL=WCL=QUORUM

Address   DC       Rack  Status  State   Load      Owns    Token
                                                           113427455640312821154458202477256070485
x.x.x.x   us-east  1c    Up      Normal  24.02 GB  33.33%  0
y.y.y.y   us-east  1c    Up      Normal  33.45 GB  33.33%  56713727820156410577229101238628035242
z.z.z.z   us-east  1c    Up      Normal  29.85 GB  33.33%  113427455640312821154458202477256070485

Repair runs weekly.
I don't run nodetool compact, as I read that this may cause the regular
minor compactions not to run, and then I will have to run compact
manually. Is that right?

Any idea if this means something wrong, and if so, how to solve?

Thanks,
*
Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Mar 27, 2012 at 9:12 AM, Tamar Fraenkel ta...@tok-media.com wrote:

 Thanks, I will wait and see as data accumulates.
 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Mar 27, 2012 at 9:00 AM, R. Verlangen ro...@us2.nl wrote:

 Cassandra is built to store tons and tons of data. In my opinion roughly
 ~ 6MB per node is not enough data to allow it to become a fully balanced
 cluster.


 2012/3/27 Tamar Fraenkel ta...@tok-media.com

 This morning I have
 nodetool ring -h localhost
Address         DC       Rack  Status  State   Load     Owns    Token
                                                                113427455640312821154458202477256070485
10.34.158.33    us-east  1c    Up      Normal  5.78 MB  33.33%  0
10.38.175.131   us-east  1c    Up      Normal  7.23 MB  33.33%  56713727820156410577229101238628035242
10.116.83.10    us-east  1c    Up      Normal  5.02 MB  33.33%  113427455640312821154458202477256070485

 Version is 1.0.8.


  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Mar 27, 2012 at 4:05 AM, Maki Watanabe 
 watanabe.m...@gmail.comwrote:

 What version are you using?
 Anyway, try nodetool repair & compact.

 maki


 2012/3/26 Tamar Fraenkel ta...@tok-media.com

 Hi!
 I created an Amazon ring using the DataStax image and started filling the
 db. The cluster seems unbalanced.

 nodetool ring returns:
 Address         DC       Rack  Status  State   Load       Owns    Token
                                                                   113427455640312821154458202477256070485
 10.34.158.33    us-east  1c    Up      Normal  514.29 KB  33.33%  0
 10.38.175.131   us-east  1c    Up      Normal  1.5 MB     33.33%  56713727820156410577229101238628035242
 10.116.83.10    us-east  1c    Up      Normal  1.5 MB     33.33%  113427455640312821154458202477256070485

 [default@tok] describe;
 Keyspace: tok:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:2]

 [default@tok] describe cluster;
 Cluster Information:
Snitch: org.apache.cassandra.locator.Ec2Snitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
 4687d620-7664-11e1--1bcb936807ff: [10.38.175.131,
 10.34.158.33, 10.116.83.10]


 Any idea what is the cause?
 I am running similar code on a local ring and it is balanced.

 How can I fix this?

 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956








 --
 With kind regards,

 Robin Verlangen
 www.robinverlangen.nl




Re: unbalanced ring

2012-10-10 Thread Tamar Fraenkel
Hi!
Apart from the heavy load (of the compaction), will it have other effects?
Also, will cleanup help if I have replication factor = number of nodes?
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Wed, Oct 10, 2012 at 6:12 PM, B. Todd Burruss bto...@gmail.com wrote:

 major compaction in production is fine, however it is a heavy operation on
 the node and will take I/O and some CPU.

 the only time i have seen this happen is when i have changed the tokens in
 the ring, like nodetool movetoken.  cassandra does not auto-delete data
 that it doesn't use anymore just in case you want to move the tokens again
 or otherwise undo.

 try nodetool cleanup
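
 For reference, that is (a sketch; run on each node, optionally scoped to a
 keyspace):

 nodetool -h localhost cleanup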


 On Wed, Oct 10, 2012 at 2:01 AM, Alain RODRIGUEZ arodr...@gmail.com wrote:

 Hi,

 Same thing here:

 2 nodes, RF = 2. RCL = 1, WCL = 1.
 Like Tamar I never ran a major compaction, and repair runs once a week on each
 node.

 10.59.21.241eu-west 1b  Up Normal  133.02 GB
 50.00%  0
 10.58.83.109eu-west 1b  Up Normal  98.12 GB
  50.00%  85070591730234615865843651857942052864

 What phenomenon could explain the result above?

 By the way, I have copied the data and imported it into a one-node dev cluster.
 There I ran a major compaction and the size of my data was
 significantly reduced (to about 32 GB instead of 133 GB).

 How is that possible?
 Do you think that if I run a major compaction on both nodes it will balance
 the load evenly?
 Should I run major compaction in production?

 2012/10/10 Tamar Fraenkel ta...@tok-media.com

 Hi!
 I am re-posting this, now that I have more data and still *unbalanced
 ring*:

 3 nodes,
 RF=3, RCL=WCL=QUORUM


 Address DC  RackStatus State   Load
 OwnsToken

 113427455640312821154458202477256070485
 x.x.x.xus-east 1c  Up Normal  24.02 GB
 33.33%  0
 y.y.y.y us-east 1c  Up Normal  33.45 GB
 33.33%  56713727820156410577229101238628035242
 z.z.z.zus-east 1c  Up Normal  29.85 GB
 33.33%  113427455640312821154458202477256070485
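
 (For what it's worth, ownership itself is balanced: the tokens above are the
 evenly spaced RandomPartitioner tokens. A quick check in plain Python, using
 the partitioner's 2**127 token range:

  for i in range(3):
      print(i * 2**127 // 3)

 prints exactly the three tokens in the ring, so the skew is in data volume
 per node rather than in token assignment.)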

 repair runs weekly.
 I don't run nodetool compact, as I read that a major compaction may prevent the
 regular minor compactions from running, after which I would have to run compact
 manually. Is that right?

 Any idea if this means something is wrong, and if so, how to solve it?


 Thanks,
 *
 Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Mar 27, 2012 at 9:12 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 Thanks, I will wait and see as data accumulates.
 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Mar 27, 2012 at 9:00 AM, R. Verlangen ro...@us2.nl wrote:

 Cassandra is built to store tons and tons of data. In my opinion,
 roughly 6MB per node is not enough data to allow it to become a fully
 balanced cluster.


 2012/3/27 Tamar Fraenkel ta...@tok-media.com

 This morning I have
  nodetool ring -h localhost
 Address DC  RackStatus State   Load
  OwnsToken

  113427455640312821154458202477256070485
 10.34.158.33us-east 1c  Up Normal  5.78 MB
   33.33%  0
 10.38.175.131   us-east 1c  Up Normal  7.23 MB
   33.33%  56713727820156410577229101238628035242
  10.116.83.10us-east 1c  Up Normal  5.02 MB
   33.33%  113427455640312821154458202477256070485

 Version is 1.0.8.


  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Mar 27, 2012 at 4:05 AM, Maki Watanabe 
 watanabe.m...@gmail.com wrote:

 What version are you using?
 Anyway try nodetool repair and compact.

 maki


 2012/3/26 Tamar Fraenkel ta...@tok-media.com

 Hi!
 I created an Amazon ring using the DataStax image and started filling the
 db.
 The cluster seems un-balanced.

 nodetool ring returns:
 Address DC  RackStatus State   Load
OwnsToken

113427455640312821154458202477256070485
 10.34.158.33us-east 1c  Up Normal  514.29 KB
 33.33%  0
 10.38.175.131   us-east 1c  Up Normal  1.5 MB
33.33%  56713727820156410577229101238628035242
 10.116.83.10us-east 1c  Up Normal  1.5 MB
33.33%  113427455640312821154458202477256070485

 [default@tok] describe;
 Keyspace: tok:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:2]

 [default@tok] describe cluster;
 Cluster Information:
Snitch: org.apache.cassandra.locator.Ec2Snitch

Re: READ messages dropped

2012-10-10 Thread Tamar Fraenkel
Hi!
Thanks for the answer.
I don't see much change in the load this Cassandra cluster is under, so why
the sudden surge of such messages?
What I did notice while looking at the logs (the nodes are also running
OpsCenter) is that there is some correlation between the dropped reads and
flushes of OpsCenter column families to disk and/or compactions. What are
the rollups CFs? Why is there so much traffic in them?
Thanks,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Wed, Oct 10, 2012 at 1:00 AM, aaron morton aa...@thelastpickle.comwrote:

 or how to solve it?

 The simple solution is to move to m1.xlarge :)

 In the last 3 days I see many messages of READ messages dropped in last
 5000ms on one of my 3 nodes cluster.

 The node is not able to keep up with the load.

 Possible causes include excessive GC, aggressive compaction, or simply too
 many requests.

 it is also a good idea to take a look at iostat to see if the disk is keeping
 up.
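 For example, something like iostat -x 5 on each node, watching await and
 %util on the data volumes (this assumes the sysstat package is installed).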

 Hope that helps

   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 9/10/2012, at 9:08 AM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!
 In the last 3 days I see many messages of READ messages dropped in last
 5000ms on one of my 3 nodes cluster.
 I see no errors in the log.
 There are also messages of Finished hinted handoff of 0 rows to endpoint
 but I had those for a while now, so I don't know if they are related.
 I am running Cassandra 1.0.8 on a 3 node cluster on EC2 m1.large
 instances. Rep factor 3 (Quorum read and write)

 Does anyone have a clue what I should be looking for, or how to solve it?
 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956






READ messages dropped

2012-10-08 Thread Tamar Fraenkel
Hi!
In the last 3 days I see many messages of READ messages dropped in last
5000ms on one of my 3 nodes cluster.
I see no errors in the log.
There are also messages of Finished hinted handoff of 0 rows to endpoint
but I had those for a while now, so I don't know if they are related.
I am running Cassandra 1.0.8 on a 3 node cluster on EC2 m1.large instances.
Rep factor 3 (Quorum read and write)

Does anyone have a clue what I should be looking for, or how to solve it?
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: cassandra key cache question

2012-10-01 Thread Tamar Fraenkel
Created https://issues.apache.org/jira/browse/CASSANDRA-4742
Any clue regarding the first question in this thread (key cache > number of
keys in CF, and not many deletes on that CF)?

Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, Oct 1, 2012 at 2:28 AM, aaron morton aa...@thelastpickle.comwrote:

 I had a quick look at the code (1.1X) and could not see a facility to
 purge dropped cf's from key or row caches.

 Could you create a ticket on
 https://issues.apache.org/jira/browse/CASSANDRA ?

 Cheers

   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 27/09/2012, at 9:58 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!

 One more question:
 I have a couple of dropped column families, and in the JMX console I don't
 see them under org.apache.cassandra.db.ColumnFamilies, *BUT *I do see
 them under org.apache.cassandra.db.Caches, and the cache is not empty!
 Does it mean that Cassandra still keeps memory busy doing caching for a
 non-existing column family? If so, how do I remove those caches?

 Thanks!

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Thu, Sep 27, 2012 at 11:45 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 Hi!
 Is it possible that in JMX and cfstats the Key cache size is much bigger
 than the number of keys in the CF?
 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956







Re: compression

2012-09-28 Thread Tamar Fraenkel
Hi!
The situation hasn't resolved itself; does anyone have a clue?
Thanks

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Sep 27, 2012 at 10:42 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 Hi!
 First, the problem is still there, although I checked and all nodes agree on
 the schema.
 This is from ls -l
 Good Node
 -rw-r--r-- 1 cassandra cassandra606 2012-09-27 08:01
 tk_usus_user-hc-269-CompressionInfo.db
 -rw-r--r-- 1 cassandra cassandra2246431 2012-09-27 08:01
 tk_usus_user-hc-269-Data.db
 -rw-r--r-- 1 cassandra cassandra  11056 2012-09-27 08:01
 tk_usus_user-hc-269-Filter.db
 -rw-r--r-- 1 cassandra cassandra 129792 2012-09-27 08:01
 tk_usus_user-hc-269-Index.db
 -rw-r--r-- 1 cassandra cassandra   4336 2012-09-27 08:01
 tk_usus_user-hc-269-Statistics.db

 Node 2
 -rw-r--r-- 1 cassandra cassandra4592393 2012-09-27 08:01
 tk_usus_user-hc-268-Data.db
 -rw-r--r-- 1 cassandra cassandra 69 2012-09-27 08:01
 tk_usus_user-hc-268-Digest.sha1
 -rw-r--r-- 1 cassandra cassandra  11056 2012-09-27 08:01
 tk_usus_user-hc-268-Filter.db
 -rw-r--r-- 1 cassandra cassandra 129792 2012-09-27 08:01
 tk_usus_user-hc-268-Index.db
 -rw-r--r-- 1 cassandra cassandra   4336 2012-09-27 08:01
 tk_usus_user-hc-268-Statistics.db

 Node 3
 -rw-r--r-- 1 cassandra cassandra   4592393 2012-09-27 08:01
 tk_usus_user-hc-278-Data.db
 -rw-r--r-- 1 cassandra cassandra69 2012-09-27 08:01
 tk_usus_user-hc-278-Digest.sha1
 -rw-r--r-- 1 cassandra cassandra 11056 2012-09-27 08:01
 tk_usus_user-hc-278-Filter.db
 -rw-r--r-- 1 cassandra cassandra129792 2012-09-27 08:01
 tk_usus_user-hc-278-Index.db
 -rw-r--r-- 1 cassandra cassandra  4336 2012-09-27 08:01
 tk_usus_user-hc-278-Statistics.db

 Looking at the logs, on the good node I can see

  INFO [MigrationStage:1] 2012-09-24 10:08:16,511 Migration.java (line 119)
 Applying migration c22413b0-062f-11e2--1bcb936807db Update column
 family to org.apache.cassandra.config.CFMetaData@1dbdcde9
 [cfId=1016,ksName=tok,cfName=tk_usus_user,cfType=Standard,comparator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),subcolumncomparator=null,comment=,rowCacheSize=0.0,keyCacheSize=20.0,readRepairChance=1.0,replicateOnWrite=true,gcGraceSeconds=864000,defaultValidator=org.apache.cassandra.db.marshal.UTF8Type,keyValidator=org.apache.cassandra.db.marshal.UUIDType,minCompactionThreshold=4,maxCompactionThreshold=32,rowCacheSavePeriodInSeconds=0,keyCacheSavePeriodInSeconds=14400,rowCacheKeysToSave=2147483647,rowCacheProvider=org.apache.cassandra.cache.SerializingCacheProvider@3505231c,mergeShardsChance=0.1,keyAlias=java.nio.HeapByteBuffer[pos=485
 lim=488 cap=653],column_metadata={},compactionStrategyClass=class
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,compactionStrategyOptions={},compressionOptions={sstable_compression=org.apache.cassandra.io.compress.SnappyCompressor,
 chunk_length_kb=64},bloomFilterFpChance=null]

 But the same can be seen in the logs of the two other nodes:
  INFO [MigrationStage:1] 2012-09-24 10:08:16,767 Migration.java (line 119)
 Applying migration c22413b0-062f-11e2--1bcb936807db Update column
 family to org.apache.cassandra.config.CFMetaData@24fbb95d
 [cfId=1016,ksName=tok,cfName=tk_usus_user,cfType=Standard,comparator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),subcolumncomparator=null,comment=,rowCacheSize=0.0,keyCacheSize=20.0,readRepairChance=1.0,replicateOnWrite=true,gcGraceSeconds=864000,defaultValidator=org.apache.cassandra.db.marshal.UTF8Type,keyValidator=org.apache.cassandra.db.marshal.UUIDType,minCompactionThreshold=4,maxCompactionThreshold=32,rowCacheSavePeriodInSeconds=0,keyCacheSavePeriodInSeconds=14400,rowCacheKeysToSave=2147483647,rowCacheProvider=org.apache.cassandra.cache.SerializingCacheProvider@a469ba3,mergeShardsChance=0.1,keyAlias=java.nio.HeapByteBuffer[pos=0
 lim=3 cap=3],column_metadata={},compactionStrategyClass=class
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,compactionStrategyOptions={},compressionOptions={sstable_compression=org.apache.cassandra.io.compress.SnappyCompressor,
 chunk_length_kb=64},bloomFilterFpChance=null]

  INFO [MigrationStage:1] 2012-09-24 10:08:16,705 Migration.java (line 119)
 Applying migration c22413b0-062f-11e2--1bcb936807db Update column
 family to org.apache.cassandra.config.CFMetaData@216b6a58
 [cfId=1016,ksName=tok,cfName=tk_usus_user,cfType=Standard,comparator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),subcolumncomparator=null,comment=,rowCacheSize=0.0,keyCacheSize=20.0,readRepairChance=1.0,replicateOnWrite=true,gcGraceSeconds=864000

Re: compression

2012-09-27 Thread Tamar Fraenkel
=java.nio.HeapByteBuffer[pos=0
lim=3 cap=3],column_metadata={},compactionStrategyClass=class
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,compactionStrategyOptions={},compressionOptions={sstable_compression=org.apache.cassandra.io.compress.SnappyCompressor,
chunk_length_kb=64},bloomFilterFpChance=null]


I can also see scrub messages in logs
Good node:
 INFO [CompactionExecutor:1774] 2012-09-24 10:09:05,402
CompactionManager.java (line 476) Scrubbing
SSTableReader(path='/raid0/cassandra/data/tok/tk_usus_user-hc-264-Data.db')
 INFO [CompactionExecutor:1774] 2012-09-24 10:09:05,934
CompactionManager.java (line 658) Scrub of
SSTableReader(path='/raid0/cassandra/data/tok/tk_usus_user-hc-264-Data.db')
complete: 4868 rows in new sstable and 0 empty (tombstoned) rows dropped

Other nodes

 INFO [CompactionExecutor:1800] 2012-09-24 10:09:11,789
CompactionManager.java (line 476) Scrubbing
SSTableReader(path='/raid0/cassandra/data/tok/tk_usus_user-hc-260-Data.db')
 INFO [CompactionExecutor:1800] 2012-09-24 10:09:12,464
CompactionManager.java (line 658) Scrub of
SSTableReader(path='/raid0/cassandra/data/tok/tk_usus_user-hc-260-Data.db')
complete: 4868 rows in new sstable and 0 empty (tombstoned) rows dropped

 INFO [CompactionExecutor:1687] 2012-09-24 10:09:16,235
CompactionManager.java (line 476) Scrubbing
SSTableReader(path='/raid0/cassandra/data/tok/tk_usus_user-hc-271-Data.db')
 INFO [CompactionExecutor:1687] 2012-09-24 10:09:16,898
CompactionManager.java (line 658) Scrub of
SSTableReader(path='/raid0/cassandra/data/tok/tk_usus_user-hc-271-Data.db')
complete: 4868 rows in new sstable and 0 empty (tombstoned) rows dropped

Any idea?
Thanks!!

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Wed, Sep 26, 2012 at 3:40 AM, aaron morton aa...@thelastpickle.comwrote:

 Check the logs on  nodes 2 and 3 to see if the scrub started. The logs on
 1 will be a good help with that.

 Cheers

   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 24/09/2012, at 10:31 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!
 I ran
 UPDATE COLUMN FAMILY cf_name WITH
 compression_options={sstable_compression:SnappyCompressor,
 chunk_length_kb:64};

 I then ran on all my nodes (3)
 sudo nodetool -h localhost scrub tok cf_name

 I have replication factor 3. The size of the data on disk was cut in half
 on the first node, and in JMX I can see that indeed the compression
 ratio is 0.46. But on nodes 2 and 3 nothing happened. In JMX I can see
 that the compression ratio is 0 and the size of the files on disk stayed the
 same.

 In cli

 ColumnFamily: cf_name
   Key Validation Class: org.apache.cassandra.db.marshal.UUIDType
   Default column value validator:
 org.apache.cassandra.db.marshal.UTF8Type
   Columns sorted by:
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)
   Row cache size / save period in seconds / keys to save : 0.0/0/all
   Row Cache Provider:
 org.apache.cassandra.cache.SerializingCacheProvider
   Key cache size / save period in seconds: 20.0/14400
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 1.0
   Replicate on write: true
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy:
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
   Compression Options:
 chunk_length_kb: 64
 sstable_compression:
 org.apache.cassandra.io.compress.SnappyCompressor

 Can anyone help?
 Thanks

  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Mon, Sep 24, 2012 at 8:37 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 Thanks all, that helps. Will start with one or two CFs and let you know
 the effect.


 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Sun, Sep 23, 2012 at 8:21 PM, Hiller, Dean dean.hil...@nrel.govwrote:

 As well, your unlimited column names may all have the same prefix,
 right? Like accounts.rowkey56, accounts.rowkey78, etc., so the
 accounts prefix gets a ton of compression then.

 Later,
 Dean

 From: Tyler Hobbs ty...@datastax.com
 Reply-To: user@cassandra.apache.org
 Date: Sunday, September 23, 2012 11:46 AM
 To: user@cassandra.apache.org
 Subject: Re: compression

  column metadata, you're still likely to get a reasonable amount of
 compression.  This is especially true

cassandra key cache question

2012-09-27 Thread Tamar Fraenkel
Hi!
Is it possible that in JMX and cfstats the Key cache size is much bigger
than the number of keys in the CF?
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: cassandra key cache question

2012-09-27 Thread Tamar Fraenkel
Hi!
One more question:
I have a couple of dropped column families, and in the JMX console I don't
see them under org.apache.cassandra.db.ColumnFamilies, *BUT *I do see them
under org.apache.cassandra.db.Caches, and the cache is not empty!
Does it mean that Cassandra still keeps memory busy doing caching for a
non-existing column family? If so, how do I remove those caches?

Thanks!

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Sep 27, 2012 at 11:45 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 Hi!
 Is it possible that in JMX and cfstats the Key cache size is much bigger
 than the number of keys in the CF?
 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





Re: compression

2012-09-24 Thread Tamar Fraenkel
Thanks all, that helps. Will start with one or two CFs and let you know the
effect.

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Sun, Sep 23, 2012 at 8:21 PM, Hiller, Dean dean.hil...@nrel.gov wrote:

 As well, your unlimited column names may all have the same prefix,
 right? Like accounts.rowkey56, accounts.rowkey78, etc., so the
 accounts prefix gets a ton of compression then.

 Later,
 Dean

 From: Tyler Hobbs ty...@datastax.com
 Reply-To: user@cassandra.apache.org
 Date: Sunday, September 23, 2012 11:46 AM
 To: user@cassandra.apache.org
 Subject: Re: compression

  column metadata, you're still likely to get a reasonable amount of
 compression.  This is especially true if there is some amount of repetition
 in the column names, values, or TTLs in wide rows.  Compression will almost
 always be beneficial unless you're already somehow CPU bound or are using
 large column values that are high in entropy, such as pre-compressed or
 encrypted data.


compression

2012-09-23 Thread Tamar Fraenkel
Hi!
In the DataStax documentation (http://www.datastax.com/docs/1.0/ddl/column_family)
there is an explanation of which CFs are a good fit for compression:

When to Use Compression

Compression is best suited for column families where there are many rows,
with each row having the same columns, or at least many columns in common.
For example, a column family containing user data such as username, email,
etc., would be a good candidate for compression. The more similar the data
across rows, the greater the compression ratio will be, and the larger the
gain in read performance.

Compression is not as good a fit for column families where each row has a
different set of columns, or where there are just a few very wide rows.
Dynamic column families such as this will not yield good compression ratios.

I have many column families where rows share some of the columns and have a
varied number of unique columns per row.
For example, I have a CF where each row has ~13 shared columns, but between
0 and many unique columns. Will such a CF be a good fit for compression?

More generally, is there a rule of thumb for how many shared columns (or
what percentage of shared columns) is considered a good fit for
compression?

Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: Heap size question

2012-08-22 Thread Tamar Fraenkel
Hi!
I am running 1.0.8.
So if I understand correctly, both the Memtable and the Key cache are stored in the
heap (I don't have a row cache).
SSTables are mapped into the operating system's virtual memory, so if I
increase the heap I guess there will be less memory for that?

I have seen the changes in 1.1, but we are not there yet.
Thanks,


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Wed, Aug 22, 2012 at 9:23 AM, Thomas Spengler 
thomas.speng...@toptarif.de wrote:

 That's not right.

 Since 1.1.X it will be used.

 My hint:
 take a look at

 commitlog_total_space_in_mb
 It seems to be mapped to off-heap usage;
 the fix for it taking the whole off-heap memory is included in 1.1.3.

 The next parameter you have to take a look at:

 disk_access_mode: mmap_index_only
 #disk_access_mode: auto
 #disk_access_mode: standard

 And the third:

 row_cache_provider:

 The default has changed from 1.0.X to 1.1.X;
 the new one uses an off-heap cache (with and also without JNA).

 Regards
 Tom

 On 08/22/2012 06:37 AM, aaron morton wrote:
  How do I know if my off-heap memory is not used?
 
 
  If you are using the default memory-mapped file access, memory not used
 by the cassandra JVM will be used to cache files.
 
  Cheers
 
  -
  Aaron Morton
  Freelance Developer
  @aaronmorton
  http://www.thelastpickle.com
 
  On 22/08/2012, at 5:17 AM, Tamar Fraenkel ta...@tok-media.com wrote:
 
  Much appreciated.
  What you described makes a lot of sense from all my readings :)
 
  Thanks!
  Tamar Fraenkel
  Senior Software Engineer, TOK Media
 
 
  ta...@tok-media.com
  Tel:   +972 2 6409736
  Mob:  +972 54 8356490
  Fax:   +972 2 5612956
 
 
 
 
 
  On Tue, Aug 21, 2012 at 6:43 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:
  You're welcome. I'll answer your new questions, but keep in mind that
 I am not a cassandra committer nor even a cassandra specialist.
 
  you mean that key cache is not in heap? I am using cassandra 1.0.8 and
 I was under the impression it was, see
 http://www.datastax.com/docs/1.0/operations/tuning, Tuning Java Heap
 Size.
 
 
 http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-improved-memory-and-disk-space-management
 
  If I understood this correctly, it seems that only the row cache is
 off-heap. So it's not an issue for us as far as we don't use row cache.
 
  I thought that key-cache-size + 1GB + memtable space should not exceed
 heap size. Am I wrong?
 
  I don't know if this is a good formula. Datastax gives it so it
 shouldn't be that bad :). However I would say that key-cache-size + 1GB +
 memtable space  should not exceed 0.75 * Max Heap (where 0.75 is
 flush_largest_memtables_at). I keep default key-cache (which is 5% of max
 heap if I remember well on 1.1.x) and default memtable space (1/3 of max
 heap). I have enlarged my heap from 2 to 4 GB because I had some memory
 pressure (sometimes the Heap Used was greater than 0.75 * Max Heap)
 
  WARN [ScheduledTasks:1] 2012-08-20 12:31:46,506 GCInspector.java (line
 145) Heap is 0.7704251937535934 full.  You may need to reduce memtable
 and/or cache sizes.  Cassandra will now flush up to the two largest
 memtables to free up memory.  Adjust flush_largest_memtables_at threshold
 in cassandra.yaml if you don't want Cassandra to do this automatically
 
  This message is the memory pressure I was talking about just above.
 
  How do I know if my off-heap memory is not used?
 
  Well, if you got no row cache and your server is only used as a
 Cassandra node, I'm quite sure you can tune your heap to get 4GB. I guess a
 htop or any memory monitoring system is able to tell you how much your
 memory is used.
 
  I hope I didn't tell you too much bullshit :p.
 
  Alain
 
  2012/8/21 Tamar Fraenkel ta...@tok-media.com
  Thanks for your prompt response. Please see follow-up questions below.
  Thanks!!!
 
 
 
  Tamar Fraenkel
  Senior Software Engineer, TOK Media
 
 
  ta...@tok-media.com
  Tel:   +972 2 6409736
  Mob:  +972 54 8356490
  Fax:   +972 2 5612956
 
 
 
 
 
  On Tue, Aug 21, 2012 at 12:57 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:
  I have the same configuration and I recently changed my
 cassandra-env.sh to:
 
  MAX_HEAP_SIZE=4G
  HEAP_NEWSIZE=200M
 
  I guess it depends on how much you use the cache (which is now in the
 off-heap memory).
 
  you mean that key cache is not in heap? I am using cassandra 1.0.8 and
 I was under the impression it was, see
 http://www.datastax.com/docs/1.0/operations/tuning, Tuning Java Heap Size.
  I thought that key-cache-size + 1GB + memtable space should not exceed
 heap size. Am I wrong?
 
 
  I don't use row cache and use the default key cache size.
  Me too, I have Key Cache capacity of 20 for all my CFs. Currently
 if my calculations are correct I have about 1.4GB of key cache.
 
  I have no more memory pressure nor OOM.
  I don't see OOM, but I do

Heap size question

2012-08-21 Thread Tamar Fraenkel
Hi!
I have a question regarding Cassandra heap size.
Cassandra calculates heap size in cassandra-env.sh according to the
following algorithm:
# set max heap size based on the following
# max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB))
# calculate 1/2 ram and cap to 1024MB
# calculate 1/4 ram and cap to 8192MB
# pick the max

So, for
system_memory_in_mb=7468
half_system_memory_in_mb=3734
quarter_system_memory_in_mb=1867
This will result in
max(min(3734,1024), min(1867,8192)) = max(1024,1867) = *1867MB*, or in other
words 1/4 of RAM.
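
The same arithmetic as a quick sketch in plain Python (the helper name is
mine; the 1024MB and 8192MB caps are the ones from the comments above):

 def default_max_heap_mb(system_memory_in_mb):
     # max(min(1/2 ram, 1024MB), min(1/4 ram, 8192MB))
     half = min(system_memory_in_mb // 2, 1024)
     quarter = min(system_memory_in_mb // 4, 8192)
     return max(half, quarter)

 print(default_max_heap_mb(7468))  # -> 1867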

In http://www.datastax.com/docs/1.0/operations/tuning it says: Cassandra's
default configuration opens the JVM with a heap size of 1/4 of the
available system memory (or a minimum 1GB and maximum of 8GB for systems
with a very low or very high amount of RAM). Heapspace should be a minimum
of 1/2 of your RAM, but a maximum of 8GB. The vast majority of deployments
do not benefit from larger heap sizes because (in most cases) the ability
of Java 6 to gracefully handle garbage collection above 8GB quickly
diminishes.
*If I understand this correctly, this means it is better if my heap size
will be 1/2 of RAM, 3734MB.*
I am running on EC2 m1.large instance (7.5 GB memory, 4 EC2 Compute Units
(2 virtual cores with 2 EC2 Compute Units each)).
My system seems to be suffering from lack of memory, and I should probably
increase heap or (and?) reduce key cache size.

Would you recommend changing the heap to half RAM?

If yes, should I hard-code it in cassandra-env.sh?

Thanks!

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: Heap size question

2012-08-21 Thread Tamar Fraenkel
Thanks for your prompt response. Please see follow-up questions below.
Thanks!!!


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Aug 21, 2012 at 12:57 PM, Alain RODRIGUEZ arodr...@gmail.comwrote:

 I have the same configuration and I recently changed my cassandra-env.sh
 to:

 MAX_HEAP_SIZE=4G
 HEAP_NEWSIZE=200M


 I guess it depends on how much you use the cache (which is now in the
 off-heap memory).


 you mean that key cache is not in heap? I am using cassandra 1.0.8 and I
was under the impression it was, see
http://www.datastax.com/docs/1.0/operations/tuning, Tuning Java Heap Size.
 I thought that key-cache-size + 1GB + memtable space should not exceed
heap size. Am I wrong?


 I don't use row cache and use the default key cache size.

Me too, I have Key Cache capacity of 20 for all my CFs. Currently if my
calculations are correct I have about 1.4GB of key cache.


 I have no more memory pressure nor OOM.

I don't see OOM, but I do see messages like the following in my logs:
INFO [ScheduledTasks:1] 2012-08-20 12:31:46,506 GCInspector.java (line 122)
GC for ParNew: 219 ms for 1 collections, 1491982816 used; max is 1937768448
 WARN [ScheduledTasks:1] 2012-08-20 12:31:46,506 GCInspector.java (line
145) Heap is 0.7704251937535934 full.  You may need to reduce memtable
and/or cache sizes.  Cassandra will now flush up to the two largest
memtables to free up memory.  Adjust flush_largest_memtables_at threshold
in cassandra.yaml if you don't want Cassandra to do this automatically



 I think that if your off-heap memory is unused, it's better to enlarge the
 heap (with a max limit of 8GB).

 How do I know if my off-heap memory is not used?


 Hope this will help.

 Alain

 2012/8/21 Tamar Fraenkel ta...@tok-media.com

 Hi!
 I have a question regarding Cassandra heap size.
 Cassandra calculates heap size in cassandra-env.sh according to the
 following algorithm:
 # set max heap size based on the following
 # max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB))
 # calculate 1/2 ram and cap to 1024MB
 # calculate 1/4 ram and cap to 8192MB
 # pick the max

 So, for
 system_memory_in_mb=7468
 half_system_memory_in_mb=3734
 quarter_system_memory_in_mb=1867
 This will result in
 max(min(3734,1024), min(1867,8192)) = max(1024,1867) = *1867MB*, or in
 other words 1/4 of RAM.

 In http://www.datastax.com/docs/1.0/operations/tuning it says: Cassandra's
 default configuration opens the JVM with a heap size of 1/4 of the
 available system memory (or a minimum 1GB and maximum of 8GB for systems
 with a very low or very high amount of RAM). Heapspace should be a minimum
 of 1/2 of your RAM, but a maximum of 8GB. The vast majority of deployments
 do not benefit from larger heap sizes because (in most cases) the ability
 of Java 6 to gracefully handle garbage collection above 8GB quickly
 diminishes.
 *If I understand this correctly, this means it is better if my heap size
 will be 1/2 of RAM, 3734MB.*
 I am running on EC2 m1.large instance (7.5 GB memory, 4 EC2 Compute
 Units (2 virtual cores with 2 EC2 Compute Units each)).
 My system seems to be suffering from lack of memory, and I should
 probably increase heap or (and?) reduce key cache size.

 Would you recommend changing the heap to half RAM?

 If yes, should I hard-code it in cassandra-env.sh?

 Thanks!

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956






Re: Heap size question

2012-08-21 Thread Tamar Fraenkel
Much appreciated.
What you described makes a lot of sense from all my readings :)

Thanks!
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Aug 21, 2012 at 6:43 PM, Alain RODRIGUEZ arodr...@gmail.com wrote:

 You're welcome. I'll answer your new questions, but keep in mind that I
 am not a cassandra committer nor even a cassandra specialist.

 you mean that key cache is not in heap? I am using cassandra 1.0.8 and I
 was under the impression it was, see
 http://www.datastax.com/docs/1.0/operations/tuning, Tuning Java Heap
 Size.


 http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-improved-memory-and-disk-space-management

 If I understood this correctly, it seems that only the row cache is
 off-heap. So it's not an issue for us as far as we don't use row cache.

 I thought that key-cache-size + 1GB + memtable space should not exceed
 heap size. Am I wrong?

 I don't know if this is a good formula. Datastax gives it so it shouldn't
 be that bad :). However I would say that key-cache-size + 1GB + memtable
 space  should not exceed 0.75 * Max Heap (where 0.75 is
 flush_largest_memtables_at). I keep default key-cache (which is 5% of max
 heap if I remember well on 1.1.x) and default memtable space (1/3 of max
 heap). I have enlarged my heap from 2 to 4 GB because I had some memory
 pressure (sometimes the Heap Used was greater than 0.75 * Max Heap)

 WARN [ScheduledTasks:1] 2012-08-20 12:31:46,506 GCInspector.java (line
 145) Heap is 0.7704251937535934 full.  You may need to reduce memtable
 and/or cache sizes.  Cassandra will now flush up to the two largest
 memtables to free up memory.  Adjust flush_largest_memtables_at threshold
 in cassandra.yaml if you don't want Cassandra to do this automatically

 This message is the memory pressure I was talking about just above.

 How do I know if my off-heap memory is not used?

 Well, if you got no row cache and your server is only used as a Cassandra
 node, I'm quite sure you can tune your heap to get 4GB. I guess a htop or
 any memory monitoring system is able to tell you how much your memory is
 used.

 I hope I didn't tell you too much bullshit :p.

 Alain

 2012/8/21 Tamar Fraenkel ta...@tok-media.com

 Thanks for your prompt response. Please see follow-up questions below.
 Thanks!!!



 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Aug 21, 2012 at 12:57 PM, Alain RODRIGUEZ arodr...@gmail.comwrote:

 I have the same configuration and I recently changed my
 cassandra-env.sh to:

 MAX_HEAP_SIZE=4G
 HEAP_NEWSIZE=200M


 I guess it depends on how much you use the cache (which is now in the
 off-heap memory).


  you mean that key cache is not in heap? I am using cassandra 1.0.8 and
 I was under the impression it was, see
 http://www.datastax.com/docs/1.0/operations/tuning, Tuning Java Heap
 Size.
  I thought that key-cache-size + 1GB + memtable space should not exceed
 heap size. Am I wrong?


 I don't use row cache and use the default key cache size.

 Me too, I have Key Cache capacity of 20 for all my CFs. Currently if
 my calculations are correct I have about 1.4GB of key cache.


 I have no more memory pressure nor OOM.

 I don't see OOM, but I do see messages like the following in my logs:
 INFO [ScheduledTasks:1] 2012-08-20 12:31:46,506 GCInspector.java (line
 122) GC for ParNew: 219 ms for 1 collections, 1491982816 used; max is
 1937768448
  WARN [ScheduledTasks:1] 2012-08-20 12:31:46,506 GCInspector.java (line
 145) Heap is 0.7704251937535934 full.  You may need to reduce memtable
 and/or cache sizes.  Cassandra will now flush up to the two largest
 memtables to free up memory.  Adjust flush_largest_memtables_at threshold
 in cassandra.yaml if you don't want Cassandra to do this automatically



 I think that if your off-heap memory is unused, it's better to enlarge
 the heap (with a max limit of 8GB).

 How do I know if my off-heap memory is not used?


 Hope this will help.

 Alain

 2012/8/21 Tamar Fraenkel ta...@tok-media.com

 Hi!
 I have a question regarding Cassandra heap size.
 Cassandra calculates heap size in cassandra-env.sh according to the
 following algorithm:
 # set max heap size based on the following
 # max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB))
 # calculate 1/2 ram and cap to 1024MB
 # calculate 1/4 ram and cap to 8192MB
 # pick the max

 So, for
 system_memory_in_mb=7468
 half_system_memory_in_mb=3734
 quarter_system_memory_in_mb=1867
 This will result in
 max(min(3734,1024), min(1867,8192)) = max(1024,1867) = *1867MB*, or in
 other words 1/4 of RAM.

 In http://www.datastax.com/docs/1.0/operations/tuning it says: Cassandra's
 default configuration opens the JVM with a heap size of 1/4 of the
 available system memory (or a minimum 1GB

Re: GCInspector info messages in cassandra log

2012-08-16 Thread Tamar Fraenkel
Thank you very much!
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Aug 16, 2012 at 12:11 AM, aaron morton aa...@thelastpickle.comwrote:

 Is there anything to do before that? like drain or flush?

 For a clean shutdown I do

 nodetool -h localhost disablethrift
 nodetool -h localhost disablegossip && sleep 10
 nodetool -h localhost drain
 then kill

 Would you recommend that? If I do it, how often should I do a full
 snapshot, and how often should I backup the backup directory?

 Sounds like you could use Priam and be happier...
 http://techblog.netflix.com/2012/02/announcing-priam.html

  I just saw that there is an option global_snapshot, is it still supported?

 I cannot find it.

 Try Priam or the instructions here, which are pretty much what you have
 described http://www.datastax.com/docs/1.0/operations/backup_restore

 Cheers

   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 15/08/2012, at 4:57 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 Aaron,
 Thank you very much. I will do as you suggested.

 One last question regarding restart:
 I assume, I should do it node by node.
 Is there anything to do before that? like drain or flush?

 I am also considering enabling incremental backups on my cluster.
 Currently I take a daily full snapshot of the cluster, tar it and upload it
 to S3 (the size is now 3.1GB). Would you recommend that? If I do it, how often
 should I do a full snapshot, and how often should I back up the backups
 directory?

 Another snapshot related question, currently I snapshot on each node and
 use parallel-slurp to copy the snapshot to one node where I tar them. I
 just saw that there is an option global_snapshot, is it still supported?
 Does that mean that if I run it on one node the snapshot will contain data
 from the whole cluster? How does it work on restore? Is it better than my current
 backup system?

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Aug 14, 2012 at 11:51 PM, aaron morton aa...@thelastpickle.comwrote:


1. According to cfstats there are some CFs with high Compacted row
maximum sizes (1131752, 4866323 and 25109160). Other max sizes are <
100. Are these considered to be problematic, and what can I do to solve
that?

 They are only 1, 4 and 25 MB. Not too big.

 What should the values of in_memory_compaction_limit_in_mb
 and concurrent_compactors be, and how do I change them?

 Sounds like you don't have very big CFs, so changing the
 in_memory_compaction_limit_in_mb may not make too much difference.

 Try changing concurrent_compactors to 2 in the yaml file. This change
 will let you know if GC and compaction are related.


  change yaml file and restart,

 yes

 What do I do about the long rows? What value is considered too big?

 They churn more memory during compaction. If you had a lot of rows over 32
 MB I would think about it; it does not look that way.

 Cheers


   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 15/08/2012, at 3:15 AM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!
 It helps, but before I do more actions I want to give you some more info,
 and ask some questions:

 *Related Info*

1. According to my yaml file (where do I see these parameters in the
jmx? I couldn't find them):
in_memory_compaction_limit_in_mb: 64
concurrent_compactors: 1, but it is commented out, so I guess it is
the default value
multithreaded_compaction: false
compaction_throughput_mb_per_sec: 16
compaction_preheat_key_cache: true
 2. According to cfstats there are some CFs with high Compacted row
 maximum sizes (1131752, 4866323 and 25109160). Other max sizes are <
 100. Are these considered to be problematic, and what can I do to solve
 that?
3. During compactions Cassandra is slower
4. Running Cassandra Version 1.0.8

 *Questions*
 What should the values of in_memory_compaction_limit_in_mb
 and concurrent_compactors be, and how do I change them? Change the yaml file
 and restart, or can it be done using JMX without restarting Cassandra?
 What do I do about the long rows? What value is considered too big?

 I appreciate your help! Thanks,



 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Aug 14, 2012 at 1:22 PM, aaron morton aa...@thelastpickle.comwrote:

 There are a couple of steps you can take if compaction is causing GC.

 - if you have a lot of wide rows consider reducing
 the in_memory_compaction_limit_in_mb yaml setting. This will slow down
 compaction but will reduce the memory usage.

 - reduce

Re: GCInspector info messages in cassandra log

2012-08-14 Thread Tamar Fraenkel
Hi!
It helps, but before I do more actions I want to give you some more info,
and ask some questions:

*Related Info*

   1. According to my yaml file (where do I see these parameters in the jmx?
   I couldn't find them):
   in_memory_compaction_limit_in_mb: 64
   concurrent_compactors: 1, but it is commented out, so I guess it is the
   default value
   multithreaded_compaction: false
   compaction_throughput_mb_per_sec: 16
   compaction_preheat_key_cache: true
   2. According to cfstats there are some CFs with high Compacted row
   maximum sizes (1131752, 4866323 and 25109160). Other max sizes are <
   100. Are these considered to be problematic, and what can I do to solve
   that?
   3. During compactions Cassandra is slower
   4. Running Cassandra Version 1.0.8

*Questions*
What should the values of in_memory_compaction_limit_in_mb
and concurrent_compactors be, and how do I change them? Change the yaml file and
restart, or can it be done using JMX without restarting Cassandra?
What do I do about the long rows? What value is considered too big?

I appreciate your help! Thanks,



*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Aug 14, 2012 at 1:22 PM, aaron morton aa...@thelastpickle.comwrote:

 There are a couple of steps you can take if compaction is causing GC.

 - if you have a lot of wide rows consider reducing
 the in_memory_compaction_limit_in_mb yaml setting. This will slow down
 compaction but will reduce the memory usage.

 - reduce concurrent_compactors

 Both of these may slow down compaction. Once you have GC under control
 you may want to play with memory settings.

 Hope that helps.
   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 14/08/2012, at 4:45 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!
 I have a 3-node ring running on Amazon EC2.
 About once a week I see compaction messages in the logs and, around the
 same time, info messages about GC (see below) that I think mean it is
 taking too long and happening too often.

 Does it mean I have to reduce my cache size?
 Thanks,
 Tamar

  INFO [ScheduledTasks:1] 2012-08-13 12:50:57,593 GCInspector.java (line
 122) GC for ParNew: 242 ms for 1 collections, 1541590352 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:27,740 GCInspector.java (line
 122) GC for ParNew: 291 ms for 1 collections, 1458227032 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:29,741 GCInspector.java (line
 122) GC for ParNew: 261 ms for 1 collections, 1228861368 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:30,833 GCInspector.java (line
 122) GC for ParNew: 319 ms for 1 collections, 1120131360 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:32,863 GCInspector.java (line
 122) GC for ParNew: 241 ms for 1 collections, 983144216 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:33,864 GCInspector.java (line
 122) GC for ParNew: 215 ms for 1 collections, 967702720 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:34,964 GCInspector.java (line
 122) GC for ParNew: 248 ms for 1 collections, 973803344 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:41,211 GCInspector.java (line
 122) GC for ParNew: 265 ms for 1 collections, 1071933560 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:43,212 GCInspector.java (line
 122) GC for ParNew: 326 ms for 1 collections, 1217367792 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:44,212 GCInspector.java (line
 122) GC for ParNew: 245 ms for 1 collections, 1203481536 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:45,213 GCInspector.java (line
 122) GC for ParNew: 209 ms for 1 collections, 1208819416 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:46,237 GCInspector.java (line
 122) GC for ParNew: 248 ms for 1 collections, 1338361648 used; max is
 1937768448


  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956






Re: GCInspector info messages in cassandra log

2012-08-14 Thread Tamar Fraenkel
Aaron,
Thank you very much. I will do as you suggested.

One last question regarding restart:
I assume, I should do it node by node.
Is there anything to do before that? like drain or flush?

I am also considering enabling incremental backups on my cluster. Currently
I take a daily full snapshot of the cluster, tar it and upload it to S3 (the
size is now 3.1GB). Would you recommend that? If I do it, how often should I do
a full snapshot, and how often should I back up the backups directory?

Another snapshot related question, currently I snapshot on each node and
use parallel-slurp to copy the snapshot to one node where I tar them. I
just saw that there is an option global_snapshot, is it still supported?
Does that mean that if I run it on one node the snapshot will contain data
from the whole cluster? How does it work on restore? Is it better than my current
backup system?

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Aug 14, 2012 at 11:51 PM, aaron morton aa...@thelastpickle.comwrote:


1. According to cfstats there are some CFs with high Compacted row
maximum sizes (1131752, 4866323 and 25109160). Other max sizes are <
100. Are these considered to be problematic, and what can I do to solve
that?

 They are only 1, 4 and 25 MB. Not too big.

 What should the values of in_memory_compaction_limit_in_mb
 and concurrent_compactors be, and how do I change them?

 Sounds like you don't have very big CFs, so changing the
 in_memory_compaction_limit_in_mb may not make too much difference.

 Try changing concurrent_compactors to 2 in the yaml file. This change
 will let you know if GC and compaction are related.


  change yaml file and restart,

 yes

 What do I do about the long rows? What value is considered too big?

 They churn more memory during compaction. If you had a lot of rows over 32 MB
 I would think about it; it does not look that way.

 Cheers


   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 15/08/2012, at 3:15 AM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!
 It helps, but before I do more actions I want to give you some more info,
 and ask some questions:

 *Related Info*

1. According to my yaml file (where do I see these parameters in the
jmx? I couldn't find them):
in_memory_compaction_limit_in_mb: 64
concurrent_compactors: 1, but it is commented out, so I guess it is
the default value
multithreaded_compaction: false
compaction_throughput_mb_per_sec: 16
compaction_preheat_key_cache: true
2. According to cfstats there are some CFs with high Compacted row
maximum sizes (1131752, 4866323 and 25109160). Other max sizes are <
100. Are these considered to be problematic, and what can I do to solve
that?
3. During compactions Cassandra is slower
4. Running Cassandra Version 1.0.8

 *Questions*
 What should the values of in_memory_compaction_limit_in_mb
 and concurrent_compactors be, and how do I change them? Change the yaml file and
 restart, or can it be done using JMX without restarting Cassandra?
 What do I do about the long rows? What value is considered too big?

 I appreciate your help! Thanks,



 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Aug 14, 2012 at 1:22 PM, aaron morton aa...@thelastpickle.comwrote:

 There are a couple of steps you can take if compaction is causing GC.

 - if you have a lot of wide rows consider reducing
 the in_memory_compaction_limit_in_mb yaml setting. This will slow down
 compaction but will reduce the memory usage.

 - reduce concurrent_compactors

 Both of these may slow down compaction. Once you have GC under control
 you may want to play with memory settings.

 Hope that helps.
   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 14/08/2012, at 4:45 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!
 I have a 3-node ring running on Amazon EC2.
 About once a week I see compaction messages in the logs and, around the
 same time, info messages about GC (see below) that I think mean it is
 taking too long and happening too often.

 Does it mean I have to reduce my cache size?
 Thanks,
 Tamar

  INFO [ScheduledTasks:1] 2012-08-13 12:50:57,593 GCInspector.java (line
 122) GC for ParNew: 242 ms for 1 collections, 1541590352 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:27,740 GCInspector.java (line
 122) GC for ParNew: 291 ms for 1 collections, 1458227032 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:29,741 GCInspector.java (line
 122) GC for ParNew: 261 ms for 1 collections, 1228861368 used; max is
 1937768448
  INFO [ScheduledTasks:1] 2012-08-13 12:51:30,833 GCInspector.java (line
 122) GC

GCInspector info messages in cassandra log

2012-08-13 Thread Tamar Fraenkel
Hi!
I have a 3-node ring running on Amazon EC2.
About once a week I see compaction messages in the logs and, around the same
time, info messages about GC (see below) that I think mean it is taking too
long and happening too often.

Does it mean I have to reduce my cache size?
Thanks,
Tamar

 INFO [ScheduledTasks:1] 2012-08-13 12:50:57,593 GCInspector.java (line
122) GC for ParNew: 242 ms for 1 collections, 1541590352 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:27,740 GCInspector.java (line
122) GC for ParNew: 291 ms for 1 collections, 1458227032 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:29,741 GCInspector.java (line
122) GC for ParNew: 261 ms for 1 collections, 1228861368 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:30,833 GCInspector.java (line
122) GC for ParNew: 319 ms for 1 collections, 1120131360 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:32,863 GCInspector.java (line
122) GC for ParNew: 241 ms for 1 collections, 983144216 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:33,864 GCInspector.java (line
122) GC for ParNew: 215 ms for 1 collections, 967702720 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:34,964 GCInspector.java (line
122) GC for ParNew: 248 ms for 1 collections, 973803344 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:41,211 GCInspector.java (line
122) GC for ParNew: 265 ms for 1 collections, 1071933560 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:43,212 GCInspector.java (line
122) GC for ParNew: 326 ms for 1 collections, 1217367792 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:44,212 GCInspector.java (line
122) GC for ParNew: 245 ms for 1 collections, 1203481536 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:45,213 GCInspector.java (line
122) GC for ParNew: 209 ms for 1 collections, 1208819416 used; max is
1937768448
 INFO [ScheduledTasks:1] 2012-08-13 12:51:46,237 GCInspector.java (line
122) GC for ParNew: 248 ms for 1 collections, 1338361648 used; max is
1937768448


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: increased RF and repair, not working?

2012-07-30 Thread Tamar Fraenkel
Hi!
To clarify it a bit more,
Let's assume the setup is changed to
RF=3
W_CL=QUORUM (or two for that matter)
R_CL=ONE

The setup will now work for both read and write in case of one node failure.
What are the disadvantages, other than the disk space needed to replicate
everything thrice instead of twice? Will it also affect performance?

Thanks

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Fri, Jul 27, 2012 at 11:29 PM, Riyad Kalla rka...@gmail.com wrote:

 Dave,

 What I was suggesting for Yan was to:

 WRITE: RF=2, CL=QUORUM
 READ: CL=ONE

 But you have a good pt... if he hits one of the replicas that didn't have
 the data, that would be bad.

 Thanks for clearing that up.


 On Fri, Jul 27, 2012 at 11:43 AM, Dave Brosius 
 dbros...@mebigfatguy.comwrote:

 You have RF=2, CL=QUORUM, but 3 nodes.

 So each row is represented on 2 of the 3 nodes.

 If you take a node down, one of two things can happen when you attempt to
 read a row.

 The row lives on the two nodes that are still up. In this case you will
 successfully read the data.

 The row lives on one node that is up, and one node that is down. In this
 case the read will fail because you haven't fulfilled the quorum (2 nodes
 in agreement) requirement.


 *- Original Message -*
 *From:* Riyad Kalla rka...@gmail.com
 *Sent:* Fri, July 27, 2012 8:08
 *Subject:* Re: increased RF and repair, not working?

 Dave, per my understanding of Yan's description he has 3 nodes and took
 one down manually to test; that should have worked, no?

 On Thu, Jul 26, 2012 at 11:00 PM, Dave Brosius 
 dbros...@mebigfatguy.comwrote:

 Quorum is defined as

 (replication_factor / 2) + 1
 therefore quorum when RF = 2 is 2! So in your case, both nodes must be up.
 Really, using QUORUM only starts making sense as a 'quorum' when RF=3.
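
 A quick sketch of that formula in plain Python (integer division, matching
 the definition above; the helper name is mine):

  def quorum(rf):
      return rf // 2 + 1

  print(quorum(2))  # 2: both replicas must be up
  print(quorum(3))  # 2: one replica can be down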






 On 07/26/2012 10:38 PM, Yan Chunlu wrote:

 I am using Cassandra 1.0.2 and have a 3-node cluster. The consistency
 levels of read and write are both QUORUM.

 At first the RF was 1, and I figured that one node down would make the
 cluster unusable, so I changed RF to 2 and ran nodetool repair on every
 node (actually I did it twice).

 After the operation I think my data should be on at least two nodes, and
 it would be okay if one of them is down.

 But when I tried to simulate the failure, by disablegossip of one node,
 and the cluster knows this node is dow n. then access data from the
 cluster, it returned  MaximumRetryException(pycassa).   as my experiences
 this is caused by UnavailableException, which is means the data it is
 requesting is on a node which is down.

 so I wonder my data might not be replicated right, what should I do?
 thanks for the help!

 here is the keyspace info:

 Keyspace: comments:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
   Options: [replication_factor:2]



 the schema version is okay:

 [default@unknown] describe cluster;
 Cluster Information:
    Snitch: org.apache.cassandra.locator.SimpleSnitch
    Partitioner: org.apache.cassandra.dht.RandomPartitioner
    Schema versions:
      f67d0d50-b923-11e1--4f7cf9240aef: [192.168.1.129, 192.168.1.40, 192.168.1.50]



 the loads are as below:

 nodetool -h localhost ring
 Address        DC          Rack   Status  State   Load      Owns    Token
                                                                     113427455640312821154458202477256070484
 192.168.1.50   datacenter1 rack1  Up      Normal  28.77 GB  33.33%  0
 192.168.1.40   datacenter1 rack1  Up      Normal  26.67 GB  33.33%  56713727820156410577229101238628035242
 192.168.1.129  datacenter1 rack1  Up      Normal  33.25 GB  33.33%  113427455640312821154458202477256070484





Re: increased RF and repair, not working?

2012-07-30 Thread Tamar Fraenkel
How do you make this calculation?
Thanks,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, Jul 30, 2012 at 3:14 PM, Tim Wintle timwin...@gmail.com wrote:

 On Mon, 2012-07-30 at 14:40 +0300, Tamar Fraenkel wrote:
  Hi!
  To clarify it a bit more,
  Let's assume the setup is changed to
  RF=3
  W_CL=QUORUM (or two for that matter)
  R_CL=ONE

  The setup will now work for both read and write in case of one node
  failure.
  What are the disadvantages, other than the disk space needed to
  replicate everything thrice instead of twice? Will it also affect
  performance?

 (I'm also running RF2, W_CL1, R_CL1 atm - so this is theoretical)

 As I understand it, the most significant performance hit will be to the
 variation in response time.

 For example with R_CL1, (roughly) 1% of requests will take more than
 the worst 10% of server response times. With R_CL=QUORUM 2.8% of
 requests will have the same latency. (assuming I've just calculated that
 right)

 Tim


 
 




Re: increased RF and repair, not working?

2012-07-30 Thread Tamar Fraenkel
Thanks!
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, Jul 30, 2012 at 5:30 PM, Tim Wintle timwin...@gmail.com wrote:


 On Mon, 2012-07-30 at 15:16 +0300, Tamar Fraenkel wrote:
  How do you make this calculation?

 It seems I did make a mistake somewhere before (or I mistyped it) - it
 should have been 2.7%, not 2.8%.


 You're sending read requests to RF servers, and hoping for a response
 from CL of them within the time.

 For N=2, CL=1 - the probability of both hitting the worst 10% latency is
 0.1 * 0.1 = 1%

 For N=3, C=2 - the probability of two of the three servers hitting the
 worst 10% latency is (0.9 * (0.1 * 0.1) ) + (0.1 * ((0.1 * 0.9) + (0.9 *
 0.1))) = 2.7%
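
 The same binomial arithmetic in code (a minimal sketch; p = 0.10 per
 replica is the illustrative "worst 10%" figure from above):

 // P(exactly k of n replicas are "slow"), each slow with probability p.
 public final class TailLatency {
     static double exactlyK(int n, int k, double p) {
         double binom = 1;
         for (int i = 0; i < k; i++) binom = binom * (n - i) / (i + 1);
         return binom * Math.pow(p, k) * Math.pow(1 - p, n - k);
     }
     public static void main(String[] args) {
         System.out.println(exactlyK(2, 2, 0.10)); // RF=2, CL=1: 0.010
         System.out.println(exactlyK(3, 2, 0.10)); // RF=3, CL=2: 0.027
     }
 }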


 Tim




 
  Thanks,
  Tamar Fraenkel
  Senior Software Engineer, TOK Media
 
  Inline image 1
 
  ta...@tok-media.com
  Tel:   +972 2 6409736
  Mob:  +972 54 8356490
  Fax:   +972 2 5612956

 

 
  On Mon, Jul 30, 2012 at 3:14 PM, Tim Wintle timwin...@gmail.com
  wrote:
  On Mon, 2012-07-30 at 14:40 +0300, Tamar Fraenkel wrote:
   Hi!
   To clarify it a bit more,
   Let's assume the setup is changed to
   RF=3
   W_CL=QUORUM (or two for that matter)
   R_CL=ONE
 
   The setup will now work for both read and write in case of
  one node
   failure.
   What are the disadvantages, other than the disk space needed to
   replicate everything thrice instead of twice? Will it also affect
   performance?
 
 
  (I'm also running RF2, W_CL1, R_CL1 atm - so this is
  theoretical)
 
  As I understand it, the most significant performance hit will
  be to the
  variation in response time.
 
  For example with R_CL1, (roughly) 1% of requests will take more than
  the worst 10% of server response times. With R_CL=QUORUM 2.8% of
  requests will have the same latency. (assuming I've just calculated
  that right)
 
  Tim
 
 
  
  
 
 
 
 




Re: Questions regarding DataStax AMI

2012-07-28 Thread Tamar Fraenkel
Thanks!
This worked fine!

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Sat, Jul 28, 2012 at 7:10 AM, Joaquin Casares joaq...@datastax.comwrote:

 Oh you're right, sorry about that. The concept of keeping older packages
 was recently implemented and while using --version community, you would
 need --release 1.0 in order to get 1.0.10.

 If you are using --version enterprise, you can use --release 2.0 to get
 DataStax Enterprise 2.0 which comes bundled with 1.0.8. As long as you
 don't include --analyticsnodes or --searchnodes, you will get vanilla
 Cassandra.

 As of now, those are the only options available.

 Thanks for pointing that out and sorry about the confusion,

 Joaquin Casares
 DataStax
 Software Engineer/Support



 On Fri, Jul 27, 2012 at 3:50 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 HI!
 I tried starting a cluster with
 Cluster started with these options:
 --clustername Name --totalnodes 3 --version community --release 1.0.8

 But Cassandra's version is 1.1.2
 Thanks

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Thu, Jul 26, 2012 at 9:56 PM, Tamar Fraenkel ta...@tok-media.comwrote:

 What should be the value to create it with Cassandra 1.0.8
 Tamar

 Sent from my iPod

 On Jul 26, 2012, at 7:06 PM, Joaquin Casares joaq...@datastax.com
 wrote:

 Yes, you can easily do this by using the --release version switch as
 found here:
 http://www.datastax.com/docs/1.0/install/install_ami

 Thanks,

 Joaquin Casares
 DataStax
 Software Engineer/Support



 On Thu, Jul 26, 2012 at 12:44 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 Hi!
 Is there a way to launch EC2 cluster from DataStax latest community AMI
 that will run Cassandra 1.0.8 and not 1.1.2?
 Thanks
  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956









Re: Questions regarding DataStax AMI

2012-07-27 Thread Tamar Fraenkel
HI!
I tried starting a cluster with
Cluster started with these options:
--clustername Name --totalnodes 3 --version community --release 1.0.8

But Cassandra's version is 1.1.2
Thanks

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Jul 26, 2012 at 9:56 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 What should be the value to create it with Cassandra 1.0.8
 Tamar

 Sent from my iPod

 On Jul 26, 2012, at 7:06 PM, Joaquin Casares joaq...@datastax.com wrote:

 Yes, you can easily do this by using the --release version switch as
 found here:
 http://www.datastax.com/docs/1.0/install/install_ami

 Thanks,

 Joaquin Casares
 DataStax
 Software Engineer/Support



 On Thu, Jul 26, 2012 at 12:44 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 Hi!
 Is there a way to launch EC2 cluster from DataStax latest community AMI
 that will run Cassandra 1.0.8 and not 1.1.2?
 Thanks
  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media



 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956







Re: Creating counter columns in cassandra

2012-07-26 Thread Tamar Fraenkel
Hi
To create:

// createBasicCfDef is the author's own helper (arguments presumably:
// keyspace, CF name, comparator, and the validation classes).
ColumnFamilyDefinition counters = createBasicCfDef(
        KEYSPACE, Consts.COUNTERS, ComparatorType.UTF8TYPE,
        null, CounterColumnType, CompositeType(UTF8Type,UUIDType));
counters.setReplicateOnWrite(true);
cluster.addColumnFamily(counters, true);

to increment (add) a counter:

  public static void incrementCounter(Composite key, String columnName, long inc) {
    Mutator<Composite> mutator =
        HFactory.createMutator(keyspace, CompositeSerializer.get());
    mutator.incrementCounter(key, Consts.COUNTERS, columnName, inc);
    mutator.execute();
  }
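
A hypothetical call site (the key layout here is illustrative only; it
matches the CompositeType(UTF8Type,UUIDType) key validator above):

// topicId is assumed to be a java.util.UUID
Composite key = new Composite();
key.addComponent("topic-views", StringSerializer.get());
key.addComponent(topicId, UUIDSerializer.get());
incrementCounter(key, "2012-07-26", 1L); // bump this bucket's counter by one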

Regards,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Wed, Jul 25, 2012 at 8:24 PM, Amila Paranawithana amila1...@gmail.comwrote:


 Hi all,

 I want to create counter columns in a column family via a java module.
 These column families and counter columns need to be created dynamically.
 please send me some example code to refer to (with Hector or any other
 method).

 Thanks
 --
 Amila Iroshani Paranawithana
 CSE-University of Moratuwa.
 B-http://amilaparanawithana.blogspot.com
 T-https://twitter.com/#!/AmilaPara




Questions regarding DataStax AMI

2012-07-26 Thread Tamar Fraenkel
Hi!
Is there a way to launch EC2 cluster from DataStax latest community AMI
that will run Cassandra 1.0.8 and not 1.1.2?
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: Questions regarding DataStax AMI

2012-07-26 Thread Tamar Fraenkel
What should be the value to create it with Cassandra 1.0.8
Tamar

Sent from my iPod

On Jul 26, 2012, at 7:06 PM, Joaquin Casares joaq...@datastax.com wrote:

 Yes, you can easily do this by using the --release version switch as found 
 here:
 http://www.datastax.com/docs/1.0/install/install_ami
 
 Thanks,
 
 Joaquin Casares
 DataStax
 Software Engineer/Support
 
 
 
 On Thu, Jul 26, 2012 at 12:44 AM, Tamar Fraenkel ta...@tok-media.com wrote:
 Hi!
 Is there a way to launch EC2 cluster from DataStax latest community AMI that 
 will run Cassandra 1.0.8 and not 1.1.2?
 Thanks
 Tamar Fraenkel 
 Senior Software Engineer, TOK Media 
 
 
 ta...@tok-media.com
 Tel:   +972 2 6409736 
 Mob:  +972 54 8356490 
 Fax:   +972 2 5612956 
 
 
 
 
 


Re: Lots of GCInspector.java on my cluster

2012-07-04 Thread Tamar Fraenkel
Thanks.
I thought my problems may be related to the leap second and I ran sudo date
-s `date -u` on all nodes. Things have improved much in the last 24 hours.

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Wed, Jul 4, 2012 at 1:39 PM, aaron morton aa...@thelastpickle.comwrote:

 High CPU can be http://wiki.apache.org/cassandra/FAQ#ubuntu_hangs

 memory usage looks ok http://wiki.apache.org/cassandra/FAQ#mmap

 Cheers


   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 3/07/2012, at 6:49 PM, Tamar Fraenkel wrote:

 Hi!
 I have a Cassandra cluster on Amazon EC2 Datastax AMIs with 3 nodes and
 replication factor of 2.
 As of July 1st the cluster is very slow and seems to be loaded.

 Running top I get:

 top - 06:40:58 up 99 days, 21:30,  2 users,  load average: 12.45, 13.37,
 14.01
 Tasks: 102 total,   1 running, 101 sleeping,   0 stopped,   0 zombie
 Cpu(s): 21.0%us,  9.8%sy,  0.0%ni,  2.2%id,  0.0%wa,  0.4%hi,  0.4%si,
 66.3%st
 Mem:   7647812k total,  7135752k used,   512060k free,60668k buffers
 Swap:0k total,0k used,0k free,  4234008k cached

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 18729 cassandr  20   0 8866m 2.5g 275m S  108 34.0   1415:37 jsvc
 21798 root  20   0  478m 217m 9760 S   44  2.9   1743:47 java
 3 root  20   0 000 S   20  0.0 564:59.09 ksoftirqd/0



 These are the cassandra processes
 *ps -ef | grep cassandra*
 root 18727 1  0 Jul02 ?00:00:00 jsvc.exec -user 
 cassandra-home /usr/lib/jvm/java-6-sun/jre/bin/../ -pidfile /var/run/
 cassandra.pid -errfile 1 -outfile /var/log/cassandra/output.log -cp
 /usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra
 /lib/avro-1.4.0-fixes.jar:/usr/share/cassandra
 /lib/avro-1.4.0-sources-fixes.jar:/usr/share/cassandra
 /lib/commons-cli-1.1.jar:/usr/share/cassandra
 /lib/commons-codec-1.2.jar:/usr/share/cassandra
 /lib/commons-lang-2.4.jar:/usr/share/cassandra
 /lib/compress-lzf-0.8.4.jar:/usr/share/cassandra
 /lib/concurrentlinkedhashmap-lru-1.2.jar:/usr/share/cassandra
 /lib/guava-r08.jar:/usr/share/cassandra
 /lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra
 /lib/jackson-core-asl-1.4.0.jar:/usr/share/cassandra
 /lib/jackson-mapper-asl-1.4.0.jar:/usr/share/cassandra
 /lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jline-0.9.94.jar:/usr/share/
 cassandra/lib/joda-time-1.6.2.jar:/usr/share/cassandra
 /lib/json-simple-1.1.jar:/usr/share/cassandra
 /lib/libthrift-0.6.jar:/usr/share/cassandra
 /lib/log4j-1.2.16.jar:/usr/share/cassandra
 /lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra
 /lib/slf4j-api-1.6.1.jar:/usr/share/cassandra
 /lib/slf4j-log4j12-1.6.1.jar:/usr/share/cassandra
 /lib/snakeyaml-1.6.jar:/usr/share/cassandra
 /lib/snappy-java-1.0.4.1.jar:/usr/share/cassandra
 /apache-cassandra-1.0.8.jar:/usr/share/cassandra
 /apache-cassandra-thrift-1.0.8.jar:/usr/share/cassandra
 /apache-cassandra.jar:/usr/share/java/jna.jar:/etc/cassandra:/usr/share/
 java/commons-daemon.jar -Dlog4j.configuration=log4j-server.properties
 -XX:HeapDumpPath=/var/lib/cassandra/java_1341216340.hprof
 -XX:ErrorFile=/var/lib/cassandra/hs_err_1341216341.log -ea
 -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1867M -Xmx1867M
 -Xmn200M -XX:+HeapDumpOnOutOfMemoryError -Xss128k -XX:+UseParNewGC
 -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
 -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
 -XX:+UseCMSInitiatingOccupancyOnly -Djava.net.preferIPv4Stack=true
 -Djava.rmi.server.hostname=10.34.158.33
 -Dcom.sun.management.jmxremote.port=7199
 -Dcom.sun.management.jmxremote.ssl=false
 -Dcom.sun.management.jmxremote.authenticate=false org.apache.cassandra
 .thrift.CassandraDaemon
 108  18729 18727 99 Jul02 ?23:26:48 jsvc.exec -user 
 cassandra-home /usr/lib/jvm/java-6-sun/jre/bin/../ -pidfile /var/run/
 cassandra.pid -errfile 1 -outfile /var/log/cassandra/output.log -cp
 /usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra
 /lib/avro-1.4.0-fixes.jar:/usr/share/cassandra
 /lib/avro-1.4.0-sources-fixes.jar:/usr/share/cassandra
 /lib/commons-cli-1.1.jar:/usr/share/cassandra
 /lib/commons-codec-1.2.jar:/usr/share/cassandra
 /lib/commons-lang-2.4.jar:/usr/share/cassandra
 /lib/compress-lzf-0.8.4.jar:/usr/share/cassandra
 /lib/concurrentlinkedhashmap-lru-1.2.jar:/usr/share/cassandra
 /lib/guava-r08.jar:/usr/share/cassandra
 /lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra
 /lib/jackson-core-asl-1.4.0.jar:/usr/share/cassandra
 /lib/jackson-mapper-asl-1.4.0.jar:/usr/share/cassandra
 /lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jline-0.9.94.jar:/usr/share/
 cassandra/lib/joda-time-1.6.2.jar:/usr/share/cassandra
 /lib/json-simple-1.1.jar:/usr/share/cassandra
 /lib/libthrift-0.6.jar:/usr

Re: select count(*) returns 10000

2012-06-13 Thread Tamar Fraenkel
Add LIMIT N and it will count more than 10000.
Of course it will be slow when you increase N.
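
If you really need the full count, one workaround is to page over the row
keys yourself. A rough Hector sketch (illustrative; assumes String keys and
pages of up to 1000 keys):

long countAllRows(Keyspace keyspace, String cf) {
    String lastKey = "";
    long total = 0;
    while (true) {
        RangeSlicesQuery<String, String, String> q = HFactory
                .createRangeSlicesQuery(keyspace, StringSerializer.get(),
                        StringSerializer.get(), StringSerializer.get())
                .setColumnFamily(cf)
                .setKeys(lastKey, "")
                .setReturnKeysOnly()
                .setRowCount(1000);
        OrderedRows<String, String, String> rows = q.execute().get();
        int n = rows.getCount();
        total += n;
        if (n < 1000) break;
        lastKey = rows.peekLast().getKey(); // page boundary, recounted next pass
        total -= 1;                         // so pre-subtract it here
    }
    return total;
}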

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Jun 12, 2012 at 10:07 PM, Derek Williams de...@fyrie.net wrote:

 It's a known issue, here is a bit extra info on it:
 http://stackoverflow.com/questions/8795923/wrong-count-with-cassandra-cql


 On Tue, Jun 12, 2012 at 12:40 PM, Leonid Ilyevsky 
 lilyev...@mooncapital.com wrote:

 The select count(*) ... query returns the correct count only if it is <=
 10000; otherwise it returns exactly 10000.
 This happens in both Java API and cqlsh.
 Can somebody verify?

 This email, along with any attachments, is confidential and may be
 legally privileged or otherwise protected from disclosure. Any unauthorized
 dissemination, copying or use of the contents of this email is strictly
 prohibited and may be in violation of law. If you are not the intended
 recipient, any disclosure, copying, forwarding or distribution of this
 email is strictly prohibited and this email and any attachments should be
 deleted immediately.  This email and any attachments do not constitute an
 offer to sell or a solicitation of an offer to purchase any interest in any
 investment vehicle sponsored by Moon Capital Management LP (Moon
 Capital). Moon Capital does not provide legal, accounting or tax advice.
 Any statement regarding legal, accounting or tax matters was not intended
 or written to be relied upon by any person as advice. Moon Capital does not
 waive confidentiality or privilege as a result of this email.




 --
 Derek Williams



repair

2012-06-04 Thread Tamar Fraenkel
Hi!
I apologize if for this naive question.
When I run nodetool repair, is it enough to run on one of the nodes, or do
I need to run on each one of them?
Thanks

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: repair

2012-06-04 Thread Tamar Fraenkel
Thanks.

I actually did just that with cron jobs running on different hours.

I asked the question because I saw that when one of the nodes was running
the repair, all nodes logged some repair-related entries in
/var/log/cassandra/system.log

Thanks again,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, Jun 4, 2012 at 2:35 PM, Rishabh Agrawal 
rishabh.agra...@impetus.co.in wrote:

  Hello,



 As far as my knowledge goes, it works on a per-node basis, so you have to
 run it on each node. I would suggest not executing it simultaneously on
 all nodes in a production environment.



 Regards

 Rishabh Agrawal



 *From:* Tamar Fraenkel [mailto:ta...@tok-media.com]
 *Sent:* Monday, June 04, 2012 4:25 AM
 *To:* user@cassandra.apache.org
 *Subject:* repair



 Hi!

 I apologize if for this naive question.

 When I run nodetool repair, is it enough to run on one of the nodes, or
 do I need to run on each one of them?

 Thanks


   *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956







 --

 Register for Impetus webinar ‘User Experience Design for iPad
 Applications’ June 8(10:00am PT). http://lf1.me/f9/

 Impetus’ Head of Labs to present on ‘Integrating Big Data technologies in
 your IT portfolio’ at Cloud Expo, NY (June 11-14). Contact us for a
 complimentary pass.Impetus also sponsoring the Yahoo Summit 2012.


 NOTE: This message may contain information that is confidential,
 proprietary, privileged or otherwise protected by law. The message is
 intended solely for the named addressee. If received in error, please
 destroy and notify the sender. Any use of this email is prohibited when
 received in error. Impetus does not represent, warrant and/or guarantee,
 that the integrity of this communication has been maintained nor that the
 communication is free of errors, virus, interception or interference.


Re: repair

2012-06-04 Thread Tamar Fraenkel
Thank you all!
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, Jun 4, 2012 at 3:16 PM, R. Verlangen ro...@us2.nl wrote:

 The repair -pr only repairs the node's primary range, so it is only useful
 in day-to-day use. When you're recovering from a crash, use it without -pr.


 2012/6/4 Romain HARDOUIN romain.hardo...@urssaf.fr


 Run repair -pr in your cron.

 Tamar Fraenkel ta...@tok-media.com a écrit sur 04/06/2012 13:44:32 :

  Thanks.
 
  I actually did just that with cron jobs running on different hours.
 
  I asked the question because I saw that when one of the nodes was
  running the repair, all nodes logged some repair related entries in
  /var/log/cassandra/system.log
 
  Thanks again,
  Tamar Fraenkel
  Senior Software Engineer, TOK Media




 --
 With kind regards,

 Robin Verlangen
 *Software engineer*
 *
 *
 W www.robinverlangen.nl
 E ro...@us2.nl

 Disclaimer: The information contained in this message and attachments is
 intended solely for the attention and use of the named addressee and may be
 confidential. If you are not the intended recipient, you are reminded that
 the information remains the property of the sender. You must not use,
 disclose, distribute, copy, print or rely on this e-mail. If you have
 received this message in error, please contact the sender immediately and
 irrevocably delete this message and any copies.



Re: repair

2012-06-04 Thread Tamar Fraenkel
Thanks, one more question. On regular basis, should I run repair for the
system keyspace?

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, Jun 4, 2012 at 5:02 PM, Viktor Jevdokimov 
viktor.jevdoki...@adform.com wrote:

  Why without -pr when recovering from a crash?

 Repair without -pr runs a full repair of the cluster: the node which
 receives the command is the repair controller, and ALL nodes synchronize
 replicas at the same time, streaming data between each other.

 The problems that may arise:

 - When streaming hangs (it tends to hang even on a stable network), the
   repair session hangs (any version does re-stream?)
 - Network will be highly saturated
 - In case of high inconsistency some nodes may receive a lot of data,
   disk usage much more than 2x (depends on RF)
 - A lot of compactions will be pending


 IMO, the best way to run repair is from a script with -pr, for a single CF
 on a single node at a time, monitoring progress, like:

 repair -pr node1 ks1 cf1

 repair -pr node2 ks1 cf1

 repair -pr node3 ks1 cf1

 repair -pr node1 ks1 cf2

 repair -pr node2 ks1 cf2

 repair -pr node3 ks1 cf2

 With some progress or other control in between, your choice.


 Use repair with care, do not let your cluster go down.



Best regards / Pagarbiai
 *Viktor Jevdokimov*
 Senior Developer

 Email: viktor.jevdoki...@adform.com
 Phone: +370 5 212 3063, Fax +370 5 261 0453
 J. Jasinskio 16C, LT-01112 Vilnius, Lithuania
 Follow us on Twitter: @adforminsider http://twitter.com/#!/adforminsider
 What is Adform: watch this short video http://vimeo.com/adform/display
  [image: Adform News] http://www.adform.com

 Disclaimer: The information contained in this message and attachments is
 intended solely for the attention and use of the named addressee and may be
 confidential. If you are not the intended recipient, you are reminded that
 the information remains the property of the sender. You must not use,
 disclose, distribute, copy, print or rely on this e-mail. If you have
 received this message in error, please contact the sender immediately and
 irrevocably delete this message and any copies.

   *From:* R. Verlangen [mailto:ro...@us2.nl]
 *Sent:* Monday, June 04, 2012 15:17
 *To:* user@cassandra.apache.org
 *Subject:* Re: repair


 The repair -pr only repairs the node's primary range, so it is only useful
 in day-to-day use. When you're recovering from a crash, use it without -pr.

 2012/6/4 Romain HARDOUIN romain.hardo...@urssaf.fr


 Run repair -pr in your cron.

 Tamar Fraenkel ta...@tok-media.com a écrit sur 04/06/2012 13:44:32 :

  Thanks.  

 
  I actually did just that with cron jobs running on different hours.
 
  I asked the question because I saw that when one of the nodes was
  running the repair, all nodes logged some repair related entries in
  /var/log/cassandra/system.log
 
  Thanks again,
  Tamar Fraenkel
  Senior Software Engineer, TOK Media 



 


 --
 With kind regards,


 Robin Verlangen

 *Software engineer*


 W www.robinverlangen.nl

 E ro...@us2.nl


 Disclaimer: The information contained in this message and attachments is
 intended solely for the attention and use of the named addressee and may be
 confidential. If you are not the intended recipient, you are reminded that
 the information remains the property of the sender. You must not use,
 disclose, distribute, copy, print or rely on this e-mail. If you have
 received this message in error, please contact the sender immediately and
 irrevocably delete this message and any copies.



Re: Using EC2 ephemeral 4disk raid0 cause high iowait trouble

2012-05-22 Thread Tamar Fraenkel
Did you upgrade DataStax AMIs? Did you add a node to an existing ring?
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Wed, May 23, 2012 at 2:00 AM, Deno Vichas d...@syncopated.net wrote:

  for what it's worth i've been having pretty good success using the
 Datastax AMIs.



 On 5/17/2012 6:59 PM, koji Lin wrote:

 Hi

 We use amazon ami 3.2.12-3.2.4.amzn1.x86_64

 and some of our data file are more than 10G

 thanks

 koji
 2012-5-16 下午6:00 於 aaron morton aa...@thelastpickle.com 寫道:

 On Ubuntu ? Sounds like http://wiki.apache.org/cassandra/FAQ#ubuntu_hangs

  Cheers


 -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

  On 16/05/2012, at 2:13 PM, koji Lin wrote:

  Hi

 Our service already run cassandra 1.0 on 1x ec2 instances(with ebs), and
 we saw lots of discussion talk about using  ephemeral raid for better
 performance and consistent performance.

 So we want to create new instance using 4 ephemeral raid0, and copy the
 data from ebs to finally replace the old instance and reduce some .

 we create the xlarge instance with -b '/dev/sdb=ephemeral0' -b
 '/dev/sdc=ephemeral1' -b '/dev/sdd=ephemeral2' -b '/dev/sde=ephemeral3',

 and use mdadm command like this  mdadm --create /dev/md0 --level=0 -c256
 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

 after copying the files we started Cassandra (same token as the old
 instance it replaced).

 we saw the read is really fast always keep 2xxm/sec, but system load
 exceed 40, with high iowait, and lots of client get timeout result. We
 guess maybe it's the problem of ec2 instance, so we create another one with
 same setting to replace other machine ,but the result is same . Then we
 rollback to ebs with single disk ,read speed keeps at 1xmb/sec but system
 becomes well .(using ebs with 2 disks raid0 will keep at 2xmb/sec and
 higher iowait then single disk ,but still works)

 Is there anyone meet the same problem too ? or do we forget something to
 configure?

 thank you

 koji





Re: unable to nodetool to remote EC2

2012-05-21 Thread Tamar Fraenkel
Hi!
I am trying the tunnel and it fails. Will be grateful for some hints:

I defined

   - proxy_host = ubuntu@my_ec2_cassandra_node_public_ip
   - proxy_port = 22

I do:
*ssh -N -f -i /c/Users/tamar/.ssh/Amazon/tokey.openssh -D22
ubuntu@my_ec2_cassandra_node_public_ip*

I put some debug prints and I can see that the ssh_pid is indeed the
correct one.

I run
*jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=22
service:jmx:rmi:///jndi/rmi://my_ec2_cassandra_node_public_ip:7199/jmxrmi*

I get errors and it fails:
channel 2: open failed: connect failed: Connection timed out

One note though, I can ssh to that vm using
ssh -i /c/Users/tamar/.ssh/Amazon/tokey.openssh -D22
ubuntu@my_ec2_cassandra_node_public_ip
without being prompted for PW.

Any help appreciated

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Fri, May 18, 2012 at 9:49 PM, ramesh dbgroup...@gmail.com wrote:

  On 05/18/2012 01:35 PM, Tyler Hobbs wrote:

 Your firewall rules need to allow TCP traffic on any port >= 1024 for JMX
 to work.  It initially connects on port 7199, but then the client is asked
 to reconnect on a randomly chosen port.

 You can open the firewall, SSH to the node first, or set up something like
 this: http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html

 On Fri, May 18, 2012 at 1:31 PM, ramesh dbgroup...@gmail.com wrote:

  I updated the cassandra-env.sh
 $JMX_HOST=10.20.30.40
 JVM_OPTS=$JVM_OPTS -Djava.rmi.server.hostname=$JMX_HOST

 netstat -ltn shows port 7199 is listening.

 I tried both public and private IP for connecting but neither helps.

 However, I am able to connect locally from within server.

  I get this error when I remote:

 Error connection to remote JMX agent! java.rmi.ConnectException:
 Connection refused to host: 10.20.30.40; nested exception is:
 java.net.ConnectException: Connection timed out at
 sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601) at
 sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198) at
 sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184) at
 sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110) at
 javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source) at
 javax.management.remote.rmi.RMIConnector.getConnection(RMIConnector.java:2329)
 at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:279)
 at
 javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
 at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:144) at
 org.apache.cassandra.tools.NodeProbe. (NodeProbe.java:114) at
 org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:623) Caused by:
 java.net.ConnectException: Connection timed out at
 java.net.PlainSocketImpl.socketConnect(Native Method) at
 java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) at
 java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) at
 java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) at
 java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at
 java.net.Socket.connect(Socket.java:529) at
 java.net.Socket.connect(Socket.java:478) at java.net.Socket. (Socket.java:375)
 at java.net.Socket. (Socket.java:189) at
 sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
 at
 sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
 at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595) ...
 10 more

 Any help appreciated.
 Regards
 Ramesh




 --
 Tyler Hobbs
 DataStax http://datastax.com/


 It helped.
 Thanks Tyler for the info and the link to the post.

 Regards
 Ramesh


Re: RE Ordering counters in Cassandra

2012-05-21 Thread Tamar Fraenkel
I also had a similar problem. I have a temporary solution, which is not
the best, but may be of help.
I have the counter CF to count events, but apart from that I hold a leaders
CF:

leaders = {
  // key is time bucket
  // values are composites(rank, event) ordered by
  // descending order of the rank
  // set relevant TTL on columns
  time_bucket1 : {
composite(1000,event1) : 
composite(999, event2) : 
  },
  ...
}

Whenever I increment counter for a specific event, I add a column in the
time bucket row of the leaders CF, with the new value of the counter and
the event name.
There are two ways to go here: either delete the old column(s) for that
event (those with lower counters) from the leaders CF, or let them be.
If you choose to delete, there is the complication of not having
getAndSet for counters, so you may end up not deleting all the old
columns.
If you choose not to delete old columns and live with duplicate columns
for events (each with a different count), your query to retrieve leaders
will run longer.
Either way, when you need to retrieve the leaders, you can do a slice
query on the leaders CF and ignore duplicate events in the client (I use
Java). Duplicates will be rarer if you do delete old columns.
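
A minimal Hector sketch of that update (assumptions: a leaders CF whose
comparator is CompositeType(LongType(reversed=true),UTF8Type) so columns
sort by descending rank; names and the TTL plumbing are illustrative):

void recordLeader(Keyspace keyspace, String timeBucket,
                  String event, long newCount, int ttlSeconds) {
    // Column name = composite(rank, event); empty value; a TTL so old
    // entries age out of the time-bucket row on their own.
    Composite name = new Composite();
    name.addComponent(newCount, LongSerializer.get());
    name.addComponent(event, StringSerializer.get());
    HColumn<Composite, String> col = HFactory.createColumn(
            name, "", ttlSeconds, CompositeSerializer.get(), StringSerializer.get());
    Mutator<String> m = HFactory.createMutator(keyspace, StringSerializer.get());
    m.addInsertion(timeBucket, "leaders", col);
    m.execute();
}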

Another option is not to use Cassandra for that purpose, http://redis.io/ is
a nice tool

Will be happy to hear you comments.
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, May 21, 2012 at 8:05 PM, Filippo Diotalevi fili...@ntoklo.comwrote:

 Hi Romain,
 thanks for your suggestion.

 When you say  build every day a ranking in a dedicated CF by iterating
 over events: do you mean
 - load all the columns for the specified row key
 - iterate over each column, and write a new column in the inversed index
 ?

 That's my current approach, but since I have many of these wide rows (1
 per day), the process is extremely slow as it involves moving an entire row
 from Cassandra to client, inverting every column, and sending the data back
 to create the inversed index.

 --
 Filippo Diotalevi


 On Monday, 21 May 2012 at 17:19, Romain HARDOUIN wrote:


 If I understand you've got a data model which looks like this:

 CF Events:
 row1: { event1: 1050, event2: 1200, event3: 830, ... }

 You can't query on column values but you can build every day a ranking in
 a dedicated CF by iterating over events:

 create column family Ranking
 with comparator = 'LongType(reversed=true)'
 ...

 CF Ranking:
 rank: { 1200: event2, 1050: event1, 830: event3, ... }

 Then you can make a top ten or whatever you want because counter values
 will be sorted.
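
 Reading the top ten back is then a plain head slice, since the reversed
 comparator puts the highest counts first (Hector sketch, illustrative):

 SliceQuery<String, Long, String> top = HFactory.createSliceQuery(
         keyspace, StringSerializer.get(), LongSerializer.get(),
         StringSerializer.get());
 top.setColumnFamily("Ranking");
 top.setKey("rank");
 top.setRange(null, null, false, 10); // first 10 columns = top 10 counts
 ColumnSlice<Long, String> topTen = top.execute().get();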


 Filippo Diotalevi fili...@ntoklo.com a écrit sur 21/05/2012 16:59:43 :

  Hi,
  I'm trying to understand what's the best design for a simple
  ranking use cases.
  I have, in a row, a good number (10k - a few 100K) of counters; each
  one is counting the occurrence of an event. At the end of day, I
  want to create a ranking of the most occurred event.
 
  What's the best approach to perform this task?
  The brute force approach of retrieving the row and ordering it
  doesn't work well (the call usually goes timeout, especially is
  Cassandra is also under load); I also don't know in advance the full
  set of event names (column names), so it's difficult to slice the get
 call.
 
  Is there any trick to solve this problem? Maybe a way to retrieve
  the row ordering for counter values?
 
  Thanks,
  --
  Filippo Diotalevi




Re: RE Ordering counters in Cassandra

2012-05-21 Thread Tamar Fraenkel
Indeed I took the no-delete approach. If time bucket rows are not that big, 
this is a good temporary solution.
I just finished implementation and testing now on a small staging environment. 
So far so good.
Tamar

Sent from my iPod

On May 21, 2012, at 9:11 PM, Filippo Diotalevi fili...@ntoklo.com wrote:

 Hi Tamar,
 the solution you propose is indeed a temporary solution, but it might be 
 the best one.
 
 Which approach did you follow?
 I'm a bit concerned about the deletion approach, since in case of concurrent 
 writes on the same counter you might lose the pointer to the column to 
 delete. 
 
 -- 
 Filippo Diotalevi
 
 
 On Monday, 21 May 2012 at 18:51, Tamar Fraenkel wrote:
 
 I also had a similar problem. I have a temporary solution, which is not 
 best, but may be of help.
 I have the counter CF to count events, but apart from that I hold a leaders CF:
 leaders = {
   // key is time bucket
   // values are composites(rank, event) ordered by
   // descending order of the rank
   // set relevant TTL on columns
   time_bucket1 : {
 composite(1000,event1) : 
 composite(999, event2) : 
   },
   ...
 }
 Whenever I increment counter for a specific event, I add a column in the 
 time bucket row of the leaders CF, with the new value of the counter and the 
 event name.
 There are two ways to go here, either delete the old column(s) for that 
 event (with lower counters) from leaders CF. Or let them be. 
 If you choose to delete, there is the complication of not having getAndSet 
 for counters, so you may end up not deleting all the old columns. 
 If you choose not to  delete old column, and live with duplicate columns for 
 events (each with different count), it will make your query to retrieve 
 leaders run longer.
 Anyway, when you need to retrieve the leaders, you can do slice query on 
 leaders CF and ignore duplicates events using client (I use Java). This will 
 happen less if you do delete old columns.
 
 Another option is not to use Cassandra for that purpose, http://redis.io/ is 
 a nice tool
 
 Will be happy to hear you comments.
 Thanks,
 
 Tamar Fraenkel 
 Senior Software Engineer, TOK Media 
 
 
 ta...@tok-media.com
 Tel:   +972 2 6409736 
 Mob:  +972 54 8356490 
 Fax:   +972 2 5612956 
 
 
 
 
 
 On Mon, May 21, 2012 at 8:05 PM, Filippo Diotalevi fili...@ntoklo.com 
 wrote:
 Hi Romain,
 thanks for your suggestion.
 
 When you say  build every day a ranking in a dedicated CF by iterating 
 over events: do you mean
 - load all the columns for the specified row key
 - iterate over each column, and write a new column in the inversed index
 ?
 
 That's my current approach, but since I have many of these wide rows (1 per 
 day), the process is extremely slow as it involves moving an entire row 
 from Cassandra to client, inverting every column, and sending the data back 
 to create the inversed index.
 
 -- 
 Filippo Diotalevi
 
 
 On Monday, 21 May 2012 at 17:19, Romain HARDOUIN wrote:
 
 
 If I understand you've got a data model which looks like this: 
 
 CF Events: 
 row1: { event1: 1050, event2: 1200, event3: 830, ... } 
 
 You can't query on column values but you can build every day a ranking in 
 a dedicated CF by iterating over events: 
 
 create column family Ranking 
 with comparator = 'LongType(reversed=true)'   
 ... 
 
 CF Ranking: 
 rank: { 1200: event2, 1050: event1, 830: event3, ... } 
 
 Then you can make a top ten or whatever you want because counter values 
 will be sorted. 
 
 
 Filippo Diotalevi fili...@ntoklo.com a écrit sur 21/05/2012 16:59:43 :
 
  Hi, 
  I'm trying to understand what's the best design for a simple 
  ranking use cases. 
  I have, in a row, a good number (10k - a few 100K) of counters; each
  one is counting the occurrence of an event. At the end of day, I 
  want to create a ranking of the most occurred event. 
  
  What's the best approach to perform this task?  
  The brute force approach of retrieving the row and ordering it 
  doesn't work well (the call usually goes timeout, especially is 
  Cassandra is also under load); I also don't know in advance the full
  set of event names (column names), so it's difficult to slice the get 
  call. 
  
  Is there any trick to solve this problem? Maybe a way to retrieve 
  the row ordering for counter values? 
  
  Thanks, 
  -- 
  Filippo Diotalevi
 
 
 


Re: restoring from snapshot - missing data

2012-05-21 Thread Tamar Fraenkel
Thanks.
After creating the data model and matching the correct snapshot with the
correct new node (same token) all worked fine!

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, May 21, 2012 at 9:06 PM, Tyler Hobbs ty...@datastax.com wrote:

 On Mon, May 21, 2012 at 12:01 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 If I am putting the snapshots on a clean ring, I need to first create the
 data model?


 Yes.

 --
 Tyler Hobbs
 DataStax http://datastax.com/



Re: unable to nodetool to remote EC2

2012-05-21 Thread Tamar Fraenkel
Thanks for the response, but it still does not work.
I am running the script from a Git Bash on my Windows 7.
Adding some debug prints, this is what I am running:
ssh -i key.pem -N -f -D8123 ubuntu@ec2-*.amazonaws.com
ssh pid = 11616
/c/PROGRA~2/Java/jdk1.7.0_02/bin/jconsole.exe -J-DsocksProxyHost=localhost
-J-DsocksProxyPort=8123 service:jmx:rmi:///jndi/rmi://ec2-*.
amazonaws.com:7199/jmxrmi

Still getting channel 2: open failed: connect failed: Connection timed out
Any further idea? Where are you running the script?
Thanks

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, May 21, 2012 at 11:00 PM, ramesh dbgroup...@gmail.com wrote:

  On 05/21/2012 03:55 AM, Tamar Fraenkel wrote:

 Hi!
 I am trying the tunnel and it fails. Will be grateful for some hints:

  I defined

- proxy_host = ubuntu@my_ec2_cassandra_node_public_ip
- proxy_port = 22

 I do:
  *ssh -N -f -i /c/Users/tamar/.ssh/Amazon/tokey.openssh -D22
 ubuntu@my_ec2_cassandra_node_public_ip*

  I put some debug prints and I can see that the ssh_pid is indeed the
 correct one.

  I run
 *jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=22
 service:jmx:rmi:///jndi/rmi://my_ec2_cassandra_node_public_ip:7199/jmxrmi*

  I get errors and it fails:
 channel 2: open failed: connect failed: Connection timed out

  One note though, I can ssh to that vm using
 ssh -i /c/Users/tamar/.ssh/Amazon/tokey.openssh -D22
 ubuntu@my_ec2_cassandra_node_public_ip
 without being prompted for PW.

  Any help appreciated

   *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Fri, May 18, 2012 at 9:49 PM, ramesh dbgroup...@gmail.com wrote:

  On 05/18/2012 01:35 PM, Tyler Hobbs wrote:

  Your firewall rules need to allow TCP traffic on any port >= 1024 for JMX
 to work.  It initially connects on port 7199, but then the client is asked
 to reconnect on a randomly chosen port.

 You can open the firewall, SSH to the node first, or set up something like
 this: http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html

  On Fri, May 18, 2012 at 1:31 PM, ramesh dbgroup...@gmail.com wrote:

  I updated the cassandra-env.sh
 $JMX_HOST=10.20.30.40
 JVM_OPTS=$JVM_OPTS -Djava.rmi.server.hostname=$JMX_HOST

 netstat -ltn shows port 7199 is listening.

 I tried both public and private IP for connecting but neither helps.

 However, I am able to connect locally from within server.

  I get this error when I remote:

  Error connection to remote JMX agent! java.rmi.ConnectException:
 Connection refused to host: 10.20.30.40; nested exception is:
 java.net.ConnectException: Connection timed out at
 sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601) at
 sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198) at
 sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184) at
 sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110) at
 javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source) at
 javax.management.remote.rmi.RMIConnector.getConnection(RMIConnector.java:2329)
 at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:279)
 at
 javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
 at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:144) at
 org.apache.cassandra.tools.NodeProbe. (NodeProbe.java:114) at
 org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:623) Caused by:
 java.net.ConnectException: Connection timed out at
 java.net.PlainSocketImpl.socketConnect(Native Method) at
 java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) at
 java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) at
 java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) at
 java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at
 java.net.Socket.connect(Socket.java:529) at
 java.net.Socket.connect(Socket.java:478) at java.net.Socket. (Socket.java:375)
 at java.net.Socket. (Socket.java:189) at
 sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
 at
 sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
 at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595) ...
 10 more

 Any help appreciated.
 Regards
 Ramesh




 --
 Tyler Hobbs
 DataStax http://datastax.com/


 It helped.
 Thanks Tyler for the info and the link to the post.

 Regards
 Ramesh


   Hello Tamar,

 In your bash file, where you ssh, pass the .pem as well:

  # start up a background ssh tunnel on the desired port
 ssh -i mypem.pem -N -f -D$proxy_port $proxy_host

 Here is the entire code


 ---
 #!/bin/bash

 function jc {
  # set

failed to restore from snapshot

2012-05-20 Thread Tamar Fraenkel
Hi!
I wanted to test restoring my Cassandra cluster from a snapshot.

I created a new ring (with 3 nodes) same as my environment.
I started recovering one node at a time and failed with the first :)

I didn't create the schema on the new node, but I did create the cluster.
I stopped Cassandra
I copied the content of the snapshot to the data directory under the
keyspace
name.
I started Cassandra
==
Nothing happened - it didn't even know about the keyspace

So I created the keyspace and all the CF using cli and restarted
Cassandra again.
==
Now it knows the schema, but it does not seem to have the data.

What am I doing wrong?

By the way I am running Cassandra 1.0.9 on DataStax AMIs

Thanks

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: failed to restore from snapshot

2012-05-20 Thread Tamar Fraenkel
Hi!
Sorry, ignore previous mail, my bad.
Copied the files to the wrong place.
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Sun, May 20, 2012 at 6:19 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!
 I wanted to test restoring my Cassandra cluster from a snapshot.

 I created a new ring (with 3 nodes) same as my environment.
 I started recovering one node at a time and failed with the first :)

 I didn't create the schema on the new node, but I did create the cluster.
 I stopped Cassandra
 I copied the content of the snapshot to the data directory under the
 keyspace
 name.
 I started Cassandra
 ==
 Nothing happened - it didn't even know about the keyspace

 So I created the keyspace and all the CF using cli and restarted
 Cassandra again.
 ==
 Now it knows the schema, but it does not seem to have the data.

 What am I doing wrong?

 By the way I am running Cassandra 1.0.9 on DataStax AMIs

 Thanks

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





restoring from snapshot - missing data

2012-05-20 Thread Tamar Fraenkel
Hi!
I am testing backup and restore.
I created the snapshot using parallel ssh on all 3 nodes.
I created a new 3-node ring setup and used the snapshots to test recovery.
Snapshot from every original node went to one of the new nodes.
When I compare the content of the data dir it seems that all files from the
original cluster exist on the backup cluster.
*But* when I do some cqlsh queries it seems as though about 1/3 of my data
is missing.

Any idea what could be the issue?
I thought that snapshot flushes all in-memory writes to disk, so it can't
be that some data was not on the original snapshot.

Help is much appreciated,
Thanks

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: restoring from snapshot - missing data

2012-05-20 Thread Tamar Fraenkel
Thanks. Just figured out yesterday that I switched the snapshots mixing the
tokens.
Will try again today.
And another question. If I am putting the snapshots on a clean ring, I need
to first create the data model?
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, May 21, 2012 at 1:44 AM, Tyler Hobbs ty...@datastax.com wrote:

 Did you use the same tokens for the nodes in both clusters?


 On Sun, May 20, 2012 at 1:25 PM, Tamar Fraenkel ta...@tok-media.comwrote:

 Hi!
 I am testing backup and restore.
 I created the restore using parallel ssh on all 3 nodes.
 I created a new 3 ring setup and used the snapshot to test recover.
 Snapshot from every original node went to one of the new nodes.
 When I compare the content of the data dir it seems that all files from
 the original cluster exist on the backup cluster.
 *But* when I do some cqlsh queries it seems as though about 1/3 of my
 data is missing.

 Any idea what could be the issue?
 I thought that snapshot flushes all in-memory writes to disk, so it
 can't be that some data was not on the original snapshot.

 Help is much appreciated,
 Thanks

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956






 --
 Tyler Hobbs
 DataStax http://datastax.com/



Matthew Dennis's Cassandra On EC2

2012-05-17 Thread Tamar Fraenkel
Hi!

I found the slides of the lecture
http://www.slideshare.net/mattdennis/cassandra-on-ec2
I wonder if there is a way to get a video of the lecture.
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: Matthew Dennis's Cassandra On EC2

2012-05-17 Thread Tamar Fraenkel
I think the topic is very interesting :)
I can't attend the SF event (as I am in Israel) and will appreciate a video!
Thanks,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, May 17, 2012 at 7:20 PM, Sasha Dolgy sdo...@gmail.com wrote:

 Although, probably inappropriate, I would be willing to contribute some
 funds for someone to recreate it with animated stick-figures.

 thanks. ;)


 On Thu, May 17, 2012 at 6:02 PM, Jeremy Hanna 
 jeremy.hanna1...@gmail.comwrote:

 Sorry - it was at the austin cassandra meetup and we didn't record the
 presentation.  I wonder if this would be a popular topic to have at the
 upcoming Cassandra SF event which would be recorded...



Re: How can I implement 'LIKE operation in SQL' on values while querying a column family in Cassandra

2012-05-15 Thread Tamar Fraenkel
Do you still need the sample code? I use Hector, well here is an example:
*This is the Column Family definition:*
(I have a composite, but if you like you can have only the UTF8Type).

CREATE COLUMN FAMILY title_indx
with comparator = 'CompositeType(UTF8Type,UUIDType)'
and default_validation_class = 'UTF8Type'
and key_validation_class = 'LongType';

*The Query:*
SliceQuery<Long, Composite, String> query =
    HFactory.createSliceQuery(CassandraHectorConn.getKeyspace(),
                              LongSerializer.get(),
                              CompositeSerializer.get(),
                              StringSerializer.get());
query.setColumnFamily("title_indx");
query.setKey(...);  // the Long row key

// Slice from the prefix up to (but not including) the prefix with its
// last character incremented, e.g. "aaa" .. "aab".
// (The original lowercased the prefix first.)
int lastCharIndex = prefix.length() - 1;
Composite start = new Composite();
start.add(prefix);
char c = prefix.charAt(lastCharIndex);
String prefixEnd = prefix.substring(0, lastCharIndex) + (++c);
Composite end = new Composite();
end.add(prefixEnd);

ColumnSliceIterator<Long, Composite, String> iterator =
    new ColumnSliceIterator<Long, Composite, String>(query, start, end, false);
while (iterator.hasNext()) {
...
}
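
One caveat to the prefix trick above: incrementing the last character gives
an exclusive upper bound, so the slice behaves like LIKE 'prefix%'. It
assumes the last character is not the maximum char value and that the
comparator (UTF8Type here) sorts those strings lexicographically.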

Cheers,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, May 15, 2012 at 1:19 PM, samal samalgo...@gmail.com wrote:

 You cannot extract via relative column value.
 It can only extract via value if it has secondary index but exact column
 value need to match.

 as tamar suggested you can put value as column name , UTF8 comparator.

 {
 'name_abhijit'='abhijit'
 'name_abhishek'='abhiskek'
 'name_atul'='atul'
 }

 here you can do slice query on column name and get desired result.

 /samal

 On Tue, May 15, 2012 at 3:29 PM, selam selam...@gmail.com wrote:

 MapReduce jobs may solve your problem for batch processing


 On Tue, May 15, 2012 at 12:49 PM, Abhijit Chanda 
 abhijit.chan...@gmail.com wrote:

 Tamar,

 Can you please illustrate little bit with some sample code. It highly
 appreciable.

 Thanks,


 On Tue, May 15, 2012 at 10:48 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 I don't think this is possible, the best you can do is prefix, if your
 order is alphabetical. For example I have a CF with comparator UTF8Type,
 and then I can do slice query and bring all columns that start with the
 prefix, and end with the prefix where you replace the last char with
 the next one in order (i.e. aaa-aab).

 Hope that helps.

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, May 15, 2012 at 7:56 AM, Abhijit Chanda 
 abhijit.chan...@gmail.com wrote:

 I don't know the exact value on a column, but I want to do partial
 matching to find all available values that match.
 I want to do a similar kind of operation to what the LIKE operator in SQL
 does.
 Any help is highly appreciated.

 --
 Abhijit Chanda
 Software Developer
 VeHere Interactive Pvt. Ltd.
 +91-974395





 --
 Abhijit Chanda
 Software Developer
 VeHere Interactive Pvt. Ltd.
 +91-974395




 --
 Saygılar  İyi Çalışmalar
 Timu EREN ( a.k.a selam )




Re: How can I implement 'LIKE operation in SQL' on values while querying a column family in Cassandra

2012-05-15 Thread Tamar Fraenkel
Actually woman ;-)

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, May 15, 2012 at 3:45 PM, Abhijit Chanda
abhijit.chan...@gmail.comwrote:

 Thanks so much Guys, specially Tamar, thank you so much man.

 Regards,
 Abhijit


 On Tue, May 15, 2012 at 4:26 PM, Tamar Fraenkel ta...@tok-media.comwrote:

 Do you still need the sample code? I use Hector, well here is an example:
 *This is the Column Family definition:*
 (I have a composite, but if you like you can have only the UTF8Type).

 CREATE COLUMN FAMILY title_indx
 with comparator = 'CompositeType(UTF8Type,UUIDType)'
 and default_validation_class = 'UTF8Type'
 and key_validation_class = 'LongType';

 *The Query:*
 SliceQuery<Long, Composite, String> query =
     HFactory.createSliceQuery(CassandraHectorConn.getKeyspace(),
                               LongSerializer.get(),
                               CompositeSerializer.get(),
                               StringSerializer.get());
 query.setColumnFamily("title_indx");
 query.setKey(...);

 int lastCharIndex = prefix.length() - 1;
 Composite start = new Composite();
 start.add(prefix);
 char c = prefix.charAt(lastCharIndex);
 String prefixEnd = prefix.substring(0, lastCharIndex) + (++c);
 Composite end = new Composite();
 end.add(prefixEnd);

 ColumnSliceIterator<Long, Composite, String> iterator =
     new ColumnSliceIterator<Long, Composite, String>(query, start, end, false);
 while (iterator.hasNext()) {
 ...
 }

 Cheers,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, May 15, 2012 at 1:19 PM, samal samalgo...@gmail.com wrote:

 You cannot extract via relative column value.
 It can only extract via value if it has secondary index but exact column
 value need to match.

 as tamar suggested you can put value as column name , UTF8 comparator.

 {
 'name_abhijit'='abhijit'
 'name_abhishek'='abhiskek'
 'name_atul'='atul'
 }

 here you can do slice query on column name and get desired result.

 /samal

 On Tue, May 15, 2012 at 3:29 PM, selam selam...@gmail.com wrote:

 MapReduce jobs may solve your problem for batch processing


 On Tue, May 15, 2012 at 12:49 PM, Abhijit Chanda 
 abhijit.chan...@gmail.com wrote:

 Tamar,

 Can you please illustrate little bit with some sample code. It highly
 appreciable.

 Thanks,


 On Tue, May 15, 2012 at 10:48 AM, Tamar Fraenkel 
 ta...@tok-media.comwrote:

  I don't think this is possible; the best you can do is a prefix match, if
  your order is alphabetical. For example, I have a CF with
  comparator UTF8Type, and then I can do a slice query that brings back all
  columns that start with the prefix, by slicing from the prefix to the
  prefix with its last char replaced by the next one in sort order
  (e.g. 'aaa' to 'aab').

 Hope that helps.

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, May 15, 2012 at 7:56 AM, Abhijit Chanda 
 abhijit.chan...@gmail.com wrote:

  I don't know the exact value of a column, but I want to do partial
  matching to find all available values that match.
  I want to do the same kind of operation that the LIKE operator in SQL does.
  Any help is highly appreciated.

 --
 Abhijit Chanda
 Software Developer
 VeHere Interactive Pvt. Ltd.
 +91-974395





 --
 Abhijit Chanda
 Software Developer
 VeHere Interactive Pvt. Ltd.
 +91-974395




 --
  Regards & best wishes
 Timu EREN ( a.k.a selam )






 --
 Abhijit Chanda
 Software Developer
 VeHere Interactive Pvt. Ltd.
 +91-974395



Counter CF and TTL

2012-05-14 Thread Tamar Fraenkel

 Hi!
 I saw that when Counter CF were first introduced there was no support for
 TTL.

   CLI does not provide TTL for counter columns.

 Hector does seem to provide an interface for setting TTL
 for HCounterColumn, but when I list the content of the CF I don't see the
 TTL as I see for regular CFs.



 So does a counter column have TTL or not?

  I don't actually have an issue with big rows, but I don't need the data
  after two weeks or so, so it seems a shame to clutter the DB with it.



 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





Re: counter CF and TTL

2012-05-14 Thread Tamar Fraenkel
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, May 14, 2012 at 3:20 PM, Viktor Jevdokimov 
viktor.jevdoki...@adform.com wrote:

  There’s no TTL on counter columns and no ready-to-use solution I know
 about.
 https://issues.apache.org/jira/browse/CASSANDRA-2774
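
  Since there is no TTL, a common workaround is a periodic cleanup job that
  deletes counter rows older than your retention window. A minimal Hector
  sketch (keyspace handle, CF name and key source are hypothetical; note
  that re-incrementing a deleted counter key can yield undefined counts, so
  only delete keys you will not reuse):

  Mutator<Composite> m = HFactory.createMutator(keyspace, CompositeSerializer.get());
  for (Composite staleKey : keysOlderThanTwoWeeks) { // hypothetical key source
  m.addDeletion(staleKey, "my_counters"); // deletes the whole counter row
  }
  m.execute();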



Best regards / Pagarbiai
 *Viktor Jevdokimov*
 Senior Developer

 Email: viktor.jevdoki...@adform.com
 Phone: +370 5 212 3063, Fax +370 5 261 0453
 J. Jasinskio 16C, LT-01112 Vilnius, Lithuania
 Follow us on Twitter: @adforminsider http://twitter.com/#!/adforminsider
 What is Adform: watch this short video http://vimeo.com/adform/display
  [image: Adform News] http://www.adform.com

 Disclaimer: The information contained in this message and attachments is
 intended solely for the attention and use of the named addressee and may be
 confidential. If you are not the intended recipient, you are reminded that
 the information remains the property of the sender. You must not use,
 disclose, distribute, copy, print or rely on this e-mail. If you have
 received this message in error, please contact the sender immediately and
 irrevocably delete this message and any copies.

 *From:* Tamar Fraenkel [mailto:ta...@tok-media.com]
 *Sent:* Sunday, May 13, 2012 18:30
 *To:* cassandra-u...@incubator.apache.org
 *Subject:* counter CF and TTL


 Hi!

 I saw that when Counter CF were first introduced there was no support for
 TTL. 

 But I see that Hector does have TTL for HCounterColumn

 So does a counter column have TTL or not?


  I don't actually have an issue with big rows, but I don't need the data
  after two weeks or so, so it seems a shame to clutter the DB with it.

 Thanks,


 

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media 

 [image: Inline image 1]


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956 



Re: How can I implement 'LIKE operation in SQL' on values while querying a column family in Cassandra

2012-05-14 Thread Tamar Fraenkel
I don't think this is possible; the best you can do is a prefix match, if
your order is alphabetical. For example, I have a CF with comparator UTF8Type,
and then I can do a slice query that brings back all columns that start with
the prefix, by slicing from the prefix to the prefix with its last char
replaced by the next one in sort order (e.g. 'aaa' to 'aab').

Hope that helps.

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, May 15, 2012 at 7:56 AM, Abhijit Chanda
abhijit.chan...@gmail.comwrote:

 I don't know the exact value of a column, but I want to do partial
 matching to find all available values that match.
 I want to do the same kind of operation that the LIKE operator in SQL does.
 Any help is highly appreciated.

 --
 Abhijit Chanda
 Software Developer
 VeHere Interactive Pvt. Ltd.
 +91-974395



Re: Counter CF and TTL

2012-05-14 Thread Tamar Fraenkel
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, May 14, 2012 at 11:43 PM, aaron morton aa...@thelastpickle.comwrote:

 Counter columns do not support a TTL.

 Cheers

   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 15/05/2012, at 12:20 AM, Tamar Fraenkel wrote:

 Hi!
 I saw that when Counter CF were first introduced there was no support for
 TTL.

CLI does not provide TTL for counter columns.

 Hector does seem to provide an interface for setting TTL
 for HCounterColumn, but when I list the content of the CF I don't see the
 TTL as I see for regular CFs.



 So does a counter column have TTL or not?

 I don't actually have an issue with big rows, but I don't need the data
 after two weeks or so, so it seems a shame to clutter the DB with it.



 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956







counter CF and TTL

2012-05-13 Thread Tamar Fraenkel
Hi!
I saw that when Counter CF were first introduced there was no support for
TTL.
But I see that Hector does have TTL for HCounterColumn
So does a counter column have TTL or not?

I don't actually have an issue with big rows, but I don't need the data after
two weeks or so, so it seems a shame to clutter the DB with it.
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: HColumn.getName() appending special characters

2012-05-02 Thread Tamar Fraenkel
Column name is a composite, so you should use
MultigetSliceQuery<String, Composite, String> and pass a CompositeSerializer.
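
A minimal sketch of reading such a CF (the keyspace handle and row key are
placeholders; each component of the composite name is deserialized
separately):

MultigetSliceQuery<String, Composite, String> q = HFactory.createMultigetSliceQuery(
keyspace, StringSerializer.get(), CompositeSerializer.get(), StringSerializer.get());
q.setColumnFamily("XYZ");
q.setKeys("row1");
q.setRange(new Composite(), new Composite(), false, 100); // empty bounds = whole row
for (Row<String, Composite, String> row : q.execute().get()) {
for (HColumn<Composite, String> col : row.getColumnSlice().getColumns()) {
String first = col.getName().get(0, StringSerializer.get());
String second = col.getName().get(1, StringSerializer.get());
System.out.println(first + ":" + second + " = " + col.getValue());
}
}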


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, May 3, 2012 at 3:57 AM, Sunit Randhawa sunit.randh...@gmail.comwrote:

 Hello,

 Code snippet below is printing out column names and values:

  MultigetSliceQuery<String, String, String> multigetSliceQuery =
  HFactory.createMultigetSliceQuery(keyspace,
  stringSerializer, stringSerializer, stringSerializer);

  for (HColumn<String, String> column : c) {
  System.out.println("Col name: " + column.getName()
  + ", Col value: " + column.getValue());
  columnData.put(column.getName(), column.getValue());
  }


  The output for the column name is printing some special characters that I
  cannot even copy and paste into this email.

 Below is the definition of that CF:
 create column family XYZ
 with comparator =
 'CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)'
 and key_validation_class = UTF8Type
 and default_validation_class = UTF8Type;


  Any suggestion as to why I am seeing these special characters in the
  column name only when I display it using the Thrift API (cassandra-cli
  does not show them)?

 Thanks,
 Sunit.


Re: Taking a Cluster Wide Snapshot

2012-05-01 Thread Tamar Fraenkel
I think it makes sense, and I would be happy if you could share the incremental
snapshot scripts.
Thanks!
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, May 1, 2012 at 11:06 AM, Shubham Srivastava 
shubham.srivast...@makemytrip.com wrote:

   On another thought, I am writing a code/script for taking a backup of all
  the nodes in a single DC, renaming the data files with some uid, and then
  merging them. The storage, however, would happen on some storage medium
  (NAS, for example) in the same DC, which would make copying the data a
  less hefty job.

   Hopefully the data of one single DC (from all the nodes in this DC)
  should give me the complete data, at least in the case where RF >= 1.

   The next improvement would be to do the same on incremental snapshots, so
  that once you have a baseline of data, all the rest would be collecting
  chunks of increments alone and merging them with the original global
  snapshot.

   I would have to do the same on each individual DC.

  Do you guys agree?

  Regards,
 Shubham


  *From:* Tamar Fraenkel [ta...@tok-media.com]
 *Sent:* Tuesday, May 01, 2012 10:50 AM

 *To:* user@cassandra.apache.org
 *Subject:* Re: Taking a Cluster Wide Snapshot

   Thanks for posting the script.
 I see that the snapshot is always a full one, and if I understand
 correctly, it replaces the old snapshot on S3. Am I right?

  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Thu, Apr 26, 2012 at 9:39 AM, Deno Vichas d...@syncopated.net wrote:

  On 4/25/2012 11:34 PM, Shubham Srivastava wrote:

 Whats the best way(or the only way) to take a cluster wide backup of
 Cassandra. Cant find much of the documentation on the same.

  I am using a MultiDC setup with cassandra 0.8.6.


  Regards,
 Shubham

    here's how I'm doing it in AWS land, using the DataStax AMI via a nightly
  cron job. You'll need pssh and s3cmd:


 #!/bin/bash
 cd /home/ec2-user/ops

 echo making snapshots
 pssh -h prod-cassandra-nodes.txt -l ubuntu -P 'nodetool -h localhost -p
 7199 clearsnapshot stocktouch'
 pssh -h prod-cassandra-nodes.txt -l ubuntu -P 'nodetool -h localhost -p
 7199 snapshot stocktouch'

 echo making tar balls
 pssh -h prod-cassandra-nodes.txt -l ubuntu -P -t 0 'rm
 `hostname`-cassandra-snapshot.tar.gz'
 pssh -h prod-cassandra-nodes.txt -l ubuntu -P -t 0 'tar -zcvf
 `hostname`-cassandra-snapshot.tar.gz
 /raid0/cassandra/data/stocktouch/snapshots'

  echo copying tar balls
 pslurp -h prod-cassandra-nodes.txt -l ubuntu
 /home/ubuntu/*cassandra-snapshot.tar.gz .

  echo tarring tar balls
 tar -cvf cassandra-snapshots-all-nodes.tar 10*

 echo pushing to S3
 ../s3cmd-1.1.0-beta3/s3cmd put cassandra-snapshots-all-nodes.tar
 s3://stocktouch-backups

 echo DONE!




Re: Taking a Cluster Wide Snapshot

2012-04-30 Thread Tamar Fraenkel
Thanks for posting the script.
I see that the snapshot is always a full one, and if I understand
correctly, it replaces the old snapshot on S3. Am I right?

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Apr 26, 2012 at 9:39 AM, Deno Vichas d...@syncopated.net wrote:

  On 4/25/2012 11:34 PM, Shubham Srivastava wrote:

 Whats the best way(or the only way) to take a cluster wide backup of
 Cassandra. Cant find much of the documentation on the same.

  I am using a MultiDC setup with cassandra 0.8.6.


  Regards,
 Shubham

   here's how I'm doing it in AWS land, using the DataStax AMI via a nightly
  cron job. You'll need pssh and s3cmd:


 #!/bin/bash
 cd /home/ec2-user/ops

 echo making snapshots
 pssh -h prod-cassandra-nodes.txt -l ubuntu -P 'nodetool -h localhost -p
 7199 clearsnapshot stocktouch'
 pssh -h prod-cassandra-nodes.txt -l ubuntu -P 'nodetool -h localhost -p
 7199 snapshot stocktouch'

 echo making tar balls
 pssh -h prod-cassandra-nodes.txt -l ubuntu -P -t 0 'rm
 `hostname`-cassandra-snapshot.tar.gz'
 pssh -h prod-cassandra-nodes.txt -l ubuntu -P -t 0 'tar -zcvf
 `hostname`-cassandra-snapshot.tar.gz
 /raid0/cassandra/data/stocktouch/snapshots'

  echo copying tar balls
 pslurp -h prod-cassandra-nodes.txt -l ubuntu
 /home/ubuntu/*cassandra-snapshot.tar.gz .

  echo tarring tar balls
 tar -cvf cassandra-snapshots-all-nodes.tar 10*

 echo pushing to S3
 ../s3cmd-1.1.0-beta3/s3cmd put cassandra-snapshots-all-nodes.tar
 s3://stocktouch-backups

 echo DONE!



Re: Cassandra backup queston regarding commitlogs

2012-04-29 Thread Tamar Fraenkel
I want to add a couple of questions regarding incremental backups:
1. If I already have a Cassandra cluster running, would changing the
incremental_backups parameter in the cassandra.yaml of each node, and then
restarting it, do the trick?
2. Assuming I am creating a daily snapshot, what is the gain from setting
incremental backup to true?

Thanks,
Tamar

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Sat, Apr 28, 2012 at 4:04 PM, Roshan codeva...@gmail.com wrote:

 Hi

 Currently I am taking daily snapshot on my keyspace in production and
 already enable the incremental backups as well.

 According to the documentation, the incremental backup option will create
 an
 hard-link to the backup folder when new sstable is flushed. Snapshot will
 copy all the data/index/etc. files to a new folder.

 *Question:*
  What will happen (with incremental backup enabled) when Cassandra crashes
  (for any reason) before flushing the data as an SSTable, i.e. the inserted
  data is still in the commitlog? In this case, how can I backup/restore the data?

  Do I need to back up the commitlogs as well, and replay them during server
  start to restore the data in the commitlog files?

 Thanks.



 --
 View this message in context:
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassandra-backup-queston-regarding-commitlogs-tp7508823.html
 Sent from the cassandra-u...@incubator.apache.org mailing list archive at
 Nabble.com.


incremental_backups

2012-04-29 Thread Tamar Fraenkel
Hi!
I wonder what the advantages are of doing incremental snapshots over
non-incremental ones.
Are the snapshots smaller in size? Are there any other implications?
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: Long type column names in reverse order

2012-04-21 Thread Tamar Fraenkel
Maybe I don't get what you are trying to do, but using Cassandra and
Hector, what I do is:
I have the following column family definition
CREATE COLUMN FAMILY CF_NAME
with comparator = 'CompositeType(LongType(reversed=true),UUIDType)'
and default_validation_class = 'UTF8Type'
and key_validation_class = 'UUIDType';

Then to iterate it I use ColumnSliceIterator. You can also leave out the
reversed=true and use ColumnSliceIterator to iterate backwards.
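
A minimal sketch of the iteration against the CF defined above (the
keyspace handle and row key are placeholders):

SliceQuery<UUID, Composite, String> q = HFactory.createSliceQuery(
keyspace, UUIDSerializer.get(), CompositeSerializer.get(), StringSerializer.get());
q.setColumnFamily("CF_NAME");
q.setKey(rowKey);
// With LongType(reversed=true) in the comparator, a plain forward
// iteration already returns the largest timestamps first.
ColumnSliceIterator<UUID, Composite, String> it =
new ColumnSliceIterator<UUID, Composite, String>(q, new Composite(), new Composite(), false);
while (it.hasNext()) {
HColumn<Composite, String> col = it.next();
long ts = col.getName().get(0, LongSerializer.get()); // first composite component
}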



*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Fri, Apr 20, 2012 at 10:47 PM, Tarun Gupta tarun.gu...@technogica.comwrote:

  Thanks, this post helped a lot. I discovered that the built-in comparators
  have a static instance called *reverseComparator*. My exact requirement
  was to create an API that allows creating a column family with the required
  parameters; one such parameter was a flag that indicates the column order.
  I am using the Hector API for this purpose. The way I finally solved this
  is as follows:

  public class ReverseColumnComparator extends AbstractType<ByteBuffer> {

  private static Comparator<ByteBuffer> otherInstance =
  BytesType.instance.reverseComparator;

  public static final ReverseColumnComparator instance = new
  ReverseColumnComparator();

  @Override
  public int compare(ByteBuffer o1, ByteBuffer o2) {
  return otherInstance.compare(o1, o2);
  }
  @Override
  public ByteBuffer compose(ByteBuffer arg0) {
  return BytesType.instance.compose(arg0);
  }
  @Override
  public ByteBuffer decompose(ByteBuffer arg0) {
  return BytesType.instance.decompose(arg0);
  }
  @Override
  public String getString(ByteBuffer arg0) {
  return BytesType.instance.getString(arg0);
  }
  @Override
  public void validate(ByteBuffer arg0) throws MarshalException {
  BytesType.instance.validate(arg0);
  }
  }

 Regards,
 Tarun


 On Fri, Apr 20, 2012 at 11:46 PM, Edward Capriolo 
 edlinuxg...@gmail.comwrote:

  I think you can drop the custom comparator since that feature already exists.

 http://thelastpickle.com/2011/10/03/Reverse-Comparators/


 On Fri, Apr 20, 2012 at 12:57 PM, Tarun Gupta
 tarun.gu...@technogica.com wrote:
  Hi,
 
   My requirement is to retrieve column values, sorted by column names in
   reverse order (column names are 'long' type). The way I am trying to
  implement this is by using a custom comparator. I have written the
 custom
  comparator by using 'org.apache.cassandra.db.marshal.BytesType' and
 altering
  the compare() method. While inserting values it works fine but while
  retrieving the values I am getting
  ColumnSerializer$CorruptColumnException.
 
  I've attached the Comparator class. Please suggest what should I change
 to
  make it work.
 
  Regards
  Tarun




Re: exceptions after upgrading from 1.0.7 to 1.0.9

2012-04-19 Thread Tamar Fraenkel
Thanks.
This was the one I followed :) Wonder if there is something more detailed...

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Apr 19, 2012 at 1:06 PM, aaron morton aa...@thelastpickle.comwrote:

 try this
 http://www.datastax.com/docs/1.0/install/upgrading#upgrading-between-minor-releases-of-cassandra-1-0-x

 Cheers


   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 18/04/2012, at 3:02 AM, Tamar Fraenkel wrote:

 Thanks!!!
 Two simple actions

1. sudo apt-get install python-setuptools
2. sudo easy_install cql

 And it did the trick!

 But just to be on the safe side, before I move to upgrade our staging
 environment, does anyone know a detailed description of how to upgrade
 cassandra installed from tar.gz? or how to upgrade Amazon EC2 datastax
 AMI?

 Thanks,
 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

  [image: Inline image 1]


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Apr 17, 2012 at 4:56 PM, Watanabe Maki watanabe.m...@gmail.comwrote:

  You need to install the cql driver for Python, as it says:
  % easy_install cql
  If you don't have easy_install, you need to install it first. You will be
  able to find easy_install by searching for "easy_install python" on Google.

 maki


 On 2012/04/17, at 20:18, Tamar Fraenkel ta...@tok-media.com wrote:

 Thanks for answering!
 I unzipped the cassandra taken from
  http://off.co.il/apache/cassandra/1.0.9/apache-cassandra-1.0.9-bin.tar.gz.
 I changed the cassandra init script to run from the new installation
 (CASSANDRA_HOME).

 I retried it, and found out that the reason Cassandra didn't start the
 previous time was a typo in the init script.
 So now Cassandra 1.0.9 is up, but cqlsh still give me the following
 error, even when I make sure it is started from the 1.0.9 bin (Cassandra-
 cli works well):
 Python CQL driver not installed, or not on PYTHONPATH.
 You might try easy_install cql.

 Python: /usr/bin/python
 Module load path: ['/usr/share/apache-cassandra-1.0.9/bin/../pylib',
 '/usr/share/apache-cassandra-1.0.9/bin', '/usr/lib/python2.7',
 '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk',
 '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload',
 '/usr/local/lib/python2.7/dist-packages',
 '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL',
 '/usr/lib/python2.7/dist-packages/gst-0.10',
 '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7',
 '/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
 '/usr/lib/python2.7/dist-packages/ubuntuone-client',
 '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel',
 '/usr/lib/python2.7/dist-packages/ubuntuone-couch',
 '/usr/lib/python2.7/dist-packages/ubuntuone-installer',
 '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol']

 Error: No module named cql


 Also, do you have a good step by step upgrade guide for tar.gz?

 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Apr 17, 2012 at 12:15 PM, aaron morton 
 aa...@thelastpickle.comwrote:

 1. No cql is installed now. Do I need to download and install
 separately?

 If you have just unzipped the bin distrobution nothing will be
 installed, it will only be in the unzipped file locations. cqlsh is in
 the bin/ directory.

 As well as the datastax packages there are Debian packages from Apache,
 see http://cassandra.apache.org/download/

 2. Cassandra won't start, I got the following exception below.


 This message

  INFO [RMI TCP Connection(2)-127.0.0.1] 2012-04-16 00:25:37,980
 StorageService.java (line 667) DRAINED

  Says that the server was drained via the JMX / nodetool interface. The
  RejectedExecutionErrors that followed are a result of the server shutting
  down.

 3. While at it, does someone know if  Hector 1.0.3 supports Cassandra
  1.0.9?

 It should do, cassandra 1.0.9 is compatible with previous 1.0.X
 releases.

 Cheers


   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 16/04/2012, at 7:54 PM, Tamar Fraenkel wrote:

 Hi!
 I had datastax 1.0.7 installed on ubuntu.
 I downloaded
  http://off.co.il/apache/cassandra/1.0.9/apache-cassandra-1.0.9-bin.tar.gz
  and unzipped it. I left both versions installed, but changed my service
 script to start the 1.0.9.
 Two problems:

 1. No cql is installed now. Do I need to download and install
 separately?
 2. Cassandra won't start, I got the following exception below.
 3. While at it, does someone know if  Hector 1.0.3 supports Cassandra
  1.0.9?

 Thanks,
 Tamar

  INFO [FlushWriter:2] 2012-04-16 00:25:37,879 Memtable.java (line 283)
 Completed flushing /var/lib/cassandra
 /data/OpsCenter

Re: Counter column family

2012-04-18 Thread Tamar Fraenkel
My problem was the result of Hector bug, see
http://groups.google.com/group/hector-users/browse_thread/thread/8359538ed387564e

So please ignore question,
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Apr 17, 2012 at 6:59 PM, Tamar Fraenkel ta...@tok-media.com wrote:

 Hi!
  I want to understand how incrementing a counter works.


- I have a 3 node ring,
- I use FailoverPolicy.FAIL_FAST,
- RF is 2,

 I have the following counter column family
 ColumnFamily: tk_counters
   Key Validation Class: org.apache.cassandra.db.marshal.CompositeType(
 org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.
 UUIDType)
   Default column value validator: org.apache.cassandra.db.marshal.
 CounterColumnType
   Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
   Row cache size / save period in seconds / keys to save : 0.0/0/all
   Row Cache Provider: org.apache.cassandra.cache.
 SerializingCacheProvider
   Key cache size / save period in seconds: 0.0/14400
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 1.0
   Replicate on write: true
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy:
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

 My CL for this column family is Write=2, Read=1.

  When I increment a counter (using a Hector mutator) and execute returns
  without errors, what is the status of the nodes at that stage?
  Can execute return before the nodes are really updated, so that a read
  done immediately after the increment will still see the previous
  value?
 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





Re: exceptions after upgrading from 1.0.7 to 1.0.9

2012-04-17 Thread Tamar Fraenkel
Thanks!!!
Two simple actions

   1. sudo apt-get install python-setuptools
   2. sudo easy_install cql

And it did the trick!

But just to be on the safe side, before I move to upgrade our staging
environment, does anyone know a detailed description of how to upgrade
cassandra installed from tar.gz? or how to upgrade Amazon EC2 datastax AMI?

Thanks,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Apr 17, 2012 at 4:56 PM, Watanabe Maki watanabe.m...@gmail.comwrote:

  You need to install the cql driver for Python, as it says:
  % easy_install cql
  If you don't have easy_install, you need to install it first. You will be
  able to find easy_install by searching for "easy_install python" on Google.

 maki


 On 2012/04/17, at 20:18, Tamar Fraenkel ta...@tok-media.com wrote:

 Thanks for answering!
 I unzipped the cassandra taken from
 http://off.co.il/apache/cassandra/1.0.9/apache-cassandra-1.0.9-bin.tar.gz.
 I changed the cassandra init script to run from the new installation
 (CASSANDRA_HOME).

 I retried it, and found out that the reason Cassandra didn't start the
 previous time was a typo in the init script.
 So now Cassandra 1.0.9 is up, but cqlsh still give me the following
 error, even when I make sure it is started from the 1.0.9 bin (Cassandra-
 cli works well):
 Python CQL driver not installed, or not on PYTHONPATH.
 You might try easy_install cql.

 Python: /usr/bin/python
 Module load path: ['/usr/share/apache-cassandra-1.0.9/bin/../pylib',
 '/usr/share/apache-cassandra-1.0.9/bin', '/usr/lib/python2.7',
 '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk',
 '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload',
 '/usr/local/lib/python2.7/dist-packages',
 '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL',
 '/usr/lib/python2.7/dist-packages/gst-0.10',
 '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7',
 '/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
 '/usr/lib/python2.7/dist-packages/ubuntuone-client',
 '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel',
 '/usr/lib/python2.7/dist-packages/ubuntuone-couch',
 '/usr/lib/python2.7/dist-packages/ubuntuone-installer',
 '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol']

 Error: No module named cql


 Also, do you have a good step by step upgrade guide for tar.gz?

 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Apr 17, 2012 at 12:15 PM, aaron morton aa...@thelastpickle.comwrote:

 1. No cql is installed now. Do I need to download and install separately?

 If you have just unzipped the bin distrobution nothing will be
 installed, it will only be in the unzipped file locations. cqlsh is in
 the bin/ directory.

 As well as the datastax packages there are Debian packages from Apache,
 see http://cassandra.apache.org/download/

 2. Cassandra won't start, I got the following exception below.


 This message

  INFO [RMI TCP Connection(2)-127.0.0.1] 2012-04-16 00:25:37,980
 StorageService.java (line 667) DRAINED

  Says that the server was drained via the JMX / nodetool interface. The
  RejectedExecutionErrors that followed are a result of the server shutting
  down.

 3. While at it, does someone know if  Hector 1.0.3 supports Cassandra
  1.0.9?

 It should do, cassandra 1.0.9 is compatible with previous 1.0.X
 releases.

 Cheers


   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 16/04/2012, at 7:54 PM, Tamar Fraenkel wrote:

 Hi!
 I had datastax 1.0.7 installed on ubuntu.
 I downloaded
  http://off.co.il/apache/cassandra/1.0.9/apache-cassandra-1.0.9-bin.tar.gz
  and unzipped it. I left both versions installed, but changed my service
 script to start the 1.0.9.
 Two problems:

 1. No cql is installed now. Do I need to download and install separately?
 2. Cassandra won't start, I got the following exception below.
 3. While at it, does someone know if  Hector 1.0.3 supports Cassandra
  1.0.9?

 Thanks,
 Tamar

  INFO [FlushWriter:2] 2012-04-16 00:25:37,879 Memtable.java (line 283)
 Completed flushing /var/lib/cassandra
 /data/OpsCenter/events_timeline-hc-28-Data.db (79 bytes)
  INFO [RMI TCP Connection(2)-127.0.0.1] 2012-04-16 00:25:37,980
 StorageService.java (line 667) DRAINED
 ERROR [CompactionExecutor:3] 2012-04-16 00:25:38,021
 AbstractCassandraDaemon.java (line 139) Fatal exception in thread Thread[
 CompactionExecutor:3,1,RMI Runtime]
 java.util.concurrent.RejectedExecutionException
 at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.
 rejectedExecution(ThreadPoolExecutor.java:1768)
 at java.util.concurrent.ThreadPoolExecutor.reject(
 ThreadPoolExecutor.java:767

Counter column family

2012-04-17 Thread Tamar Fraenkel
Hi!
I want to understand how incrementing a counter works.


   - I have a 3 node ring,
   - I use FailoverPolicy.FAIL_FAST,
   - RF is 2,

I have the following counter column family
ColumnFamily: tk_counters
  Key Validation Class: org.apache.cassandra.db.marshal.CompositeType(
org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.
UUIDType)
  Default column value validator: org.apache.cassandra.db.marshal.
CounterColumnType
  Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
  Row cache size / save period in seconds / keys to save : 0.0/0/all
  Row Cache Provider: org.apache.cassandra.cache.
SerializingCacheProvider
  Key cache size / save period in seconds: 0.0/14400
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 1.0
  Replicate on write: true
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy:
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

My CL for this column family is Write=2, Read=1.

When I increment a counter (using a Hector mutator) and execute returns
without errors, what is the status of the nodes at that stage?
Can execute return before the nodes are really updated, so that a read
done immediately after the increment will still see the previous
value?
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: cql shell error

2012-04-16 Thread Tamar Fraenkel
Thanks!
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Sun, Apr 15, 2012 at 9:05 PM, Janne Jalkanen janne.jalka...@ecyrd.comwrote:


  The "Resolution" line says "Fixed", and the "Fix Version" line says
  "1.0.9, 1.1.0". So upgrade to 1.0.9 to get a fix for this particular bug :-)

 (Luckily, 1.0.9 has been released a few days ago, so you can just download
 and upgrade.)

 /Janne

 On Apr 15, 2012, at 20:31 , Tamar Fraenkel wrote:

 I apologize for what must be a dumb question, but I see that there are
 patches etc. What do I need to do in order to get the fix? I am running
 the latest Cassandra, 1.0.8.

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Sun, Apr 15, 2012 at 7:46 PM, Janne Jalkanen 
 janne.jalka...@ecyrd.comwrote:


 You might have hit this bug:
 https://issues.apache.org/jira/browse/CASSANDRA-4003

 /Janne

 On Apr 15, 2012, at 17:21 , Tamar Fraenkel wrote:

 Hi!
 I have an error when I try to read column value using cql but I can read
 it when I use cli.

 When I read in cli I get:
  get cf['a52efb7a-b2ea-417b-b54a-9d6a2ebf6d71']['i:nwtp_name']=
  = (column=i:nwtp_name, value=Günter Grass's Israel poem provokes
 outrage, timestamp=1333816116526001)

 When I try to read with cqlsh I get:
 'ascii' codec can't encode character u'\u2019' in position 5: ordinal
 not in range(128)

 Do I need to save only ascii chars, or can I read it somehow using cql?

 Thanks


 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956








cql shell error

2012-04-15 Thread Tamar Fraenkel
Hi!
I have an error when I try to read column value using cql but I can read it
when I use cli.

When I read in cli I get:
 get cf['a52efb7a-b2ea-417b-b54a-9d6a2ebf6d71']['i:nwtp_name']=
= (column=i:nwtp_name, value=Günter Grass's Israel poem provokes outrage,
timestamp=1333816116526001)

When I try to read with cqlsh I get:
'ascii' codec can't encode character u'\u2019' in position 5: ordinal not
in range(128)

Do I need to save only ascii chars, or can I read it somehow using cql?

Thanks


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: cql shell error

2012-04-15 Thread Tamar Fraenkel
I apologize for what must be a dumb question, but I see that there are
patches etc. What do I need to do in order to get the fix? I am running
the latest Cassandra, 1.0.8.

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Sun, Apr 15, 2012 at 7:46 PM, Janne Jalkanen janne.jalka...@ecyrd.comwrote:


 You might have hit this bug:
 https://issues.apache.org/jira/browse/CASSANDRA-4003

 /Janne

 On Apr 15, 2012, at 17:21 , Tamar Fraenkel wrote:

 Hi!
 I have an error when I try to read column value using cql but I can read
 it when I use cli.

 When I read in cli I get:
  get cf['a52efb7a-b2ea-417b-b54a-9d6a2ebf6d71']['i:nwtp_name']=
  = (column=i:nwtp_name, value=Günter Grass's Israel poem provokes
 outrage, timestamp=1333816116526001)

 When I try to read with cqlsh I get:
 'ascii' codec can't encode character u'\u2019' in position 5: ordinal not
 in range(128)

 Do I need to save only ascii chars, or can I read it somehow using cql?

 Thanks


 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956






Re: data size difference between supercolumn and regular column

2012-04-03 Thread Tamar Fraenkel
Do you have a good reference for maintenance scripts for Cassandra ring?
Thanks,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Apr 3, 2012 at 4:37 AM, aaron morton aa...@thelastpickle.comwrote:

 If you have a workload with overwrites you will end up with some data
 needing compaction. Running a nightly manual compaction would remove this,
 but it will also soak up some IO so it may not be the best solution.

 I do not know if Leveled compaction would result in a smaller disk load
 for the same workload.

 I agree with other people, turn on compaction.

 Cheers

   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 3/04/2012, at 9:19 AM, Yiming Sun wrote:

 Yup Jeremiah, I learned a hard lesson on how cassandra behaves when it
 runs out of disk space :-S.I didn't try the compression, but when it
 ran out of disk space, or near running out, compaction would fail because
 it needs space to create some tmp data files.

  I shall get a tattoo that says keep it around 50% -- this is a valuable tip.

 -- Y.

 On Sun, Apr 1, 2012 at 11:25 PM, Jeremiah Jordan 
 jeremiah.jor...@morningstar.com wrote:

  Is that 80% with compression?  If not, the first thing to do is turn on
 compression.  Cassandra doesn't behave well when it runs out of disk space.
  You really want to try and stay around 50%,  60-70% works, but only if it
 is spread across multiple column families, and even then you can run into
 issues when doing repairs.

  -Jeremiah



  On Apr 1, 2012, at 9:44 PM, Yiming Sun wrote:

  Thanks Aaron.  Well, I guess it is possible the data files from
  supercolumns could've been reduced in size after compaction.

  This bring yet another question.  Say I am on a shoestring budget and
 can only put together a cluster with very limited storage space.  The first
 iteration of pushing data into cassandra would drive the disk usage up into
 the 80% range.  As time goes by, there will be updates to the data, and
 many columns will be overwritten.  If I just push the updates in, the disks
 will run out of space on all of the cluster nodes.  What would be the best
 way to handle such a situation if I cannot to buy larger disks? Do I need
 to delete the rows/columns that are going to be updated, do a compaction,
 and then insert the updates?  Or is there a better way?  Thanks

  -- Y.

 On Sat, Mar 31, 2012 at 3:28 AM, aaron morton aa...@thelastpickle.comwrote:

   does cassandra 1.0 perform some default compression?

  No.

  The on disk size depends to some degree on the work load.

  If there are a lot of overwrites or deleted you may have rows/columns
 that need to be compacted. You may have some big old SSTables that have not
 been compacted for a while.

  There is some overhead involved in the super columns: the super col
 name, length of the name and the number of columns.

  Cheers

 -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

  On 29/03/2012, at 9:47 AM, Yiming Sun wrote:

 Actually, after I read an article on cassandra 1.0 compression just now
 (
 http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-compression),
 I am more puzzled.  In our schema, we didn't specify any compression
 options -- does cassandra 1.0 perform some default compression? or is the
 data reduction purely because of the schema change?  Thanks.

  -- Y.

 On Wed, Mar 28, 2012 at 4:40 PM, Yiming Sun yiming@gmail.comwrote:

 Hi,

  We are trying to estimate the amount of storage we need for a
 production cassandra cluster.  While I was doing the calculation, I noticed
 a very dramatic difference in terms of storage space used by cassandra data
 files.

  Our previous setup consists of a single-node cassandra 0.8.x with no
 replication, and the data is stored using supercolumns, and the data files
 total about 534GB on disk.

  A few weeks ago, I put together a cluster consisting of 3 nodes
 running cassandra 1.0 with replication factor of 2, and the data is
 flattened out and stored using regular columns.  And the aggregated data
 file size is only 488GB (would be 244GB if no replication).

  This is a very dramatic reduction in terms of storage needs, and is
 certainly good news in terms of how much storage we need to provision.
  However, because of the dramatic reduction, I also would like to make sure
 it is absolutely correct before submitting it - and also get a sense of why
 there was such a difference. -- I know cassandra 1.0 does data compression,
 but does the schema change from supercolumn to regular column also help
 reduce storage usage?  Thanks.

  -- Y.









Re: counter column family

2012-04-03 Thread Tamar Fraenkel
Hi!
So, if I am using Hector, I need to do:
cassandraHostConfigurator.setRetryDownedHosts(false)?

How will this affect my application generally?

Thanks

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Mar 27, 2012 at 4:25 PM, R. Verlangen ro...@us2.nl wrote:

  You should use a connection pool without retries, to prevent a single
  increment of +1 from having a result of e.g. +3.


 2012/3/27 Rishabh Agrawal rishabh.agra...@impetus.co.in

   You can even define how much you want to increment by. But let me just warn
  you: as far as I know, it has consistency issues.



 *From:* puneet loya [mailto:puneetl...@gmail.com]
 *Sent:* Tuesday, March 27, 2012 5:59 PM

 *To:* user@cassandra.apache.org
 *Subject:* Re: counter column family



 thanxx a ton :) :)



  the counter column family works like 'auto increment' in other
  databases, right?



 I mean we have a column of  type integer which increments with every
 insert.



  Am I going the right way?



 please reply :)

 On Tue, Mar 27, 2012 at 5:50 PM, R. Verlangen ro...@us2.nl wrote:

 *create column family MyCounterColumnFamily with
 default_validation_class=CounterColumnType and
 key_validation_class=UTF8Type and comparator=UTF8Type;*



 There you go! Keys must be utf8, as well as the column names. Of course
 you can change those validators.
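
 A minimal Hector sketch against that CF (the keyspace handle is a
 placeholder):

 Mutator<String> m = HFactory.createMutator(keyspace, StringSerializer.get());
 m.incrementCounter("row1", "MyCounterColumnFamily", "pageviews", 1L);
 m.execute();

 CounterQuery<String, String> q = HFactory.createCounterColumnQuery(
 keyspace, StringSerializer.get(), StringSerializer.get());
 q.setColumnFamily("MyCounterColumnFamily");
 q.setKey("row1");
 q.setName("pageviews");
 long count = q.execute().get().getValue();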



 Cheers!



 2012/3/27 puneet loya puneetl...@gmail.com

 Can u give an example of create column family with counter column in it.





 Please reply





 Regards,



 Puneet Loya





 --
 With kind regards,



 Robin Verlangen

 www.robinverlangen.nl





 --

 Impetus to sponsor and exhibit at Structure Data 2012, NY; Mar 21-22.
 Know more about our Big Data quick-start program at the event.

 New Impetus webcast ‘Cloud-enabled Performance Testing vis-à-vis
 On-premise’ available at http://bit.ly/z6zT4L.


 NOTE: This message may contain information that is confidential,
 proprietary, privileged or otherwise protected by law. The message is
 intended solely for the named addressee. If received in error, please
 destroy and notify the sender. Any use of this email is prohibited when
 received in error. Impetus does not represent, warrant and/or guarantee,
 that the integrity of this communication has been maintained nor that the
 communication is free of errors, virus, interception or interference.




 --
 With kind regards,

 Robin Verlangen
 www.robinverlangen.nl



Counter question

2012-03-29 Thread Tamar Fraenkel
Hi!
Asking again, as I didn't get responses :)

I have a ring with 3 nodes and replication factor of 2.
I have counter cf with the following definition:

CREATE COLUMN FAMILY tk_counters
with comparator = 'UTF8Type'
and default_validation_class = 'CounterColumnType'
and key_validation_class = 'CompositeType(UTF8Type,UUIDType)'
and replicate_on_write = true;

In my code (Java, Hector), I increment a counter and then read it.
Is it possible that the value read will be the value before increment?
If yes, how can I ensure it does not happen? All my reads and writes are
done with consistency level one.
If this is consistency issue, can I do only the actions on tk_counters
column family with a higher consistency level?
What does replicate_on_write mean? I thought this should help, but maybe,
even with replicate-on-write, my read happens before replication finishes
and returns the value from a node that has not been updated yet.

My increment code is:
Mutator<Composite> mutator =
HFactory.createMutator(keyspace,
CompositeSerializer.get());
mutator.incrementCounter(key, "tk_counters", columnName, inc);
mutator.execute();

My read counter code is:
CounterQuery<Composite, String> query =
createCounterColumnQuery(keyspace,
CompositeSerializer.get(), StringSerializer.get());
query.setColumnFamily("tk_counters");
query.setKey(key);
query.setName(columnName);
QueryResult<HCounterColumn<String>> r = query.execute();
return r.get().getValue();

Thanks,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: Counter question

2012-03-29 Thread Tamar Fraenkel
Can this be set on a CF basis?
Only this CF needs a higher consistency level.
Thanks,
Tamar
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Mar 29, 2012 at 10:44 AM, Shimi Kiviti shim...@gmail.com wrote:

 Like everything else in Cassandra, If you need full consistency you need
 to make sure that you have the right combination of (write consistency
 level) + (read consistency level)

  if
  W = write consistency level
  R = read consistency level
  N = replication factor
  then, for full consistency,
  W + R > N
  (e.g. with N = 2, use W = 2 / R = 1 or W = 1 / R = 2; W = 1 / R = 1 can
  return stale values)

 Shimi


 On Thu, Mar 29, 2012 at 10:09 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 Hi!
 Asking again, as I didn't get responses :)

 I have a ring with 3 nodes and replication factor of 2.
 I have counter cf with the following definition:

 CREATE COLUMN FAMILY tk_counters
 with comparator = 'UTF8Type'
 and default_validation_class = 'CounterColumnType'
 and key_validation_class = 'CompositeType(UTF8Type,UUIDType)'
 and replicate_on_write = true;

 In my code (Java, Hector), I increment a counter and then read it.
 Is it possible that the value read will be the value before increment?
  If yes, how can I ensure it does not happen? All my reads and writes are
 done with consistency level one.
 If this is consistency issue, can I do only the actions on tk_counters
 column family with a higher consistency level?
  What does replicate_on_write mean? I thought this should help, but maybe,
  even with replicate-on-write, my read happens before replication
  finishes and returns the value from a node that has not been updated yet.

 My increment code is:
 Mutator<Composite> mutator =
 HFactory.createMutator(keyspace,
 CompositeSerializer.get());
 mutator.incrementCounter(key, "tk_counters", columnName, inc);
 mutator.execute();

 My read counter code is:
 CounterQuery<Composite, String> query =
 createCounterColumnQuery(keyspace,
 CompositeSerializer.get(), StringSerializer.get());
 query.setColumnFamily("tk_counters");
 query.setKey(key);
 query.setName(columnName);
 QueryResult<HCounterColumn<String>> r = query.execute();
 return r.get().getValue();

 Thanks,
 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956






Re: Counter question

2012-03-29 Thread Tamar Fraenkel
Thanks! will do.
Tamar
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Thu, Mar 29, 2012 at 12:11 PM, Shimi Kiviti shim...@gmail.com wrote:

 You set the consistency with every request.
 Usually a client library will let you set a default one for all write/read
 requests.
 I don't know if Hector lets you set a default consistency level per CF.
 Take a look at the Hector docs or ask it in the Hector mailing list.
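
 For what it's worth, Hector's ConfigurableConsistencyLevel does appear to
 support per-CF levels. A minimal sketch (the cluster handle and names are
 placeholders; with RF = 2, QUORUM writes plus ONE reads satisfy W + R > N):

 ConfigurableConsistencyLevel policy = new ConfigurableConsistencyLevel();
 policy.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
 policy.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);
 Map<String, HConsistencyLevel> writeCfLevels = new HashMap<String, HConsistencyLevel>();
 writeCfLevels.put("tk_counters", HConsistencyLevel.QUORUM);
 policy.setWriteCfConsistencyLevels(writeCfLevels);
 Keyspace keyspace = HFactory.createKeyspace("tok", cluster, policy);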

 Shimi


 On Thu, Mar 29, 2012 at 11:47 AM, Tamar Fraenkel ta...@tok-media.comwrote:

  Can this be set on a CF basis?
  Only this CF needs a higher consistency level.
 Thanks,
 Tamar

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Thu, Mar 29, 2012 at 10:44 AM, Shimi Kiviti shim...@gmail.com wrote:

 Like everything else in Cassandra, If you need full consistency you need
 to make sure that you have the right combination of (write consistency
 level) + (read consistency level)

 if
  W = write consistency level
 R = read consistency level
 N = replication factor
 then
  W + R > N

 Shimi


 On Thu, Mar 29, 2012 at 10:09 AM, Tamar Fraenkel ta...@tok-media.comwrote:

 Hi!
 Asking again, as I didn't get responses :)

 I have a ring with 3 nodes and replication factor of 2.
 I have counter cf with the following definition:

 CREATE COLUMN FAMILY tk_counters
 with comparator = 'UTF8Type'
 and default_validation_class = 'CounterColumnType'
 and key_validation_class = 'CompositeType(UTF8Type,UUIDType)'
 and replicate_on_write = true;

 In my code (Java, Hector), I increment a counter and then read it.
 Is it possible that the value read will be the value before increment?
  If yes, how can I ensure it does not happen? All my reads and writes
 are done with consistency level one.
 If this is consistency issue, can I do only the actions on tk_counters
 column family with a higher consistency level?
  What does replicate_on_write mean? I thought this should help, but
  maybe, even with replicate-on-write, my read happens before
  replication finishes and returns the value from a node that has not
  been updated yet.

 My increment code is:
 Mutator<Composite> mutator =
 HFactory.createMutator(keyspace,
 CompositeSerializer.get());
 mutator.incrementCounter(key, "tk_counters", columnName, inc);
 mutator.execute();

 My read counter code is:
 CounterQuery<Composite, String> query =
 createCounterColumnQuery(keyspace,
 CompositeSerializer.get(), StringSerializer.get());
 query.setColumnFamily("tk_counters");
 query.setKey(key);
 query.setName(columnName);
 QueryResult<HCounterColumn<String>> r = query.execute();
 return r.get().getValue();

 Thanks,
 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956








Re: unbalanced ring

2012-03-27 Thread Tamar Fraenkel
This morning I have
 nodetool ring -h localhost
Address DC  RackStatus State   LoadOwns
   Token

   113427455640312821154458202477256070485
10.34.158.33us-east 1c  Up Normal  5.78 MB
33.33%  0
10.38.175.131   us-east 1c  Up Normal  7.23 MB
33.33%  56713727820156410577229101238628035242
10.116.83.10us-east 1c  Up Normal  5.02 MB
33.33%  113427455640312821154458202477256070485

Version is 1.0.8.


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Mar 27, 2012 at 4:05 AM, Maki Watanabe watanabe.m...@gmail.comwrote:

 What version are you using?
  Anyway, try nodetool repair and then compact.

 maki


 2012/3/26 Tamar Fraenkel ta...@tok-media.com

 Hi!
  I created an Amazon ring using the DataStax image and started filling the db.
  The cluster seems unbalanced.

 nodetool ring returns:
 Address DC  RackStatus State   Load
  OwnsToken

  113427455640312821154458202477256070485
 10.34.158.33us-east 1c  Up Normal  514.29 KB
 33.33%  0
 10.38.175.131   us-east 1c  Up Normal  1.5 MB
  33.33%  56713727820156410577229101238628035242
 10.116.83.10us-east 1c  Up Normal  1.5 MB
  33.33%  113427455640312821154458202477256070485

 [default@tok] describe;
 Keyspace: tok:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:2]

 [default@tok] describe cluster;
 Cluster Information:
Snitch: org.apache.cassandra.locator.Ec2Snitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
 4687d620-7664-11e1--1bcb936807ff: [10.38.175.131,
 10.34.158.33, 10.116.83.10]


 Any idea what is the cause?
 I am running similar code on local ring and it is balanced.

 How can I fix this?

 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956






Re: unbalanced ring

2012-03-27 Thread Tamar Fraenkel
Thanks, I will wait and see as data accumulates.
Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Mar 27, 2012 at 9:00 AM, R. Verlangen ro...@us2.nl wrote:

 Cassandra is built to store tons and tons of data. In my opinion roughly ~
 6MB per node is not enough data to allow it to become a fully balanced
 cluster.


 2012/3/27 Tamar Fraenkel ta...@tok-media.com

 This morning I have
  nodetool ring -h localhost
 Address DC  RackStatus State   Load
  OwnsToken

  113427455640312821154458202477256070485
 10.34.158.33us-east 1c  Up Normal  5.78 MB
 33.33%  0
 10.38.175.131   us-east 1c  Up Normal  7.23 MB
 33.33%  56713727820156410577229101238628035242
  10.116.83.10us-east 1c  Up Normal  5.02 MB
 33.33%  113427455640312821154458202477256070485

 Version is 1.0.8.


  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





 On Tue, Mar 27, 2012 at 4:05 AM, Maki Watanabe 
 watanabe.m...@gmail.comwrote:

 What version are you using?
  Anyway, try nodetool repair and then compact.

 maki


 2012/3/26 Tamar Fraenkel ta...@tok-media.com

 Hi!
  I created an Amazon ring using the DataStax image and started filling the db.
  The cluster seems unbalanced.

 nodetool ring returns:
 Address DC  RackStatus State   Load
  OwnsToken

113427455640312821154458202477256070485
 10.34.158.33us-east 1c  Up Normal  514.29 KB
 33.33%  0
 10.38.175.131   us-east 1c  Up Normal  1.5 MB
  33.33%  56713727820156410577229101238628035242
 10.116.83.10us-east 1c  Up Normal  1.5 MB
  33.33%  113427455640312821154458202477256070485

 [default@tok] describe;
 Keyspace: tok:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:2]

 [default@tok] describe cluster;
 Cluster Information:
Snitch: org.apache.cassandra.locator.Ec2Snitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
 4687d620-7664-11e1--1bcb936807ff: [10.38.175.131,
 10.34.158.33, 10.116.83.10]


 Any idea what is the cause?
 I am running similar code on local ring and it is balanced.

 How can I fix this?

 Thanks,

 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956








 --
 With kind regards,

 Robin Verlangen
 www.robinverlangen.nl



Re: Mutator or Template?

2012-03-21 Thread Tamar Fraenkel
Thanks, I posted there.
*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Tue, Mar 20, 2012 at 7:39 PM, aaron morton aa...@thelastpickle.comwrote:

 For hector based questions try the Hector User group
 https://groups.google.com/forum/?fromgroups#!forum/hector-users

 Cheers

   -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 20/03/2012, at 7:26 PM, Tamar Fraenkel wrote:


 Hi!
 I am using Cassandra with Hector. Usually I use ColumnFamilyTemplate and
 ColumnFamilyUpdater to update column families, but sometimes I use Mutator.

 1. Is there a preference of using one vs. the other?
 2. Are there any actions that can be done with only one of them?

 Thanks,


 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 tokLogo.png


 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956







Mutator or Template?

2012-03-20 Thread Tamar Fraenkel
Hi!
I am using Cassandra with Hector. Usually I use ColumnFamilyTemplate and
ColumnFamilyUpdater to update column families, but sometimes I use Mutator.

1. Is there a preference of using one vs. the other?
2. Are there any actions that can be done with only one of them?
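
A minimal sketch of both paths writing the same column (the keyspace
handle, CF name and key are placeholders):

// ColumnFamilyTemplate / ColumnFamilyUpdater style:
ColumnFamilyTemplate<String, String> template =
new ThriftColumnFamilyTemplate<String, String>(keyspace, "my_cf",
StringSerializer.get(), StringSerializer.get());
ColumnFamilyUpdater<String, String> updater = template.createUpdater("row1");
updater.setString("col", "value");
template.update(updater);

// Mutator style:
Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
mutator.addInsertion("row1", "my_cf",
HFactory.createStringColumn("col", "value"));
mutator.execute();

As far as I know, both end up on the same mutation path: the
template/updater pair is a typed convenience wrapper, while Mutator lets
you batch arbitrary insertions and deletions across rows and CFs into a
single execute().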

Thanks,


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: Hector counter question

2012-03-20 Thread Tamar Fraenkel
Thanks.
But the increment is thread safe, right? If I have two threads trying to
increment a counter, they won't step on each other's toes?


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Mon, Mar 19, 2012 at 9:05 PM, Jeremiah Jordan 
jeremiah.jor...@morningstar.com wrote:

  No,
 Cassandra doesn't support an atomic read-and-increment on counters.  IIRC
 it is on the list of things for 1.2.

 -Jeremiah
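
 To make the distinction concrete, a minimal sketch assuming the Hector 1.x
 counter API (CF and column names are illustrative). The increment itself is
 applied server-side, so concurrent increments from two threads both land;
 what is missing is a combined read-and-increment in one atomic step, since
 reading the value back is a separate query:

     import me.prettyprint.cassandra.serializers.StringSerializer;
     import me.prettyprint.hector.api.Keyspace;
     import me.prettyprint.hector.api.beans.HCounterColumn;
     import me.prettyprint.hector.api.factory.HFactory;
     import me.prettyprint.hector.api.mutation.Mutator;
     import me.prettyprint.hector.api.query.CounterQuery;

     public class CounterSketch {
         private static final StringSerializer SS = StringSerializer.get();

         // Safe under concurrency: deltas are merged server-side.
         static void increment(Keyspace keyspace) {
             Mutator<String> mutator = HFactory.createMutator(keyspace, SS);
             mutator.incrementCounter("rowKey", "Counters", "hits", 1L);
         }

         // Separate read: another client may change the value between this
         // query and any increment that follows it.
         static long read(Keyspace keyspace) {
             CounterQuery<String, String> query =
                     HFactory.createCounterColumnQuery(keyspace, SS, SS);
             query.setColumnFamily("Counters").setKey("rowKey").setName("hits");
             HCounterColumn<String> column = query.execute().get();
             return column == null ? 0L : column.getValue();
         }
     }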

  --
 *From:* Tamar Fraenkel [ta...@tok-media.com]
 *Sent:* Monday, March 19, 2012 1:26 PM
 *To:* cassandra-u...@incubator.apache.org
 *Subject:* Hector counter question

   Hi!

   Is there a way to read and increment a counter column atomically,
  something like incrementAndGet (Hector)?

  Thanks,

  *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956





Hector counter question

2012-03-19 Thread Tamar Fraenkel
Hi!

Is there a way to read and increment a counter column atomically, something
like incrementAndGet (Hector)?

Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

consistency level question

2012-03-18 Thread Tamar Fraenkel
Hi!
I have a 3-node Cassandra cluster.
I use the Hector API.

I give Hector one of the nodes' IP addresses and
call setAutoDiscoverHosts(true) and setRunAutoDiscoveryAtStartup(true).

The describe on one node returns:

Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
Options: [replication_factor:1]

The odd thing is that when I take one of the nodes down, expecting all to
continue running smoothly, I get exceptions of the format seen below, and
no read or write succeeds. When I bring the node back up, the exceptions
stop and reads and writes resume.

Any idea or explanation why this is the case?
Thanks!


me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level.
at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:66)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl$7.execute(KeyspaceServiceImpl.java:285)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl$7.execute(KeyspaceServiceImpl.java:268)
at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:246)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:131)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl.getSlice(KeyspaceServiceImpl.java:289)
at me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:53)
at me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:49)
at me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:85)
at me.prettyprint.cassandra.model.thrift.ThriftSliceQuery.execute(ThriftSliceQuery.java:48)
at me.prettyprint.cassandra.service.ColumnSliceIterator.hasNext(ColumnSliceIterator.java:60)
at
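
The exception above means the coordinator found no live replica for the
requested key: with replication_factor:1 each key exists on exactly one
node, so every key owned by the downed node is unavailable at any
consistency level, even ONE. A minimal sketch, assuming the Hector 1.x API
(cluster name and host are placeholders), for checking the keyspace's
replication settings from code:

    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
    import me.prettyprint.hector.api.factory.HFactory;

    public class CheckReplication {
        public static void main(String[] args) {
            Cluster cluster = HFactory.getOrCreateCluster(
                    "TestCluster", "10.34.158.33:9160");
            KeyspaceDefinition def = cluster.describeKeyspace("tok");
            // Expected output here: {replication_factor=1}
            System.out.println(def.getStrategyOptions());
        }
    }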


*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956

Re: consistency level question

2012-03-18 Thread Tamar Fraenkel
Hi!
Thanks for the prompt answer.
That is true; I intend to have it at two.
Are you saying that if I change that, then even when a node is down, my
application will be able to read/write from the other node where the data
is replicated?

Forgot to mention that I have:

// Read and write at consistency level ONE by default.
ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
ccl.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);
// FailoverPolicy(numRetries, sleepBetweenHostsMilli)
keyspace = HFactory.createKeyspace(KEYSPACE, cluster, ccl,
        new FailoverPolicy(3, 10));
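
Given that configuration, ONE is already the most permissive consistency
level, so the fix is the replication factor itself. A sketch, assuming the
Hector 1.x schema API (cluster name and host are placeholders), of raising
it to 2 so that every key keeps a live replica when one node is down; after
the change, nodetool repair on each node copies existing data to its new
replicas:

    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
    import me.prettyprint.hector.api.factory.HFactory;

    public class RaiseReplicationFactor {
        public static void main(String[] args) {
            Cluster cluster = HFactory.getOrCreateCluster(
                    "TestCluster", "10.34.158.33:9160");
            // Rebuild the keyspace definition with RF 2, keeping the
            // existing column family definitions.
            KeyspaceDefinition current = cluster.describeKeyspace("tok");
            KeyspaceDefinition updated = HFactory.createKeyspaceDefinition(
                    "tok",
                    "org.apache.cassandra.locator.SimpleStrategy",
                    2,
                    current.getCfDefs());
            cluster.updateKeyspace(updated);
        }
    }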

Thanks,

*Tamar Fraenkel *
Senior Software Engineer, TOK Media

[image: Inline image 1]

ta...@tok-media.com
Tel:   +972 2 6409736
Mob:  +972 54 8356490
Fax:   +972 2 5612956





On Sun, Mar 18, 2012 at 9:31 AM, Caleb Rackliffe ca...@steelhouse.com wrote:

 If your replication factor is set to one, your cluster is obviously in a
 bad state following any node failure.  At best, I think it would make sense
 that about a third of your operations fail, but I'm not sure why all of
 them would.  I don't know if Hector just refuses to work with a compromised
 cluster, etc.

 I guess I'm wondering why your replication factor is set to 1…

 Caleb Rackliffe | Software Developer
 M 949.981.0159 | ca...@steelhouse.com

 From: Tamar Fraenkel ta...@tok-media.com
 Reply-To: user@cassandra.apache.org user@cassandra.apache.org
 Date: Sun, 18 Mar 2012 03:15:53 -0400
 To: cassandra-u...@incubator.apache.org 
 cassandra-u...@incubator.apache.org
 Subject: consistency level question

 Hi!
 I have a 3-node Cassandra cluster.
 I use the Hector API.

 I give Hector one of the nodes' IP addresses and
 call setAutoDiscoverHosts(true) and setRunAutoDiscoveryAtStartup(true).

 The describe on one node returns:

 Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:1]

 The odd thing is that when I take one of the nodes down, expecting all to
 continue running smoothly, I get exceptions of the format seen below, and
 no read or write succeeds. When I bring the node back up, the exceptions
 stop and reads and writes resume.

 Any idea or explanation why this is the case?
 Thanks!


 me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level.
 at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:66)
 at me.prettyprint.cassandra.service.KeyspaceServiceImpl$7.execute(KeyspaceServiceImpl.java:285)
 at me.prettyprint.cassandra.service.KeyspaceServiceImpl$7.execute(KeyspaceServiceImpl.java:268)
 at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103)
 at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:246)
 at me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:131)
 at me.prettyprint.cassandra.service.KeyspaceServiceImpl.getSlice(KeyspaceServiceImpl.java:289)
 at me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:53)
 at me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:49)
 at me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
 at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:85)
 at me.prettyprint.cassandra.model.thrift.ThriftSliceQuery.execute(ThriftSliceQuery.java:48)
 at me.prettyprint.cassandra.service.ColumnSliceIterator.hasNext(ColumnSliceIterator.java:60)
 at


 *Tamar Fraenkel *
 Senior Software Engineer, TOK Media

 [image: Inline image 1]

 ta...@tok-media.com
 Tel:   +972 2 6409736
 Mob:  +972 54 8356490
 Fax:   +972 2 5612956




