Re: Throttle Heavy Read / Write Loads

2015-06-04 Thread Anishek Agarwal
Maybe just increase the read and write timeouts in Cassandra, currently at
5 sec I think. I think the DataStax Java driver provides the ability to
specify how many max requests per connection may be sent; you can try
lowering that to limit excessive requests, along with limiting the number
of connections a client can open.
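
As a rough illustration, a minimal sketch with the DataStax Java driver 2.x
pooling and socket options (the contact point and the specific numbers are
placeholders, not recommendations):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;
import com.datastax.driver.core.SocketOptions;

public class ThrottledClient {
    public static void main(String[] args) {
        // Cap connections per host, and the in-flight-requests-per-connection
        // threshold at which the pool grows.
        PoolingOptions pooling = new PoolingOptions()
            .setMaxConnectionsPerHost(HostDistance.LOCAL, 4)
            .setMaxSimultaneousRequestsPerConnectionThreshold(HostDistance.LOCAL, 64);

        // Client-side read timeout; keep it above the server-side request timeouts.
        SocketOptions socket = new SocketOptions().setReadTimeoutMillis(12000);

        Cluster cluster = Cluster.builder()
            .addContactPoint("127.0.0.1")
            .withPoolingOptions(pooling)
            .withSocketOptions(socket)
            .build();
        cluster.connect();
    }
}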

Just out of curiosity, how long are the GC pauses for you, both ParNew and
CMS, and at what intervals are you seeing the GC happen? I just recently
spent time tuning it and it would be good to know if it's working well.
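
For anyone who wants to measure this, a sketch of the usual HotSpot
GC-logging flags (cassandra-env.sh ships similar lines, commented out; the
log path is a placeholder):

JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"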

thanks
anishek

On Fri, Jun 5, 2015 at 12:03 AM, Anuj Wadehra anujw_2...@yahoo.co.in
wrote:

 We are using Cassandra 2.0.14 with Hector as the client (we will be
 gradually moving to the CQL driver).

 Often we see that heavy read and write loads lead to Cassandra timeouts
 and unpredictable results due to GC pauses and request timeouts. We need to
 know the best way to throttle read and write load on Cassandra so that
 heavy operations complete gracefully even if they run slower. This will
 also shield us against misbehaving clients.

 I was thinking of limiting RPC connections via the rpc_max_threads property
 and implementing a connection pool on the client side.

 I would appreciate it if you could share your suggestions on the
 above-mentioned approach, or any alternatives to it.

 Thanks
 Anuj Wadehra




Re: Reading too many tombstones

2015-06-04 Thread Alain RODRIGUEZ
Actually what happens is that STCS, as well as LCS, mixes old and fresh data
during the compaction process.

So all the fragments of the same row that you deleted (or whose TTL was
reached) are spread among multiple SSTables. The point is that they all need
to be gathered in the same compaction to be really and fully evicted. In a
time-series model (using wide rows), there might be quite a few fragments
for each row, depending on the sharding of your primary key and on your
insert / update workload. This is due to some issues around distributed
deletes. Tombstones are actually like special inserts: if you remove them
without removing all the fragments of the row, old, incorrect data may come
back. So you need to run repairs within the grace period (usually 10 days)
to avoid ghosts (make sure all nodes have received the tombstone), and
compact all the fragments at once (from multiple SSTables) at node level.
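
For reference, a sketch of running such a repair on one table within
gc_grace (the keyspace and table names are placeholders; -pr limits the
repair to the node's primary ranges):

nodetool repair -pr mykeyspace mycf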

Using STCS, the only way to make sure of this is to run a major
compaction (nodetool compact mykeyspace mycf). But major compaction also
has some negative impacts; if you choose this option, you should read about
it.

Another approach would be to periodically truncate your data. Depending on
your needs, of course...

Anyway, handling tombstones (and so TTLs) properly has always been a tricky
issue. That's precisely the kind of use case DTCS was designed for. This
strategy groups data by date, which makes sense for time series and
constant TTLs.

See https://labs.spotify.com/2014/12/18/date-tiered-compaction/ and
http://www.datastax.com/dev/blog/datetieredcompactionstrategy
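
To give the idea, a hedged sketch of switching a table to DTCS (the table
name and the tuning values are placeholders; check the posts above before
copying them):

ALTER TABLE mykeyspace.mycf
  WITH compaction = {
    'class': 'DateTieredCompactionStrategy',
    'base_time_seconds': '3600',
    'max_sstable_age_days': '30'
  };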

Hope this will help.

NB: I haven't used this (yet ;-)). DTCS is available from Cassandra 2.0.11;
you should look at the changelog for improvements / issues around
DTCS and, imho, go directly to the latest minor (2.0.15?) if you want to use
this compaction strategy.

C*heers,

Alain





2015-06-04 22:31 GMT+02:00 Sebastian Estevez sebastian.este...@datastax.com:

 Check out the compaction subproperties for tombstones.


 http://docs.datastax.com/en/cql/3.1/cql/cql_reference/compactSubprop.html?scroll=compactSubprop__compactionSubpropertiesDTCS
 On Jun 4, 2015 1:29 PM, Aiman Parvaiz ai...@flipagram.com wrote:

 Thanks Carlos for pointing me in that direction, I have some interesting
 findings to share. So in December last year there was a redesign of
 home_feed and it was migrated to a new CF. Initially all the data in
 home_feed had a TTL of 1 year, but the migrated data was inserted with a
 TTL of 30 days.
 Now, digging a bit deeper, I found that home_feed still has data from
 Jan 2015 with ttl 1275094 (14 days).

 This data is for the same id from home_feed:
  date | ttl(description)
 --+--
  2015-04-03 21:22:58+ |   759791
  2015-04-03 04:50:11+ |   412706
  2015-03-30 22:18:58+ |   759791
  2015-03-29 15:20:36+ |  1978689
  2015-03-28 14:41:28+ |  1275116
  2015-03-28 14:31:25+ |  1275116
  2015-03-18 19:23:44+ |  2512936
  2015-03-13 17:51:01+ |  1978689
  2015-02-12 15:41:01+ |  1978689
  2015-01-18 02:36:27+ |  1275094


 I am not sure what happened in that migration, but I think that when
 trying to load data we are reading this old data (the feed queries
 1000/page to be displayed to the user), and in order to read it we
 have to cross (read) lots of tombstones (newer data has its TTL working
 correctly), hence the error.
 I am not sure how much DateTiered would help us in this situation either.
 If anyone has any suggestions on how to handle this at either the systems
 or the developer level, please pitch in.

 Thanks

 On Thu, Jun 4, 2015 at 11:47 AM, Carlos Rolo r...@pythian.com wrote:

 The TTL'd data will only be removed after gc_grace_seconds. So your
 data with a 30-day TTL will still be in Cassandra for 10 more days (40 in
 total). Has your data been there for longer than that? Otherwise it is
 expected behaviour, and you should probably do something in your data model
 to avoid scanning tombstoned data.

 Regards,

 Carlos Juzarte Rolo
 Cassandra Consultant

 Pythian - Love your data

 rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
 Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
 www.pythian.com

 On Thu, Jun 4, 2015 at 8:31 PM, Aiman Parvaiz ai...@flipagram.com
 wrote:

 yeah we don't update old data. One thing I am curious about is why we are
 running into so many tombstones with compaction happening normally. Is
 compaction not removing tombstones?


 On Thu, Jun 4, 2015 at 11:25 AM, Jonathan Haddad j...@jonhaddad.com
 wrote:

 DateTiered is fantastic if you've got time series, TTLed data.  That
 means no updates to old data.

 On Thu, Jun 4, 2015 at 10:58 AM Aiman Parvaiz ai...@flipagram.com
 wrote:

 Hi everyone,
 We are running a 10 node Cassandra 2.0.9 cluster without vnodes ...

Re: Reading too many tombstones

2015-06-04 Thread Jonathan Haddad
DateTiered is fantastic if you've got time series, TTLed data.  That means
no updates to old data.

On Thu, Jun 4, 2015 at 10:58 AM Aiman Parvaiz ai...@flipagram.com wrote:

 Hi everyone,
 We are running a 10 node Cassandra 2.0.9 cluster without vnodes. We are
 running into an issue where we are reading too many tombstones and hence
 getting tons of WARN messages and some ERROR "query aborted" messages.

 cass-prod4 2015-06-04 14:38:34,307 WARN ReadStage:1998
 SliceQueryFilter.collectReducedColumns - Read 46 live and 1560 tombstoned
 cells in ABC.home_feed (see tombstone_warn_threshold). 100 columns was
 requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
 localDeletion=2147483647}

 cass-prod2 2015-05-31 12:55:55,331 ERROR ReadStage:1953
 SliceQueryFilter.collectReducedColumns - Scanned over 10 tombstones in
 ABC.home_feed; query aborted (see tombstone_fail_threshold)

 As you can see, all of this is happening for CF home_feed. This CF
 basically maintains a feed with a TTL set to 2592000 (30 days),
 gc_grace_seconds of 864000, and SizeTieredCompactionStrategy.

 Repairs have been running regularly and automatic compactions are
 occurring normally too.

 I could definitely use some help in tackling this issue.

 Up till now I have the following ideas:

 1) I can set gc_grace_seconds to 0, then do a manual compaction for
 this CF and bump gc_grace back up again.

 2) Set gc_grace to 0, run a manual compaction on this CF, and leave
 gc_grace at zero. In this case I have to be careful when running repairs.

 3) I am also considering moving to DateTiered compaction.

 What would be the best approach here for my feed case? Any help is
 appreciated.

 Thanks




Re: Reading too many tombstones

2015-06-04 Thread Carlos Rolo
The TTL'd data will only be removed after gc_grace_seconds. So your data
with a 30-day TTL will still be in Cassandra for 10 more days (40 in total).
Has your data been there for longer than that? Otherwise it is expected
behaviour, and you should probably do something in your data model to avoid
scanning tombstoned data.
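
In other words, a cell written with TTL 2592000 s (30 days) only becomes
purgeable 864000 s (10 days) after it expires, i.e. 3456000 s (40 days) in
total. A hedged sketch of adjusting the setting (the table name is a
placeholder):

ALTER TABLE mykeyspace.home_feed WITH gc_grace_seconds = 864000;  -- 10 days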

Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Thu, Jun 4, 2015 at 8:31 PM, Aiman Parvaiz ai...@flipagram.com wrote:

 yeah we don't update old data. One thing I am curious about is why we are
 running into so many tombstones with compaction happening normally. Is
 compaction not removing tombstones?


 On Thu, Jun 4, 2015 at 11:25 AM, Jonathan Haddad j...@jonhaddad.com
 wrote:

 DateTiered is fantastic if you've got time series, TTLed data.  That
 means no updates to old data.

 On Thu, Jun 4, 2015 at 10:58 AM Aiman Parvaiz ai...@flipagram.com
 wrote:

 Hi everyone,
 We are running a 10 node Cassandra 2.0.9 cluster without vnodes. We are
 running into an issue where we are reading too many tombstones and hence
 getting tons of WARN messages and some ERROR "query aborted" messages.

 cass-prod4 2015-06-04 14:38:34,307 WARN ReadStage:1998
 SliceQueryFilter.collectReducedColumns - Read 46 live and 1560 tombstoned
 cells in ABC.home_feed (see tombstone_warn_threshold). 100 columns was
 requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
 localDeletion=2147483647}

 cass-prod2 2015-05-31 12:55:55,331 ERROR ReadStage:1953
 SliceQueryFilter.collectReducedColumns - Scanned over 10 tombstones in
 ABC.home_feed; query aborted (see tombstone_fail_threshold)

 As you can see, all of this is happening for CF home_feed. This CF
 basically maintains a feed with a TTL set to 2592000 (30 days),
 gc_grace_seconds of 864000, and SizeTieredCompactionStrategy.

 Repairs have been running regularly and automatic compactions are
 occurring normally too.

 I could definitely use some help in tackling this issue.

 Up till now I have the following ideas:

 1) I can set gc_grace_seconds to 0, then do a manual compaction for
 this CF and bump gc_grace back up again.

 2) Set gc_grace to 0, run a manual compaction on this CF, and leave
 gc_grace at zero. In this case I have to be careful when running repairs.

 3) I am also considering moving to DateTiered compaction.

 What would be the best approach here for my feed case? Any help is
 appreciated.

 Thanks












Reading too many tombstones

2015-06-04 Thread Aiman Parvaiz
Hi everyone,
We are running a 10 node Cassandra 2.0.9 cluster without vnodes. We are
running into an issue where we are reading too many tombstones and hence
getting tons of WARN messages and some ERROR "query aborted" messages.

cass-prod4 2015-06-04 14:38:34,307 WARN ReadStage:1998
SliceQueryFilter.collectReducedColumns - Read 46 live and 1560 tombstoned
cells in ABC.home_feed (see tombstone_warn_threshold). 100 columns was
requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
localDeletion=2147483647}

cass-prod2 2015-05-31 12:55:55,331 ERROR ReadStage:1953
SliceQueryFilter.collectReducedColumns - Scanned over 10 tombstones in
ABC.home_feed; query aborted (see tombstone_fail_threshold)

As you can see, all of this is happening for CF home_feed. This CF
basically maintains a feed with a TTL set to 2592000 (30 days),
gc_grace_seconds of 864000, and SizeTieredCompactionStrategy.

Repairs have been running regularly and automatic compactions are occurring
normally too.

I could definitely use some help in tackling this issue.

Up till now I have the following ideas:

1) I can set gc_grace_seconds to 0, then do a manual compaction for
this CF and bump gc_grace back up again (sketched just below this list).

2) Set gc_grace to 0, run a manual compaction on this CF, and leave
gc_grace at zero. In this case I have to be careful when running repairs.

3) I am also considering moving to DateTiered compaction.
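
For concreteness, a sketch of option 1 (keyspace and table taken from the
log lines above; note that while gc_grace is 0 a delete missed by a node
can resurrect data, so the window should stay short):

ALTER TABLE ABC.home_feed WITH gc_grace_seconds = 0;
-- run `nodetool compact ABC home_feed` on every node, then restore:
ALTER TABLE ABC.home_feed WITH gc_grace_seconds = 864000;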

What would be the best approach here for my feed case? Any help is
appreciated.

Thanks


Re: Reading too many tombstones

2015-06-04 Thread Aiman Parvaiz
yeah we don't update old data. One thing I am curious about is why we are
running into so many tombstones with compaction happening normally. Is
compaction not removing tombstones?

On Thu, Jun 4, 2015 at 11:25 AM, Jonathan Haddad j...@jonhaddad.com wrote:

 DateTiered is fantastic if you've got time series, TTLed data.  That means
 no updates to old data.

 On Thu, Jun 4, 2015 at 10:58 AM Aiman Parvaiz ai...@flipagram.com wrote:

 Hi everyone,
 We are running a 10 node Cassandra 2.0.9 cluster without vnodes. We are
 running into an issue where we are reading too many tombstones and hence
 getting tons of WARN messages and some ERROR "query aborted" messages.

 cass-prod4 2015-06-04 14:38:34,307 WARN ReadStage:1998
 SliceQueryFilter.collectReducedColumns - Read 46 live and 1560 tombstoned
 cells in ABC.home_feed (see tombstone_warn_threshold). 100 columns was
 requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
 localDeletion=2147483647}

 cass-prod2 2015-05-31 12:55:55,331 ERROR ReadStage:1953
 SliceQueryFilter.collectReducedColumns - Scanned over 10 tombstones in
 ABC.home_feed; query aborted (see tombstone_fail_threshold)

 As you can see, all of this is happening for CF home_feed. This CF
 basically maintains a feed with a TTL set to 2592000 (30 days),
 gc_grace_seconds of 864000, and SizeTieredCompactionStrategy.

 Repairs have been running regularly and automatic compactions are
 occurring normally too.

 I could definitely use some help in tackling this issue.

 Up till now I have the following ideas:

 1) I can set gc_grace_seconds to 0, then do a manual compaction for
 this CF and bump gc_grace back up again.

 2) Set gc_grace to 0, run a manual compaction on this CF, and leave
 gc_grace at zero. In this case I have to be careful when running repairs.

 3) I am also considering moving to DateTiered compaction.

 What would be the best approach here for my feed case? Any help is
 appreciated.

 Thanks




Re: sstableloader usage doubts

2015-06-04 Thread Robert Coli
On Thu, Jun 4, 2015 at 5:39 AM, ZeroUno zerozerouno...@gmail.com wrote:

 while defining backup and restore procedures for a Cassandra cluster I'm
 trying to use sstableloader for restoring a snapshot from a backup, but I'm
 not sure I fully understand the documentation on how it should be used.


http://www.pythian.com/blog/bulk-loading-options-for-cassandra/

=Rob


Throttle Heavy Read / Write Loads

2015-06-04 Thread Anuj Wadehra
We are using Cassandra 2.0.14 with Hector as the client (we will be
gradually moving to the CQL driver).


Often we see that heavy read and write loads lead to Cassandra timeouts and
unpredictable results due to GC pauses and request timeouts. We need to know
the best way to throttle read and write load on Cassandra so that heavy
operations complete gracefully even if they run slower. This will also
shield us against misbehaving clients.


I was thinking of limiting RPC connections via the rpc_max_threads property
and implementing a connection pool on the client side.
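
For illustration, the server-side knobs involved live in cassandra.yaml;
the values below are placeholders, not recommendations:

# cassandra.yaml
rpc_max_threads: 1024             # cap concurrent Thrift (Hector) request threads
read_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 10000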


I would appreciate it if you could share your suggestions on the
above-mentioned approach, or any alternatives to it.


Thanks

Anuj Wadehra




Re: Reading too many tombstones

2015-06-04 Thread Sebastian Estevez
Check out the compaction subproperties for tombstones.

http://docs.datastax.com/en/cql/3.1/cql/cql_reference/compactSubprop.html?scroll=compactSubprop__compactionSubpropertiesDTCS
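
For example, a hedged sketch of tuning the tombstone-related subproperties
on a table (the names and values are illustrative only; see the page above
for their semantics):

ALTER TABLE mykeyspace.mycf
  WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'tombstone_threshold': '0.2',
    'tombstone_compaction_interval': '86400',
    'unchecked_tombstone_compaction': 'true'
  };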
On Jun 4, 2015 1:29 PM, Aiman Parvaiz ai...@flipagram.com wrote:

 Thanks Carlos for pointing me in that direction, I have some interesting
 findings to share. So in December last year there was a redesign of
 home_feed and it was migrated to a new CF. Initially all the data in
 home_feed had a TTL of 1 year, but the migrated data was inserted with a
 TTL of 30 days.
 Now, digging a bit deeper, I found that home_feed still has data from
 Jan 2015 with ttl 1275094 (14 days).

 This data is for the same id from home_feed:
  date | ttl(description)
 --+--
  2015-04-03 21:22:58+ |   759791
  2015-04-03 04:50:11+ |   412706
  2015-03-30 22:18:58+ |   759791
  2015-03-29 15:20:36+ |  1978689
  2015-03-28 14:41:28+ |  1275116
  2015-03-28 14:31:25+ |  1275116
  2015-03-18 19:23:44+ |  2512936
  2015-03-13 17:51:01+ |  1978689
  2015-02-12 15:41:01+ |  1978689
  2015-01-18 02:36:27+ |  1275094


 I am not sure what happened in that migration, but I think that when
 trying to load data we are reading this old data (the feed queries
 1000/page to be displayed to the user), and in order to read it we
 have to cross (read) lots of tombstones (newer data has its TTL working
 correctly), hence the error.
 I am not sure how much DateTiered would help us in this situation either.
 If anyone has any suggestions on how to handle this at either the systems
 or the developer level, please pitch in.

 Thanks

 On Thu, Jun 4, 2015 at 11:47 AM, Carlos Rolo r...@pythian.com wrote:

 The TTL'd data will only be removed after gc_grace_seconds. So your
 data with a 30-day TTL will still be in Cassandra for 10 more days (40 in
 total). Has your data been there for longer than that? Otherwise it is
 expected behaviour, and you should probably do something in your data model
 to avoid scanning tombstoned data.

 Regards,

 Carlos Juzarte Rolo
 Cassandra Consultant

 Pythian - Love your data

 rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
 Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
 www.pythian.com

 On Thu, Jun 4, 2015 at 8:31 PM, Aiman Parvaiz ai...@flipagram.com
 wrote:

 yeah we don't update old data. One thing I am curious about is why we are
 running into so many tombstones with compaction happening normally. Is
 compaction not removing tombstones?


 On Thu, Jun 4, 2015 at 11:25 AM, Jonathan Haddad j...@jonhaddad.com
 wrote:

 DateTiered is fantastic if you've got time series, TTLed data.  That
 means no updates to old data.

 On Thu, Jun 4, 2015 at 10:58 AM Aiman Parvaiz ai...@flipagram.com
 wrote:

 Hi everyone,
 We are running a 10 node Cassandra 2.0.9 cluster without vnodes. We are
 running into an issue where we are reading too many tombstones and hence
 getting tons of WARN messages and some ERROR "query aborted" messages.

 cass-prod4 2015-06-04 14:38:34,307 WARN ReadStage:1998
 SliceQueryFilter.collectReducedColumns - Read 46 live and 1560 tombstoned
 cells in ABC.home_feed (see tombstone_warn_threshold). 100 columns was
 requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
 localDeletion=2147483647}

 cass-prod2 2015-05-31 12:55:55,331 ERROR ReadStage:1953
 SliceQueryFilter.collectReducedColumns - Scanned over 10 tombstones in
 ABC.home_feed; query aborted (see tombstone_fail_threshold)

 As you can see, all of this is happening for CF home_feed. This CF
 basically maintains a feed with a TTL set to 2592000 (30 days),
 gc_grace_seconds of 864000, and SizeTieredCompactionStrategy.

 Repairs have been running regularly and automatic compactions are
 occurring normally too.

 I could definitely use some help in tackling this issue.

 Up till now I have the following ideas:

 1) I can set gc_grace_seconds to 0, then do a manual compaction
 for this CF and bump gc_grace back up again.

 2) Set gc_grace to 0, run a manual compaction on this CF, and leave
 gc_grace at zero. In this case I have to be careful when running repairs.

 3) I am also considering moving to DateTiered compaction.

 What would be the best approach here for my feed case? Any help is
 appreciated.

 Thanks













 --
 Lead Systems Architect
 10351 Santa Monica Blvd, Suite 3310
 Los Angeles CA 90025



Re: sstableloader usage doubts

2015-06-04 Thread Sebastian Estevez
You don't need sstableloader if your topology hasn't changed and you have
all your sstables backed up for each node. sstableloader actually streams
data to all the nodes in the ring (this is what the OpsCenter backup restore
does), so you can restore to a larger or smaller cluster, or to a cluster
with different token ranges / vnodes vs. non-vnodes, etc. It also requires
all your nodes to be up.

If you have all the sstables for each node and no token range changes, you
can just move the sstables to their spot in the data directory (rsync or
whatever) and bring up your nodes. If you're already up, you can use
nodetool refresh to load the sstables.

http://docs.datastax.com/en/cassandra/2.0/cassandra/tools/toolsRefresh.html
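
A minimal sketch of that path, assuming the default data directory layout
(all paths and names below are placeholders):

# copy the node's backed-up sstables back into the table's data directory
rsync -av /backups/mykeyspace/mytable1/ /var/lib/cassandra/data/mykeyspace/mytable1/

# pick the new files up on a running node, without a restart
nodetool refresh mykeyspace mytable1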


All the best,


Sebastián Estévez

Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com

http://www.datastax.com/

On Thu, Jun 4, 2015 at 5:39 AM, ZeroUno zerozerouno...@gmail.com wrote:

 Hi,
 while defining backup and restore procedures for a Cassandra cluster I'm
 trying to use sstableloader for restoring a snapshot from a backup, but I'm
 not sure I fully understand the documentation on how it should be used.

 Looking at the examples in the doc at
 http://docs.datastax.com/en/cassandra/2.0/cassandra/tools/toolsBulkloader_t.html
 it seems like the path_to_keyspace to be passed as an argument is exactly
 the Cassandra data directory. So you first move the data into its final
 target location and then stream it again to the cluster?

 Let's take a step back. My cluster is composed of two data centers. Each
 data center has two nodes (nodeA1 and nodeA2 for center A, nodeB1 and
 nodeB2 for center B).
 I'm using NetworkTopologyStrategy with RF=2.

 For doing periodic backups I'm creating a snapshot on two nodes
 simultaneously in a single data center (nodeA1 and nodeA2), and then moving
 the snapshot files in a safe place.
 To simulate a disaster recovery situation, I truncate all tables to erase
 data (but not the schema which would be re-created anyway by my
 application), I stop cassandra on all 4 nodes, I move the snapshot backup
 files in their original locations (e.g.
 /mydatapath/cassandra/data/mykeyspace/mytable1/) on nodeA1 and nodeA2, then
 I restart cassandra on all 4 nodes.

 At last, I run:

  sstableloader -d nodeA1,nodeA2,nodeB1,nodeB2
 /mydatapath/cassandra/data/mykeyspace/mytable1/
 sstableloader -d nodeA1,nodeA2,nodeB1,nodeB2
 /mydatapath/cassandra/data/mykeyspace/mytable2/
 sstableloader -d nodeA1,nodeA2,nodeB1,nodeB2
 /mydatapath/cassandra/data/mykeyspace/mytable3/
 [...and so on for all tables]


 ...on both nodeA1 and nodeA2, where I restored the snapshot.

 Is that correct?

 I observed some strange behaviour after doing this: when I truncated
 tables again, a select count(*) on one of the A nodes still returned a
 non-zero number, as if the data was still there.
 I started thinking that maybe the source sstable directory for
 sstableloader should not be the data directory itself, as this causes some
 kind of double-data problem...

 Can anyone please tell me if this is the correct way to proceed?
 Thank you very much!

 --
 01




RE: Different number of records from COPY command

2015-06-04 Thread Vanlerberghe, Luc
You’re probably hitting https://issues.apache.org/jira/browse/CASSANDRA-8940:
Inconsistent select count and select distinct.
It’s resolved (as I understand it, a non-thread-safe object was shared
between threads) and the patch will be included in 2.1.6 and 2.0.16.

It’s a showstopper for me too: while developing I sometimes need to rebuild 
stuff based on the complete dataset (should become *very* rare in production, 
but still).
However, as long as this bug is around, I can never be sure all records are 
included.

Unfortunately, I don’t see any schedule for releasing either version…

Luc


From: Josef Lindman Hörnlund [mailto:jo...@appdata.biz]
Sent: woensdag 3 juni 2015 12:16
To: user@cassandra.apache.org
Subject: Re: Different number of records from COPY command


I ran into that issue a while ago, and it was because I had hit the tombstone
limit on one of the nodes. Try running `nodetool compact adlog
adclicklog20150528` and see if that helps.

Josef Lindman Hörnlund

On 02 Jun 2015, at 17:48, Saurabh Chandolia s.chando...@gmail.com wrote:

Still getting an inconsistent number of records at consistency ALL and
QUORUM. Following is the output at consistency ALL and QUORUM.

cqlsh:adlog CONSISTENCY ALL;
Consistency level set to ALL.
cqlsh:adlog copy adclicklog20150528 (imprid) TO 'adclicklog20150528.csv';
Processed 58000 rows; Write: 3065.60 rows/s
58463 rows exported in 21.353 seconds.
cqlsh:adlog copy adclicklog20150528 (imprid) TO 'adclicklog20150528.csv';
Processed 63000 rows; Write: 3517.03 rows/s
63972 rows exported in 22.885 seconds.

cqlsh:adlog CONSISTENCY QUORUM ;
Consistency level set to QUORUM.
cqlsh:adlog copy adclicklog20150528 (imprid) TO 'adclicklog20150528.csv';
Processed 63000 rows; Write: 3443.37 rows/s
63440 rows exported in 21.987 seconds.
cqlsh:adlog copy adclicklog20150528 (imprid) TO 'adclicklog20150528.csv';
Processed 65000 rows; Write: 3405.90 rows/s
65524 rows exported in 24.053 seconds.


- Saurabh

On Tue, Jun 2, 2015 at 9:09 PM, Anuj Wadehra anujw_2...@yahoo.co.in wrote:
I have never exported data myself, but can you just try setting 'CONSISTENCY
ALL' in cqlsh before executing the command?

Thanks
Anuj Wadehra
Sent from Yahoo Mail on Android

From: Saurabh Chandolia s.chando...@gmail.com
Date: Tue, 2 Jun 2015 at 8:47 pm
Subject: Different number of records from COPY command
I am seeing a different number of records each time I export a particular
table. There were no writes/reads on this table while exporting the data.
I am not able to understand why this is happening.
Am I missing something here?

Cassandra version: 2.1.4
Java driver version: 2.1.5
Cluster Size: 4 Nodes in same DC
Keyspace Replication factor: 2

Following commands were issued:
cqlsh:adlog copy adclicklog20150528 (imprid) TO 'adclicklog20150528.csv';
Processed 68000 rows; Write: 3025.93 rows/s
68682 rows exported in 27.737 seconds.

cqlsh:adlog copy adclicklog20150528 (imprid) TO 'adclicklog20150528.csv';
Processed 65000 rows; Write: 2821.06 rows/s
65535 rows exported in 26.667 seconds.

cqlsh:adlog copy adclicklog20150528 (imprid) TO 'adclicklog20150528.csv';
Processed 66000 rows; Write: 3285.07 rows/s
66055 rows exported in 26.269 seconds.


cfstats for adlog.adclicklog20150528:
---
$ nodetool cfstats adlog.adclicklog20150528
Keyspace: adlog
Read Count: 217
Read Latency: 2.773073732718894 ms.
Write Count: 103191
Write Latency: 0.10233075558915021 ms.
Pending Flushes: 0
Table: adclicklog20150528
SSTable count: 11
Space used (live): 37981202
Space used (total): 37981202
Space used by snapshots (total): 13407843
Off heap memory used (total): 25580
SSTable Compression Ratio: 0.26684147550494164
Number of keys (estimate): 5627
Memtable cell count: 94620
Memtable data size: 13459445
Memtable off heap memory used: 0
Memtable switch count: 19
Local read count: 217
Local read latency: 2.774 ms
Local write count: 103191
Local write latency: 0.103 ms
Pending flushes: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used: 7192
Bloom filter off heap memory used: 7104
Index summary off heap memory used: 980
Compression metadata off heap memory used: 17496
Compacted partition minimum bytes: 1110
Compacted partition maximum bytes: 182785
Compacted partition mean bytes: 27808
Average live cells per slice (last five minutes): 44.663594470046085
Maximum live cells per slice (last five minutes): 86.0
Average tombstones per slice (last five minutes): 0.0
Maximum tombstones per slice (last five minutes): 0.0



- Saurabh






com/datastax/driver/core/policies/LoadBalancingPolicy

2015-06-04 Thread Marko Dinic
Hello everyone,

I'm new to Cassandra and I'm trying to use it as input for Hadoop.

For some reason I'm getting the following exception while trying to use
Cassandra as input to Hadoop:

Exception in thread "main" java.lang.NoClassDefFoundError:
com/datastax/driver/core/policies/LoadBalancingPolicy

Here is the code

import java.io.IOException;

import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlInputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class CDriver extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        ToolRunner.run(new CDriver(), args);
    }

    @Override
    public int run(String[] args) throws Exception {
        String output = args[0];

        Configuration conf = super.getConf();
        Job job = new Job(conf);

        job.setJarByClass(CDriver.class);
        job.setJobName("Cassandra as input");

        // Point the input format at the local cluster, keyspace and table.
        ConfigHelper.setInputInitialAddress(conf, "127.0.0.1");
        ConfigHelper.setInputColumnFamily(conf, "basketball", "nba");
        ConfigHelper.setInputPartitioner(conf, "Murmur3Partitioner");
        CqlConfigHelper.setInputCQLPageRowSize(conf, "3");
        job.setInputFormatClass(CqlInputFormat.class);

        FileOutputFormat.setOutputPath(job, new Path(output));

        job.setMapperClass(CMapper.class);
        job.setReducerClass(CReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.waitForCompletion(true);

        return 0;
    }
}

It fails on the following line:

CqlConfigHelper.setInputCQLPageRowSize(conf, "3");

And here are the Maven dependencies:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.nissatech</groupId>
    <artifactId>TestingCassandra</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>1.2.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.cassandra</groupId>
            <artifactId>cassandra-all</artifactId>
            <version>2.1.5</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <mainClass>com.nissatech.testingcassandra.CDriver</mainClass>
                        </manifest>
                    </archive>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
Can anyone explain what the problem is? I have Cassandra running on
localhost.

When I tried additionally adding this dependency:

    <dependency>
        <groupId>com.datastax.cassandra</groupId>
        <artifactId>cassandra-driver-mapping</artifactId>
        <version>2.1.5</version>
    </dependency>

I got the following exception while running from NetBeans:

Exception in thread "main" java.lang.UnsupportedOperationException: you
must set the keyspace and columnfamily with setInputColumnFamily()

When running in pseudo-distributed mode I get the same exception as
before.

I'm really confused; everything seems OK, but it doesn't work. What could
the problem be?

-- 
Marko


Re: Reading too many tombstones

2015-06-04 Thread Aiman Parvaiz
Thanks Carlos for pointing me in that direction, I have some interesting
findings to share. So in December last year there was a redesign of
home_feed and it was migrated to a new CF. Initially all the data in
home_feed had a TTL of 1 year, but the migrated data was inserted with a
TTL of 30 days.
Now, digging a bit deeper, I found that home_feed still has data from Jan
2015 with ttl 1275094 (14 days).

This data is for the same id from home_feed:
 date | ttl(description)
--+--
 2015-04-03 21:22:58+ |   759791
 2015-04-03 04:50:11+ |   412706
 2015-03-30 22:18:58+ |   759791
 2015-03-29 15:20:36+ |  1978689
 2015-03-28 14:41:28+ |  1275116
 2015-03-28 14:31:25+ |  1275116
 2015-03-18 19:23:44+ |  2512936
 2015-03-13 17:51:01+ |  1978689
 2015-02-12 15:41:01+ |  1978689
 2015-01-18 02:36:27+ |  1275094


I am not sure what happened in that migration, but I think that when trying
to load data we are reading this old data (the feed queries 1000/page to be
displayed to the user), and in order to read it we have to cross (read)
lots of tombstones (newer data has its TTL working correctly), hence the
error.
I am not sure how much DateTiered would help us in this situation either.
If anyone has any suggestions on how to handle this at either the systems
or the developer level, please pitch in.

Thanks

On Thu, Jun 4, 2015 at 11:47 AM, Carlos Rolo r...@pythian.com wrote:

 The TTL'd data will only be removed after gc_grace_seconds. So your data
 with a 30-day TTL will still be in Cassandra for 10 more days (40 in
 total). Has your data been there for longer than that? Otherwise it is
 expected behaviour, and you should probably do something in your data model
 to avoid scanning tombstoned data.

 Regards,

 Carlos Juzarte Rolo
 Cassandra Consultant

 Pythian - Love your data

 rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
 Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
 www.pythian.com

 On Thu, Jun 4, 2015 at 8:31 PM, Aiman Parvaiz ai...@flipagram.com wrote:

 yeah we don't update old data. One thing I am curious about is why we are
 running into so many tombstones with compaction happening normally. Is
 compaction not removing tombstones?


 On Thu, Jun 4, 2015 at 11:25 AM, Jonathan Haddad j...@jonhaddad.com
 wrote:

 DateTiered is fantastic if you've got time series, TTLed data.  That
 means no updates to old data.

 On Thu, Jun 4, 2015 at 10:58 AM Aiman Parvaiz ai...@flipagram.com
 wrote:

 Hi everyone,
 We are running a 10 node Cassandra 2.0.9 cluster without vnodes. We are
 running into an issue where we are reading too many tombstones and hence
 getting tons of WARN messages and some ERROR "query aborted" messages.

 cass-prod4 2015-06-04 14:38:34,307 WARN ReadStage:1998
 SliceQueryFilter.collectReducedColumns - Read 46 live and 1560 tombstoned
 cells in ABC.home_feed (see tombstone_warn_threshold). 100 columns was
 requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
 localDeletion=2147483647}

 cass-prod2 2015-05-31 12:55:55,331 ERROR ReadStage:1953
 SliceQueryFilter.collectReducedColumns - Scanned over 10 tombstones in
 ABC.home_feed; query aborted (see tombstone_fail_threshold)

 As you can see, all of this is happening for CF home_feed. This CF
 basically maintains a feed with a TTL set to 2592000 (30 days),
 gc_grace_seconds of 864000, and SizeTieredCompactionStrategy.

 Repairs have been running regularly and automatic compactions are
 occurring normally too.

 I could definitely use some help in tackling this issue.

 Up till now I have the following ideas:

 1) I can set gc_grace_seconds to 0, then do a manual compaction for
 this CF and bump gc_grace back up again.

 2) Set gc_grace to 0, run a manual compaction on this CF, and leave
 gc_grace at zero. In this case I have to be careful when running repairs.

 3) I am also considering moving to DateTiered compaction.

 What would be the best approach here for my feed case? Any help is
 appreciated.

 Thanks













-- 
Lead Systems Architect
10351 Santa Monica Blvd, Suite 3310
Los Angeles CA 90025


sstableloader usage doubts

2015-06-04 Thread ZeroUno

Hi,
while defining backup and restore procedures for a Cassandra cluster I'm 
trying to use sstableloader for restoring a snapshot from a backup, but 
I'm not sure I fully understand the documentation on how it should be used.


Looking at the examples in the doc at
http://docs.datastax.com/en/cassandra/2.0/cassandra/tools/toolsBulkloader_t.html
it seems like the path_to_keyspace to be passed as an argument is
exactly the Cassandra data directory. So you first move the data into
its final target location and then stream it again to the cluster?


Let's take a step back. My cluster is composed of two data centers. Each
data center has two nodes (nodeA1 and nodeA2 for center A, nodeB1 and
nodeB2 for center B).

I'm using NetworkTopologyStrategy with RF=2.

For doing periodic backups I'm creating a snapshot on two nodes 
simultaneously in a single data center (nodeA1 and nodeA2), and then 
moving the snapshot files in a safe place.
To simulate a disaster recovery situation, I truncate all tables to 
erase data (but not the schema which would be re-created anyway by my 
application), I stop cassandra on all 4 nodes, I move the snapshot 
backup files in their original locations (e.g. 
/mydatapath/cassandra/data/mykeyspace/mytable1/) on nodeA1 and nodeA2, 
then I restart cassandra on all 4 nodes.


At last, I run:


sstableloader -d nodeA1,nodeA2,nodeB1,nodeB2 
/mydatapath/cassandra/data/mykeyspace/mytable1/
sstableloader -d nodeA1,nodeA2,nodeB1,nodeB2 
/mydatapath/cassandra/data/mykeyspace/mytable2/
sstableloader -d nodeA1,nodeA2,nodeB1,nodeB2 
/mydatapath/cassandra/data/mykeyspace/mytable3/
[...and so on for all tables]


...on both nodeA1 and nodeA2, where I restored the snapshot.

Is that correct?

I observed some strange behaviour after doing this: when I truncated
tables again, a select count(*) on one of the A nodes still returned a
non-zero number, as if the data was still there.
I started thinking that maybe the source sstable directory for
sstableloader should not be the data directory itself, as this causes
some kind of double-data problem...


Can anyone please tell me if this is the correct way to proceed?
Thank you very much!

--
01