Re: Stress tool command

2017-09-04 Thread Jeff Jirsa
You can create the schema in advance with custom table options and stress will happily use it as-is.

-- Jeff Jirsa

> On Sep 4, 2017, at 10:25 AM, Akshit Jain wrote:
>
> Hi,
> Is there any way to set the gc_grace_seconds parameter in the stress tool command?
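A minimal sketch of the approach Jeff describes. The keyspace/table names match what cassandra-stress creates by default (`keyspace1.standard1`); the replication settings and the 3600-second value are illustrative, not recommendations:

```sql
-- Pre-create the stress schema with the desired gc_grace_seconds
-- (column layout mirrors the default standard1 table; values are illustrative)
CREATE KEYSPACE IF NOT EXISTS keyspace1
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS keyspace1.standard1 (
  key blob PRIMARY KEY,
  "C0" blob, "C1" blob, "C2" blob, "C3" blob, "C4" blob
) WITH gc_grace_seconds = 3600;

-- Alternatively, let stress create the table on its first run, then:
-- ALTER TABLE keyspace1.standard1 WITH gc_grace_seconds = 3600;
```

A subsequent `cassandra-stress write` run should then use the existing table, custom options included, rather than recreating it.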

Stress tool command

2017-09-04 Thread Akshit Jain
Hi,

Is there any way to set the *gc_grace_seconds* parameter in the stress tool command?

Regards

Re: Cassandra snapshot restore with VNODES missing some data

2017-09-04 Thread Jai Bheemsen Rao Dhanwada
Hello Kurt,

Thanks for the help :)

On Fri, Sep 1, 2017 at 1:12 PM, Jai Bheemsen Rao Dhanwada <jaibheem...@gmail.com> wrote:
> yes looks like I am missing that.
>
> Let me test on one node and try a full cluster restore.
>
> will update here once I complete my test
>
> On Fri, Sep 1, 2017 at

Re: old big tombstone data file occupy much disk space

2017-09-04 Thread Shalom Sagges
Thanks! :-)

On Mon, Sep 4, 2017 at 2:56 PM, Nicolas Guyomar wrote:
> Wrong copy/paste!
>
> Looking at the code, it should do nothing:
>
> // look up the sstables now that we're on the compaction executor, so we
> // don't try to re-compact
> // something that was

Re: timeouts on counter tables

2017-09-04 Thread Rudi Bruchez
I'm going to try different options. Do any of you have some experience with tweaking one of these conf parameters to improve read throughput, especially in the case of counter tables?

1/ using SSD:
trickle_fsync: true
trickle_fsync_interval_in_kb: 1024

2/ concurrent_compactors to the number of
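For reference, the parameters discussed in this thread live in cassandra.yaml. A sketch with illustrative values only (not tuning recommendations; `concurrent_counter_writes` is the setting mentioned later in the thread):

```yaml
# cassandra.yaml -- values below are illustrative, not recommendations
trickle_fsync: true                 # on SSDs, fsync dirty data in small increments
trickle_fsync_interval_in_kb: 1024
concurrent_compactors: 4            # commonly sized to the number of disks or cores
concurrent_counter_writes: 64       # raise if CounterMutationStage keeps backing up
```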

Re: timeouts on counter tables

2017-09-04 Thread Rudi Bruchez
It can happen on any of the nodes. We can have a large number of pending on ReadStage and CounterMutationStage. We'll try to increase concurrent_counter_writes to see how it changes things.

> Likely. I believe counter mutations are a tad more expensive than a normal mutation. If you're doing a

Re: timeouts on counter tables

2017-09-04 Thread kurt greaves
Likely. I believe counter mutations are a tad more expensive than a normal mutation. If you're doing a lot of counter updates, that probably doesn't help. Regardless, high amounts of pending reads/mutations are generally not good and indicate the node is being overloaded. Are you just seeing this on
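A quick way to spot the backlog kurt describes is to scan the Pending column of `nodetool tpstats`. The block below runs against a hypothetical sample of that output (the pool names are real stages, but the numbers and the threshold are invented); on a live node you would pipe `nodetool tpstats` in directly:

```shell
# Hypothetical sample of `nodetool tpstats` output, for illustration only
cat > /tmp/tpstats_sample.txt <<'EOF'
Pool Name                    Active   Pending      Completed
ReadStage                         4       120        8923456
CounterMutationStage              8       350        1234567
MutationStage                     2         0        9876543
EOF

# Print every pool whose Pending count exceeds an illustrative threshold of 100
awk 'NR > 1 && $3 > 100 { print $1, $3 }' /tmp/tpstats_sample.txt
```

On a live node the equivalent would be `nodetool tpstats | awk 'NR > 1 && $3 > 100 { print $1, $3 }'` (adjust the column index if your Cassandra version formats the output differently).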

Re: Test repair command

2017-09-04 Thread kurt greaves
Try checking the Percent Repaired reported in nodetool cfstats.
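As a sketch, that field can be pulled out of the cfstats output with grep. The excerpt below is hypothetical (keyspace, table, and numbers are invented); against a live node you would run `nodetool cfstats <keyspace>` and pipe it through the same grep:

```shell
# Hypothetical excerpt of `nodetool cfstats` output, for illustration only
cat > /tmp/cfstats_sample.txt <<'EOF'
Keyspace : mykeyspace
        Table: mytable
        SSTable count: 12
        Percent repaired: 42.7
EOF

grep 'Percent repaired' /tmp/cfstats_sample.txt
```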

Re: old big tombstone data file occupy much disk space

2017-09-04 Thread Nicolas Guyomar
Wrong copy/paste! Looking at the code, it should do nothing:

// look up the sstables now that we're on the compaction executor, so we don't try to re-compact
// something that was already being compacted earlier.

On 4 September 2017 at 13:54, Nicolas Guyomar

Re: old big tombstone data file occupy much disk space

2017-09-04 Thread Nicolas Guyomar
You'll get the WARN "Will not compact {}: it is not an active sstable" :)

On 4 September 2017 at 12:07, Shalom Sagges wrote:
> By the way, does anyone know what happens if I run a user defined compaction on an sstable that's already in compaction?
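For context, user-defined compaction (the operation being discussed) is triggered roughly as sketched below; the sstable path and filename are hypothetical, and the `--user-defined` flag only exists in newer nodetool versions (Cassandra 3.4+), with older versions going through JMX instead:

```shell
# Cassandra 3.4+: hand nodetool an explicit sstable to compact
# (path and filename are illustrative)
nodetool compact --user-defined /var/lib/cassandra/data/ks/tbl/mc-42-big-Data.db

# Older versions: invoke the operation over JMX with a JMX client, e.g.:
#   bean:      org.apache.cassandra.db:type=CompactionManager
#   operation: forceUserDefinedCompaction("mc-42-big-Data.db")
```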

Re: old big tombstone data file occupy much disk space

2017-09-04 Thread Shalom Sagges
By the way, does anyone know what happens if I run a user defined compaction on an sstable that's already in compaction?

On Sun, Sep 3, 2017 at 2:55 PM, Shalom Sagges wrote:
> Try this blog by The Last Pickle: