The read queries are continuously failing because of the tombstones, though:
"Request did not complete within rpc_timeout."

thanks


On Wed, Jul 27, 2016 at 5:51 PM, Jeff Jirsa <jeff.ji...@crowdstrike.com>
wrote:

> 220kb worth of tombstones doesn’t seem like enough to worry about.
>
>
>
>
>
> *From: *sai krishnam raju potturi <pskraj...@gmail.com>
> *Reply-To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
> *Date: *Wednesday, July 27, 2016 at 2:43 PM
> *To: *Cassandra Users <user@cassandra.apache.org>
> *Subject: *Re: Re : Purging tombstones from a particular row in SSTable
>
>
>
> and also the sstable size in question is like 220 kb in size.
>
>
>
> thanks
>
>
>
>
>
> On Wed, Jul 27, 2016 at 5:41 PM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
> it's set to 1800, Vinay.
>
>
>
>   bloom_filter_fp_chance=0.010000 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.100000 AND
>   gc_grace_seconds=1800 AND
>   index_interval=128 AND
>   read_repair_chance=0.000000 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='99.0PERCENTILE' AND
>   memtable_flush_period_in_ms=0 AND
>   compaction={'min_sstable_size': '1024', 'tombstone_threshold': '0.01',
>     'tombstone_compaction_interval': '1800', 'class':
>     'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
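>
> If more aggressive single-SSTable tombstone compaction is wanted, those
> subproperties can be changed online with ALTER TABLE; a sketch via cqlsh,
> with ks.cf standing in for the real keyspace and table, and
> unchecked_tombstone_compaction added as an assumption worth verifying on
> 2.0.14 (it bypasses the overlap pre-check that can prevent single-SSTable
> tombstone compactions from running):
>
>   # ks.cf is a placeholder; unchecked_tombstone_compaction is an optional
>   # extra -- verify it is available in your 2.0.x release before relying on it
>   cqlsh -e "ALTER TABLE ks.cf WITH compaction = {
>     'class': 'SizeTieredCompactionStrategy',
>     'tombstone_threshold': '0.01',
>     'tombstone_compaction_interval': '1800',
>     'unchecked_tombstone_compaction': 'true'};"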
>
>
>
> thanks
>
>
>
>
>
> On Wed, Jul 27, 2016 at 5:34 PM, Vinay Kumar Chella <
> vinaykumar...@gmail.com> wrote:
>
> What is your GC_grace_seconds set to?
>
>
>
> On Wed, Jul 27, 2016 at 1:13 PM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
> thanks Vinay and DuyHai.
>
>
>
>     we are using version 2.0.14. I did a "user defined compaction"
> following the instructions in the link below; the tombstones still persist
> even after that.
>
>
>
> https://gist.github.com/jeromatron/e238e5795b3e79866b83
>
>
>
> Also, we changed tombstone_compaction_interval to 1800 and
> tombstone_threshold to 0.1, but it did not help.
>
>
>
> thanks
>
>
>
>
>
>
>
> On Wed, Jul 27, 2016 at 4:05 PM, DuyHai Doan <doanduy...@gmail.com> wrote:
>
> This feature is also exposed directly in nodetool from version Cassandra
> 3.4
>
>
>
> nodetool compact --user-defined <SSTable file>
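>
> A usage sketch (the data-file path below is illustrative; point it at the
> actual -Data.db file):
>
>   # Cassandra 3.4+ only; path is made up for illustration
>   nodetool compact --user-defined /var/lib/cassandra/data/ks/cf/ks-cf-ka-1-Data.db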
>
>
>
> On Wed, Jul 27, 2016 at 9:58 PM, Vinay Chella <vche...@netflix.com> wrote:
>
> You can run a file-level compaction using JMX to get rid of tombstones in
> a single SSTable. Ensure GC_Grace_seconds is set such that
>
>
>
> current time >= deletion time (tombstone write time) + GC_Grace_seconds
>
>
>
> File-level compaction:
>
>   /usr/bin/java -jar cmdline-jmxclient-0.10.3.jar - localhost:${port} \
>     org.apache.cassandra.db:type=CompactionManager \
>     forceUserDefinedCompaction="'${KEYSPACE}','${SSTABLEFILENAME}'"
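>
> To verify that the tombstones in that file are actually older than
> gc_grace_seconds before forcing the compaction, the sstablemetadata tool
> shipped under tools/bin reports an estimated droppable tombstone ratio; a
> sketch with an illustrative path:
>
>   # prints "Estimated droppable tombstones" among other stats
>   tools/bin/sstablemetadata /var/lib/cassandra/data/ks/cf/ks-cf-ka-1-Data.db | grep -i droppable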
>
>
>
>
>
>
>
>
> On Wed, Jul 27, 2016 at 11:59 AM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
> hi;
>
>   we have a column family that has around 1000 rows, with one row that is
> really huge (a million columns); 95% of that row is tombstones. Since there
> is just one SSTable, no compaction is going to be triggered. Is there any
> way we can get rid of the tombstones in that row?
>
>
>
> Neither user-defined compaction nor nodetool compact had any effect. Any
> ideas, folks?
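>
> For reference, the commands tried were presumably along these lines
> (keyspace/table names are placeholders):
>
>   # major compaction of a single column family on 2.0.x
>   nodetool compact ks cf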
>
>
>
> thanks
