Re: old big tombstone data file occupy much disk space

2017-09-04 Thread Shalom Sagges
Thanks! :-) On Mon, Sep 4, 2017 at 2:56 PM, Nicolas Guyomar wrote: > Wrong copy/paste! > Looking at the code, it should do nothing: > // look up the sstables now that we're on the compaction executor, so we don't try to re-compact > // something that was already being compacted earlier.

Re: old big tombstone data file occupy much disk space

2017-09-04 Thread Nicolas Guyomar
Wrong copy/paste! Looking at the code, it should do nothing: // look up the sstables now that we're on the compaction executor, so we don't try to re-compact // something that was already being compacted earlier. On 4 September 2017 at 13:54, Nicolas Guyomar wrote: > You'll get the WARN "Will not compact {}: it is not an active sstable"

Re: old big tombstone data file occupy much disk space

2017-09-04 Thread Nicolas Guyomar
You'll get the WARN "Will not compact {}: it is not an active sstable" :) On 4 September 2017 at 12:07, Shalom Sagges wrote: > By the way, does anyone know what happens if I run a user defined compaction on an sstable that's already in compaction? > On Sun, Sep 3, 2017 at 2:55 PM,

Re: old big tombstone data file occupy much disk space

2017-09-04 Thread Shalom Sagges
By the way, does anyone know what happens if I run a user defined compaction on an sstable that's already in compaction? On Sun, Sep 3, 2017 at 2:55 PM, Shalom Sagges wrote: > Try this blog by The Last Pickle: > http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html

Re: old big tombstone data file occupy much disk space

2017-09-03 Thread Shalom Sagges
Try this blog by The Last Pickle: http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html Shalom Sagges, DBA. On Sat, Sep 2, 2017 a

Re: old big tombstone data file occupy much disk space

2017-09-02 Thread Jeff Jirsa
If you're on 3.0 (3.0.6 or 3.0.8 or newer, I don't remember which), TWCS was designed for TTL-only time series use cases. Alternatively, if you have IO to spare, you may find LCS works as well (it'll cause quite a bit more compaction, but a much higher chance to compact away tombstones). There ar
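Jeff's TWCS suggestion could look roughly like the following in CQL. This is a hedged sketch only: the table name comes from elsewhere in the thread, and the one-day window is an illustrative assumption, not something the thread specifies.

```sql
-- Sketch: moving a TTL-only time series table to TWCS
-- (window unit/size below are illustrative assumptions)
ALTER TABLE gps.gpsfullwithstate
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': 1
  };
```

With TWCS, sstables written in the same time window are compacted together, so once an entire window passes its TTL the resulting sstable can be dropped as a whole instead of lingering like the 300 GB files described in this thread.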

Re: old big tombstone data file occupy much disk space

2017-09-02 Thread qf zhou
Yes, you are right. I am using the STCS compaction strategy with some kind of time series model. Too much disk space has been occupied. What should I do to stop the disk from filling up? I only want to keep the most recent 100 days of data, so I set default_time_to_live = 8640000 (100 days). I know I need
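For reference, default_time_to_live is expressed in seconds, so 100 days works out to 8,640,000 seconds; a quick sanity check:

```shell
# default_time_to_live takes seconds; 100 days in seconds:
ttl_seconds=$((100 * 24 * 60 * 60))
echo "$ttl_seconds"   # prints 8640000
```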

Re: old big tombstone data file occupy much disk space

2017-09-02 Thread Nicolas Guyomar
Hi, The nodetool command only shows what's going on on this particular node. Validation compaction means that Cassandra is computing a Merkle tree so that this node can participate in an ongoing repair. What kind of disk hardware do you have? A node with 1.5 TB of data seems a lot in regards to your fir

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou
After I run nodetool compactionstats -H, it says that:

pending tasks: 6
- gps.gpsfullwithstate: 6

id                                     compaction type   keyspace   table              completed   total   unit   progress
56ebd730-8ede-11e7-9754-c981af5d39a9   Validation        gps        gpsfullwithstate

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
Hi, Well, the command you are using works for me on 3.0.9; I do not have any logs at INFO level when I force a compaction, and everything works fine for me. Are you sure there is nothing happening behind the scenes? What does 'nodetool compactionstats -H' say? On 1 September 2017 at 12:05, qf z

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou
When I trigger the compaction with the full path, I found nothing in the system.log. Nothing happens in the terminal and it just stops there. #calling operation forceUserDefinedCompaction of mbean org.apache.cassandra.db:type=CompactionManager On 1 September 2017 at 5:06 PM, qf zhou wrote: I

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
Whoops, sorry, I misled you with Cassandra 2.1 behavior; you were right to give your sstable's full path. What kind of log do you get when you trigger the compaction with the full path? On 1 September 2017 at 11:30, Nicolas Guyomar wrote: > Well, not sure why you reached a memory usage limit, but

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
Well, not sure why you reached a memory usage limit, but according to the 3.0 branch's code: https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L632 you just need to give the sstable filename, and Cassandra manages to find it b

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou
I found the following log. What does it mean?

INFO [CompactionExecutor:11] 2017-09-01 16:55:47,909 NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
WARN [RMI TCP Connection(1714)-127.0.0.1] 2017-09-01 17:02:42,516 CompactionManager.java:70

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
You should have a log coming from the CompactionManager (in cassandra system.log) when you try the command; what does it say? On 1 September 2017 at 10:07, qf zhou wrote: > When I run the command, the following occurs and it returns null. > Is it normal? > echo "run -b org.apache.cassa

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou
When I run the command, the following occurs and it returns null. Is it normal?

echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar -l localhost:7199

Welcome to JMX

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
Hi, Last time I used forceUserDefinedCompaction, I got myself a headache because I was trying to use a full path like you're doing, but in fact it just needs the sstable filename as its parameter. Can you just try: echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-10096

Re: old big tombstone data file occupy much disk space

2017-08-31 Thread qf zhou
dataPath=/hdd3/cassandra/data/gps/gpsfullwithstate-073e51a0cdb811e68dce511be6a305f6/mc-100963-big-Data.db

echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction $dataPath" | java -jar /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar -l localhost:7199

In th

Re: old big tombstone data file occupy much disk space

2017-08-31 Thread Jeff Jirsa
Use a user-defined compaction to do a single-sstable compaction on just that sstable. It's a nodetool command in very recent versions, or a JMX method in older versions. -- Jeff Jirsa > On Aug 31, 2017, at 11:04 PM, qf zhou wrote: > > I am using a cluster with 3 nodes and the cassandra version
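The two routes Jeff mentions can be sketched as follows. The nodetool flag shipped in 3.4, so on the 3.0.9 cluster in this thread the JMX route applies; the sstable name and jmxterm invocation are the ones quoted elsewhere in the thread, and both commands assume a live node listening on the default JMX port 7199.

```
# Recent versions (3.4+): user-defined compaction directly via nodetool
nodetool compact --user-defined mc-100963-big-Data.db

# Older versions (e.g. 3.0.x): invoke the JMX operation instead, here via jmxterm
echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar jmxterm-1.0-alpha-4-uber.jar -l localhost:7199
```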

old big tombstone data file occupy much disk space

2017-08-31 Thread qf zhou
I am using a cluster with 3 nodes and the Cassandra version is 3.0.9. I have used it for about 6 months. Now each node has about 1.5T of data on disk. I found some sstable files are over 300G. Using the sstablemetadata command, I found: Estimated droppable tombstones: 0.9622972799707109.
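A droppable-tombstone estimate that high means nearly the whole file is reclaimable. As a back-of-envelope check (using the 300 GB sstable size mentioned above; this is a rough estimate, not an exact figure Cassandra reports):

```shell
# droppable-tombstone ratio x sstable size ~= reclaimable space, in GB
awk 'BEGIN { printf "%.0f\n", 0.9622972799707109 * 300 }'   # prints 289
```

So compacting that one sstable could free on the order of 289 GB, which is why a single user-defined compaction on it is worth the trouble.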