How to do Cassandra routine maintenance

2017-09-01 Thread qf zhou
I am using a cluster with 3 Cassandra nodes; the cluster version is 3.0.9.
Each day about 200~300 million records are inserted into the cluster.
As time goes by, the data occupies more and more disk space.
Currently, the data distribution on each node is as follows:

UN  172.20.5.4  2.5 TiB   256  66.3%  c5271e74-19a1-4cee-98d7-dc169cf87e95  rack1
UN  172.20.5.2  1.73 TiB  256  67.0%  c623bbc0-9839-4d2d-8ff3-db7115719d59  rack1
UN  172.20.5.3  1.86 TiB  256  66.7%  c555e44c-9590-4f45-aea4-f5eca68180b2  rack1

There is only one datacenter.  

The compaction settings are:
compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '12', 'tombstone_threshold': '0.1', 
'unchecked_tombstone_compaction': 'true'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 8640000
AND gc_grace_seconds = 432000
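For context on how long data lingers with these settings: an inserted row only becomes purgeable well after its TTL, because the resulting tombstone must also wait out gc_grace_seconds before compaction may drop it. A back-of-envelope sketch (assuming the TTL is the 100 days described elsewhere in this thread):

```python
# Back-of-envelope arithmetic only: when can an expired, TTL'd row actually
# be purged from disk? Values follow this thread (TTL of 100 days,
# gc_grace_seconds = 432000 from the schema above).
TTL_SECONDS = 100 * 86400        # 100 days = 8,640,000 s
GC_GRACE_SECONDS = 432000        # 5 days

# A cell written at t=0 expires at TTL_SECONDS, but the resulting tombstone
# may only be dropped by compaction once gc_grace has also elapsed.
purgeable_after_days = (TTL_SECONDS + GC_GRACE_SECONDS) / 86400
print(purgeable_after_days)  # 105.0
```

So under these settings, disk space for a given day's writes cannot be reclaimed for roughly 105 days at the earliest.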

I really want to know how to do Cassandra routine maintenance.

I found the data seems to grow faster and faster, and the disks are under heavy load.
Sometimes I see data inconsistency: two different results appear for the
same query.

So what should I do to keep the cluster healthy, and how should I maintain the cluster?

I would appreciate some help very much. Thanks a lot!
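For what it's worth, the routine most operators converge on can be summarized as a handful of nodetool commands. The sketch below just collects them as command strings; this is general community advice, not an official runbook, and schedules and keyspaces must be adapted to the cluster:

```python
# A rough routine-maintenance checklist for a small cluster, collected as
# nodetool command strings. Illustrative only; run them on each node and
# schedule repairs so they complete within gc_grace_seconds (5 days here).
maintenance_checklist = [
    # Anti-entropy repair of the node's primary ranges (-pr avoids
    # repairing the same range once per replica). This is also what
    # addresses "two different results for the same query" inconsistencies.
    "nodetool repair -pr",
    # Watch the compaction backlog; a steadily growing queue means the
    # disks cannot keep up with the write rate.
    "nodetool compactionstats -H",
    # Per-table disk usage and latency, to find the heavy tables
    # (cfstats on older versions).
    "nodetool tablestats",
    # Only after topology changes (e.g. adding nodes): remove data a node
    # no longer owns.
    "nodetool cleanup",
]
for cmd in maintenance_checklist:
    print(cmd)
```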



-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou
After  I run  nodetool compactionstats -H,  it says that:

pending tasks: 6
- gps.gpsfullwithstate: 6

id   compaction type keyspace table
completed  total  unit  progress
56ebd730-8ede-11e7-9754-c981af5d39a9 Validation  gps  gpsfullwithstate 
478.67 GiB 4.59 TiB   bytes 10.19%  
3fc33340-8e4e-11e7-9754-c981af5d39a9 Compaction  gps  gpsfullwithstate 
451.73 GiB 817.51 GiB bytes 55.26%  
f9acc4b0-8edf-11e7-9754-c981af5d39a9 Validation  gps  gpsfullwithstate 
472.36 GiB 5.32 TiB   bytes 8.67%   
4af0b300-8f7a-11e7-9754-c981af5d39a9 Compaction  gps  gpsfullwithstate 
3.76 GiB   75.37 GiB  bytes 5.00%   
f1282280-8edf-11e7-9754-c981af5d39a9 Validation  gps  gpsfullwithstate 
474.95 GiB 4.59 TiB   bytes 10.11%  
0ccb7b90-8ee0-11e7-9754-c981af5d39a9 Validation  gps  gpsfullwithstate 
472.4 GiB  5.32 TiB   bytes 8.67%  

What does this mean? And what is the difference between Validation and Compaction?


> On 1 September 2017, at 20:36, Nicolas Guyomar wrote:
> 
> Hi,
> 
> Well, the command you are using works for me on 3.0.9; I do not have any logs
> at INFO level when I force a compaction, and everything works fine for me.
> 
> Are you sure there is nothing happening behind the scenes? What does
> 'nodetool compactionstats -H' say?
> 
> On 1 September 2017 at 12:05, qf zhou wrote:
> When I trigger the compaction with the full path,  I found nothing in the 
> system.log.  Nothing happens in the  terminal and it just stops there.
> 
> #calling operation forceUserDefinedCompaction of mbean 
> org.apache.cassandra.db:type=CompactionManager
> 
> 
> 
> 
>> On 1 September 2017, at 17:06, qf zhou wrote:
>> 
>> I found the following log. What does it mean?
>> 
>> INFO  [CompactionExecutor:11] 2017-09-01 16:55:47,909 NoSpamLogger.java:91 - 
>> Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
>> WARN  [RMI TCP Connection(1714)-127.0.0.1] 2017-09-01 17:02:42,516 
>> CompactionManager.java:704 - Schema does not exist for file 
>> mc-151276-big-Data.db. Skipping.
>> 
>> 
>>> On 1 September 2017, at 16:54, Nicolas Guyomar wrote:
>>> 
>>> You should have a log coming from the CompactionManager (in the Cassandra
>>> system.log) when you try the command; what does it say?
>>> 
>>> On 1 September 2017 at 10:07, qf zhou wrote:
>>> When I run the command,  the following occurs and  it returns null.
>>> 
>>> Is it normal ?
>>> 
>>> echo "run -b org.apache.cassandra.db:type=CompactionManager 
>>> forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar 
>>> /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l localhost:7199
>>> 
>>> 
>>> Welcome to JMX terminal. Type "help" for available commands.
>>> $>run -b org.apache.cassandra.db:type=CompactionManager 
>>> forceUserDefinedCompaction mc-100963-big-Data.db
>>> #calling operation forceUserDefinedCompaction of mbean 
>>> org.apache.cassandra.db:type=CompactionManager
>>> #operation returns: 
>>> null
>>> 
>>> 
>>> 
>>> 
 On 1 September 2017, at 15:49, Nicolas Guyomar wrote:
 
 Hi,
 
 Last time I used forceUserDefinedCompaction, I got myself a headache
 because I was trying to use a full path like you're doing, but in fact it
 just needs the sstable name as the parameter.
 
 Can you just try : 
 
 echo "run -b org.apache.cassandra.db:type=CompactionManager 
 forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar 
 /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l localhost:7199
 
 
 
 On 1 September 2017 at 08:29, qf zhou wrote:
 
 dataPath=/hdd3/cassandra/data/gps/gpsfullwithstate-073e51a0cdb811e68dce511be6a305f6/mc-100963-big-Data.db
 echo "run -b org.apache.cassandra.db:type=CompactionManager 
 forceUserDefinedCompaction $dataPath" | java -jar 
 /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l localhost:7199
 
 In the above, I am using a JMX method, but it seems that the file size
 doesn’t change. Is my command wrong?
 
 > On 1 September 2017, at 14:17, Jeff Jirsa wrote:
 >
 > User defined compaction to do a single sstable compaction on just that 
 > sstable
 >
 > It's a nodetool command in very recent versions, or a jmx method in 
 > older versions
 >
 >
 > --
 > Jeff Jirsa
 >
 >
 >> On Aug 31, 2017, at 11:04 PM, qf zhou wrote:
 >>
 >> I am using  a cluster with  3 nodes and  the cassandra version is 
 >> 3.0.9. I have used it about 6 

Re: Cassandra 3.11 is compacting forever

2017-09-01 Thread Fay Hou [Storage Service]
Try to do a rolling restart of the cluster before doing a compaction.

On Fri, Sep 1, 2017 at 3:09 PM, Igor Leão  wrote:

> [Igor's full log listing snipped here; it appears verbatim in his original
> message later in this digest]

Re: Cassandra 3.11 is compacting forever

2017-09-01 Thread Igor Leão
Some generic errors:

*[aladdin@ip-172-16-1-10 cassandra]$ tail cassandra.log | grep -i error*
*[aladdin@ip-172-16-1-10 cassandra]$ tail cassandra.log | grep -i excep*
*[aladdin@ip-172-16-1-10 cassandra]$ tail cassandra.log | grep -i fail*
*[aladdin@ip-172-16-1-10 cassandra]$ tail debug.log | grep -i error*
*[aladdin@ip-172-16-1-10 cassandra]$ tail debug.log | grep -i exce*
*[aladdin@ip-172-16-1-10 cassandra]$ tail debug.log | grep -i fail*
*DEBUG [GossipStage:1] 2017-09-01 15:33:27,046 FailureDetector.java:457 -
Ignoring interval time of 2108299431 for /172.16.1.112
*
*DEBUG [GossipStage:1] 2017-09-01 15:33:29,051 FailureDetector.java:457 -
Ignoring interval time of 2005507384 for /172.16.1.74 *
*DEBUG [GossipStage:1] 2017-09-01 15:33:45,968 FailureDetector.java:457 -
Ignoring interval time of 2003371497 for /172.16.1.74 *
*DEBUG [GossipStage:1] 2017-09-01 15:33:51,133 FailureDetector.java:457 -
Ignoring interval time of 2013260173 for /172.16.1.74
*
*DEBUG [GossipStage:1] 2017-09-01 15:33:58,981 FailureDetector.java:457 -
Ignoring interval time of 2009620081 for /172.16.1.112
*
*DEBUG [GossipStage:1] 2017-09-01 15:34:19,235 FailureDetector.java:457 -
Ignoring interval time of 2010956256 for /172.16.1.74 *
*DEBUG [GossipStage:1] 2017-09-01 15:34:19,235 FailureDetector.java:457 -
Ignoring interval time of 2011127930 for /10.0.1.122 *
*[aladdin@ip-172-16-1-10 cassandra]$ tail system.log | grep -i error*
*io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)()
failed: Connection reset by peer*
*[aladdin@ip-172-16-1-10 cassandra]$ tail system.log | grep -i exce*
*INFO  [Native-Transport-Requests-5] 2017-09-01 15:22:58,806
Message.java:619 - Unexpected exception during request; channel = [id:
0xdd63db2f, L:/10.0.1.47:9042 ! R:/10.0.44.196:41422]*
*io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)()
failed: Connection reset by peer*
*[aladdin@ip-172-16-1-10 cassandra]$ tail system.log | grep -i fail*
*io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)()
failed: Connection reset by peer*


Some interesting errors:

1.
*DEBUG [ReadRepairStage:1] 2017-09-01 15:34:58,485 ReadCallback.java:242 -
Digest mismatch:*
*org.apache.cassandra.service.DigestMismatchException: Mismatch for key
DecoratedKey(5988282114260523734,
32623331326162652d63352d343237632d626334322d306466643762653836343830)
(023d99bbcf2263f0fa450c2312fdce88 vs a60ba37a46e0a61227a8b560fa4e0dfb)*
* at
org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92)
~[apache-cassandra-3.11.0.jar:3.11.0]*
* at
org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233)
~[apache-cassandra-3.11.0.jar:3.11.0]*
* at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_112]*
* at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_112]*
* at
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
[apache-cassandra-3.11.0.jar:3.11.0]*
* at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_112]*

2.
*INFO  [Native-Transport-Requests-5] 2017-09-01 15:22:58,806
Message.java:619 - Unexpected exception during request; channel = [id:
0xdd63db2f, L:/10.0.1.47:9042 ! R:/10.0.44.196:41422]*
*io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)()
failed: Connection reset by peer*
* at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown Source)
~[netty-all-4.0.44.Final.jar:4.0.44.Final]*
*INFO  [Native-Transport-Requests-11] 2017-09-01 15:31:42,722
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
allocate chunk of 1.000MiB*

*INFO  [CompactionExecutor:470] 2017-09-01 10:16:42,026
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
allocate chunk of 1.000MiB*
*INFO  [CompactionExecutor:475] 2017-09-01 10:31:42,032
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
allocate chunk of 1.000MiB*
*INFO  [CompactionExecutor:478] 2017-09-01 10:46:42,108
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
allocate chunk of 1.000MiB*
*INFO  [CompactionExecutor:482] 2017-09-01 11:01:42,131
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
allocate chunk of 1.000MiB*

About this last error, I tried to increase `file_cache_size_in_mb` of this
node to 2048, but the error only changed to
*INFO  [ReadStage-2] 2017-09-01 16:18:38,657 NoSpamLogger.java:91 - Maximum
memory usage reached (2.000GiB), cannot allocate chunk of 1.000MiB*
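For reference, the limit in that log line is the chunk-cache size set in cassandra.yaml; a fragment (the 2048 value is the one Igor tried above, not a recommendation):

```yaml
# cassandra.yaml fragment (sketch). The NoSpamLogger "Maximum memory usage
# reached" line refers to this buffer-pool / chunk-cache cap; raising it only
# raises the ceiling, as the log line above shows (512 MiB -> 2 GiB).
file_cache_size_in_mb: 2048
```

The message is logged at INFO level and generally just indicates the cache is full, so a larger value tends to change the number in the message rather than eliminate it.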

2017-09-01 9:07 GMT-03:00 kurt greaves :

> are you seeing any errors in the logs? Is that one compaction still getting stuck?

Re: Cassandra snapshot restore with VNODES missing some data

2017-09-01 Thread Jai Bheemsen Rao Dhanwada
Yes, it looks like I am missing that.

Let me test on one node and try a full cluster restore.

I will update here once I complete my test.

On Fri, Sep 1, 2017 at 5:01 AM, kurt greaves  wrote:

> is num_tokens also set to 256?
>


Re: How to check if repair is actually successful

2017-09-01 Thread Fay Hou [Storage Service]
At the end of the repair, you should see something like:

[2017-09-01 06:59:04,699] Repair completed successfully
[2017-09-01 06:59:04,704] Repair command #1 finished in X hour X minutes X
seconds

On Fri, Sep 1, 2017 at 9:51 AM, Blake Eggleston 
wrote:

> If nodetool repair doesn't return an error, and doesn't hang, the repair
> completed successfully.
>
> On September 1, 2017 at 5:50:53 AM, Akshit Jain (akshit13...@iiitd.ac.in)
> wrote:
>
> Hi,
> I am performing repair on a Cassandra cluster.
> After getting the repair status as successful, how can I figure out whether
> it actually succeeded?
> Is there any way to test it?
>
>


Re: How to check if repair is actually successful

2017-09-01 Thread Blake Eggleston
If nodetool repair doesn't return an error, and doesn't hang, the repair 
completed successfully.

On September 1, 2017 at 5:50:53 AM, Akshit Jain (akshit13...@iiitd.ac.in) wrote:

Hi,
I am performing repair on a Cassandra cluster.
After getting the repair status as successful, how can I figure out whether
it actually succeeded?
Is there any way to test it?


Re: Cassandra CF Level Metrics (Read, Write Count and Latency)

2017-09-01 Thread Chris Lohfink
To be future compatible, you should consider using `type=Table` instead of
`type=ColumnFamily`, depending on your version.

> not matching with the total read requests

The table-level metrics for read/write latencies will not match the number
of requests you've made. These metrics measure the amount of time it took to
perform the action of the read/write locally on that node. The
`type=ClientRequest` MBeans are the ones that are at the coordinator level,
including querying all the replicas, merging results, etc.

The table metrics do have a name=CoordinatorReadLatency MBean (also Scan for
range queries), which may be what you're looking for. Table-level
coordinator write metrics are missing; since the coordinator read metrics
were actually added for speculative retry, I think writes were overlooked.

Chris
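To make the naming concrete, here is a small sketch that assembles the MBean object names described above. The `table_metric` helper is mine, not part of any Cassandra API; `ColumnFamily` is the legacy type name for older versions:

```python
# Sketch: building Cassandra metric MBean object names following the
# patterns quoted in this thread. Verify the exact names against your
# version's metrics documentation.
def table_metric(keyspace: str, table: str, name: str, legacy: bool = False) -> str:
    # Older versions expose type=ColumnFamily; newer ones type=Table.
    mbean_type = "ColumnFamily" if legacy else "Table"
    return (f"org.apache.cassandra.metrics:type={mbean_type},"
            f"keyspace={keyspace},scope={table},name={name}")

# Node-local read latency for one table:
local = table_metric("gps", "gpsfullwithstate", "ReadLatency")
# Coordinator-level read latency for the same table:
coordinator = table_metric("gps", "gpsfullwithstate", "CoordinatorReadLatency")
print(local)
print(coordinator)
```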

On Thu, Aug 31, 2017 at 10:58 PM, Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:

> okay, let me try it out
>
> On Thu, Aug 31, 2017 at 8:30 PM, Christophe Schmitz <
> christo...@instaclustr.com> wrote:
>
>> Hi Jai,
>>
>> The ReadLatency MBean exposes a few metrics, including the Count one,
>> which is the total read-request count you are after.
>> See the attached screenshot.
>>
>> Cheers,
>>
>> Christophe
>>
>> On 1 September 2017 at 09:21, Jai Bheemsen Rao Dhanwada <
>> jaibheem...@gmail.com> wrote:
>>
>>> I did look at the document and tried setting up the metric as follows,
>>> but it is not matching the total read requests. I am using
>>> "ReadLatency_OneMinuteRate":
>>>
>>> /org.apache.cassandra.metrics:type=ColumnFamily,keyspace=*,scope=*,name=ReadLatency
>>>
>>> On Thu, Aug 31, 2017 at 4:17 PM, Christophe Schmitz <
>>> christo...@instaclustr.com> wrote:
>>>
 Hello Jai,

 Did you have a look at the following page:
 http://cassandra.apache.org/doc/latest/operating/metrics.html

 In your case, you would want the following MBeans:
 org.apache.cassandra.metrics:type=Table,keyspace=<keyspace>,scope=<table>,name=<MetricName>
 With MetricName set to ReadLatency and WriteLatency

 Cheers,

 Christophe



 On 1 September 2017 at 09:08, Jai Bheemsen Rao Dhanwada <
 jaibheem...@gmail.com> wrote:

> Hello All,
>
> I am looking to capture the CF level Read, Write count and Latency. As
> of now I am using Telegraf plugin to capture the JMX metrics.
>
> What is the MBeans, scope and metric to look for the CF level metrics?
>
>



>>>
>>
>>
>> --
>>
>>
>> *Christophe Schmitz*
>> *Director of consulting EMEA* AU: +61 4 03751980 / FR: +33 7 82022899
>>
>>
>>
>
>


How to check if repair is actually successful

2017-09-01 Thread Akshit Jain
Hi,
I am performing repair on a Cassandra cluster.
After getting the repair status as successful, how can I figure out whether
it actually succeeded?
Is there any way to test it?


Test repair command

2017-09-01 Thread Akshit Jain
Hi everyone,
I'm new to Cassandra. I was checking the nodetool repair command. I ran the
command and it reported success, but I am not able to figure out how to
check whether the repair actually happened.
It would be a great help if somebody could suggest a way to do that, in
terms of a data check etc.
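One concrete check, as a sketch (the helper name is mine, not a Cassandra API): nodetool prints a completion line on success, as quoted by Fay Hou elsewhere in this digest, and a wrapper script can look for it alongside the process exit code:

```python
# Minimal repair-result check. It relies on the completion line nodetool
# prints ("Repair completed successfully") plus the process exit code,
# which recent versions set to nonzero on failure. Illustrative only,
# not an official verification API.
def repair_looks_successful(output: str, exit_code: int) -> bool:
    return exit_code == 0 and "Repair completed successfully" in output

sample = (
    "[2017-09-01 06:59:04,699] Repair completed successfully\n"
    "[2017-09-01 06:59:04,704] Repair command #1 finished in 0 hour 5 minutes 2 seconds\n"
)
print(repair_looks_successful(sample, 0))   # True
print(repair_looks_successful("error", 2))  # False
```

For a data-level check, reading a few known keys at consistency level ALL after the repair (and confirming no digest mismatches in the logs) is a common sanity test, though it only samples the data.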


Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
Hi,

Well, the command you are using works for me on 3.0.9; I do not have any
logs at INFO level when I force a compaction, and everything works fine for
me.

Are you sure there is nothing happening behind the scenes? What does
'nodetool compactionstats -H' say?

On 1 September 2017 at 12:05, qf zhou  wrote:

> When I trigger the compaction with the full path,  I found nothing in the
> system.log.  Nothing happens in the  terminal and it just stops there.
>
> #calling operation forceUserDefinedCompaction of mbean
> org.apache.cassandra.db:type=CompactionManager
>
>
>
>
> On 1 September 2017, at 17:06, qf zhou wrote:
>
> I found the following log. What does it mean?
>
> INFO  [CompactionExecutor:11] 2017-09-01 16:55:47,909 NoSpamLogger.java:91
> - Maximum memory usage reached (512.000MiB), cannot allocate chunk of
> 1.000MiB
> WARN  [RMI TCP Connection(1714)-127.0.0.1] 2017-09-01 17:02:42,516
> CompactionManager.java:704 - Schema does not exist for file
> mc-151276-big-Data.db. Skipping.
>
>
> On 1 September 2017, at 16:54, Nicolas Guyomar wrote:
>
> You should have a log coming from the CompactionManager (in the Cassandra
> system.log) when you try the command; what does it say?
>
> On 1 September 2017 at 10:07, qf zhou  wrote:
>
>> When I run the command, the following occurs and it returns null.
>>
>> Is it normal?
>>
>> echo "run -b org.apache.cassandra.db:type=CompactionManager
>> forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar
>> /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l localhost:7199
>>
>>
>> Welcome to JMX terminal. Type "help" for available commands.
>> $>run -b org.apache.cassandra.db:type=CompactionManager
>> forceUserDefinedCompaction mc-100963-big-Data.db
>> #calling operation forceUserDefinedCompaction of mbean
>> org.apache.cassandra.db:type=CompactionManager
>> #operation returns:
>> null
>>
>>
>>
>>
>> On 1 September 2017, at 15:49, Nicolas Guyomar wrote:
>>
>> Hi,
>>
>> Last time I used forceUserDefinedCompaction, I got myself a headache
>> because I was trying to use a full path like you're doing, but in fact it
>> just needs the sstable name as the parameter.
>>
>> Can you just try :
>>
>> echo "run -b org.apache.cassandra.db:type=CompactionManager
>> forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar
>> /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l localhost:7199
>>
>>
>>
>> On 1 September 2017 at 08:29, qf zhou  wrote:
>>
>>>
>>> dataPath=/hdd3/cassandra/data/gps/gpsfullwithstate-073e51a0c
>>> db811e68dce511be6a305f6/mc-100963-big-Data.db
>>> echo "run -b org.apache.cassandra.db:type=CompactionManager
>>> forceUserDefinedCompaction $dataPath" | java -jar
>>> /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l
>>> localhost:7199
>>>
>>> In the above, I am using a JMX method, but it seems that the file size
>>> doesn’t change. Is my command wrong?
>>>
>>> > On 1 September 2017, at 14:17, Jeff Jirsa wrote:
>>> >
>>> > User defined compaction to do a single sstable compaction on just that
>>> sstable
>>> >
>>> > It's a nodetool command in very recent versions, or a jmx method in
>>> older versions
>>> >
>>> >
>>> > --
>>> > Jeff Jirsa
>>> >
>>> >
>>> >> On Aug 31, 2017, at 11:04 PM, qf zhou  wrote:
>>> >>
>>> >> I am using a cluster with 3 nodes and the Cassandra version is
>>> 3.0.9. I have used it for about 6 months. Now each node has about 1.5T of
>>> data on the disk.
>>> >> I found some sstable files are over 300G. Using the sstablemetadata
>>> command, I found: Estimated droppable tombstones: 0.9622972799707109.
>>> >> It is obvious that too much tombstone data exists.
>>> >> The default_time_to_live = 8640000 (100 days) and gc_grace_seconds =
>>> 432000 (5 days). Using nodetool compactionstats, I found that some
>>> compaction processes exist.
>>> >> So I really want to know how to clear tombstone data; otherwise the
>>> disk space cost will be too high.
>>> >> I really need some help, because few people know Cassandra in my
>>> company.
>>> >> Thank you very much!
>>> >>
>>> >>
>>>
>>>
>>
>>
>
>

Re: Cassandra 3.11 is compacting forever

2017-09-01 Thread kurt greaves
Are you seeing any errors in the logs? Is that one compaction still getting
stuck?


Re: Cassandra snapshot restore with VNODES missing some data

2017-09-01 Thread kurt greaves
Is num_tokens also set to 256?


Re: old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou
When I trigger the compaction with the full path, I found nothing in the system.log. Nothing happens in the terminal and it just stops there.

#calling operation forceUserDefinedCompaction of mbean org.apache.cassandra.db:type=CompactionManager

On 1 September 2017, at 17:06, qf zhou wrote:

I found the following log. What does it mean?

INFO  [CompactionExecutor:11] 2017-09-01 16:55:47,909 NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
WARN  [RMI TCP Connection(1714)-127.0.0.1] 2017-09-01 17:02:42,516 CompactionManager.java:704 - Schema does not exist for file mc-151276-big-Data.db. Skipping.

On 1 September 2017, at 16:54, Nicolas Guyomar wrote:

You should have a log coming from the CompactionManager (in the Cassandra system.log) when you try the command; what does it say?

On 1 September 2017 at 10:07, qf zhou wrote:

When I run the command, the following occurs and it returns null. Is it normal?

echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar -l localhost:7199

Welcome to JMX terminal. Type "help" for available commands.
$>run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-100963-big-Data.db
#calling operation forceUserDefinedCompaction of mbean org.apache.cassandra.db:type=CompactionManager
#operation returns:
null

On 1 September 2017, at 15:49, Nicolas Guyomar wrote:

Hi,

Last time I used forceUserDefinedCompaction, I got myself a headache because I was trying to use a full path like you're doing, but in fact it just needs the sstable name as the parameter.

Can you just try:

echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar -l localhost:7199

On 1 September 2017 at 08:29, qf zhou wrote:
dataPath=/hdd3/cassandra/data/gps/gpsfullwithstate-073e51a0cdb811e68dce511be6a305f6/mc-100963-big-Data.db
echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction $dataPath" | java -jar /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l localhost:7199

In the above, I am using a JMX method, but it seems that the file size doesn’t change. Is my command wrong?

> On 1 September 2017, at 14:17, Jeff Jirsa wrote:
>
> User defined compaction to do a single sstable compaction on just that sstable
>
> It's a nodetool command in very recent versions, or a jmx method in older versions
>
>
> --
> Jeff Jirsa
>
>
>> On Aug 31, 2017, at 11:04 PM, qf zhou  wrote:
>>
>> I am using a cluster with 3 nodes and the Cassandra version is 3.0.9. I have used it for about 6 months. Now each node has about 1.5T of data on the disk.
>> I found some sstable files are over 300G. Using the sstablemetadata command, I found: Estimated droppable tombstones: 0.9622972799707109.
>> It is obvious that too much tombstone data exists.
>> The default_time_to_live = 8640000 (100 days) and gc_grace_seconds = 432000 (5 days). Using nodetool compactionstats, I found that some compaction processes exist.
>> So I really want to know how to clear tombstone data; otherwise the disk space cost will be too high.
>> I really need some help, because few people know Cassandra in my company.
>> Thank you very much!
>>
>>
>
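Tying the numbers in this thread together (a sketch, not Cassandra's exact internal check): sstablemetadata estimated a ~0.96 droppable-tombstone ratio on one sstable, while the table's compaction options set tombstone_threshold to 0.1, so a single-sstable (user-defined) compaction on it should be able to reclaim most of its space once gc_grace has passed:

```python
# Rough eligibility check mirroring the tombstone_threshold idea from the
# SizeTieredCompactionStrategy options quoted earlier. Illustrative only;
# Cassandra's real check also considers overlap with other sstables unless
# unchecked_tombstone_compaction is true (as it is in this schema).
droppable_ratio = 0.9622972799707109   # reported by sstablemetadata in the thread
tombstone_threshold = 0.1              # from the table's compaction options

eligible = droppable_ratio > tombstone_threshold
print(eligible)  # True
```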





Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
Whoops, sorry, I misled you with Cassandra 2.1 behavior; you were right to
give your sstable's full path. What kind of log do you get when you trigger
the compaction with the full path?

On 1 September 2017 at 11:30, Nicolas Guyomar 
wrote:

> Well, I'm not sure why you reached a memory usage limit, but according to the
> 3.0 branch's code (https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L632)
> you just need to give the sstable filename, and Cassandra manages to find it
> based on the Cassandra version, sstable filename convention and so on.
>
> Are you sure those sstables you are trying to get rid of are really in an
> active schema, and not some leftover from an old keyspace/table? This is
> what "schema does not exist" means to me.
>
> On 1 September 2017 at 11:06, qf zhou  wrote:
>
>> I found the following log. What does it mean?
>>
>> INFO  [CompactionExecutor:11] 2017-09-01 16:55:47,909
>> NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
>> allocate chunk of 1.000MiB
>> WARN  [RMI TCP Connection(1714)-127.0.0.1] 2017-09-01 17:02:42,516
>> CompactionManager.java:704 - Schema does not exist for file
>> mc-151276-big-Data.db. Skipping.
>>
>>
>> On 1 September 2017, at 16:54, Nicolas Guyomar wrote:
>>
>> You should have a log coming from the CompactionManager (in the Cassandra
>> system.log) when you try the command; what does it say?
>>
>> On 1 September 2017 at 10:07, qf zhou  wrote:
>>
>>> When I run the command, the following occurs and it returns null.
>>>
>>> Is it normal?
>>>
>>> echo "run -b org.apache.cassandra.db:type=CompactionManager
>>> forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar
>>> /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l
>>> localhost:7199
>>>
>>>
>>> Welcome to JMX terminal. Type "help" for available commands.
>>> $>run -b org.apache.cassandra.db:type=CompactionManager
>>> forceUserDefinedCompaction mc-100963-big-Data.db
>>> #calling operation forceUserDefinedCompaction of mbean
>>> org.apache.cassandra.db:type=CompactionManager
>>> #operation returns:
>>> null
>>>
>>>
>>>
>>>
>>> On 1 September 2017, at 15:49, Nicolas Guyomar wrote:
>>>
>>> Hi,
>>>
>>> Last time I used forceUserDefinedCompaction, I got myself a headache
>>> because I was trying to use a full path like you're doing, but in fact it
>>> just need the sstable as parameter
>>>
>>> Can you just try :
>>>
>>> echo "run -b org.apache.cassandra.db:type=CompactionManager
>>> forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar
>>> /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l
>>> localhost:7199
>>>
>>>
>>>
>>> On 1 September 2017 at 08:29, qf zhou  wrote:
>>>

 dataPath=/hdd3/cassandra/data/gps/gpsfullwithstate-073e51a0c
 db811e68dce511be6a305f6/mc-100963-big-Data.db
 echo "run -b org.apache.cassandra.db:type=CompactionManager
 forceUserDefinedCompaction $dataPath" | java -jar
 /opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l
 localhost:7199

 In the above, I am using a jmx method. But it seems that the file size
 doesn’t change. My command is wrong ?

 > 在 2017年9月1日,下午2:17,Jeff Jirsa  写道:
 >
 > User defined compaction to do a single sstable compaction on just
 that sstable
 >
 > It's a nodetool command in very recent versions, or a jmx method in
 older versions
 >
 >
 > --
 > Jeff Jirsa
 >
 >
 >> On Aug 31, 2017, at 11:04 PM, qf zhou  wrote:
 >>
 >> I am using  a cluster with  3 nodes and  the cassandra version is
 3.0.9. I have used it about 6 months. Now each node has about 1.5T data in
 the disk.
 >> I found some sstables file are over 300G. Using the  sstablemetadata
 command,  I found it:  Estimated droppable tombstones: 0.9622972799707109.
 >> It is obvious that too much tombstone data exists.
 >> The default_time_to_live = 864(100 days) and   gc_grace_seconds
 = 432000(5 days).  Using nodetool  compactionstats, I found the some
 compaction processes exists.
 >> So I really  want to know how to clear tombstone data ?  otherwise
 the disk space will cost too much.
 >> I really need some help, because some few people know cassandra in
 my company.
 >> Thank you very much!
 >>
 >>
 >> 
 -
 >> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
 >> For additional commands, e-mail: user-h...@cassandra.apache.org
 >>
 >
 > -
 > To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
 > For additional commands, e-mail: user-h...@cassandra.apache.org

Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
Well, I'm not sure why you reached a memory usage limit, but according to the
3.0 branch's code:
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L632
you just need to give the sstable filename, and Cassandra manages to find it
based on the Cassandra version, sstable filename conventions and so on.

Are you sure the sstables you are trying to get rid of are really part of an
active schema, and not some leftover from an old keyspace/table? That is what
"schema does not exist" means to me.
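
One quick way to check is to list the files the active schema knows about. This is a sketch, not something from the thread: it assumes the `sstableutil` tool shipped with the 3.x tarball is on the PATH, and reuses the keyspace/table and sstable names posted above.

```shell
# gps.gpsfullwithstate and mc-151276-big-Data.db are the names from this thread.
# If the file is absent from sstableutil's listing, it is a leftover that the
# active schema no longer references.
if command -v sstableutil >/dev/null; then
  if sstableutil gps gpsfullwithstate | grep -q 'mc-151276-big-Data.db'; then
    echo "sstable belongs to the active table"
  else
    echo "leftover: not part of the active schema"
  fi
fi
```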



Re: old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou
I found the following log. What does it mean?

INFO  [CompactionExecutor:11] 2017-09-01 16:55:47,909 NoSpamLogger.java:91 - 
Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
WARN  [RMI TCP Connection(1714)-127.0.0.1] 2017-09-01 17:02:42,516 
CompactionManager.java:704 - Schema does not exist for file 
mc-151276-big-Data.db. Skipping.





Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
You should have a log coming from the CompactionManager (in the Cassandra
system.log) when you try the command. What does it say?
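
For example, the relevant lines can be pulled out like this. A sketch only: the log path below is a common package-install default, not something stated in this thread; adjust it for your install.

```shell
# Show the most recent CompactionManager messages from the Cassandra system log.
log=/var/log/cassandra/system.log   # assumed default location
if [ -f "$log" ]; then
  grep 'CompactionManager' "$log" | tail -n 20
fi
```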



Re: old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou
When I run the command, the following occurs and it returns null.

Is that normal?

echo "run -b org.apache.cassandra.db:type=CompactionManager 
forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar 
/opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l localhost:7199


Welcome to JMX terminal. Type "help" for available commands.
$>run -b org.apache.cassandra.db:type=CompactionManager 
forceUserDefinedCompaction mc-100963-big-Data.db
#calling operation forceUserDefinedCompaction of mbean 
org.apache.cassandra.db:type=CompactionManager
#operation returns: 
null







Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Nicolas Guyomar
Hi,

Last time I used forceUserDefinedCompaction, I got myself a headache
because I was trying to use a full path like you're doing, but in fact it
just needs the sstable filename as the parameter.

Can you just try :

echo "run -b org.apache.cassandra.db:type=CompactionManager
forceUserDefinedCompaction mc-100963-big-Data.db" | java -jar
/opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l localhost:7199
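
If you prefer to keep the full-path variable from earlier in the thread, a sketch of the same call that strips the directory part first (reusing the path and jar location posted in this thread; the jmxterm step is guarded so it only runs where that jar exists):

```shell
# Full sstable path as posted earlier in the thread.
dataPath=/hdd3/cassandra/data/gps/gpsfullwithstate-073e51a0cdb811e68dce511be6a305f6/mc-100963-big-Data.db

# forceUserDefinedCompaction wants only the filename, so drop the directories.
sstable=$(basename "$dataPath")
echo "$sstable"   # mc-100963-big-Data.db

jar=/opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar
if [ -f "$jar" ]; then
  echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction $sstable" \
    | java -jar "$jar" -l localhost:7199
fi
```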



>


Re: old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou

dataPath=/hdd3/cassandra/data/gps/gpsfullwithstate-073e51a0cdb811e68dce511be6a305f6/mc-100963-big-Data.db
echo "run -b org.apache.cassandra.db:type=CompactionManager 
forceUserDefinedCompaction $dataPath" | java -jar 
/opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar   -l localhost:7199

In the above, I am using a jmx method, but it seems that the file size
doesn't change. Is my command wrong?



-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: old big tombstone data file occupy much disk space

2017-09-01 Thread Jeff Jirsa
Use a user-defined compaction to run a single-sstable compaction on just that sstable.

It's a nodetool command in very recent versions, or a jmx method in older 
versions


-- 
Jeff Jirsa
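
A sketch of both routes. Hedged: the `--user-defined` flag to `nodetool compact` appeared in releases newer than the 3.0.9 used in this thread (around 3.4, if I recall), so on 3.0.9 only the JMX route applies; the paths, sstable name and jar location are the ones posted in this thread, and both commands are guarded so the sketch only executes where the tools exist.

```shell
sstablePath=/hdd3/cassandra/data/gps/gpsfullwithstate-073e51a0cdb811e68dce511be6a305f6/mc-100963-big-Data.db

# Newer Cassandra: single-sstable user-defined compaction via nodetool.
if command -v nodetool >/dev/null; then
  nodetool compact --user-defined "$sstablePath"
fi

# Older versions such as 3.0.9: the CompactionManager JMX operation, e.g. via jmxterm.
jar=/opt/cassandra/tools/jmx/jmxterm-1.0-alpha-4-uber.jar
if [ -f "$jar" ]; then
  echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction $(basename "$sstablePath")" \
    | java -jar "$jar" -l localhost:7199
fi
```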



-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



old big tombstone data file occupy much disk space

2017-09-01 Thread qf zhou
I am using a cluster with 3 nodes and the Cassandra version is 3.0.9. I have
used it for about 6 months. Now each node has about 1.5 TB of data on disk.
I found that some sstable files are over 300 GB. Using the sstablemetadata
command, I found: Estimated droppable tombstones: 0.9622972799707109.
It is obvious that too much tombstone data exists.
The default_time_to_live = 8640000 (100 days) and gc_grace_seconds = 432000
(5 days). Using nodetool compactionstats, I found that some compaction
processes exist.
So I really want to know how to clear the tombstone data; otherwise it will
take up too much disk space.
I really need some help, because few people in my company know Cassandra.
Thank you very much!
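
For reference, the earliest point at which these tombstones become eligible for purge follows from the two settings quoted above. A back-of-the-envelope sketch: it ignores compaction scheduling, which is what actually removes them.

```shell
# With default_time_to_live = 8640000 s (100 days) and gc_grace_seconds =
# 432000 s (5 days), a row written at time T expires at T + 100 days, and its
# tombstone may only be dropped by a compaction after T + 105 days.
ttl=8640000
gc_grace=432000
seconds_per_day=86400
echo $(( (ttl + gc_grace) / seconds_per_day ))   # 105
```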


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org