Re: disk space used vs nodetool status

2016-03-25 Thread Alain RODRIGUEZ
Hi Anishek

they were created more than a couple of months ago


You then probably freed a fair amount of disk space :-).

We didn't take any action that would create a snapshot


You shouldn't have any snapshots unless you drop or truncate a table, create
them explicitly through "nodetool snapshot", or run repair without the -pr
option. Also make sure the 'snapshot_before_compaction' option is disabled; I
have never seen a case where it was useful or in use, and I believe it exists
for debugging purposes (it is disabled by default). I believe that's all.
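
To double-check that, and to see what is snapshotted right now on 2.0.x, a
quick sketch (it assumes the package-default locations of cassandra.yaml and
the data directory, so adjust the paths to your install):

# snapshot_before_compaction defaults to false; confirm it is off
grep snapshot_before_compaction /etc/cassandra/cassandra.yaml

# snapshots live under <data_dir>/<keyspace>/<table>/snapshots/<tag>;
# list each snapshot directory with its apparent disk usage
find /var/lib/cassandra/data -type d -name snapshots -exec du -sh {} +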

I couldn't find the command in 2.0.17


Added in 2.1.0.beta-1:
https://github.com/apache/cassandra/blob/trunk/CHANGES.txt#L1903 :-).
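
On 2.0.17 that means hunting the directories down by hand, as you did. From
2.1 on, the check and the cleanup are built in; a sketch, with a hypothetical
keyspace name:

# show every snapshot with its keyspace, table and true size
# (the bytes that only the snapshot is still holding onto)
nodetool listsnapshots

# remove all snapshots, or limit the cleanup to one keyspace
nodetool clearsnapshot
nodetool clearsnapshot my_keyspace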

C*heers,
---
Alain Rodriguez - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com


2016-03-22 11:10 GMT+01:00 Anishek Agarwal :

> Thanks Carlos,
>
> We didn't take any action that would create a snapshot, and I couldn't find
> the command in 2.0.17, but I found the respective snapshot directories, and
> they were created more than a couple of months ago, so I might simply have
> forgotten about them. It's fine now; I have cleared them.
>
> anishek
>
> On Tue, Mar 22, 2016 at 3:20 PM, Carlos Alonso  wrote:
>
>> I'd say you have snapshots holding disk space.
>>
>> Check it with nodetool listsnapshots. A snapshot is automatically taken
>> on destructive actions (drop, truncate...) and is basically a set of hard
>> links to the involved SSTables, so it's not counted as data load by
>> Cassandra, but it is effectively using disk space.
>>
>> Hope this helps.
>>
>> Carlos Alonso | Software Engineer | @calonso
>> 
>>
>> On 22 March 2016 at 07:57, Anishek Agarwal  wrote:
>>
>>> Hello,
>>>
>>> Using Cassandra 2.0.17, on one of the 7 nodes I see that the "Load"
>>> column from nodetool status shows around 279.34 GB, whereas df -h on the
>>> two mounted disks reports a total of about 400 GB. Any reason why this
>>> difference could show up, and how do I go about finding the cause?
>>>
>>> Thanks in advance.
>>> Anishek
>>>
>>
>>
>
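
For reference, Carlos' point about hard links is easy to demonstrate: data
stays on disk until the last link to it is gone, which is why a snapshot
keeps df output high even after the original SSTables have been compacted
away. A minimal sketch with illustrative file names:

# create a file and a hard link to it (a snapshot does the same for
# each SSTable it covers)
dd if=/dev/zero of=data.db bs=1M count=100
ln data.db snapshot.db

# deleting the original (compaction dropping the old SSTable) frees
# nothing; the snapshot link still points at the same blocks
rm data.db
du -h snapshot.db    # still ~100M

# only removing the last link (clearing the snapshot) frees the space
rm snapshot.db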

