Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins

Perfect.


Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Igor Fedotov

Yes, by default BlueStore reports 1 GB per OSD as used by BlueFS.
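
If you want to confirm where that gigabyte sits on a given OSD, the BlueFS
counters in the OSD's perf dump should show it. A rough sketch, assuming you
can reach the admin socket on the OSD host (counter names may differ slightly
between releases):

$ ceph daemon osd.0 perf dump | python -m json.tool | grep -A 12 '"bluefs"'
    # db_total_bytes / db_used_bytes (plus wal_* / slow_* if you have a
    # separate WAL device or spillover) show what BlueFS has claimed for
    # RocksDB; that is the ~1 GB per OSD counted as "used"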


Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins

Umh...

Taking a look at your computation, I think the overhead per OSD really is
about 1.1 GB.

I have 9 NVMe OSDs alive right now, so that works out to roughly 9.5 GB of
overhead. So I guess this is just the expected behaviour.

Fine!
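
As a quick check of the arithmetic quoted in this thread:

    9510 MB used / 9 OSDs   ≈ 1057 MB, i.e. just over 1 GB per OSD
    323 GB used / 289 OSDs  ≈ 1.1 GB per OSD  (David Turner's cluster, below)

so the two clusters line up.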



Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread David Turner
[1] Here is a `ceph status` from a brand new cluster that has never had any
pools created or any data put into it at all: 323 GB used out of 2.3 PB.
That's about 0.01% overhead, but we're using 10 TB disks for this cluster,
and the overhead scales per OSD rather than per TB. It works out to 1.1 GB
of overhead per OSD. 34 of the OSDs are pure NVMe and the other 255 have
collocated DBs with their WAL on flash.

The used space in your output is most likely just OSD overhead, but you can
double-check whether there are any orphaned RADOS objects using up space
with a `rados ls`. Another thing to note is that deleting a pool in Ceph is
not instant; it goes into garbage collection and is taken care of over time.
Most likely you're just looking at OSD overhead, though.
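
For example, something along these lines would show whether anything is
left over (a sketch, assuming a test pool named "rbd" still existed; with
0 pools there is nothing left to list):

$ rados df                 # per-pool object counts and raw usage
$ rados -p rbd ls | head   # rados bench leftovers typically show up as
                           # objects named benchmark_data_<host>_<pid>_*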

[1]
$ ceph -s
  cluster:
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum mon1,mon2,mon4,mon3,mon5
    mgr: mon1(active), standbys: mon3, mon2, mon5, mon4
    osd: 289 osds: 289 up, 289 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   323 GB used, 2324 TB / 2324 TB avail


Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Igor Fedotov

Hi Max,

How many OSDs do you have?

Are they BlueStore?

What's the "ceph df detail" output?




Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins

How can I analyze this?



Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Gonzalo Aguilar Delgado

Hi Max,

No, that's not normal: 9 GB for an empty cluster. Maybe you reserved some
space, or some other service is taking the space, but it seems way too much
to me.
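
A quick way to see where it sits is the per-OSD breakdown (a sketch,
assuming a reasonably recent release):

$ ceph osd df      # per-OSD USE/AVAIL; a fairly even ~1 GB on every OSD
                   # points at store overhead rather than leftover objects
$ ceph df detail   # per-pool and global view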




Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins

I don't care about getting that space back.
I just want to know whether it's expected or not,
because I ran several rados bench runs with the `--no-cleanup` flag,
and maybe I left something behind along the way.
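
For reference, rados bench leftovers can be removed afterwards with the
cleanup subcommand; a sketch, assuming the test pool (say "rbd") still
existed:

$ rados -p rbd cleanup --prefix benchmark_data   # drop benchmark_data_* objects
$ rados -p rbd ls | head                         # verify nothing is left

Once the pools themselves are deleted, any remaining bench objects go with
them anyway.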




Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Janne Johansson
2018-03-02 11:21 GMT+01:00 Max Cuttins:

> Hi everybody,
>
> I deleted everything from the cluster after some tests with RBD.
> Now I see that there is something still in use:
>
>   data:
>     pools:   0 pools, 0 pgs
>     objects: 0 objects, 0 bytes
>     usage:   *9510 MB used*, 8038 GB / 8048 GB avail
>     pgs:
>
> Is this the overhead of the bluestore journal/WAL?
> Or is there something wrong, and should this be zero?
>

People setting up new clusters also see this; there are overhead items and
other things that eat some space, so it would never be zero. In your case it
seems to be close to 0.1%, so just live with it and move on to using your
8 TB for what you really needed it for.

In almost no case will I think "if only I could get those 0.1% back, then my
cluster would be great again".

Storage clusters should probably have something like a 10% "admin" margin:
if Ceph warns and whines at OSDs being 85% full, then at 75% you should
already be writing orders for more disks and/or more storage nodes.
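
For reference, the thresholds Ceph warns at are visible in the OSD map; a
quick way to check them, assuming a Luminous-era cluster:

$ ceph osd dump | grep ratio   # full_ratio / backfillfull_ratio / nearfull_ratio
                               # (0.95 / 0.90 / 0.85 by default)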

At that point, regardless of where the "miscalculation" is, or where Ceph
manages to use those 9500 MB while you think it should be zero, even if you
could get the 0.1% back with some magic command it would be all but
impossible to do anything useful with it.


-- 
May the most significant bit of your life be positive.