Re: [ceph-users] Cephfs free space vs ceph df free space disparity

2019-05-28 Thread Robert Ruge
Thanks for everyone's suggestions, which have now helped me to fix the free
space problem.
The newbie mistake was not knowing anything about rebalancing. Turning on the
balancer in upmap mode has taken me from 7TB free to 50TB free on my cephfs.
Given that the object store reports 180TB free and I'm using 3x replication,
the theoretical maximum is 60TB free, so I'm close and pretty happy with
upmap.
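
For anyone else chasing the same problem, the steps were roughly as follows
(a sketch from memory rather than a copy of my shell history; the first two
commands assume all clients are Luminous or newer, which upmap requires):

# ceph mgr module enable balancer           (if not already enabled)
# ceph osd set-require-min-compat-client luminous
# ceph balancer mode upmap
# ceph balancer on
# ceph balancer status                      (confirm the mode and that it is active)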

The links provided were a great help.

I also need to look at bluestore_min_alloc_size_hdd for a second cluster I
am building, which currently shows:
POOLS:
    POOL              ID   STORED    OBJECTS    USED      %USED   MAX AVAIL
    cephfs_data        1   36 TiB    107.94M    122 TiB   90.94     4.1 TiB
    cephfs_metadata    2   60 GiB      6.29M     61 GiB    0.49     4.1 TiB

The gap between STORED (36 TiB) and USED (122 TiB) would indicate that this
cluster holds many small files, which it does, and I presume it could be
helped with a smaller allocation size. Is that correct?
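
If I understand the advice below correctly, the allocation size is baked in
when an OSD is created, so for this new cluster I would want the setting in
place before deploying the OSDs -- something like the following (a sketch
only, using the Nautilus-style central config; an [osd] section in ceph.conf
should work just as well):

# ceph config set osd bluestore_min_alloc_size_hdd 4096
# ceph config set osd bluestore_min_alloc_size_ssd 4096

Any OSDs that already exist would still have to be rebuilt for the change to
take effect.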

Would anyone also have any experience with running compression on the cephfs
pool?
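
For context, what I have in mind is per-pool BlueStore compression along
these lines (just a sketch, I have not tried it yet; the algorithm and mode
values below are only examples):

# ceph osd pool set cephfs_data compression_algorithm snappy
# ceph osd pool set cephfs_data compression_mode aggressive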

Regards
Robert Ruge


-----Original Message-----
From: Peter Wienemann 
Sent: Tuesday, 28 May 2019 9:53 PM
To: Robert Ruge 
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cephfs free space vs ceph df free space disparity

On 27.05.19 09:08, Stefan Kooman wrote:
> Quoting Robert Ruge (robert.r...@deakin.edu.au):
>> Ceph newbie question.
>>
>> I have a disparity between the free space that my cephfs file system
>> is showing and what ceph df is showing.  As you can see below my
>> cephfs file system says there is 9.5TB free however ceph df says
>> there is 186TB which with replication size 3 should equate to 62TB
>> free space.  I guess the basic question is how can I get cephfs to
>> see and use all of the available space?  I recently changed my number
>> of pg's on the cephfs_data pool from 2048 to 4096 and this gave me
>> another 8TB so do I keep increasing the number of pg's or is there
>> something else that I am missing? I have only been running ceph for
>> ~6 months so I'm relatively new to it all and not being able to use
>> all of the space is just plain bugging me.
>
> My guess here is you have a lot of small files in your cephfs, is that
> right? Do you have HDD or SDD/NVMe?
>
> Mohamad Gebai gave a talk about this at Cephalocon 2019:
> https://static.sched.com/hosted_files/cephalocon2019/d2/cephalocon-2019-mohamad-gebai.pdf
> for the slides and the recording:
> https://www.youtube.com/watch?v=26FbUEbiUrw&list=PLrBUGiINAakNCnQUosh63LpHbf84vegNu&index=29&t=0s
>
> TL;DR: there are bluestore_min_alloc_size_ssd and bluestore_min_alloc_size_hdd
> settings, which default to 16K for SSD and 64K for HDD. With lots of small
> objects this might add up to *a lot* of overhead. You can change both to 4K:
>
> bluestore min alloc size ssd = 4096
> bluestore min alloc size hdd = 4096
>
> You will have to rebuild _all_ of your OSDs though.
>
> Here is another thread about this:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/thread.html#24801
>
> Gr. Stefan

Hi Robert,

some more questions: Are all your OSDs of equal size? If yes, have you enabled 
balancing for your cluster (see [0])?

You might also be interested in this thread [1].

Peter

[0] http://docs.ceph.com/docs/master/rados/operations/balancer
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030765.html



Re: [ceph-users] Cephfs free space vs ceph df free space disparity

2019-05-28 Thread Peter Wienemann
On 27.05.19 09:08, Stefan Kooman wrote:
> Quoting Robert Ruge (robert.r...@deakin.edu.au):
>> Ceph newbie question.
>>
>> I have a disparity between the free space that my cephfs file system
>> is showing and what ceph df is showing.  As you can see below my
>> cephfs file system says there is 9.5TB free however ceph df says there
>> is 186TB which with replication size 3 should equate to 62TB free
>> space.  I guess the basic question is how can I get cephfs to see and
>> use all of the available space?  I recently changed my number of pg's
>> on the cephfs_data pool from 2048 to 4096 and this gave me another 8TB
>> so do I keep increasing the number of pg's or is there something else
>> that I am missing? I have only been running ceph for ~6 months so I'm
>> relatively new to it all and not being able to use all of the space is
>> just plain bugging me.
> 
> My guess here is you have a lot of small files in your cephfs, is that
> right? Do you have HDD or SDD/NVMe?
> 
> Mohamad Gebai gave a talk about this at Cephalocon 2019:
> https://static.sched.com/hosted_files/cephalocon2019/d2/cephalocon-2019-mohamad-gebai.pdf
> for the slides and the recording:
> https://www.youtube.com/watch?v=26FbUEbiUrw&list=PLrBUGiINAakNCnQUosh63LpHbf84vegNu&index=29&t=0s
> 
> TL;DR: there are bluestore_min_alloc_size_ssd and bluestore_min_alloc_size_hdd
> settings, which default to 16K for SSD and 64K for HDD. With lots of small
> objects this might add up to *a lot* of overhead. You can change both to 4K:
> 
> bluestore min alloc size ssd = 4096
> bluestore min alloc size hdd = 4096
> 
> You will have to rebuild _all_ of your OSDs though.
> 
> Here is another thread about this:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/thread.html#24801
> 
> Gr. Stefan

Hi Robert,

some more questions: Are all your OSDs of equal size? If yes, have you
enabled balancing for your cluster (see [0])?
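
A quick way to check is to look at the per-OSD utilisation, for example
(just a sketch):

# ceph osd df tree

The %USE column and the MIN/MAX variance and standard deviation in the
summary line show how unevenly the data is currently spread across OSDs.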

You might also be interested in this thread [1].

Peter

[0] http://docs.ceph.com/docs/master/rados/operations/balancer
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030765.html


Re: [ceph-users] Cephfs free space vs ceph df free space disparity

2019-05-27 Thread Stefan Kooman
Quoting Robert Ruge (robert.r...@deakin.edu.au):
> Ceph newbie question.
> 
> I have a disparity between the free space that my cephfs file system
> is showing and what ceph df is showing.  As you can see below my
> cephfs file system says there is 9.5TB free however ceph df says there
> is 186TB which with replication size 3 should equate to 62TB free
> space.  I guess the basic question is how can I get cephfs to see and
> use all of the available space?  I recently changed my number of pg's
> on the cephfs_data pool from 2048 to 4096 and this gave me another 8TB
> so do I keep increasing the number of pg's or is there something else
> that I am missing? I have only been running ceph for ~6 months so I'm
> relatively new to it all and not being able to use all of the space is
> just plain bugging me.

My guess here is you have a lot of small files in your cephfs, is that
right? Do you have HDD or SDD/NVMe?

Mohamad Gebai gave a talk about this at Cephalocon 2019:
https://static.sched.com/hosted_files/cephalocon2019/d2/cephalocon-2019-mohamad-gebai.pdf
for the slides and the recording:
https://www.youtube.com/watch?v=26FbUEbiUrw&list=PLrBUGiINAakNCnQUosh63LpHbf84vegNu&index=29&t=0s

TL;DR: there are bluestore_min_alloc_size_ssd and bluestore_min_alloc_size_hdd
settings, which default to 16K for SSD and 64K for HDD. With lots of small
objects this might add up to *a lot* of overhead. You can change both to 4K:

bluestore min alloc size ssd = 4096
bluestore min alloc size hdd = 4096

You will have to rebuild _all_ of your OSDs though.
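
The rebuild can be done one OSD at a time; roughly (a sketch only --
substitute your own OSD IDs and device paths, make sure the new
min_alloc_size setting is already in ceph.conf or the config database, and
wait for the cluster to be healthy again before moving on to the next OSD):

# ceph osd out <id>
  ... wait until the data has been migrated off the OSD ...
# ceph osd destroy <id> --yes-i-really-mean-it
# ceph-volume lvm zap /dev/<device> --destroy
# ceph-volume lvm create --osd-id <id> --data /dev/<device>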

Here is another thread about this:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/thread.html#24801

Gr. Stefan


-- 
| BIT BV   http://www.bit.nl/   Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


[ceph-users] Cephfs free space vs ceph df free space disparity

2019-05-23 Thread Robert Ruge
Ceph newbie question.

I have a disparity between the free space that my cephfs file system is
showing and what ceph df is showing.
As you can see below, my cephfs file system says there is 9.5TB free;
however, ceph df says there is 186TB available, which with replication size 3
should equate to 62TB of free space.
I guess the basic question is: how can I get cephfs to see and use all of the
available space?
I recently changed the number of PGs on the cephfs_data pool from 2048 to
4096 and this gave me another 8TB, so do I keep increasing the number of PGs,
or is there something else that I am missing? I have only been running ceph
for ~6 months, so I'm relatively new to it all, and not being able to use all
of the space is just plain bugging me.
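
For reference, my arithmetic (assuming usable space is simply the raw AVAIL
figure divided by the replication factor):

    186 TiB raw available / 3 replicas ~= 62 TiB expected usable space,
    versus the 9.5 TiB free that cephfs actually reports.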

# df -h /ceph
Filesystem    Size  Used  Avail  Use%  Mounted on
X,y,z:/       107T   97T  9.5T    92%  /ceph
# ceph df
GLOBAL:
    SIZE     AVAIL    RAW USED   %RAW USED
    495 TiB  186 TiB  310 TiB    62.51
POOLS:
    NAME              ID  USED     %USED  MAX AVAIL  OBJECTS
    cephfs_data        1  97 TiB   91.06  9.5 TiB    156401395
    cephfs_metadata    2  385 MiB   0     9.5 TiB       530590
# ceph osd pool ls detail
pool 1 'cephfs_data' replicated size 3 min_size 1 crush_rule 0 object_hash 
rjenkins pg_num 4096 pgp_num 4096 last_change 33914 lfor 0/29945 flags 
hashpspool,nearfull,selfmanaged_snaps stripe_width 0 application cephfs
removed_snaps [2~2]
pool 2 'cephfs_metadata' replicated size 3 min_size 1 crush_rule 0 object_hash 
rjenkins pg_num 256 pgp_num 256 last_change 33914 lfor 0/30369 flags 
hashpspool,nearfull stripe_width 0 application cephfs


Regards
Robert Ruge

