That’s good to know as well; I was seeing the same thing. I hope this is just an informational message, though.
-Brent

-----Original Message-----
From: ceph-users <[email protected]> On Behalf Of Mark Schouten
Sent: Tuesday, April 16, 2019 3:15 AM
To: Igor Podlesny <[email protected]>; Sinan Polat <[email protected]>
Cc: Ceph Users <[email protected]>
Subject: Re: [ceph-users] 'Missing' capacity

root@proxmox01:~# ceph osd df tree | sort -n -k8 | tail -1
 1   ssd 0.87000 1.00000 889GiB 721GiB 168GiB 81.14 1.50  82         osd.1

root@proxmox01:~# ceph osd df tree | grep -c osd
68

68*168=11424

That is closer, thanks. I thought that "available" was the same as the cluster-wide available, but apparently it is the available space on the fullest OSD. Thanks, learned something again!

--
Mark Schouten <[email protected]>
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208

----- Original message -----
From: Sinan Polat ([email protected])
Date: 16-04-2019 06:43
To: Igor Podlesny ([email protected])
Cc: Mark Schouten ([email protected]), Ceph Users ([email protected])
Subject: Re: [ceph-users] 'Missing' capacity

Probably an imbalance of data across your OSDs. Could you show the output of ceph osd df? From there, take the disk with the lowest available space and multiply that number by the number of OSDs. How much is it?

Kind regards,
Sinan Polat

> On 16 Apr 2019, at 05:21, Igor Podlesny <[email protected]> wrote:
>
>> On Tue, 16 Apr 2019 at 06:43, Mark Schouten <[email protected]> wrote:
>> [...]
>> So where is the rest of the free space? :X
>
> Makes sense to see:
>
>   sudo ceph osd df tree
>
> --
> End of message. Next message?

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
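
A footnote on the arithmetic in Mark's reply above: the same estimate (smallest per-OSD AVAIL multiplied by the OSD count) can be scripted rather than read off the table by hand. The sketch below is only illustrative; it assumes AVAIL is the 7th column of this cluster's ceph osd df tree output and that the values are printed in GiB, as in the output quoted in the thread, so the field number and unit handling may need adjusting on other Ceph releases.

  ceph osd df tree | awk '
      / osd\./ {
          avail = $7; sub(/GiB/, "", avail)                 # strip the unit suffix from the AVAIL column
          if (n == 0 || avail + 0 < min) min = avail + 0    # track the smallest AVAIL seen so far
          n++                                               # count OSD rows
      }
      END {
          printf "OSDs: %d  smallest AVAIL: %d GiB  estimate: %d GiB\n", n, min, n * min
      }'

On this cluster it should reproduce the 68*168=11424 GiB figure from the thread (assuming the fullest OSD is also the one with the least free space), which illustrates why the headline "available" figure tracks the fullest OSD rather than the free space summed over all disks. On releases that support it, ceph osd df --format json should give the same per-OSD free-space numbers in machine-readable form and avoids guessing at column positions.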
