Hello hw,
your CephFS storage limit is your RADOS cluster limit: each OSD you add
gives the cluster, and therefore CephFS, more space to allocate.
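
To confirm a newly added OSD is actually contributing capacity, the usual
checks are (standard Ceph CLI, nothing cluster-specific assumed):

  ceph osd tree   # the new OSD should show up/in with a non-zero CRUSH weight
  ceph df         # RAW STORAGE SIZE/AVAIL and the pools' MAX AVAIL grow accordingly
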
> 873 GiB data, 2.7 TiB used, 3.5 TiB / 6.2 TiB avail
>> Actually you have 873 GiB of data stored; with replica 3 that comes to
>> about 2.7 TiB of raw space used.
>> "3.5 TiB / 6.2 TiB avail" means you have 3.5 TiB of raw space free out of
>> 6.2 TiB total raw capacity (with replica 3 that is roughly 1 TiB of usable
>> capacity, which matches the ~973 GiB MAX AVAIL in your ceph df output).
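
For illustration, the arithmetic works out roughly like this (numbers
rounded; MAX AVAIL ends up a bit below the plain division, mainly because
Ceph projects it from the fullest OSD rather than the average):

  873 GiB data     x 3 replicas ≈ 2.6 TiB raw used   (ceph pg stat reports 2.7 TiB, incl. metadata)
  3.5 TiB raw free / 3 replicas ≈ 1.2 TiB usable     (ceph df reports 973 GiB MAX AVAIL per pool)
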
Beware of filling your cluster beyond 85% (the default nearfull ratio).
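
To see how close you are to that threshold, you can check the per-OSD fill
levels and the configured ratios (0.85 / 0.90 / 0.95 are the usual defaults,
your cluster may differ):

  ceph osd df tree             # per-OSD %USE - the fullest OSD is what matters
  ceph osd dump | grep ratio   # nearfull_ratio / backfillfull_ratio / full_ratio
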
Best regards,
Christoph
On 07.07.20 at 12:23, hw wrote:
> Hi!
>
> My ceph version is ceph version 15.2.3
> (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)
>
> I have a CephFS and I added a new OSD to my cluster.
>
> ceph pg stat:
>
> 289 pgs: 1 active+clean+scrubbing+deep, 288 active+clean; 873 GiB
> data, 2.7 TiB used, 3.5 TiB / 6.2 TiB avail
>
> How can I extend my CephFS from 3.5 TiB avail to 6.2 TiB?
>
> Detailed information:
>
> ceph fs status
> static - 2 clients
> ======
> RANK  STATE   MDS                   ACTIVITY     DNS   INOS
>  0    active  static.ceph02.sgpdiv  Reqs: 0 /s   136k  128k
> POOL             TYPE      USED   AVAIL
> static_metadata  metadata  10.1G  973G
> static           data      2754G  973G
> STANDBY MDS
> static.ceph05.aylgvy
> static.ceph04.wsljnw
> MDS version: ceph version 15.2.3
> (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)
>
> ceph osd pool autoscale-status
> POOL                   SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
> device_health_metrics  896.1k               3.0   6399G         0.0000                                 1.0   1                   on
> static                 874.0G               3.0   6399G         0.4097                                 1.0   256                 on
> static_metadata        3472M                3.0   6399G         0.0016                                 4.0   32                  on
>
> ceph df
> --- RAW STORAGE ---
> CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
> hdd    6.2 TiB  3.5 TiB  2.7 TiB  2.7 TiB       43.55
> TOTAL  6.2 TiB  3.5 TiB  2.7 TiB  2.7 TiB       43.55
>
> --- POOLS ---
> POOL                   ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
> device_health_metrics   1  896 KiB       20  2.6 MiB      0    973 GiB
> static                 14  874 GiB    1.44M  2.7 TiB  48.54    973 GiB
> static_metadata        15  3.4 GiB    2.53M   10 GiB   0.35    973 GiB
>
> ceph fs volume ls
> [
> {
> "name": "static"
> }
> ]
>
> ceph fs subvolume ls static
> []
--
Christoph Ackermann | System Engineer
INFOSERVE GmbH | Am Felsbrunnen 15 | D-66119 Saarbrücken
Fon +49 (0)681 88008-59 | Fax +49 (0)681 88008-33
[email protected] | www.infoserve.de
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]