All;

We set up a CephFS on a Nautilus (14.2.8) cluster in February to hold backups. 
We finally have all the backups running, and are just waiting for the system 
to reach steady-state.

I'm concerned about the usage numbers: the Dashboard's Capacity panel shows 
the cluster as 37% used, while under Filesystems --> <FSName> --> Pools --> 
<data> --> Usage, it shows 71% used.
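For what it's worth, here is a back-of-envelope sketch of how I could imagine the two percentages diverging. All the numbers below are invented, and the formulas are my reading of how the Dashboard averages raw usage over all OSDs while a pool's MAX AVAIL is limited by its fullest OSD; this is an assumption on my part, not an authoritative statement of how Ceph computes either figure.

```python
# Hypothetical illustration: four equally sized OSDs with imbalanced usage.
# (All values invented; formulas are an assumption, not Ceph's actual code.)

osd_size_tib = 10.0
osd_used_tib = [1.8, 2.0, 2.5, 8.5]   # hypothetical per-OSD raw usage

# Cluster-wide view: total raw used over total raw capacity.
cluster_used = sum(osd_used_tib) / (osd_size_tib * len(osd_used_tib))
print(f"Dashboard-style raw usage: {cluster_used:.0%}")   # prints 37%

# Pool view: available space is assumed to scale with the free space on
# the *fullest* OSD (replication ignored for simplicity).
fullest_free = osd_size_tib - max(osd_used_tib)
max_avail = fullest_free * len(osd_used_tib)
pool_used = sum(osd_used_tib)
pool_pct = pool_used / (pool_used + max_avail)
print(f"Pool-style %USED: {pool_pct:.0%}")                # prints 71%
```

Under those assumed numbers, the same cluster reads 37% one way and 71% the other, so an imbalance like this is one possible source of the discrepancy.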

Does Ceph place a limit on the size of a CephFS?  Is there a limit to how 
large a pool can be?  Where is the sizing discrepancy coming from, and do I 
need to address it?

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
[email protected] 
www.PerformAir.com

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]