MAX AVAIL is the amount of data you can still write to the cluster
before *any one of your OSDs* becomes near full. If MAX AVAIL is not
what you expect it to be, look at the data distribution using ceph osd
df tree and make sure it is uniform.
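
For what it's worth, your RAW USED figures are roughly 3x the USED
figures (432GiB vs 144GiB, 37.6TiB vs 12.5TiB), so I'm assuming
replicated pools with size 3. With perfectly uniform OSDs you would
expect MAX AVAIL to be close to 69.3TiB / 3 ~= 23.1TiB, but MAX AVAIL
is derived from the OSD that will hit the full ratio first, so 10.2TiB
suggests some OSDs are much fuller than average. Something like this
shows the per-OSD picture (exact columns vary by version):

    # %USE is the per-OSD utilization and VAR its deviation from the
    # cluster mean; OSDs with VAR well above 1.00 are the ones
    # dragging MAX AVAIL down for every pool.
    ceph osd df

Since you are on Luminous, you could also try the balancer module to
even out the distribution (a sketch; test it on your setup first):

    ceph mgr module enable balancer
    ceph balancer mode crush-compat
    ceph balancer on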

Mohamad

On 6/25/19 11:46 AM, Davis Mendoza Paco wrote:
> Hi all,
> I have installed Ceph Luminous, with 43 OSDs (3TB each).
>
> Checking pool statistics
>
> ceph df detail
> GLOBAL:
>     SIZE       AVAIL       RAW USED     %RAW USED     OBJECTS
>     117TiB     69.3TiB      48.0TiB         40.91       4.20M
> POOLS:
>     NAME        ID     QUOTA OBJECTS     QUOTA BYTES     USED        %USED     MAX AVAIL     OBJECTS     DIRTY       READ        WRITE       RAW USED
>     images      9      N/A               N/A             144GiB      1.36      10.2TiB       22379       22.38k      70.0MiB     354KiB      432GiB
>     vms         10     N/A               N/A             3.36TiB     24.69     10.2TiB       889606      889.61k     3.36GiB     4.61GiB     10.1TiB
>     backups     12     N/A               N/A             1.00GiB     0         10.2TiB       261         261         103KiB      525B        3.00GiB
>     volumes     13     N/A               N/A             12.5TiB     55.02     10.2TiB       3289892     3.29M       754MiB      616MiB      37.6TiB
>     
> I cannot understand what the column "MAX AVAIL" refers to. According
> to the column "%USED", only 55% of the pool "volumes" is used, that
> is 12.5TiB:
>     NAME                    ID    USED        %USED     MAX AVAIL
>     volumes                 13    12.5TiB     55.02       10.2TiB
>
> -- 
> *Davis Mendoza P.*
>
