Hi,

377 TiB is the total cluster size, the data pool is 4:2 EC, and 66 TiB is
stored. How can the data pool be 60% used?!
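
For what it's worth, here is the overhead arithmetic I expected (just a
back-of-the-envelope sketch; k=4/m=2 from the pool's EC profile, the 66 TiB
from the ceph df output below):

awk 'BEGIN {
    stored = 66; k = 4; m = 2          # TiB stored in ash.rgw.buckets.data
    printf "expected raw: %.1f TiB\n", stored * (k + m) / k   # -> 99.0 TiB
}'

So 66 TiB stored should come to roughly 99 TiB raw, which is about 26% of
the 377 TiB SSD capacity -- nowhere near 60%.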


Some output:
ceph df detail
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
nvme    12 TiB   11 TiB  128 MiB   1.2 TiB       9.81
ssd    377 TiB  269 TiB  100 TiB   108 TiB      28.65
TOTAL  389 TiB  280 TiB  100 TiB   109 TiB      28.06

--- POOLS ---
POOL                    ID  PGS  STORED   (DATA)   (OMAP)   OBJECTS  USED     (DATA)   (OMAP)   %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES   DIRTY  USED COMPR  UNDER COMPR
device_health_metrics    1    1   49 MiB      0 B   49 MiB       50   98 MiB      0 B   98 MiB      0     73 TiB  N/A            N/A              50         0 B          0 B
.rgw.root                2   32  1.1 MiB  1.1 MiB  4.5 KiB      159  3.9 MiB  3.9 MiB   12 KiB      0     56 TiB  N/A            N/A             159         0 B          0 B
ash.rgw.log              6   32  1.8 GiB   46 KiB  1.8 GiB   73.83k  4.3 GiB  4.4 MiB  4.3 GiB      0     59 TiB  N/A            N/A          73.83k         0 B          0 B
ash.rgw.control          7   32  2.9 KiB      0 B  2.9 KiB        8  7.7 KiB      0 B  7.7 KiB      0     56 TiB  N/A            N/A               8         0 B          0 B
ash.rgw.meta             8    8  554 KiB  531 KiB   23 KiB    1.93k   22 MiB   22 MiB   70 KiB      0    3.4 TiB  N/A            N/A           1.93k         0 B          0 B
ash.rgw.buckets.index   10  128  406 GiB      0 B  406 GiB   58.69k  1.2 TiB      0 B  1.2 TiB  10.33    3.4 TiB  N/A            N/A          58.69k         0 B          0 B
ash.rgw.buckets.data    11   32   66 TiB   66 TiB      0 B    1.21G   86 TiB   86 TiB      0 B  37.16    111 TiB  N/A            N/A           1.21G         0 B          0 B
ash.rgw.buckets.non-ec  15   32  8.4 MiB    653 B  8.4 MiB       22   23 MiB  264 KiB   23 MiB      0     54 TiB  N/A            N/A              22         0 B          0 B

rados df
POOL_NAME               USED     OBJECTS     CLONES  COPIES      MISSING_ON_PRIMARY  UNFOUND  DEGRADED   RD_OPS       RD       WR_OPS       WR       USED COMPR  UNDER COMPR
.rgw.root               3.9 MiB         159       0         477                   0        0         60      8905420   20 GiB         8171   19 MiB         0 B          0 B
ash.rgw.buckets.data     86 TiB  1205539864       0  7233239184                   0        0  904168110  36125678580  153 TiB  55724221429  174 TiB         0 B          0 B
ash.rgw.buckets.index   1.2 TiB       58688       0      176064                   0        0          0  65848675184   62 TiB  10672532772  6.8 TiB         0 B          0 B
ash.rgw.buckets.non-ec   23 MiB          22       0          66                   0        0          6      3999256  2.3 GiB      1369730  944 MiB         0 B          0 B
ash.rgw.control         7.7 KiB           8       0          24                   0        0          3            0      0 B            8      0 B         0 B          0 B
ash.rgw.log             4.3 GiB       73830       0      221490                   0        0      39282  36922450608   34 TiB   5420884130  1.8 TiB         0 B          0 B
ash.rgw.meta             22 MiB        1931       0        5793                   0        0          0    692302142  528 GiB      4274154  2.0 GiB         0 B          0 B
device_health_metrics    98 MiB          50       0         150                   0        0         50        13588   40 MiB        17758   46 MiB         0 B          0 B

total_objects    1205674552
total_used       109 TiB
total_avail      280 TiB
total_space      389 TiB

Four OSDs are down because we are migrating the DB back onto the block
device; the per-OSD procedure is sketched below.
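
For reference, the migration we are running per OSD is roughly the
following (the osd id and paths are illustrative, adjust to your layout):

systemctl stop ceph-osd@0
# fold the standalone RocksDB volume back into the main block device
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-0 \
    --devs-source /var/lib/ceph/osd/ceph-0/block.db \
    --dev-target /var/lib/ceph/osd/ceph-0/block
systemctl start ceph-osd@0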

ceph osd tree
ID   CLASS  WEIGHT     TYPE NAME                 STATUS  REWEIGHT  PRI-AFF
-1         398.17001  root default
-11          61.12257      host server01
24   nvme    1.74660          osd.24                up   1.00000  1.00000
  0    ssd   14.84399          osd.0               down   1.00000  1.00000
10    ssd   14.84399          osd.10              down   1.00000  1.00000
14    ssd   14.84399          osd.14              down   1.00000  1.00000
20    ssd   14.84399          osd.20              down   1.00000  1.00000
-5          61.12257      host server02
25   nvme    1.74660          osd.25                up   1.00000  1.00000
  1    ssd   14.84399          osd.1                 up   1.00000  1.00000
  7    ssd   14.84399          osd.7                 up   1.00000  1.00000
13    ssd   14.84399          osd.13                up   1.00000  1.00000
19    ssd   14.84399          osd.19                up   1.00000  1.00000
-9          61.12257      host server03
26   nvme    1.74660          osd.26                up   1.00000  1.00000
  3    ssd   14.84399          osd.3                 up   1.00000  1.00000
  9    ssd   14.84399          osd.9                 up   1.00000  1.00000
16    ssd   14.84399          osd.16                up   1.00000  1.00000
22    ssd   14.84399          osd.22                up   1.00000  1.00000
-3          61.12257      host server04
27   nvme    1.74660          osd.27                up   1.00000  1.00000
  4    ssd   14.84399          osd.4                 up   1.00000  1.00000
  6    ssd   14.84399          osd.6                 up   1.00000  1.00000
12    ssd   14.84399          osd.12                up   1.00000  1.00000
18    ssd   14.84399          osd.18                up   1.00000  1.00000
-13          46.27858      host server05
28   nvme    1.74660          osd.28                up   1.00000  1.00000
  2    ssd   14.84399          osd.2                 up   1.00000  1.00000
  8    ssd   14.84399          osd.8                 up   1.00000  1.00000
15    ssd   14.84399          osd.15                up   1.00000  1.00000
-7          61.12257      host server06
29   nvme    1.74660          osd.29                up   1.00000  1.00000
  5    ssd   14.84399          osd.5                 up   1.00000  1.00000
11    ssd   14.84399          osd.11                up   1.00000  1.00000
17    ssd   14.84399          osd.17                up   1.00000  1.00000
23    ssd   14.84399          osd.23                up   1.00000  1.00000
-22          46.27856      host server07
30   nvme    1.74660          osd.30                up   1.00000  1.00000
31    ssd   14.84399          osd.31                up   1.00000  1.00000
32    ssd   14.84399          osd.32                up   1.00000  1.00000
34    ssd   14.84398          osd.34                up   1.00000  1.00000