Hi Anthony,

Please find the requested outputs below.

CEPH OSD DF TREE :: ===========================

ID   CLASS  WEIGHT     REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
-51          12.22272         -   12 TiB  4.8 TiB  4.8 TiB  112 MiB   20 GiB  7.4 TiB  39.21  3.24    -          root cache_root
-27           0.87329         -  894 GiB  306 GiB  305 GiB  8.1 MiB  1.3 GiB  588 GiB  34.25  2.83    -              host cache_node1
  0    ssd    0.87329   1.00000  894 GiB  306 GiB  305 GiB  8.1 MiB  1.3 GiB  588 GiB  34.25  2.83    4      up          osd.0
-45           0.87299         -  894 GiB  382 GiB  381 GiB  6.3 MiB  1.5 GiB  512 GiB  42.76  3.53    -              host cache_node10
 45    ssd    0.87299   1.00000  894 GiB  382 GiB  381 GiB  6.3 MiB  1.5 GiB  512 GiB  42.76  3.53    3      up          osd.45
-47           0.87299         -  894 GiB  458 GiB  456 GiB  4.1 MiB  1.7 GiB  436 GiB  51.21  4.23    -              host cache_node11
 50    ssd    0.87299   1.00000  894 GiB  458 GiB  456 GiB  4.1 MiB  1.7 GiB  436 GiB  51.21  4.23    6      up          osd.50
-49           0.87299         -  894 GiB  535 GiB  533 GiB  8.0 MiB  1.7 GiB  359 GiB  59.84  4.94    -              host cache_node12
 55    ssd    0.87299   1.00000  894 GiB  535 GiB  533 GiB  8.0 MiB  1.7 GiB  359 GiB  59.84  4.94    7      up          osd.55
-53           0.87329         -  894 GiB  230 GiB  229 GiB   10 MiB  1.2 GiB  664 GiB  25.75  2.13    -              host cache_node13
 60    ssd    0.87329   1.00000  894 GiB  230 GiB  229 GiB   10 MiB  1.2 GiB  664 GiB  25.75  2.13    3      up          osd.60
-59           0.87329         -  894 GiB  231 GiB  230 GiB   11 MiB  1.2 GiB  663 GiB  25.80  2.13    -              host cache_node14
 65    ssd    0.87329   1.00000  894 GiB  231 GiB  230 GiB   11 MiB  1.2 GiB  663 GiB  25.80  2.13    3      up          osd.65
-29           0.87299         -  894 GiB  386 GiB  384 GiB  4.0 MiB  1.7 GiB  509 GiB  43.13  3.56    -              host cache_node2
  5    ssd    0.87299   1.00000  894 GiB  386 GiB  384 GiB  4.0 MiB  1.7 GiB  509 GiB  43.13  3.56    4      up          osd.5
-31           0.87299         -  894 GiB  154 GiB  153 GiB  3.0 MiB  1.0 GiB  740 GiB  17.26  1.42    -              host cache_node3
 10    ssd    0.87299   1.00000  894 GiB  154 GiB  153 GiB  3.0 MiB  1.0 GiB  740 GiB  17.26  1.42    2      up          osd.10
-33           0.87299         -  894 GiB  308 GiB  307 GiB  8.8 MiB  1.4 GiB  586 GiB  34.49  2.85    -              host cache_node4
 15    ssd    0.87299   1.00000  894 GiB  308 GiB  307 GiB  8.8 MiB  1.4 GiB  586 GiB  34.49  2.85    4      up          osd.15
-35           0.87299         -  894 GiB  383 GiB  381 GiB   10 MiB  1.5 GiB  512 GiB  42.79  3.53    -              host cache_node5
 20    ssd    0.87299   1.00000  894 GiB  383 GiB  381 GiB   10 MiB  1.5 GiB  512 GiB  42.79  3.53    4      up          osd.20
-37           0.87299         -  894 GiB  156 GiB  155 GiB  3.2 MiB  1.4 GiB  738 GiB  17.48  1.44    -              host cache_node6
 25    ssd    0.87299   1.00000  894 GiB  156 GiB  155 GiB  3.2 MiB  1.4 GiB  738 GiB  17.48  1.44    2      up          osd.25
-39           0.87299         -  894 GiB  611 GiB  609 GiB   13 MiB  1.9 GiB  284 GiB  68.28  5.64    -              host cache_node7
 30    ssd    0.87299   1.00000  894 GiB  611 GiB  609 GiB   13 MiB  1.9 GiB  284 GiB  68.28  5.64    8      up          osd.30
-41           0.87299         -  894 GiB  156 GiB  155 GiB   13 MiB  1.1 GiB  738 GiB  17.50  1.44    -              host cache_node8
 35    ssd    0.87299   1.00000  894 GiB  156 GiB  155 GiB   13 MiB  1.1 GiB  738 GiB  17.50  1.44    2      up          osd.35
-43           0.87299         -  894 GiB  612 GiB  610 GiB  9.7 MiB  1.8 GiB  282 GiB  68.46  5.65    -              host cache_node9
 40    ssd    0.87299   1.00000  894 GiB  612 GiB  610 GiB  9.7 MiB  1.8 GiB  282 GiB  68.46  5.65    8      up          osd.40
 -1         195.60869         -  196 TiB   20 TiB   20 TiB  810 MiB   77 GiB  175 TiB  10.42  0.86    -          root default
 -3          13.97198         -   14 TiB  1.3 TiB  1.3 TiB   31 MiB  5.1 GiB   13 TiB   9.07  0.75    -              host node1
  1    ssd    3.49300   1.00000  3.5 TiB  309 GiB  308 GiB  7.8 MiB  1.2 GiB  3.2 TiB   8.64  0.71   36      up          osd.1
  2    ssd    3.49300   1.00000  3.5 TiB  331 GiB  330 GiB  5.2 MiB  1.4 GiB  3.2 TiB   9.26  0.76   36      up          osd.2
  3    ssd    3.49300   1.00000  3.5 TiB  292 GiB  291 GiB  8.2 MiB  1.2 GiB  3.2 TiB   8.16  0.67   34      up          osd.3
  4    ssd    3.49300   1.00000  3.5 TiB  365 GiB  364 GiB  9.9 MiB  1.4 GiB  3.1 TiB  10.21  0.84   38      up          osd.4
-21          13.97200         -   14 TiB  1.5 TiB  1.5 TiB   24 MiB  5.8 GiB   12 TiB  10.73  0.89    -              host node10
 46    ssd    3.49300   1.00000  3.5 TiB  332 GiB  331 GiB  3.4 MiB  1.2 GiB  3.2 TiB   9.28  0.77   37      up          osd.46
 47    ssd    3.49300   1.00000  3.5 TiB  368 GiB  366 GiB  8.7 MiB  1.6 GiB  3.1 TiB  10.29  0.85   38      up          osd.47
 48    ssd    3.49300   1.00000  3.5 TiB  459 GiB  457 GiB  5.1 MiB  1.7 GiB  3.0 TiB  12.84  1.06   42      up          osd.48
 49    ssd    3.49300   1.00000  3.5 TiB  376 GiB  374 GiB  6.4 MiB  1.3 GiB  3.1 TiB  10.50  0.87   39      up          osd.49
-23          13.97200         -   14 TiB  1.6 TiB  1.6 TiB   30 MiB  5.4 GiB   12 TiB  11.23  0.93    -              host node11
 51    ssd    3.49300   1.00000  3.5 TiB  426 GiB  424 GiB  6.1 MiB  1.3 GiB  3.1 TiB  11.90  0.98   40      up          osd.51
 52    ssd    3.49300   1.00000  3.5 TiB  482 GiB  481 GiB  9.2 MiB  1.4 GiB  3.0 TiB  13.48  1.11   44      up          osd.52
 53    ssd    3.49300   1.00000  3.5 TiB  330 GiB  329 GiB  4.2 MiB  1.2 GiB  3.2 TiB   9.23  0.76   36      up          osd.53
 54    ssd    3.49300   1.00000  3.5 TiB  369 GiB  368 GiB   10 MiB  1.4 GiB  3.1 TiB  10.32  0.85   40      up          osd.54
-25          13.97200         -   14 TiB  1.6 TiB  1.6 TiB  234 MiB  5.5 GiB   12 TiB  11.30  0.93    -              host node12
 56    ssd    3.49300   1.00000  3.5 TiB  348 GiB  346 GiB  4.7 MiB  1.5 GiB  3.2 TiB   9.72  0.80   36      up          osd.56
 57    ssd    3.49300   1.00000  3.5 TiB  473 GiB  471 GiB  218 MiB  1.4 GiB  3.0 TiB  13.22  1.09   42      up          osd.57
 58    ssd    3.49300   1.00000  3.5 TiB  348 GiB  347 GiB  5.4 MiB  1.2 GiB  3.2 TiB   9.74  0.80   38      up          osd.58
 59    ssd    3.49300   1.00000  3.5 TiB  448 GiB  447 GiB  5.2 MiB  1.4 GiB  3.1 TiB  12.54  1.03   42      up          osd.59
-55          13.97235         -   14 TiB  1.5 TiB  1.5 TiB   26 MiB  5.5 GiB   12 TiB  10.73  0.89    -              host node13
 61    ssd    3.49309   1.00000  3.5 TiB  382 GiB  381 GiB    8 MiB  1.3 GiB  3.1 TiB  10.69  0.88   39      up          osd.61
 62    ssd    3.49309   1.00000  3.5 TiB  338 GiB  337 GiB  6.5 MiB  1.4 GiB  3.2 TiB   9.46  0.78   37      up          osd.62
 63    ssd    3.49309   1.00000  3.5 TiB  444 GiB  442 GiB  5.6 MiB  1.4 GiB  3.1 TiB  12.40  1.02   41      up          osd.63
 64    ssd    3.49309   1.00000  3.5 TiB  372 GiB  370 GiB  6.1 MiB  1.4 GiB  3.1 TiB  10.39  0.86   40      up          osd.64
-57          13.97235         -   14 TiB  1.5 TiB  1.5 TiB   34 MiB  5.5 GiB   12 TiB  10.56  0.87    -              host node14
 66    ssd    3.49309   1.00000  3.5 TiB  387 GiB  385 GiB  9.1 MiB  1.3 GiB  3.1 TiB  10.81  0.89   38      up          osd.66
 67    ssd    3.49309   1.00000  3.5 TiB  350 GiB  349 GiB  7.3 MiB  1.4 GiB  3.2 TiB   9.79  0.81   38      up          osd.67
 68    ssd    3.49309   1.00000  3.5 TiB  444 GiB  443 GiB  7.2 MiB  1.5 GiB  3.1 TiB  12.41  1.03   43      up          osd.68
 69    ssd    3.49309   1.00000  3.5 TiB  330 GiB  329 GiB   11 MiB  1.3 GiB  3.2 TiB   9.23  0.76   36      up          osd.69
 -5          13.97200         -   14 TiB  1.5 TiB  1.5 TiB  242 MiB  5.8 GiB   13 TiB  10.43  0.86    -              host node2
  6    ssd    3.49300   1.00000  3.5 TiB  430 GiB  428 GiB  6.8 MiB  1.7 GiB  3.1 TiB  12.01  0.99   42      up          osd.6
  7    ssd    3.49300   1.00000  3.5 TiB  309 GiB  308 GiB  6.9 MiB  1.5 GiB  3.2 TiB   8.65  0.71   36      up          osd.7
  8    ssd    3.49300   1.00000  3.5 TiB  392 GiB  391 GiB  217 MiB  1.3 GiB  3.1 TiB  10.96  0.91   42      up          osd.8
  9    ssd    3.49300   1.00000  3.5 TiB  361 GiB  360 GiB   11 MiB  1.3 GiB  3.1 TiB  10.10  0.83   38      up          osd.9
 -7          13.97200         -   14 TiB  1.4 TiB  1.4 TiB   28 MiB  5.6 GiB   13 TiB  10.00  0.83    -              host node3
 11    ssd    3.49300   1.00000  3.5 TiB  377 GiB  375 GiB   10 MiB  1.6 GiB  3.1 TiB  10.54  0.87   40      up          osd.11
 12    ssd    3.49300   1.00000  3.5 TiB  347 GiB  346 GiB  6.1 MiB  1.4 GiB  3.2 TiB   9.70  0.80   39      up          osd.12
 13    ssd    3.49300   1.00000  3.5 TiB  393 GiB  392 GiB  5.8 MiB  1.4 GiB  3.1 TiB  10.99  0.91   41      up          osd.13
 14    ssd    3.49300   1.00000  3.5 TiB  313 GiB  312 GiB  5.7 MiB  1.3 GiB  3.2 TiB   8.76  0.72   35      up          osd.14
 -9          13.97200         -   14 TiB  1.5 TiB  1.5 TiB   31 MiB  5.2 GiB   12 TiB  10.64  0.88    -              host node4
 16    ssd    3.49300   1.00000  3.5 TiB  371 GiB  369 GiB  5.2 MiB  1.4 GiB  3.1 TiB  10.36  0.86   39      up          osd.16
 17    ssd    3.49300   1.00000  3.5 TiB  371 GiB  370 GiB   10 MiB  1.3 GiB  3.1 TiB  10.38  0.86   39      up          osd.17
 18    ssd    3.49300   1.00000  3.5 TiB  313 GiB  312 GiB  7.7 MiB  1.2 GiB  3.2 TiB   8.75  0.72   35      up          osd.18
 19    ssd    3.49300   1.00000  3.5 TiB  467 GiB  466 GiB  7.7 MiB  1.3 GiB  3.0 TiB  13.06  1.08   44      up          osd.19
-11          13.97200         -   14 TiB  1.7 TiB  1.7 TiB   28 MiB  5.7 GiB   12 TiB  11.90  0.98    -              host node5
 21    ssd    3.49300   1.00000  3.5 TiB  462 GiB  460 GiB  4.1 MiB  1.6 GiB  3.0 TiB  12.91  1.07   44      up          osd.21
 22    ssd    3.49300   1.00000  3.5 TiB  444 GiB  442 GiB  8.2 MiB  1.4 GiB  3.1 TiB  12.41  1.02   42      up          osd.22
 23    ssd    3.49300   1.00000  3.5 TiB  399 GiB  398 GiB  8.2 MiB  1.4 GiB  3.1 TiB  11.15  0.92   40      up          osd.23
 24    ssd    3.49300   1.00000  3.5 TiB  399 GiB  397 GiB  7.8 MiB  1.3 GiB  3.1 TiB  11.15  0.92   41      up          osd.24
-13          13.97200         -   14 TiB  1.4 TiB  1.4 TiB   28 MiB  5.6 GiB   13 TiB  10.31  0.85    -              host node6
 26    ssd    3.49300   1.00000  3.5 TiB  361 GiB  359 GiB  5.9 MiB  1.3 GiB  3.1 TiB  10.08  0.83   38      up          osd.26
 27    ssd    3.49300   1.00000  3.5 TiB  391 GiB  390 GiB   10 MiB  1.3 GiB  3.1 TiB  10.93  0.90   40      up          osd.27
 28    ssd    3.49300   1.00000  3.5 TiB  393 GiB  391 GiB  5.4 MiB  1.6 GiB  3.1 TiB  10.97  0.91   40      up          osd.28
 29    ssd    3.49300   1.00000  3.5 TiB  331 GiB  330 GiB  6.5 MiB  1.4 GiB  3.2 TiB   9.26  0.76   37      up          osd.29
-15          13.97200         -   14 TiB  1.3 TiB  1.3 TiB   23 MiB  5.6 GiB   13 TiB   9.29  0.77    -              host node7
 31    ssd    3.49300   1.00000  3.5 TiB  326 GiB  325 GiB  7.2 MiB  1.4 GiB  3.2 TiB   9.12  0.75   38      up          osd.31
 32    ssd    3.49300   1.00000  3.5 TiB  329 GiB  327 GiB  5.4 MiB  1.3 GiB  3.2 TiB   9.19  0.76   37      up          osd.32
 33    ssd    3.49300   1.00000  3.5 TiB  293 GiB  291 GiB  5.8 MiB  1.6 GiB  3.2 TiB   8.18  0.68   34      up          osd.33
 34    ssd    3.49300   1.00000  3.5 TiB  381 GiB  380 GiB  4.7 MiB  1.4 GiB  3.1 TiB  10.65  0.88   41      up          osd.34
-17          13.97200         -   14 TiB  1.4 TiB  1.4 TiB   25 MiB  5.3 GiB   13 TiB  10.25  0.85    -              host node8
 36    ssd    3.49300   1.00000  3.5 TiB  336 GiB  335 GiB  3.6 MiB  1.3 GiB  3.2 TiB   9.39  0.78   37      up          osd.36
 37    ssd    3.49300   1.00000  3.5 TiB  397 GiB  396 GiB  7.2 MiB  1.3 GiB  3.1 TiB  11.11  0.92   40      up          osd.37
 38    ssd    3.49300   1.00000  3.5 TiB  365 GiB  363 GiB  7.9 MiB  1.4 GiB  3.1 TiB  10.20  0.84   40      up          osd.38
 39    ssd    3.49300   1.00000  3.5 TiB  368 GiB  367 GiB  5.8 MiB  1.3 GiB  3.1 TiB  10.29  0.85   38      up          osd.39
-19          13.97200         -   14 TiB  1.3 TiB  1.3 TiB   26 MiB  5.0 GiB   13 TiB   9.41  0.78    -              host node9
 41    ssd    3.49300   1.00000  3.5 TiB  333 GiB  332 GiB  7.3 MiB  1.3 GiB  3.2 TiB   9.31  0.77   39      up          osd.41
 42    ssd    3.49300   1.00000  3.5 TiB  346 GiB  345 GiB  6.2 MiB  1.3 GiB  3.2 TiB   9.69  0.80   39      up          osd.42
 43    ssd    3.49300   1.00000  3.5 TiB  336 GiB  335 GiB  8.4 MiB  1.2 GiB  3.2 TiB   9.40  0.78   37      up          osd.43
 44    ssd    3.49300   1.00000  3.5 TiB  330 GiB  329 GiB  4.2 MiB  1.2 GiB  3.2 TiB   9.23  0.76   36      up          osd.44
                          TOTAL  208 TiB   25 TiB   25 TiB  922 MiB   97 GiB  183 TiB  12.11
MIN/MAX VAR: 0.67/5.65  STDDEV: 14.49

CEPH DF DETAIL :: =============================
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED    RAW USED  %RAW USED
ssd    208 TiB  183 TiB  25 TiB    25 TiB      12.11
TOTAL  208 TiB  183 TiB  25 TiB    25 TiB      12.11

--- POOLS ---
POOL                   ID  PGS   STORED   (DATA)   (OMAP)   OBJECTS  USED     (DATA)   (OMAP)   %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES    DIRTY  USED COMPR  UNDER COMPR
device_health_metrics   2     1  214 MiB      0 B  214 MiB       73  429 MiB      0 B  429 MiB      0     80 TiB  N/A            N/A              N/A         0 B          0 B
volumes                 3  1024  8.5 TiB  8.5 TiB   54 MiB    2.37M   17 TiB   17 TiB  108 MiB   9.67     80 TiB  N/A            N/A              N/A         0 B          0 B
volumes_cache           4    32  2.4 TiB  2.4 TiB   52 MiB    1.09M  4.8 TiB  4.8 TiB  104 MiB  59.54    1.6 TiB  N/A            N/A          499.25k         0 B          0 B
images                  5    32  712 GiB  712 GiB   42 MiB   90.97k  1.4 TiB  1.4 TiB   85 MiB   0.86     80 TiB  N/A            N/A              N/A         0 B          0 B
internal                6    32  941 GiB  941 GiB   11 MiB  247.76k  1.8 TiB  1.8 TiB   22 MiB   1.14     80 TiB  N/A            N/A              N/A         0 B          0 B
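
For reference, a rough objects-per-PG calculation from the figures above (my own
back-of-the-envelope arithmetic, not command output):

  volumes_cache : ~1.09M objects / 32 PGs   = ~34,000 objects per PG
  cluster-wide  : ~3.80M objects / 1121 PGs = ~3,400 objects per PG

which lines up with the 34082 vs. 3393 figures in the MANY_OBJECTS_PER_PG warning.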


CEPH OSD POOL LS DETAIL :: ====================
pool 2 'device_health_metrics' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 44903 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 3 'volumes' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1024 pgp_num 1024 autoscale_mode off last_change 63235 lfor 353/353/62134 flags hashpspool,selfmanaged_snaps tiers 4 read_tier 4 write_tier 4 stripe_width 0 application rbd
pool 4 'volumes_cache' replicated size 2 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 63235 lfor 353/353/353 flags hashpspool,incomplete_clones,selfmanaged_snaps tier_of 3 cache_mode writeback target_bytes 3298534883328 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 14400s x4 decay_rate 0 search_last_n 0 stripe_width 0 application rbd
pool 5 'images' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 51925 lfor 0/0/368 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 6 'internal' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 49949 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
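
Regarding your CRUSH question: volumes_cache is the only pool on crush_rule 1, and
its usage (4.8 TiB raw) matches what the cache_root SSD hosts report in the df tree
above, so I believe rule 1 maps only to those SSDs. If it helps, I can also send the
rule definitions, e.g.:

`ceph osd crush rule ls`
`ceph osd crush rule dump`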




Thanks and Regards
Vishnu Bhaskar
Acceleron Labs Pvt Ltd
Bangalore, India


On Wed, Aug 13, 2025 at 11:28 PM Anthony D'Atri <anthony.da...@gmail.com> wrote:

> Ah.  Note “average”.   These warnings can be a bit alarmist.  Please send
>
> `ceph df`
> `ceph osd dump | grep pool`
>
>
>
> > On Aug 13, 2025, at 2:24 AM, Vishnu Bhaskar <vishn...@acceleronlabs.com> wrote:
> >
> > Hi,
> > My cluster is showing this warning
> >
> > [WRN] MANY_OBJECTS_PER_PG: 1 pools have many more objects per pg than average
> >     pool volumes_cache objects per pg (34082) is more than 10.0448 times cluster average (3393)
> >
> >
> >
> > Thanks and Regards
> > Vishnu Bhaskar
> > Acceleron Labs Pvt Ltd
> > Bangalore, India
> >
> >
> >> On Wed, Aug 13, 2025 at 1:35 PM Anthony D'Atri <a...@dreamsnake.net> wrote:
> >>
> >> Cache tiers are deprecated; I strongly advise finding a way to factor them
> >> out of your deployment.
> >>
> >> There was some discussion of this in the past, e.g.
> >>
> >>
> >> https://ceph-users.ceph.narkive.com/0LxBSHEQ/changing-pg-num-on-cache-pool
> >>
> >> The PG to RADOS object ratio isn't necessarily in and of itself a problem,
> >> as the dynamics of pg_num for a pool depend on the data and the access
> >> modality.  When multiple pools share the same OSDs, there are tradeoffs
> >> that include what the pools are for, and how much data each stores.  RGW
> >> index pools, for example, need more PGs than their data volume would
> >> otherwise indicate.
> >>
> >> Is your cache pool for sure limited to *only* the SSDs you expect, or does
> >> it specify a CRUSH rule that also lands on HDDs?
> >>
> >> -- the former Cepher known as Anthony
> >>
> >> On Aug 13, 2025, at 12:15 AM, Vishnu Bhaskar <vishn...@acceleronlabs.com> wrote:
> >>
> >> Hi Team,
> >>
> >> I have an OpenStack Ceph cluster where the Cinder volumes pool uses a cache
> >> pool named *volumes_cache*.
> >>
> >> While investigating performance issues, I observed that the cache pool has
> >> few PGs and a high object-to-PG ratio, approximately 10x higher than the
> >> cluster average. For instance, one PG contains around 35,000 objects. I am
> >> planning to increase the PG count for the cache pool.
> >>
> >> For the base pool, I was able to increase the PG count without any issues.
> >> However, for the cache pool, I encountered a warning indicating that a
> >> force argument is required. In my lab environment, I found that changing
> >> the cache pool mode to *none* allowed me to modify the PG number.
> >>
> >> Since this is my production setup, I would like to know if there is a safe
> >> and recommended procedure to increase the PG number of a *cache pool*
> >> without impacting the environment.
> >>
> >> Kindly advise on the best approach.
> >>
> >> Thanks and Regards
> >> Vishnu Bhaskar
> >> Acceleron Labs Pvt Ltd
> >> Bangalore, India
> >>
> >>
> >>
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
