Hi Eric,
FYI, `ceph osd df` in Nautilus reports META and OMAP. We updated to
Nautilus 14.2.1.
I'm going to create an issue in the tracker about the timeout when re-running the listing.
[root@CEPH001 ~]# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-41 654.84045 - 655 TiB 556 TiB 555 TiB 24 MiB 1.0 TiB 99 TiB 84.88 1.01 - root archive
-37 130.96848 - 131 TiB 111 TiB 111 TiB 698 KiB 209 GiB 20 TiB 84.93 1.01 - host CEPH-ARCH-R03-07
100 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 28 KiB 17 GiB 1.8 TiB 83.49 0.99 199 up osd.100
101 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 20 KiB 18 GiB 1.6 TiB 85.21 1.01 197 up osd.101
102 archive 10.91399 1.00000 11 TiB 9.4 TiB 9.4 TiB 16 KiB 18 GiB 1.5 TiB 86.56 1.03 219 up osd.102
103 archive 10.91399 1.00000 11 TiB 9.6 TiB 9.6 TiB 112 KiB 18 GiB 1.3 TiB 87.88 1.05 240 up osd.103
104 archive 10.91399 1.00000 11 TiB 9.4 TiB 9.4 TiB 48 KiB 18 GiB 1.5 TiB 85.85 1.02 212 up osd.104
105 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.0 TiB 16 KiB 18 GiB 1.8 TiB 83.07 0.99 195 up osd.105
106 archive 10.91409 1.00000 11 TiB 9.3 TiB 9.3 TiB 16 KiB 18 GiB 1.6 TiB 85.51 1.02 202 up osd.106
107 archive 10.91409 1.00000 11 TiB 9.1 TiB 9.1 TiB 129 KiB 17 GiB 1.8 TiB 83.33 0.99 193 up osd.107
108 archive 10.91409 1.00000 11 TiB 9.3 TiB 9.3 TiB 76 KiB 17 GiB 1.6 TiB 85.51 1.02 211 up osd.108
109 archive 10.91409 1.00000 11 TiB 9.3 TiB 9.2 TiB 140 KiB 17 GiB 1.6 TiB 84.89 1.01 210 up osd.109
110 archive 10.91409 1.00000 11 TiB 9.1 TiB 9.1 TiB 4 KiB 17 GiB 1.8 TiB 83.84 1.00 190 up osd.110
111 archive 10.91409 1.00000 11 TiB 9.2 TiB 9.2 TiB 93 KiB 17 GiB 1.7 TiB 84.04 1.00 201 up osd.111
-23 130.96800 - 131 TiB 112 TiB 111 TiB 324 KiB 209 GiB 19 TiB 85.26 1.01 - host CEPH005
4 archive 10.91399 1.00000 11 TiB 9.4 TiB 9.4 TiB 108 KiB 17 GiB 1.5 TiB 85.82 1.02 226 up osd.4
41 archive 10.91399 1.00000 11 TiB 9.4 TiB 9.4 TiB 20 KiB 18 GiB 1.5 TiB 86.11 1.02 203 up osd.41
74 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 36 KiB 17 GiB 1.8 TiB 83.36 0.99 198 up osd.74
75 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 12 KiB 18 GiB 1.7 TiB 84.25 1.00 205 up osd.75
81 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 48 KiB 17 GiB 1.7 TiB 84.48 1.01 203 up osd.81
82 archive 10.91399 1.00000 11 TiB 9.4 TiB 9.4 TiB 36 KiB 17 GiB 1.5 TiB 86.57 1.03 210 up osd.82
83 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 16 KiB 18 GiB 1.6 TiB 85.23 1.01 200 up osd.83
84 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 4 KiB 17 GiB 1.8 TiB 83.83 1.00 205 up osd.84
85 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 12 KiB 18 GiB 1.6 TiB 85.06 1.01 202 up osd.85
86 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.2 TiB 12 KiB 18 GiB 1.6 TiB 84.90 1.01 204 up osd.86
87 archive 10.91399 1.00000 11 TiB 9.5 TiB 9.5 TiB 4 KiB 18 GiB 1.4 TiB 87.16 1.04 223 up osd.87
88 archive 10.91399 1.00000 11 TiB 9.4 TiB 9.4 TiB 16 KiB 18 GiB 1.5 TiB 86.35 1.03 208 up osd.88
-17 130.96800 - 131 TiB 111 TiB 111 TiB 6.6 MiB 203 GiB 20 TiB 84.65 1.01 - host CEPH006
7 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 1.4 MiB 17 GiB 1.7 TiB 84.49 1.01 201 up osd.7
8 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.2 TiB 2.2 MiB 17 GiB 1.7 TiB 84.79 1.01 206 up osd.8
9 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 2.7 MiB 17 GiB 1.7 TiB 84.28 1.00 0 down osd.9
10 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 24 KiB 17 GiB 1.7 TiB 84.66 1.01 190 up osd.10
12 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 16 KiB 17 GiB 1.7 TiB 84.38 1.00 203 up osd.12
13 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 24 KiB 17 GiB 1.7 TiB 84.34 1.00 202 up osd.13
42 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 8 KiB 17 GiB 1.8 TiB 83.73 1.00 198 up osd.42
43 archive 10.91399 1.00000 11 TiB 9.5 TiB 9.4 TiB 36 KiB 17 GiB 1.5 TiB 86.62 1.03 213 up osd.43
51 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 80 KiB 17 GiB 1.6 TiB 84.99 1.01 204 up osd.51
53 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 64 KiB 17 GiB 1.6 TiB 85.05 1.01 217 up osd.53
76 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 72 KiB 17 GiB 1.7 TiB 84.27 1.00 196 up osd.76
80 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 8 KiB 17 GiB 1.7 TiB 84.17 1.00 198 up osd.80
-26 130.96800 - 131 TiB 111 TiB 111 TiB 6.0 MiB 210 GiB 20 TiB 84.86 1.01 - host CEPH007
14 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 8 KiB 17 GiB 1.6 TiB 84.99 1.01 207 up osd.14
15 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 4 KiB 17 GiB 1.6 TiB 85.36 1.02 210 up osd.15
16 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 72 KiB 17 GiB 1.8 TiB 83.78 1.00 202 up osd.16
39 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 3.4 MiB 17 GiB 1.7 TiB 84.23 1.00 199 up osd.39
40 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 24 KiB 17 GiB 1.8 TiB 83.52 0.99 191 up osd.40
44 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 2.2 MiB 18 GiB 1.6 TiB 85.14 1.01 205 up osd.44
48 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.2 TiB 48 KiB 17 GiB 1.7 TiB 84.80 1.01 200 up osd.48
49 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.2 TiB 48 KiB 17 GiB 1.6 TiB 84.89 1.01 213 up osd.49
52 archive 10.91399 1.00000 11 TiB 9.4 TiB 9.4 TiB 28 KiB 17 GiB 1.5 TiB 86.14 1.03 215 up osd.52
77 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 16 KiB 18 GiB 1.6 TiB 85.35 1.02 209 up osd.77
89 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 60 KiB 17 GiB 1.6 TiB 85.20 1.01 206 up osd.89
90 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 24 KiB 19 GiB 1.6 TiB 84.95 1.01 198 up osd.90
-31 130.96800 - 131 TiB 111 TiB 111 TiB 11 MiB 209 GiB 20 TiB 84.71 1.01 - host CEPH008
5 archive 10.91399 1.00000 11 TiB 9.4 TiB 9.4 TiB 2.5 MiB 18 GiB 1.5 TiB 86.37 1.03 205 up osd.5
6 archive 10.91399 1.00000 11 TiB 9.3 TiB 9.3 TiB 2.6 MiB 18 GiB 1.6 TiB 85.09 1.01 211 up osd.6
11 archive 10.91399 1.00000 11 TiB 9.5 TiB 9.5 TiB 16 KiB 18 GiB 1.4 TiB 86.79 1.03 211 up osd.11
45 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 1.5 MiB 17 GiB 1.8 TiB 83.59 0.99 194 up osd.45
46 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 3.8 MiB 17 GiB 1.8 TiB 83.81 1.00 205 up osd.46
47 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 16 KiB 17 GiB 1.8 TiB 83.73 1.00 201 up osd.47
55 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 44 KiB 18 GiB 1.7 TiB 84.66 1.01 208 up osd.55
70 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 16 KiB 17 GiB 1.7 TiB 84.40 1.00 197 up osd.70
71 archive 10.91399 1.00000 11 TiB 9.4 TiB 9.4 TiB 6 KiB 18 GiB 1.5 TiB 86.12 1.02 215 up osd.71
78 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 80 KiB 17 GiB 1.7 TiB 84.42 1.00 194 up osd.78
79 archive 10.91399 1.00000 11 TiB 9.1 TiB 9.1 TiB 76 KiB 17 GiB 1.8 TiB 83.54 0.99 194 up osd.79
91 archive 10.91399 1.00000 11 TiB 9.2 TiB 9.2 TiB 8 KiB 17 GiB 1.7 TiB 84.01 1.00 206 up osd.91
-1 28.29785 - 30 TiB 19 TiB 19 TiB 54 GiB 63 GiB 10 TiB 65.11 0.77 - root default
-30 6.98199 - 7.0 TiB 4.8 TiB 4.8 TiB 10 GiB 11 GiB 2.2 TiB 69.06 0.82 - host CEPH-SSD-004
92 ssd 0.87299 1.00000 894 GiB 619 GiB 617 GiB 1.0 GiB 1.3 GiB 275 GiB 69.28 0.82 196 up osd.92
93 ssd 0.87299 1.00000 894 GiB 607 GiB 605 GiB 938 MiB 1.4 GiB 286 GiB 67.95 0.81 199 up osd.93
94 ssd 0.87299 1.00000 894 GiB 618 GiB 616 GiB 1.1 GiB 1.4 GiB 276 GiB 69.17 0.82 196 up osd.94
95 ssd 0.87299 1.00000 894 GiB 612 GiB 609 GiB 1.6 GiB 1.4 GiB 282 GiB 68.44 0.81 201 up osd.95
96 ssd 0.87299 1.00000 894 GiB 618 GiB 616 GiB 687 MiB 1.4 GiB 276 GiB 69.17 0.82 213 up osd.96
97 ssd 0.87299 1.00000 894 GiB 634 GiB 632 GiB 937 MiB 1.4 GiB 260 GiB 70.92 0.84 164 up osd.97
98 ssd 0.87299 1.00000 894 GiB 608 GiB 606 GiB 918 MiB 1.3 GiB 286 GiB 68.00 0.81 191 up osd.98
99 ssd 0.87299 1.00000 894 GiB 621 GiB 617 GiB 3.0 GiB 1.5 GiB 272 GiB 69.51 0.83 206 up osd.99
-3 7.16943 - 7.6 TiB 4.8 TiB 4.8 TiB 16 GiB 17 GiB 2.8 TiB 63.25 0.75 - host CEPH001
112 nvme 0.04790 1.00000 49 GiB 1.9 GiB 56 MiB 1.8 GiB 46 MiB 47 GiB 3.88 0.05 0 up osd.112
113 nvme 0.04790 1.00000 49 GiB 2.1 GiB 56 MiB 2.0 GiB 60 MiB 47 GiB 4.37 0.05 0 up osd.113
114 nvme 0.04790 1.00000 49 GiB 3.8 GiB 56 MiB 2.5 GiB 1.2 GiB 45 GiB 7.71 0.09 0 up osd.114
115 nvme 0.04790 1.00000 49 GiB 1.8 GiB 55 MiB 1.7 GiB 61 MiB 47 GiB 3.74 0.04 0 up osd.115
1 ssd 0.43599 1.00000 447 GiB 308 GiB 307 GiB 295 MiB 1.0 GiB 138 GiB 69.04 0.82 96 up osd.1
17 ssd 0.43599 1.00000 894 GiB 312 GiB 311 GiB 634 MiB 995 MiB 582 GiB 34.93 0.42 102 up osd.17
18 ssd 0.43599 1.00000 447 GiB 303 GiB 302 GiB 219 MiB 1002 MiB 144 GiB 67.80 0.81 105 up osd.18
19 ssd 0.43599 1.00000 447 GiB 309 GiB 308 GiB 208 MiB 1.0 GiB 137 GiB 69.21 0.82 112 up osd.19
20 ssd 0.43599 1.00000 447 GiB 304 GiB 302 GiB 876 MiB 1.2 GiB 142 GiB 68.17 0.81 90 up osd.20
21 ssd 0.43599 1.00000 447 GiB 308 GiB 307 GiB 269 MiB 1008 MiB 138 GiB 69.06 0.82 94 up osd.21
22 ssd 0.43599 1.00000 447 GiB 315 GiB 314 GiB 12 MiB 1018 MiB 132 GiB 70.43 0.84 102 up osd.22
23 ssd 0.43599 1.00000 447 GiB 310 GiB 308 GiB 990 MiB 1.1 GiB 137 GiB 69.38 0.83 94 up osd.23
37 ssd 0.87299 1.00000 894 GiB 610 GiB 607 GiB 1.7 GiB 1.4 GiB 284 GiB 68.21 0.81 202 up osd.37
54 ssd 0.87299 1.00000 894 GiB 611 GiB 608 GiB 773 MiB 1.5 GiB 283 GiB 68.32 0.81 193 up osd.54
56 ssd 0.43599 1.00000 447 GiB 299 GiB 297 GiB 323 MiB 1.0 GiB 148 GiB 66.87 0.80 132 up osd.56
60 ssd 0.43599 1.00000 447 GiB 302 GiB 301 GiB 109 KiB 1.0 GiB 145 GiB 67.53 0.80 91 up osd.60
61 ssd 0.43599 1.00000 447 GiB 315 GiB 314 GiB 221 MiB 1.1 GiB 131 GiB 70.60 0.84 102 up osd.61
62 ssd 0.43599 1.00000 447 GiB 312 GiB 310 GiB 1.4 GiB 1.0 GiB 135 GiB 69.85 0.83 102 up osd.62
-5 7.16943 - 7.6 TiB 4.8 TiB 4.8 TiB 18 GiB 18 GiB 2.8 TiB 63.36 0.75 - host CEPH002
116 nvme 0.04790 1.00000 49 GiB 2.1 GiB 55 MiB 2.0 GiB 64 MiB 47 GiB 4.28 0.05 0 up osd.116
117 nvme 0.04790 1.00000 49 GiB 1.8 GiB 55 MiB 1.7 GiB 40 MiB 47 GiB 3.74 0.04 0 up osd.117
118 nvme 0.04790 1.00000 49 GiB 2.4 GiB 56 MiB 2.3 GiB 65 MiB 47 GiB 4.96 0.06 0 up osd.118
119 nvme 0.04790 1.00000 49 GiB 2.1 GiB 56 MiB 2.0 GiB 56 MiB 47 GiB 4.36 0.05 0 up osd.119
2 ssd 0.43599 1.00000 447 GiB 314 GiB 313 GiB 857 MiB 1022 MiB 132 GiB 70.40 0.84 87 up osd.2
24 ssd 0 1.00000 447 GiB 1.1 GiB 89 MiB 16 KiB 1024 MiB 446 GiB 0.24 0.00 0 up osd.24
25 ssd 0.43599 1.00000 447 GiB 318 GiB 317 GiB 351 MiB 1007 MiB 128 GiB 71.28 0.85 101 up osd.25
26 ssd 0.43599 1.00000 447 GiB 306 GiB 305 GiB 836 MiB 1011 MiB 140 GiB 68.59 0.82 103 up osd.26
27 ssd 0.43599 1.00000 447 GiB 303 GiB 301 GiB 959 MiB 1.1 GiB 144 GiB 67.80 0.81 104 up osd.27
28 ssd 0.43599 1.00000 447 GiB 307 GiB 306 GiB 9.9 MiB 1023 MiB 140 GiB 68.68 0.82 84 up osd.28
29 ssd 0.43599 1.00000 447 GiB 303 GiB 301 GiB 293 MiB 1.1 GiB 144 GiB 67.76 0.81 117 up osd.29
30 ssd 0.43599 1.00000 447 GiB 298 GiB 297 GiB 338 MiB 1005 MiB 149 GiB 66.69 0.79 108 up osd.30
38 ssd 0.87299 1.00000 894 GiB 628 GiB 625 GiB 1.1 GiB 1.6 GiB 266 GiB 70.28 0.84 211 up osd.38
57 ssd 0.43599 1.00000 447 GiB 302 GiB 300 GiB 1.2 GiB 994 MiB 144 GiB 67.66 0.81 113 up osd.57
63 ssd 0.43599 1.00000 447 GiB 295 GiB 293 GiB 914 MiB 1.1 GiB 152 GiB 66.07 0.79 120 up osd.63
64 ssd 0.43599 1.00000 447 GiB 312 GiB 309 GiB 1.2 GiB 1.2 GiB 135 GiB 69.81 0.83 105 up osd.64
65 ssd 0.43599 1.00000 447 GiB 313 GiB 311 GiB 427 MiB 1.1 GiB 134 GiB 69.97 0.83 108 up osd.65
66 ssd 0.43599 1.00000 447 GiB 314 GiB 311 GiB 1.2 GiB 1.2 GiB 133 GiB 70.24 0.84 93 up osd.66
72 ssd 0.87299 1.00000 894 GiB 613 GiB 610 GiB 682 MiB 2.6 GiB 280 GiB 68.62 0.82 197 up osd.72
-7 6.97699 - 7.4 TiB 4.8 TiB 4.8 TiB 9.6 GiB 17 GiB 2.6 TiB 65.09 0.77 - host CEPH003
0 ssd 0.43599 1.00000 447 GiB 309 GiB 307 GiB 586 MiB 1.0 GiB 138 GiB 69.17 0.82 112 up osd.0
3 ssd 0.43599 1.00000 447 GiB 310 GiB 309 GiB 224 MiB 1.0 GiB 137 GiB 69.36 0.83 81 up osd.3
31 ssd 0 1.00000 447 GiB 1.1 GiB 89 MiB 16 KiB 1024 MiB 446 GiB 0.24 0.00 0 up osd.31
32 ssd 0.43599 1.00000 447 GiB 304 GiB 303 GiB 12 MiB 1012 MiB 143 GiB 68.01 0.81 104 up osd.32
33 ssd 0.43599 1.00000 447 GiB 316 GiB 315 GiB 49 MiB 985 MiB 130 GiB 70.83 0.84 87 up osd.33
34 ssd 0.43599 1.00000 447 GiB 309 GiB 308 GiB 455 MiB 1009 MiB 137 GiB 69.25 0.82 116 up osd.34
35 ssd 0.43599 1.00000 447 GiB 309 GiB 306 GiB 2.0 GiB 1.0 GiB 138 GiB 69.21 0.82 101 up osd.35
36 ssd 0.43599 1.00000 447 GiB 303 GiB 301 GiB 1.0 GiB 1.0 GiB 143 GiB 67.93 0.81 79 up osd.36
50 ssd 0.87299 1.00000 894 GiB 602 GiB 598 GiB 2.5 GiB 1.7 GiB 291 GiB 67.39 0.80 198 up osd.50
58 ssd 0.43599 1.00000 447 GiB 326 GiB 324 GiB 433 MiB 1.0 GiB 121 GiB 72.92 0.87 92 up osd.58
59 ssd 0.43599 1.00000 447 GiB 321 GiB 320 GiB 1.1 MiB 1.2 GiB 125 GiB 71.96 0.86 107 up osd.59
67 ssd 0.43599 1.00000 447 GiB 312 GiB 311 GiB 324 MiB 1.2 GiB 134 GiB 69.91 0.83 97 up osd.67
68 ssd 0.43599 1.00000 447 GiB 303 GiB 301 GiB 809 MiB 1.1 GiB 144 GiB 67.82 0.81 86 up osd.68
69 ssd 0.43599 1.00000 447 GiB 303 GiB 301 GiB 822 MiB 1.0 GiB 144 GiB 67.83 0.81 113 up osd.69
73 ssd 0.87299 1.00000 894 GiB 614 GiB 612 GiB 426 MiB 1.5 GiB 280 GiB 68.67 0.82 177 up osd.73
TOTAL 684 TiB 575 TiB 574 TiB 54 GiB 1.1 TiB 109 TiB 84.03
MIN/MAX VAR: 0.00/1.05 STDDEV: 25.52
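For scripting, the same OMAP and META figures are easier to pull from the JSON form of the command (`ceph osd df tree -f json`) than from the table above. A minimal sketch, assuming the Nautilus-era per-node fields `kb_used_data` and `kb_used_omap`; the embedded sample is hypothetical, not taken from this cluster:

```python
import json

# Hypothetical sample mimicking `ceph osd df tree -f json` on Nautilus;
# the field names are an assumption based on that release's output.
sample = json.loads("""
{"nodes": [
  {"name": "osd.100", "type": "osd", "kb_used_data": 9771050599,
   "kb_used_omap": 28, "kb_used_meta": 17825792},
  {"name": "osd.112", "type": "osd", "kb_used_data": 57344,
   "kb_used_omap": 1887436, "kb_used_meta": 47104}
]}
""")

def omap_heavy_osds(df_json, ratio=0.5):
    """Return names of OSDs whose omap usage dominates their data usage."""
    out = []
    for node in df_json["nodes"]:
        if node.get("type") != "osd":
            continue  # skip host/root buckets in the tree
        data_kb = node["kb_used_data"] or 1  # avoid division by zero
        if node["kb_used_omap"] / data_kb > ratio:
            out.append(node["name"])
    return out

print(omap_heavy_osds(sample))  # ['osd.112']
```

In the table above the same pattern is visible by eye: the NVMe index OSDs (osd.112 and friends) carry gigabytes of OMAP against megabytes of DATA.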
-----Original Message-----
From: J. Eric Ivancich <[email protected]>
Sent: Wednesday, May 15, 2019 18:12
To: EDH - Manuel Rios Fernandez <[email protected]>; 'Casey Bodley'
<[email protected]>; [email protected]
Subject: Re: [ceph-users] Ceph Bucket strange issues rgw.none + id and marker
different.
Hi Manuel,
My response is interleaved below.
On 5/8/19 3:17 PM, EDH - Manuel Rios Fernandez wrote:
> Eric,
>
> Yes we do :
>
> time s3cmd ls s3://[BUCKET]/ --no-ssl and it takes nearly 2 min 30 s to
> list the bucket.
We're adding an --allow-unordered option to `radosgw-admin bucket list`.
That would likely speed up your listing. If you want to follow the trackers,
they are:
https://tracker.ceph.com/issues/39637 [feature added to master]
https://tracker.ceph.com/issues/39730 [nautilus backport]
https://tracker.ceph.com/issues/39731 [mimic backport]
https://tracker.ceph.com/issues/39732 [luminous backport]
> If we immediately run the query again, it normally times out.
That's interesting. I don't have an explanation for that behavior. I would
suggest creating a tracker for the issue, ideally with the minimal steps to
reproduce the issue. My concern is that your bucket has so many objects, and if
that's related to the issue, it would not be easy to reproduce.
> Could you explain a little more "
>
> With respect to your earlier message in which you included the output
> of `ceph df`, I believe the reason that default.rgw.buckets.index
> shows as
> 0 bytes used is that the index uses the metadata branch of the object to
> store its data.
> "
Each object in Ceph has three components: the data itself plus two types of
metadata (omap and xattr). The `ceph df` command doesn't count the metadata.
The bucket indexes that track the objects in each bucket use only the metadata,
so you won't see their usage reported in `ceph df`.
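To illustrate the accounting described above: if a routine sums only the data component and ignores omap and xattr, a bucket-index object that lives entirely in omap contributes nothing to the total. A toy model (the object names and sizes are invented, and this is not Ceph's actual code):

```python
# Toy model of an object's three components: data, omap metadata, xattrs.
# Sizes in bytes; the objects below are hypothetical examples.
objects = {
    "bucket-data-obj": {"data": 4 * 1024**2, "omap": 0, "xattr": 512},
    ".dir.index-shard.0": {"data": 0, "omap": 3 * 1024**2, "xattr": 256},
}

def df_style_usage(objs):
    """Sum only the data component, mirroring what `ceph df` counts."""
    return sum(o["data"] for o in objs.values())

def full_usage(objs):
    """Sum data plus both kinds of metadata (omap and xattr)."""
    return sum(o["data"] + o["omap"] + o["xattr"] for o in objs.values())

index_pool = {".dir.index-shard.0": objects[".dir.index-shard.0"]}
print(df_style_usage(index_pool))  # 0 -- matches the 0 B shown for the index pool
print(full_usage(index_pool))      # nonzero: the omap entries really use space
```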
> I read on IRC today that in the Nautilus release this is now calculated
> correctly and no longer shows 0 B. Is that correct?
I don't know. I wasn't aware of any changes in Nautilus that report metadata in
`ceph df`.
> Thanks for your response.
You're welcome,
Eric
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com