Hello!

I'm using Nautilus 14.2.16 with a multisite RGW setup.
I have two zones, working as active-passive (the secondary is read-only).

On the master zone, "ceph df" shows:
    POOL                        ID     PGS      STORED      OBJECTS     USED        %USED     MAX AVAIL
    prod.rgw.buckets.index      54      128     844 GiB     437.52k     844 GiB      6.43     4.0 TiB
    prod.rgw.buckets.non-ec     55       32     195 MiB       2.70k     246 MiB         0     4.0 TiB
    prod.rgw.buckets.data       56     2048     856 TiB       1.08G     1.3 PiB     65.75     553 TiB

On the secondary zone, "ceph df" shows:
    POOL                       ID     PGS      STORED      OBJECTS     USED        %USED     MAX AVAIL
    bck.rgw.buckets.index      20      256     137 GiB     467.21k     137 GiB      0.55     8.0 TiB
    bck.rgw.buckets.non-ec     21       32         0 B           0         0 B         0     8.0 TiB
    bck.rgw.buckets.data       22     1024     931 TiB     653.67M     1.3 PiB     85.85     178 TiB

As you can see, the master zone's STORED is 856 TiB and the secondary
zone's is 931 TiB, yet USED is identical (1.3 PiB) on both.
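
For reference, an 8+2 EC pool writes 10 chunks for every 8 data chunks,
so naively I would expect USED ≈ STORED × 10/8 = 1.25 (minus any
compression savings on the secondary):

    master:    856 TiB × 1.25 ≈ 1.05 PiB
    secondary: 931 TiB × 1.25 ≈ 1.14 PiB

yet both data pools report 1.3 PiB USED.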
Both clusters have 10 nodes, and both bucket data pools are 8+2 EC.
The only differences are:
- the master pool has 2048 PGs;
- the secondary pool has 1024 PGs;
- compression is off on the master zone's bucket data pool;
- compression is on on the secondary zone's bucket data pool (set per
  pool, as sketched below).
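
For context, I mean the standard per-pool compression settings; the
mode and algorithm values below are only illustrative, not necessarily
what is configured here:

    ceph osd pool set bck.rgw.buckets.data compression_mode aggressive
    ceph osd pool set bck.rgw.buckets.data compression_algorithm snappy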

How can I dig into this?
What am I missing?
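
The only thing I have thought of so far is comparing the per-pool
compression statistics and settings on both sides, something like:

    ceph df detail    # has USED COMPR / UNDER COMPR columns per pool
    ceph osd pool get prod.rgw.buckets.data compression_mode
    ceph osd pool get bck.rgw.buckets.data compression_mode

but I am not sure that alone explains the identical USED figures.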