I think this might be related to a problem I'm having with "ceph osd
pool autoscale-status". SIZE appears to be raw usage (data * 3 in our
case), while TARGET SIZE seems to expect the user-facing size. For
example, I have an 87 TiB dataset that I'm currently copying into a
CephFS. "du -sh" shows that 15 TiB of it has been copied so far. SIZE
reports 44608G, which is consistent with 15 TiB at size=3. But if I
set TARGET SIZE to 87 TiB * 3, the pool has a RATIO of over 1, i.e.
larger than the cluster.
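
A back-of-envelope sketch of what I think is happening (all numbers
hypothetical except the 87 TiB dataset; I'm assuming the autoscaler
treats target_size_bytes as user-facing data and multiplies by the
replica count itself, which would explain the RATIO I see):

```python
# Sketch of the autoscaler RATIO arithmetic, assuming TARGET SIZE is
# meant to be user-facing and the autoscaler applies replication itself.
TIB = 2**40
raw_capacity = 300 * TIB   # hypothetical raw cluster capacity
replicas = 3
dataset = 87 * TIB         # user-facing size of the dataset

# If TARGET SIZE is user-facing, projected raw usage is dataset * replicas:
ratio_user_facing = (dataset * replicas) / raw_capacity   # < 1, sane

# If I set TARGET SIZE to 87 TiB * 3, replication is counted twice:
ratio_doubled = (dataset * replicas * replicas) / raw_capacity  # > 1

print(ratio_user_facing < 1, ratio_doubled > 1)
```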

I currently have TARGET_SIZE set to 87 TiB and it seems to be working,
but it's odd that SIZE and TARGET_SIZE appear to use very different
units. Does anyone know if this is intended?
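
For reference, the dashboard mismatch Yordan describes can be
reproduced with the numbers from the `ceph df` output quoted below.
STORED and MAX AVAIL are user-facing, while USED (and the pool %USED)
include replication, so adding USED to MAX AVAIL mixes units:

```python
# Reproducing the percentage mismatch from the quoted `ceph df` output.
TIB = 2**40
stored = 15 * TIB      # STORED column (user-facing)
used = 46 * TIB        # USED column (raw, roughly stored * 3)
max_avail = 7.2 * TIB  # MAX AVAIL column (user-facing)

# What the apps apparently do: raw USED + user-facing MAX AVAIL,
# giving ~53.2 TiB of "capacity" and ~86.5% used (7.2 TiB ~ 13% free).
mixed_pct = used / (used + max_avail) * 100

# A unit-consistent, user-facing calculation lands near the reported
# 68.32 %USED (~67.6%):
consistent_pct = stored / (stored + max_avail) * 100

print(round(mixed_pct, 1), round(consistent_pct, 1))
```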

On Mon, Oct 7, 2019 at 12:03 PM Igor Fedotov <[email protected]> wrote:
>
> Hi Yordan,
>
> this is Mimic documentation and these snippets are no longer valid for 
> Nautilus. They are still present in the Nautilus pages, though.
>
> Going to create a corresponding ticket to fix that.
>
> The relevant Nautilus changes to the 'ceph df [detail]' command can be found 
> in the Nautilus release notes: https://docs.ceph.com/docs/nautilus/releases/nautilus/
>
> In short: the USED field now accounts for all overhead, including replicas, 
> while the STORED field represents the pure data the user put into the pool.
>
>
> Thanks,
>
> Igor
>
> On 10/2/2019 8:33 AM, Yordan Yordanov (Innologica) wrote:
>
> The documentation states:
> https://docs.ceph.com/docs/mimic/rados/operations/monitoring/
>
> The POOLS section of the output provides a list of pools and the notional 
> usage of each pool. The output from this section DOES NOT reflect replicas, 
> clones or snapshots. For example, if you store an object with 1MB of data, 
> the notional usage will be 1MB, but the actual usage may be 2MB or more 
> depending on the number of replicas, clones and snapshots.
>
>
> However, in our case we are clearly seeing the USED field multiply the 
> total object size by the number of replicas.
>
> [root@blackmirror ~]# ceph df
> RAW STORAGE:
>     CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED
>     hdd       80 TiB     34 TiB     46 TiB       46 TiB         58.10
>     TOTAL     80 TiB     34 TiB     46 TiB       46 TiB         58.10
>
> POOLS:
>     POOL      ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
>     one        2      15 TiB       4.05M      46 TiB     68.32       7.2 TiB
>     bench      5     250 MiB          67     250 MiB         0        22 TiB
>
> [root@blackmirror ~]# rbd du -p one
> NAME           PROVISIONED USED
> ...
> <TOTAL>             20 TiB  15 TiB
>
> This causes several apps (including the Ceph dashboard) to display inaccurate 
> percentages, because they calculate the total pool capacity as USED + MAX 
> AVAIL, which in this case yields 53.2 TiB, which is way off. 7.2 TiB is about 
> 13% of that, so we receive alarms, and this has been bugging us for quite some time now.
>
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
>
