Hello Jason,

Thanks for the quick reply. This image was copied from a VM instance
snapshot to my backup pool (rbd snap create, rbd cp to the backup pool,
then rbd snap rm), roughly as in the sketch below.
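
For reference, the copy went roughly like this ("vms" and "INSTANCE" are
placeholders for the source pool and image name, not the exact ones I used):

$ rbd snap create vms/INSTANCE@backup      # snapshot the source image
$ rbd cp vms/INSTANCE@backup backup/cd4e5d37-3023-4640-be5a-5577d3f9307e
$ rbd snap rm vms/INSTANCE@backup          # remove the source snapshot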

I've tried piping the output through "grep data" per your recommendation,
and it still reports the same usage:

$ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | grep data | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
49345.4 MB
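
For what it's worth, since "rbd diff" reports both written ("data") and
discarded ("zero") extents, matching on the type column directly is a
slightly stricter filter than "grep data" (which could match that string
anywhere in the line); this variant sums only the data extents:

$ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | awk '$3 == "data" { SUM += $2 } END { print SUM/1024/1024 " MB" }'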

Thanks for the help.

On Wed, Apr 27, 2016 at 12:22 PM, Jason Dillaman <[email protected]>
wrote:

> On Wed, Apr 27, 2016 at 2:07 PM, Tyler Wilson <[email protected]>
> wrote:
> > $ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
> > 49345.4 MB
>
> Is this a cloned image?  That awk trick doesn't account for discarded
> regions (i.e. when column three says "zero" instead of "data"). Does
> the number change when you pipe the "rbd diff" results through "grep
> data" before piping to awk?
>
> > Could this be affected by replica counts somehow? It seems to be twice as
> > large as what is reported in the filesystem, which matches my replica
> > count.
>
> No, the "rbd diff" output is only reporting image data and zeroed
> extents -- so the replication factor is not included.
>
> --
> Jason
>