On Wed, Mar 25, 2015 at 3:14 AM, Frédéric Nass
<frederic.n...@univ-lorraine.fr> wrote:
> Hello,
>
>
> I have a few questions regarding snapshots and fstrim with cache tiers.
>
>
> In the "cache tier and erasure coding FAQ" related to ICE 1.2 (based on
> Firefly), Inktank says "Snapshots are not supported in conjunction with
> cache tiers."
>
> What are the risks of using snapshots with cache tiers? Would this "better
> not use it" recommendation still hold true with Giant or Hammer?
>
>
> Regarding the fstrim command, it doesn't seem to work with cache tiers: the
> freed-up blocks are not returned to the Ceph cluster.
> Can someone confirm this? Is there something we can do to get those
> freed-up blocks back into the cluster?

It does work, but there are two effects you're missing here (a rough
sketch of the relevant commands follows the list):
1) An object can be deleted in the cache tier, but it won't get
deleted from the backing pool until it gets flushed out of the cache
pool. Depending on your workload, this can take a while.
2) On erasure-coded pools, the OSD makes sure it can roll back a
certain number of operations per PG. In the case of deletions, this
means keeping the object data around for a while, which can also take
some time if you're not doing many operations. This has been discussed
on the list before; I think you'll want to look for a thread about
rollback and pg log size.
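
For what it's worth, here's a rough sketch of how you might poke at both
effects from the admin side. This assumes a cache pool named "hot-pool"
(substitute your own pool and OSD names) and should be checked against
the options available in your release:

  # 1) Push objects out of the cache tier so deletions reach the backing
  #    pool sooner. This flushes and evicts everything in the cache pool:
  rados -p hot-pool cache-flush-evict-all

  #    ...or just lower the tiering agent's targets and let it drain:
  ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
  ceph osd pool set hot-pool cache_target_full_ratio 0.6

  # 2) See how many log entries each PG keeps for rollback; deleted object
  #    data may be retained until the PG log is trimmed
  #    (run these on the host carrying osd.0):
  ceph daemon osd.0 config get osd_min_pg_log_entries
  ceph daemon osd.0 config get osd_max_pg_log_entries

  # Then watch the pool usage settle:
  ceph df
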
-Greg

>
>
> Also, can we run an fstrim task from the cluster side? That is, without
> having to map and mount each RBD image or rely on the client to perform this
> task?
>
>
> Best regards,
>
>
> --
>
> Frédéric Nass
>
> Sous-direction Infrastructures
> Direction du Numérique
> Université de Lorraine
>
> email: frederic.n...@univ-lorraine.fr
> Tel.: +33 3 83 68 53 83
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
