Sounds like you've got deleted objects in the cache tier getting flushed (i.e., deleted) in the base tier. -Greg
On Thursday, June 16, 2016, Christian Balzer <[email protected]> wrote:
>
> Hello devs and other sage(sic) people,
>
> Ceph 0.94.5, cache tier in writeback mode.
>
> As mentioned before, I'm running a cron job every day at 23:40, dropping
> the dirty flush target by 4% (0.60 to 0.56) and then re-setting it to the
> previous value 10 minutes later.
> The idea is to have all the flushing done during off-peak hours, and that
> works beautifully.
> No flushes during daytime, only lightweight evicts.
>
> Now I'm graphing all kinds of Ceph- and system-related info with Graphite
> and noticed something odd.
>
> When the flushes are initiated, the HDD space of the OSDs in the backing
> store drops by a few GB, pretty much the amount of dirty objects over the
> threshold accumulated during a day, so no surprise there.
> This happens every time the cron job runs.
>
> However, only on some days is this drop (more pronounced on those days)
> accompanied by actual:
> a) flushes according to the respective Ceph counters
> b) network traffic from the cache tier to the backing OSDs
> c) HDD OSD writes (both from the OSD perspective and the actual HDD)
> d) cache pool SSD reads (both from the OSD perspective and the actual SSD)
>
> So what is happening on the other days?
>
> The space is clearly gone, and triggered by the "flush", but no data was
> actually transferred to the HDD OSD nodes, nor was anything (newly)
> written.
>
> Dazed and confused,
>
> Christian
> --
> Christian Balzer        Network/Systems Engineer
> [email protected]           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
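[For reference, the cron-driven flush window Christian describes could be sketched roughly as the crontab fragment below. The pool name "cachepool" is an assumption; the 23:40/23:50 schedule and the 0.60/0.56 ratios are taken from the post.]

```shell
# Sketch of the described cron job (assumed cache pool name: "cachepool").
# At 23:40, lower cache_target_dirty_ratio by 4% to trigger off-peak flushing:
40 23 * * *  ceph osd pool set cachepool cache_target_dirty_ratio 0.56
# Ten minutes later, restore the normal target so no further flushing occurs:
50 23 * * *  ceph osd pool set cachepool cache_target_dirty_ratio 0.60
```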
