Hello,

On Tue, 17 Jun 2014 10:25:13 -0700 Gregory Farnum wrote:
> You probably have sparse objects from RBD. The PG statistics are built
> off of file size, but the total data used spaces are looking at df
> output.
>
Ah yes, that could be it, as others here tested fstrim within RBD
provided VM images.
Unfortunate discrepancy, will have to teach my subconscious to ignore
it. ^o^

Christian

> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Mon, Jun 16, 2014 at 7:34 PM, Christian Balzer <[email protected]> wrote:
> >
> > Hello,
> >
> > this is a 0.80.1 cluster, upgraded from emperor. I'm mentioning the
> > latter since I don't recall seeing this back with emperor; it was a
> > perfect match then.
> > The pools are all set to a replication of 2; only the rbd one is used.
> > So having less than 2x the amount of actual data being used gives me
> > quite the pause and cause to worry:
> >
> >      pgmap v2876480: 1152 pgs, 3 pools, 642 GB data, 168 kobjects
> >            1246 GB used, 98932 GB / 100178 GB avail
> >                1152 active+clean
> >
> > My test cluster and every other ceph -s output I've seen always used
> > double (triple) or more than that compared to the actual data, never
> > less than the replication factor.
> >
> > So are there some objects that are not replicated twice, despite
> > having a clean health and after several scrubs including deep ones?
> >
> > Or is that some stale data that very much intentionally isn't getting
> > replicated? (I never used snapshots, FWIW)
> >
> > Either way, how can I find out what is going on here?
> >
> > Regards,
> >
> > Christian
> > --
> > Christian Balzer        Network/Systems Engineer
> > [email protected]       Global OnLine Japan/Fusion Communications
> > http://www.gol.com/

--
Christian Balzer        Network/Systems Engineer
[email protected]       Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
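[An aside illustrating Greg's point, not part of the thread: the "data" figure in `ceph -s` sums the apparent sizes of the stored objects, while "used" comes from `df`-style allocation. A sparse file (e.g. a trimmed RBD object) can have an apparent size far larger than the blocks it actually occupies, so 642 GB of data can legitimately consume less than 2x 642 GB on disk at replication 2. A minimal sketch, assuming a filesystem that supports sparse files:]

```python
import os
import tempfile

# Create a 4 MiB file that is almost entirely a hole: seek past the
# end and write a single byte. This mimics a mostly-trimmed RBD object.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(4 * 1024 * 1024 - 1)
    f.write(b"\0")
    path = f.name

st = os.stat(path)
apparent = st.st_size           # what object/PG stats would count: 4 MiB
allocated = st.st_blocks * 512  # what df sees: only the blocks backing real data

print(f"apparent size: {apparent} bytes, allocated on disk: {allocated} bytes")
os.unlink(path)
```

Summing `st_size` over such objects (times the replication factor) overshoots the `df`-based "used" total, which is exactly the discrepancy in the `pgmap` output above.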
