Yeah -- as I said, 4KB was a generous number. It's going to vary some,
though, based on the actual length of the names you're using, whether
you have symlinks or hard links, snapshots, etc.
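(If you want to see where the name length comes in: dentries are stored
as omap keys inside the directory objects, so -- as a sketch, with a
hypothetical object name -- something like

  rados -p metadata listomapkeys 10000000000.00000000

will print one key per dentry, of the form "<filename>_head".)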
-Greg
On Sat, Apr 25, 2015 at 11:34 AM Adam Tygart <[email protected]> wrote:

> Probably the case. I've checked ~10% of the objects in the metadata
> pool (rados -p metadata stat $objname). They've all been 0-byte
> objects. Most of them have 1-10 omap values, usually ~408 bytes each.
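>
> For reference, the check was along these lines (a rough sketch -- it
> assumes the rados CLI, and the awk filter is just an illustrative way
> of sampling roughly every 10th object):
>
>   rados -p metadata ls | awk 'NR % 10 == 0' | while read obj; do
>     rados -p metadata stat "$obj"           # object size (0 bytes for these)
>     rados -p metadata listomapvals "$obj"   # the omap keys/values
>   done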
>
> Based on the usage of the other pools on the SSDs, that comes out to
> ~46GB of omap/leveldb data. Assuming all of that usage is for the
> metadata, it comes out to ~1.4KB per file. Still *much* less than the
> 4KB estimate, but probably more reasonable than a few bytes per
> file :).
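>
> (Worked out: taking 46GB as GiB, 46 x 2^30 bytes / 36,000,000 files
> comes to ~1372 bytes per file, hence the ~1.4KB figure.)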
>
> --
> Adam
>
> On Sat, Apr 25, 2015 at 1:03 PM, Gregory Farnum <[email protected]> wrote:
> > That's odd -- I almost want to think the pg statistics reporting is
> > going wrong somehow.
> > ...I bet the leveldb/omap stuff isn't being included in the df
> > statistics. That could be why, and it would make sense with what
> > you've got here. :)
> > -Greg
> > On Sat, Apr 25, 2015 at 10:32 AM Adam Tygart <[email protected]> wrote:
> >>
> >> cephfs (really ec84pool) is an EC pool (k=8 m=4); cachepool is a
> >> writeback cache tier in front of ec84pool. As far as I know, we've
> >> not done any strange configuration.
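> >>
> >> The tier was set up with the stock commands, essentially (a sketch
> >> from memory -- the profile name and pg counts are placeholders, not
> >> our actual values):
> >>
> >>   ceph osd erasure-code-profile set ec84profile k=8 m=4
> >>   ceph osd pool create ec84pool 1024 1024 erasure ec84profile
> >>   ceph osd pool create cachepool 1024
> >>   ceph osd tier add ec84pool cachepool
> >>   ceph osd tier cache-mode cachepool writeback
> >>   ceph osd tier set-overlay ec84pool cachepool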
> >>
> >> Potentially relevant configuration details:
> >> ceph osd crush dump >
> >> http://people.beocat.cis.ksu.edu/~mozes/ceph/crush_dump.txt
> >> ceph osd pool ls detail >
> >> http://people.beocat.cis.ksu.edu/~mozes/ceph/pool_ls_detail.txt
> >> ceph mds dump >
> >> http://people.beocat.cis.ksu.edu/~mozes/ceph/mds_dump.txt
> >> getfattr -d -m '.*' /tmp/cephfs >
> >> http://people.beocat.cis.ksu.edu/~mozes/ceph/getfattr_cephfs.txt
> >>
> >> rsync is ongoing, moving data into cephfs. It would seem the data is
> >> truly there, both metadata and file data: md5sums match for the files
> >> I've tested.
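> >>
> >> (The spot checks are just of this form, with hypothetical paths:
> >>
> >>   md5sum /old/storage/some/file /tmp/cephfs/some/file
> >>
> >> and checking that the two sums match.)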
> >> --
> >> Adam
> >>
> >> On Sat, Apr 25, 2015 at 12:16 PM, Gregory Farnum <[email protected]> wrote:
> >> > That doesn't make sense -- 50MB for 36 million files is <1.5 bytes
> >> > each. How do you have things configured, exactly?
> >> >
> >> > On Sat, Apr 25, 2015 at 9:32 AM Adam Tygart <[email protected]> wrote:
> >> >>
> >> >> We're currently putting data into our cephfs pool (cachepool in
> >> >> front of it as a caching tier), but the metadata pool contains
> >> >> ~50MB of data for 36 million files. If that 4KB-per-file estimate
> >> >> were accurate, we'd have a metadata pool closer to ~140GB
> >> >> (36 million x 4KB). Here is a ceph df detail:
> >> >>
> >> >> http://people.beocat.cis.ksu.edu/~mozes/ceph_df_detail.txt
> >> >>
> >> >> I'm not saying it won't get larger; I have no idea of the code
> >> >> behind it. This is just what it happens to be for us.
> >> >> --
> >> >> Adam
> >> >>
> >> >>
> >> >> On Sat, Apr 25, 2015 at 11:29 AM, François Lafont <[email protected]> wrote:
> >> >> > Thanks Greg and Steffen for your answers. I will run some tests.
> >> >> >
> >> >> > Gregory Farnum wrote:
> >> >> >
> >> >> >> Yeah. The metadata pool will contain:
> >> >> >> 1) MDS logs, which I think by default will take up to 200MB per
> >> >> >> logical MDS. (You should have only one logical MDS.)
> >> >> >> 2) directory metadata objects, which contain the dentries and
> >> >> >> inodes of the system; ~4KB is probably generous for each?
> >> >> >
> >> >> > So one file in the cephfs generates one inode of ~4KB in the
> >> >> > "metadata" pool, correct? So (number-of-files-in-cephfs) x 4KB
> >> >> > gives me an (approximate) estimate of the amount of data in the
> >> >> > "metadata" pool?
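> >> >> >
> >> >> > (To check my understanding with made-up numbers: 10 million files
> >> >> > would mean roughly 10,000,000 x 4KB = ~40GB in the "metadata"
> >> >> > pool?)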
> >> >> >
> >> >> >> 3) Some smaller data structures about the allocated inode range
> >> >> >> and current client sessions.
> >> >> >>
> >> >> >> The data pool contains all of the file data. Presumably this is
> >> >> >> much larger, but it will depend on your average file size and
> >> >> >> we've not done any real study of it.
> >> >> >
> >> >> > --
> >> >> > François Lafont
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
