Hey John...

First of all, thank you for the nice talks you have been giving.

Please see my feedback on your suggestions below, plus some additional questions.

    However, please note that in my example I am not only doing
    deletions but also creating and updating files, which, as far as I
    understand, should have an almost immediate effect (let us say a
    couple of seconds) on the system. This is not what I am
    experiencing: sometimes my perception is that the sizes are never
    updated until a new operation is triggered.



    I'm not sure we've really defined what is supposed to trigger
    recursive statistics (rstat) updates yet: if you're up for playing
    with this a bit more, it would be useful to check if A) unmounting a
    client or B) executing "ceph daemon mds.<id> flush journal" causes
    the stats to be immediately updated. Not suggesting that you should
    actually have to do those things, but it will give a clearer sense
    of exactly where we should be updating things more proactively.

- Remounting the filesystem seems to trigger the update of the directory size. Here is a simple example:

   1) # getfattr -d -m ceph.*
   /cephfs/objectsize4M_stripeunit512K_stripecount8/
   (...)
   ceph.dir.rbytes="549763203076"

   2) # echo "d" > /cephfs/objectsize4M_stripeunit512K_stripecount8/d.txt


   3) # getfattr -d -m ceph.*
   /cephfs/objectsize4M_stripeunit512K_stripecount8/
   ceph.dir.rbytes="549763203076"   (It was like that for several seconds)

   4) # umount /cephfs; mount -t ceph XX.XX.XX.XX:6789:/  /cephfs -o
   name=admin,secretfile=/etc/ceph/admin.secret

   5) # getfattr -d -m ceph.*
   /cephfs/objectsize4M_stripeunit512K_stripecount8/
   (...)
   ceph.dir.rbytes="549763203078"

- However, flushing the journal did not have any effect:

   1) # getfattr -d -m ceph.*
   /cephfs/objectsize4M_stripeunit512K_stripecount8/
   (...)
   ceph.dir.rbytes="549763203079"


   2) # echo "ee" > /cephfs/objectsize4M_stripeunit512K_stripecount8/ee.txt

   3) # getfattr -d -m ceph.*
   /cephfs/objectsize4M_stripeunit512K_stripecount8/
   (...)
   ceph.dir.rbytes="549763203079" (It was like that for several seconds)

   4) # ceph daemon mds.rccephmds flush journal   (run on the MDS node)
   {
        "message": "",
        "return_code": 0
   }

   5) # getfattr -d -m ceph.*
   /cephfs/objectsize4M_stripeunit512K_stripecount8/
   (...)
   ceph.dir.rbytes="549763203079"


Now my questions:

1) I performed all of this testing to understand what the minimum size (as reported by df) of a 1-character file would be, and I am still not able to find a clear answer. In a regular POSIX filesystem, the size of a 1-character (1-byte) file is constrained by the filesystem block size: a 1-character file occupies 4 KB in a filesystem configured with a 4 KB block size. In Ceph/CephFS I would expect a 1-character file to be constrained by the object size times the number of replicas. However, I was not able to make sense of the numbers I was getting, which is why I started digging into this topic. Could you also clarify this point?
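To make my expectation concrete (the 4 MB object size comes from the layout used in the tests above; the 3x replication is only an assumed example, not necessarily my pool's actual setting):

   1-byte file -> 1 RADOS object
   expected minimum footprint = object size x number of replicas
                              = 4 MB x 3 = 12 MB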

2) I have a data pool and a metadata pool. It is possible to associate a file with its object(s) in the data pool via its inode, but I have failed to find a way to associate a file with its corresponding object in the metadata pool. Is there a way to do that?
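For reference, this is roughly how I do the data pool lookup (<path-to-file>, <inode>, <inode-hex> and <data-pool> below are placeholders for my actual setup):

   # stat -c %i /cephfs/<path-to-file>        (prints <inode> in decimal)
   # printf '%x\n' <inode>                    (prints <inode-hex>)
   # rados -p <data-pool> stat <inode-hex>.00000000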

Thanks in advance.
Cheers
Goncalo
