Hey all,
It's a bit of a broken record, but we are once again trying to
kickstart CephFS development. To that end, I've created a
blueprint for the next CDS, CephFS "Forward Scrub"
(https://wiki.ceph.com/Planning/Blueprints/Submissions/CephFS%3A_Forward_Scrub)
based on discussions on the mailing list early last year (it still
seems to apply); a rough sketch of the traversal idea is below.
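
For anybody who hasn't seen the term before, here's a very rough
illustration of what "forward" scrub means: walk the namespace from
the root downward and check each inode's stored backtrace against the
path we actually reached it by. All the types and the in-memory
"store" below are stand-ins I made up for illustration, not the real
MDS structures (C++17):

#include <cstdint>
#include <deque>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// (ancestor inode, dentry name) pairs; entry ordering here is
// illustrative only.
using Backtrace = std::vector<std::pair<uint64_t, std::string>>;

struct Inode {
  bool is_dir = false;
  std::vector<std::pair<uint64_t, std::string>> children;  // (ino, name)
};

std::map<uint64_t, Inode> store;          // stand-in for the metadata pool
std::map<uint64_t, Backtrace> backtraces; // stand-in for on-disk backtraces

void forward_scrub(uint64_t root) {
  // Each work item carries the backtrace we *expect* the inode to
  // have, accumulated as we descend from the root.
  std::deque<std::pair<uint64_t, Backtrace>> work;
  work.push_back({root, {}});
  while (!work.empty()) {
    auto [ino, expected] = work.front();
    work.pop_front();
    if (backtraces[ino] != expected)
      std::cout << "backtrace mismatch on inode " << ino << "\n";
    for (auto& [child, name] : store[ino].children) {
      Backtrace bt = expected;
      bt.emplace_back(ino, name);
      work.push_back({child, bt});
    }
  }
}

int main() {
  // root(1) -> dir(2) -> file(3); corrupt the file's backtrace on purpose.
  store[1] = {true, {{2, "dir"}}};
  store[2] = {true, {{3, "file"}}};
  store[3] = {false, {}};
  backtraces[1] = {};
  backtraces[2] = {{1, "dir"}};
  backtraces[3] = {{1, "dir"}, {2, "oops"}};  // should end in {2, "file"}
  forward_scrub(1);  // prints: backtrace mismatch on inode 3
}

Other blueprints I'm mulling over: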
*) beginning to gather requirements and discuss possible
implementations of multiple FSes within a single RADOS cluster
*) changes we may want to make to backtraces (having clients write
them, optionally storing authoritative copies in the metadata pool, or
anything else of interest); there's a sketch of the client-write
option after this list
*) implementing file locking in the userspace client (interface
sketch after this list)
*) an expansion on MDS dumpability
(https://wiki.ceph.com/Planning/Sideboard/mds%3A_dumpability) which
also includes options to "kick" blocked parts of the tree
*) better methods for synchronizing CephFS clocks (for purposes of
mtime/atime/et al) across a cluster; one mitigation idea is sketched
after this list.
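
To make the backtrace bullet concrete, here's a hedged sketch of what
the "clients write them" option could look like at the RADOS level:
updating the backtrace xattr ("parent") on a file's first object
directly through librados. The pool name and inode number are made
up, and a real backtrace is an encoded inode_backtrace_t rather than
the placeholder string used here:

// build sketch: g++ backtrace_write.cc -lrados
#include <rados/librados.hpp>
#include <iostream>

int main() {
  librados::Rados cluster;
  if (cluster.init("admin") < 0 ||            // connect as client.admin
      cluster.conf_read_file(nullptr) < 0 ||  // default ceph.conf search
      cluster.connect() < 0) {
    std::cerr << "could not connect to cluster\n";
    return 1;
  }

  librados::IoCtx ioctx;
  if (cluster.ioctx_create("cephfs_data", ioctx) < 0) {  // hypothetical pool
    std::cerr << "no such pool\n";
    cluster.shutdown();
    return 1;
  }

  // A file's first object is named "<ino in hex>.00000000" and its
  // backtrace lives in the "parent" xattr. Placeholder payload only!
  librados::bufferlist bl;
  bl.append("encoded-backtrace-goes-here");
  int r = ioctx.setxattr("10000000001.00000000", "parent", bl);
  std::cout << "setxattr returned " << r << "\n";

  cluster.shutdown();
  return 0;
}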
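
On file locking, the semantics the userspace client needs to provide
are just the standard POSIX ones: fcntl(F_SETLK) with a struct flock
describing a byte range. The snippet below uses the local kernel
implementation purely to show the interface; the client would have to
track equivalent state itself and coordinate it through the MDS:

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
  int fd = open("/tmp/lock-demo", O_CREAT | O_RDWR, 0644);
  if (fd < 0) { perror("open"); return 1; }

  struct flock fl{};
  fl.l_type = F_WRLCK;     // exclusive write lock
  fl.l_whence = SEEK_SET;
  fl.l_start = 0;          // lock bytes [0, 100)
  fl.l_len = 100;

  if (fcntl(fd, F_SETLK, &fl) == -1)
    perror("fcntl(F_SETLK)"); // EAGAIN/EACCES if someone else holds it
  else
    printf("acquired write lock on bytes 0-99\n");

  fl.l_type = F_UNLCK;      // release
  fcntl(fd, F_SETLK, &fl);
  close(fd);
  return 0;
}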
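
And on clocks: one possible mitigation (my assumption, not a settled
design) is to sidestep synchronization for timestamps entirely by
keeping per-inode times monotonic, so a client with a skewed clock
can't march mtime backwards:

#include <algorithm>
#include <cstdint>
#include <iostream>

struct InodeTimes {
  uint64_t mtime_ns = 0;

  // Apply a client-proposed mtime, refusing to go backwards.
  void update_mtime(uint64_t client_ns) {
    mtime_ns = std::max(mtime_ns, client_ns);
  }
};

int main() {
  InodeTimes t;
  t.update_mtime(1000);  // client A, sane clock
  t.update_mtime(900);   // client B, skewed clock: ignored
  std::cout << t.mtime_ns << "\n";  // prints 1000
}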

I'm sure I'm forgetting some other ideas I had earlier today. If
you'd like to see one of these, or some other blueprint, on the
agenda for CephFS discussion, please let me know!
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com