There may also be more memory copying involved, rather than just passing
pointers around, but I'm not 100% sure.
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Mon, Jun 24, 2019 at 10:28 AM Jeff Layton <[email protected]>
wrote:

> On Mon, 2019-06-24 at 15:51 +0200, Hervé Ballans wrote:
> > Hi everyone,
> >
> > We successfully use Ceph here for several years now, and since recently,
> > CephFS.
> >
> > From the same CephFS server, I notice a big difference between a fuse
> > mount and a kernel mount (10 times faster for the kernel mount). It makes
> > sense to me (an additional fuse library versus direct access to a
> > device...), but recently, one of our users asked me to explain to him in
> > more detail the reason for this big difference...Hum...
> >
> > I then realized that I didn't really know how to explain the reasons to
> > him !!
> >
> > As well, does anyone have a more detailed explanation in a few words, or
> > know a good web resource on this subject? (I guess it's not specific to
> > Ceph but generic to all filesystems?..)
> >
> > Thanks in advance,
> > Hervé
> >
>
> A lot of it is the context switching.
>
> Every time you make a system call (or other activity) that accesses a
> FUSE mount, the kernel has to dispatch that request to the fuse device;
> the userland ceph-fuse daemon then has to wake up and do its thing (at
> least once) and send the result back down to the kernel, which then
> wakes up the original task so it can get the result.
>
> FUSE is a wonderful thing, but it's not really built for speed.
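
[The extra wakeups described above show up directly in the kernel's
voluntary context-switch counter. A minimal sketch (not from the thread;
the mount paths you would point it at are up to you, and caching blurs
the numbers unless you drop caches or use O_DIRECT first):

```python
import os
import resource
import tempfile

def voluntary_switches_during_reads(path, reads=1000, bufsize=4096):
    """Return the voluntary context switches accrued while issuing
    `reads` pread() calls against `path`.

    On a kernel mount each read is a single syscall; on a FUSE mount
    a cache-miss read also bounces through /dev/fuse to the userland
    daemon and back, so this counter climbs much faster there.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        before = resource.getrusage(resource.RUSAGE_SELF).ru_nvcsw
        for i in range(reads):
            os.pread(fd, bufsize, i * bufsize)
        after = resource.getrusage(resource.RUSAGE_SELF).ru_nvcsw
    finally:
        os.close(fd)
    return after - before

if __name__ == "__main__":
    # Smoke test on a local temp file; for a real comparison, point
    # `path` at the same file under your kernel and ceph-fuse mounts.
    with tempfile.NamedTemporaryFile() as f:
        f.write(b"x" * 4096 * 16)
        f.flush()
        print(voluntary_switches_during_reads(f.name, reads=16))
```

Running it against the same file under both mount points makes the
per-request round-trip cost visible without any Ceph-specific tooling.]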
>
> --
> Jeff Layton <[email protected]>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
