Re: [PATCH v5 3/3] virtiofsd: Add support for FUSE_SYNCFS request without announce_submounts

2022-02-15 Thread Vivek Goyal
On Tue, Feb 15, 2022 at 10:18:03AM +0100, Greg Kurz wrote:
> On Mon, 14 Feb 2022 14:09:47 -0500
> Vivek Goyal  wrote:
> 
> > On Mon, Feb 14, 2022 at 01:56:08PM -0500, Vivek Goyal wrote:
> > > On Mon, Feb 14, 2022 at 01:27:22PM -0500, Vivek Goyal wrote:
> > > > On Mon, Feb 14, 2022 at 02:58:20PM +0100, Greg Kurz wrote:
> > > > > This adds the missing bits to support FUSE_SYNCFS in the case submounts
> > > > > aren't announced to the client.
> > > > > 
> > > > > Iterate over all inodes and call syncfs() on the ones marked as submounts.
> > > > > Since syncfs() can block for an indefinite time, we cannot call it with
> > > > > lo->mutex held as it would prevent the server from processing other requests.
> > > > > This is thus broken down in two steps. First build a list of submounts
> > > > > with lo->mutex held, drop the mutex and finally process the list. A
> > > > > reference is taken on the inodes to ensure they don't go away when
> > > > > lo->mutex is dropped.
> > > > > 
> > > > > Signed-off-by: Greg Kurz 
> > > > > ---
> > > > >  tools/virtiofsd/passthrough_ll.c | 38 ++--
> > > > >  1 file changed, 36 insertions(+), 2 deletions(-)
> > > > > 
> > > > > diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
> > > > > index e94c4e6f8635..7ce944bfe2a0 100644
> > > > > --- a/tools/virtiofsd/passthrough_ll.c
> > > > > +++ b/tools/virtiofsd/passthrough_ll.c
> > > > > @@ -3400,8 +3400,42 @@ static void lo_syncfs(fuse_req_t req, fuse_ino_t ino)
> > > > >          err = lo_do_syncfs(lo, inode);
> > > > >          lo_inode_put(lo, &inode);
> > > > >      } else {
> > > > > -        /* Requires the server to track submounts. Not implemented yet */
> > > > > -        err = ENOSYS;
> > > > > +        g_autoptr(GSList) submount_list = NULL;
> > > > > +        GSList *elem;
> > > > > +        GHashTableIter iter;
> > > > > +        gpointer key, value;
> > > > > +
> > > > > +        pthread_mutex_lock(&lo->mutex);
> > > > > +
> > > > > +        g_hash_table_iter_init(&iter, lo->inodes);
> > > > > +        while (g_hash_table_iter_next(&iter, &key, &value)) {
> > > > 
> > > > Going through all the inodes sounds very inefficient. If there are a large
> > > > number of inodes (say 1 million or more), and frequent syncfs requests
> > > > are coming in, this can consume a lot of CPU cycles.
> > > > 
> > > > Given that C virtiofsd is slowly going away, I don't want to be too
> > > > particular about it. But I would have thought to put submount
> > > > inodes into another list or hash map (using mount id as key) and just
> > > > traverse through that list instead. Given the number of submounts should
> > > > be small, it should be pretty quick to walk through that list.
> > > > 
> > > > > +            struct lo_inode *inode = value;
> > > > > +
> > > > > +            if (inode->is_submount) {
> > > > > +                g_atomic_int_inc(&inode->refcount);
> > > > > +                submount_list = g_slist_prepend(submount_list, inode);
> > > > > +            }
> > > > > +        }
> > > > > +
> > > > > +        pthread_mutex_unlock(&lo->mutex);
> > > > > +
> > > > > +        /* The root inode is always present and not tracked in the hash table */
> > > > > +        err = lo_do_syncfs(lo, &lo->root);
> > > > > +
> > > > > +        for (elem = submount_list; elem; elem = g_slist_next(elem)) {
> > > > > +            struct lo_inode *inode = elem->data;
> > > > > +            int r;
> > > > > +
> > > > > +            r = lo_do_syncfs(lo, inode);
> > > > > +            if (r) {
> > > > > +                /*
> > > > > +                 * Try to sync as much as possible. Only one error can be
> > > > > +                 * reported to the client though, arbitrarily the last one.
> > > > > +                 */
> > > > > +                err = r;
> > > > > +            }
> > > > > +            lo_inode_put(lo, &inode);
> > > > > +        }
> > > > 
> > > > One more minor nit. What happens if virtiofsd is processing the syncfs
> > > > list and then somebody hard reboots qemu and mounts virtiofs again? That
> > > > will trigger FUSE_INIT and will call lo_destroy() first.
> > > > 
> > > > fuse_lowlevel.c
> > > > 
> > > > fuse_session_process_buf_int()
> > > > {
> > > >     fuse_log(FUSE_LOG_DEBUG, "%s: reinit\n", __func__);
> > > >     se->got_destroy = 1;
> > > >     se->got_init = 0;
> > > >     if (se->op.destroy) {
> > > >         se->op.destroy(se->userdata);
> > > >     }
> > > > }
> > > > 
> > > > IIUC, there is no synchronization with this path. If we are running with
> > > > the thread pool enabled, it could very well happen that one thread is still
> > > > doing syncfs while another thread is executing do_init(). That sounds
> > > > like a little bit of a problem. It will be good if there is a way
> > > > to either abort syncfs() or have do_destroy() wait for all the previous
> > > > syncfs() to finish.

Re: [PATCH v5 3/3] virtiofsd: Add support for FUSE_SYNCFS request without announce_submounts

2022-02-15 Thread Greg Kurz
On Mon, 14 Feb 2022 14:09:47 -0500
Vivek Goyal  wrote:

> On Mon, Feb 14, 2022 at 01:56:08PM -0500, Vivek Goyal wrote:
> > On Mon, Feb 14, 2022 at 01:27:22PM -0500, Vivek Goyal wrote:
> > > On Mon, Feb 14, 2022 at 02:58:20PM +0100, Greg Kurz wrote:
> > > > This adds the missing bits to support FUSE_SYNCFS in the case submounts
> > > > aren't announced to the client.
> > > > 
> > > > Iterate over all inodes and call syncfs() on the ones marked as submounts.
> > > > Since syncfs() can block for an indefinite time, we cannot call it with
> > > > lo->mutex held as it would prevent the server from processing other requests.
> > > > This is thus broken down in two steps. First build a list of submounts
> > > > with lo->mutex held, drop the mutex and finally process the list. A
> > > > reference is taken on the inodes to ensure they don't go away when
> > > > lo->mutex is dropped.
> > > > 
> > > > Signed-off-by: Greg Kurz 
> > > > ---
> > > >  tools/virtiofsd/passthrough_ll.c | 38 ++--
> > > >  1 file changed, 36 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
> > > > index e94c4e6f8635..7ce944bfe2a0 100644
> > > > --- a/tools/virtiofsd/passthrough_ll.c
> > > > +++ b/tools/virtiofsd/passthrough_ll.c
> > > > @@ -3400,8 +3400,42 @@ static void lo_syncfs(fuse_req_t req, fuse_ino_t ino)
> > > >          err = lo_do_syncfs(lo, inode);
> > > >          lo_inode_put(lo, &inode);
> > > >      } else {
> > > > -        /* Requires the server to track submounts. Not implemented yet */
> > > > -        err = ENOSYS;
> > > > +        g_autoptr(GSList) submount_list = NULL;
> > > > +        GSList *elem;
> > > > +        GHashTableIter iter;
> > > > +        gpointer key, value;
> > > > +
> > > > +        pthread_mutex_lock(&lo->mutex);
> > > > +
> > > > +        g_hash_table_iter_init(&iter, lo->inodes);
> > > > +        while (g_hash_table_iter_next(&iter, &key, &value)) {
> > > 
> > > Going through all the inodes sounds very inefficient. If there are a large
> > > number of inodes (say 1 million or more), and frequent syncfs requests
> > > are coming in, this can consume a lot of CPU cycles.
> > > 
> > > Given that C virtiofsd is slowly going away, I don't want to be too
> > > particular about it. But I would have thought to put submount
> > > inodes into another list or hash map (using mount id as key) and just
> > > traverse through that list instead. Given the number of submounts should
> > > be small, it should be pretty quick to walk through that list.
> > > 
> > > > +            struct lo_inode *inode = value;
> > > > +
> > > > +            if (inode->is_submount) {
> > > > +                g_atomic_int_inc(&inode->refcount);
> > > > +                submount_list = g_slist_prepend(submount_list, inode);
> > > > +            }
> > > > +        }
> > > > +
> > > > +        pthread_mutex_unlock(&lo->mutex);
> > > > +
> > > > +        /* The root inode is always present and not tracked in the hash table */
> > > > +        err = lo_do_syncfs(lo, &lo->root);
> > > > +
> > > > +        for (elem = submount_list; elem; elem = g_slist_next(elem)) {
> > > > +            struct lo_inode *inode = elem->data;
> > > > +            int r;
> > > > +
> > > > +            r = lo_do_syncfs(lo, inode);
> > > > +            if (r) {
> > > > +                /*
> > > > +                 * Try to sync as much as possible. Only one error can be
> > > > +                 * reported to the client though, arbitrarily the last one.
> > > > +                 */
> > > > +                err = r;
> > > > +            }
> > > > +            lo_inode_put(lo, &inode);
> > > > +        }
> > > 
> > > One more minor nit. What happens if virtiofsd is processing the syncfs
> > > list and then somebody hard reboots qemu and mounts virtiofs again? That
> > > will trigger FUSE_INIT and will call lo_destroy() first.
> > > 
> > > fuse_lowlevel.c
> > > 
> > > fuse_session_process_buf_int()
> > > {
> > >     fuse_log(FUSE_LOG_DEBUG, "%s: reinit\n", __func__);
> > >     se->got_destroy = 1;
> > >     se->got_init = 0;
> > >     if (se->op.destroy) {
> > >         se->op.destroy(se->userdata);
> > >     }
> > > }
> > > 
> > > IIUC, there is no synchronization with this path. If we are running with
> > > the thread pool enabled, it could very well happen that one thread is still
> > > doing syncfs while another thread is executing do_init(). That sounds
> > > like a little bit of a problem. It will be good if there is a way
> > > to either abort syncfs() or have do_destroy() wait for all the previous
> > > syncfs() to finish.
> > > 
> > > Greg, if you like, you could break down this work into two patch series.
> > > The first patch series just issues syncfs() on the inode id sent with
> > > FUSE_SYNCFS. That's an easy fix and can get merged now.
> > 
> > Actually I think even a single "syncfs" will have a synchronization issue
> > with do_init() upon hard reboot if we drop lo->mutex during syncfs().

Re: [PATCH v5 3/3] virtiofsd: Add support for FUSE_SYNCFS request without announce_submounts

2022-02-15 Thread Greg Kurz
On Mon, 14 Feb 2022 13:27:22 -0500
Vivek Goyal  wrote:

> On Mon, Feb 14, 2022 at 02:58:20PM +0100, Greg Kurz wrote:
> > This adds the missing bits to support FUSE_SYNCFS in the case submounts
> > aren't announced to the client.
> > 
> > Iterate over all inodes and call syncfs() on the ones marked as submounts.
> > Since syncfs() can block for an indefinite time, we cannot call it with
> > lo->mutex held as it would prevent the server from processing other requests.
> > This is thus broken down in two steps. First build a list of submounts
> > with lo->mutex held, drop the mutex and finally process the list. A
> > reference is taken on the inodes to ensure they don't go away when
> > lo->mutex is dropped.
> > 
> > Signed-off-by: Greg Kurz 
> > ---
> >  tools/virtiofsd/passthrough_ll.c | 38 ++--
> >  1 file changed, 36 insertions(+), 2 deletions(-)
> > 
> > diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
> > index e94c4e6f8635..7ce944bfe2a0 100644
> > --- a/tools/virtiofsd/passthrough_ll.c
> > +++ b/tools/virtiofsd/passthrough_ll.c
> > @@ -3400,8 +3400,42 @@ static void lo_syncfs(fuse_req_t req, fuse_ino_t ino)
> >          err = lo_do_syncfs(lo, inode);
> >          lo_inode_put(lo, &inode);
> >      } else {
> > -        /* Requires the server to track submounts. Not implemented yet */
> > -        err = ENOSYS;
> > +        g_autoptr(GSList) submount_list = NULL;
> > +        GSList *elem;
> > +        GHashTableIter iter;
> > +        gpointer key, value;
> > +
> > +        pthread_mutex_lock(&lo->mutex);
> > +
> > +        g_hash_table_iter_init(&iter, lo->inodes);
> > +        while (g_hash_table_iter_next(&iter, &key, &value)) {
> 
> Going through all the inodes sounds very inefficient. If there are a large
> number of inodes (say 1 million or more), and frequent syncfs requests
> are coming in, this can consume a lot of CPU cycles.
> 

Yes.

> Given that C virtiofsd is slowly going away, I don't want to be too
> particular about it. But I would have thought to put submount
> inodes into another list or hash map (using mount id as key) and just
> traverse through that list instead. Given the number of submounts should
> be small, it should be pretty quick to walk through that list.
> 

I don't think this requires a hash, but yes we could manage a list
of these so that we don't have to create the list at syncfs() time.
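
A minimal sketch of that idea, assuming struct lo_data grows a
hypothetical "GSList *submounts" field that is updated under lo->mutex
wherever submount inodes are created and forgotten (the helper names
below are illustrative, not from the patch):

/*
 * Illustrative sketch only: track submount inodes in a dedicated list
 * so that lo_syncfs() walks a short list instead of the whole inode
 * hash table. Assumes a new "GSList *submounts" field in struct
 * lo_data, protected by lo->mutex.
 */
static void lo_track_submount(struct lo_data *lo, struct lo_inode *inode)
{
    /* Called with lo->mutex held when a submount inode is created */
    lo->submounts = g_slist_prepend(lo->submounts, inode);
}

static void lo_untrack_submount(struct lo_data *lo, struct lo_inode *inode)
{
    /* Called with lo->mutex held when the inode goes away */
    lo->submounts = g_slist_remove(lo->submounts, inode);
}

static int lo_syncfs_submounts(struct lo_data *lo)
{
    g_autoptr(GSList) list = NULL;
    GSList *elem;
    int err = 0;

    /* Same two-step scheme as the patch: snapshot under the mutex... */
    pthread_mutex_lock(&lo->mutex);
    for (elem = lo->submounts; elem; elem = g_slist_next(elem)) {
        struct lo_inode *inode = elem->data;

        g_atomic_int_inc(&inode->refcount);
        list = g_slist_prepend(list, inode);
    }
    pthread_mutex_unlock(&lo->mutex);

    /* ...then call syncfs() with the mutex dropped, since it can block */
    for (elem = list; elem; elem = g_slist_next(elem)) {
        struct lo_inode *inode = elem->data;
        int r = lo_do_syncfs(lo, inode);

        if (r) {
            err = r; /* as in the patch, report the last error */
        }
        lo_inode_put(lo, &inode);
    }
    return err;
}

The syncfs() path then does work proportional to the number of
submounts rather than the number of inodes.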

> > +            struct lo_inode *inode = value;
> > +
> > +            if (inode->is_submount) {
> > +                g_atomic_int_inc(&inode->refcount);
> > +                submount_list = g_slist_prepend(submount_list, inode);
> > +            }
> > +        }
> > +
> > +        pthread_mutex_unlock(&lo->mutex);
> > +
> > +        /* The root inode is always present and not tracked in the hash table */
> > +        err = lo_do_syncfs(lo, &lo->root);
> > +
> > +        for (elem = submount_list; elem; elem = g_slist_next(elem)) {
> > +            struct lo_inode *inode = elem->data;
> > +            int r;
> > +
> > +            r = lo_do_syncfs(lo, inode);
> > +            if (r) {
> > +                /*
> > +                 * Try to sync as much as possible. Only one error can be
> > +                 * reported to the client though, arbitrarily the last one.
> > +                 */
> > +                err = r;
> > +            }
> > +            lo_inode_put(lo, &inode);
> > +        }
> 
> One more minor nit. What happens if virtiofsd is processing the syncfs
> list and then somebody hard reboots qemu and mounts virtiofs again? That
> will trigger FUSE_INIT and will call lo_destroy() first.
> 
> fuse_lowlevel.c
> 
> fuse_session_process_buf_int()
> {
>     fuse_log(FUSE_LOG_DEBUG, "%s: reinit\n", __func__);
>     se->got_destroy = 1;
>     se->got_init = 0;
>     if (se->op.destroy) {
>         se->op.destroy(se->userdata);
>     }
> }
> 
> IIUC, there is no synchronization with this path. If we are running with
> the thread pool enabled, it could very well happen that one thread is still
> doing syncfs while another thread is executing do_init(). That sounds
> like a little bit of a problem. It will be good if there is a way
> to either abort syncfs() or have do_destroy() wait for all the previous
> syncfs() to finish.
> 

Ah, this is a problem indeed, but how does it differ from any other
outstanding request? It seems that do_destroy() should drain all
of them.
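
For reference, such a drain could look like the following minimal
sketch, assuming struct lo_data grows a hypothetical in-flight request
counter and condition variable (neither is in the patch):

/*
 * Illustrative sketch only: count in-flight blocking requests such as
 * syncfs and let the destroy path wait for them to drain. The
 * "inflight" counter and "inflight_cond" condvar are hypothetical
 * additions to struct lo_data.
 */
static void lo_inflight_get(struct lo_data *lo)
{
    pthread_mutex_lock(&lo->mutex);
    lo->inflight++;
    pthread_mutex_unlock(&lo->mutex);
}

static void lo_inflight_put(struct lo_data *lo)
{
    pthread_mutex_lock(&lo->mutex);
    if (--lo->inflight == 0) {
        pthread_cond_broadcast(&lo->inflight_cond);
    }
    pthread_mutex_unlock(&lo->mutex);
}

/*
 * A handler would bracket the blocking call:
 *
 *     lo_inflight_get(lo);
 *     err = lo_do_syncfs(lo, inode);
 *     lo_inflight_put(lo);
 */
static void lo_drain_inflight(struct lo_data *lo)
{
    pthread_mutex_lock(&lo->mutex);
    while (lo->inflight > 0) {
        pthread_cond_wait(&lo->inflight_cond, &lo->mutex);
    }
    pthread_mutex_unlock(&lo->mutex);
}

Calling lo_drain_inflight() at the top of lo_destroy() would implement
the "wait for all the previous syncfs() to finish" behaviour suggested
above.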

> Greg, if you like, you could break down this work into two patch series.
> The first patch series just issues syncfs() on the inode id sent with
> FUSE_SYNCFS. That's an easy fix and can get merged now.
> 

Sure. I can quickly repost patch 1 that matches what the Rust
implementation is doing, as suggested by German.

> And the second patch series takes care of the above issues and will be a
> little bit more work.
> 

This might take some more time as I really only have very few cycles to
work on this.

Cheers,

--
Greg

> Thanks
> Vivek
> 




Re: [PATCH v5 3/3] virtiofsd: Add support for FUSE_SYNCFS request without announce_submounts

2022-02-14 Thread Vivek Goyal
On Mon, Feb 14, 2022 at 01:56:08PM -0500, Vivek Goyal wrote:
> On Mon, Feb 14, 2022 at 01:27:22PM -0500, Vivek Goyal wrote:
> > On Mon, Feb 14, 2022 at 02:58:20PM +0100, Greg Kurz wrote:
> > > This adds the missing bits to support FUSE_SYNCFS in the case submounts
> > > aren't announced to the client.
> > > 
> > > Iterate over all inodes and call syncfs() on the ones marked as submounts.
> > > Since syncfs() can block for an indefinite time, we cannot call it with
> > > lo->mutex held as it would prevent the server from processing other requests.
> > > This is thus broken down in two steps. First build a list of submounts
> > > with lo->mutex held, drop the mutex and finally process the list. A
> > > reference is taken on the inodes to ensure they don't go away when
> > > lo->mutex is dropped.
> > > 
> > > Signed-off-by: Greg Kurz 
> > > ---
> > >  tools/virtiofsd/passthrough_ll.c | 38 ++--
> > >  1 file changed, 36 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
> > > index e94c4e6f8635..7ce944bfe2a0 100644
> > > --- a/tools/virtiofsd/passthrough_ll.c
> > > +++ b/tools/virtiofsd/passthrough_ll.c
> > > @@ -3400,8 +3400,42 @@ static void lo_syncfs(fuse_req_t req, fuse_ino_t ino)
> > >          err = lo_do_syncfs(lo, inode);
> > >          lo_inode_put(lo, &inode);
> > >      } else {
> > > -        /* Requires the server to track submounts. Not implemented yet */
> > > -        err = ENOSYS;
> > > +        g_autoptr(GSList) submount_list = NULL;
> > > +        GSList *elem;
> > > +        GHashTableIter iter;
> > > +        gpointer key, value;
> > > +
> > > +        pthread_mutex_lock(&lo->mutex);
> > > +
> > > +        g_hash_table_iter_init(&iter, lo->inodes);
> > > +        while (g_hash_table_iter_next(&iter, &key, &value)) {
> > 
> > Going through all the inodes sounds very inefficient. If there are a large
> > number of inodes (say 1 million or more), and frequent syncfs requests
> > are coming in, this can consume a lot of CPU cycles.
> > 
> > Given that C virtiofsd is slowly going away, I don't want to be too
> > particular about it. But I would have thought to put submount
> > inodes into another list or hash map (using mount id as key) and just
> > traverse through that list instead. Given the number of submounts should
> > be small, it should be pretty quick to walk through that list.
> > 
> > > +            struct lo_inode *inode = value;
> > > +
> > > +            if (inode->is_submount) {
> > > +                g_atomic_int_inc(&inode->refcount);
> > > +                submount_list = g_slist_prepend(submount_list, inode);
> > > +            }
> > > +        }
> > > +
> > > +        pthread_mutex_unlock(&lo->mutex);
> > > +
> > > +        /* The root inode is always present and not tracked in the hash table */
> > > +        err = lo_do_syncfs(lo, &lo->root);
> > > +
> > > +        for (elem = submount_list; elem; elem = g_slist_next(elem)) {
> > > +            struct lo_inode *inode = elem->data;
> > > +            int r;
> > > +
> > > +            r = lo_do_syncfs(lo, inode);
> > > +            if (r) {
> > > +                /*
> > > +                 * Try to sync as much as possible. Only one error can be
> > > +                 * reported to the client though, arbitrarily the last one.
> > > +                 */
> > > +                err = r;
> > > +            }
> > > +            lo_inode_put(lo, &inode);
> > > +        }
> > 
> > One more minor nit. What happens if virtiofsd is processing the syncfs
> > list and then somebody hard reboots qemu and mounts virtiofs again? That
> > will trigger FUSE_INIT and will call lo_destroy() first.
> > 
> > fuse_lowlevel.c
> > 
> > fuse_session_process_buf_int()
> > {
> >     fuse_log(FUSE_LOG_DEBUG, "%s: reinit\n", __func__);
> >     se->got_destroy = 1;
> >     se->got_init = 0;
> >     if (se->op.destroy) {
> >         se->op.destroy(se->userdata);
> >     }
> > }
> > 
> > IIUC, there is no synchronization with this path. If we are running with
> > the thread pool enabled, it could very well happen that one thread is still
> > doing syncfs while another thread is executing do_init(). That sounds
> > like a little bit of a problem. It will be good if there is a way
> > to either abort syncfs() or have do_destroy() wait for all the previous
> > syncfs() to finish.
> > 
> > Greg, if you like, you could break down this work into two patch series.
> > The first patch series just issues syncfs() on the inode id sent with
> > FUSE_SYNCFS. That's an easy fix and can get merged now.
> 
> Actually I think even a single "syncfs" will have a synchronization issue
> with do_init() upon hard reboot if we drop lo->mutex during syncfs().

Actually, we have similar issues with ->fsync(). We take lo->mutex,
and then take a reference on the inode. Call fsync() on this. Now it is
possible that the guest hard reboots, triggers FUSE_INIT, and lo_destroy()
is called. It will take lo->mutex and 

Re: [PATCH v5 3/3] virtiofsd: Add support for FUSE_SYNCFS request without announce_submounts

2022-02-14 Thread Vivek Goyal
On Mon, Feb 14, 2022 at 01:27:22PM -0500, Vivek Goyal wrote:
> On Mon, Feb 14, 2022 at 02:58:20PM +0100, Greg Kurz wrote:
> > This adds the missing bits to support FUSE_SYNCFS in the case submounts
> > aren't announced to the client.
> > 
> > Iterate over all inodes and call syncfs() on the ones marked as submounts.
> > Since syncfs() can block for an indefinite time, we cannot call it with
> > lo->mutex held as it would prevent the server from processing other requests.
> > This is thus broken down in two steps. First build a list of submounts
> > with lo->mutex held, drop the mutex and finally process the list. A
> > reference is taken on the inodes to ensure they don't go away when
> > lo->mutex is dropped.
> > 
> > Signed-off-by: Greg Kurz 
> > ---
> >  tools/virtiofsd/passthrough_ll.c | 38 ++--
> >  1 file changed, 36 insertions(+), 2 deletions(-)
> > 
> > diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
> > index e94c4e6f8635..7ce944bfe2a0 100644
> > --- a/tools/virtiofsd/passthrough_ll.c
> > +++ b/tools/virtiofsd/passthrough_ll.c
> > @@ -3400,8 +3400,42 @@ static void lo_syncfs(fuse_req_t req, fuse_ino_t ino)
> >          err = lo_do_syncfs(lo, inode);
> >          lo_inode_put(lo, &inode);
> >      } else {
> > -        /* Requires the server to track submounts. Not implemented yet */
> > -        err = ENOSYS;
> > +        g_autoptr(GSList) submount_list = NULL;
> > +        GSList *elem;
> > +        GHashTableIter iter;
> > +        gpointer key, value;
> > +
> > +        pthread_mutex_lock(&lo->mutex);
> > +
> > +        g_hash_table_iter_init(&iter, lo->inodes);
> > +        while (g_hash_table_iter_next(&iter, &key, &value)) {
> 
> Going through all the inodes sounds very inefficient. If there are a large
> number of inodes (say 1 million or more), and frequent syncfs requests
> are coming in, this can consume a lot of CPU cycles.
> 
> Given that C virtiofsd is slowly going away, I don't want to be too
> particular about it. But I would have thought to put submount
> inodes into another list or hash map (using mount id as key) and just
> traverse through that list instead. Given the number of submounts should
> be small, it should be pretty quick to walk through that list.
> 
> > +            struct lo_inode *inode = value;
> > +
> > +            if (inode->is_submount) {
> > +                g_atomic_int_inc(&inode->refcount);
> > +                submount_list = g_slist_prepend(submount_list, inode);
> > +            }
> > +        }
> > +
> > +        pthread_mutex_unlock(&lo->mutex);
> > +
> > +        /* The root inode is always present and not tracked in the hash table */
> > +        err = lo_do_syncfs(lo, &lo->root);
> > +
> > +        for (elem = submount_list; elem; elem = g_slist_next(elem)) {
> > +            struct lo_inode *inode = elem->data;
> > +            int r;
> > +
> > +            r = lo_do_syncfs(lo, inode);
> > +            if (r) {
> > +                /*
> > +                 * Try to sync as much as possible. Only one error can be
> > +                 * reported to the client though, arbitrarily the last one.
> > +                 */
> > +                err = r;
> > +            }
> > +            lo_inode_put(lo, &inode);
> > +        }
> 
> One more minor nit. What happens if virtiofsd is processing the syncfs
> list and then somebody hard reboots qemu and mounts virtiofs again? That
> will trigger FUSE_INIT and will call lo_destroy() first.
> 
> fuse_lowlevel.c
> 
> fuse_session_process_buf_int()
> {
>     fuse_log(FUSE_LOG_DEBUG, "%s: reinit\n", __func__);
>     se->got_destroy = 1;
>     se->got_init = 0;
>     if (se->op.destroy) {
>         se->op.destroy(se->userdata);
>     }
> }
> 
> IIUC, there is no synchronization with this path. If we are running with
> the thread pool enabled, it could very well happen that one thread is still
> doing syncfs while another thread is executing do_init(). That sounds
> like a little bit of a problem. It will be good if there is a way
> to either abort syncfs() or have do_destroy() wait for all the previous
> syncfs() to finish.
> 
> Greg, if you like, you could break down this work into two patch series.
> The first patch series just issues syncfs() on the inode id sent with
> FUSE_SYNCFS. That's an easy fix and can get merged now.

Actually I think even a single "syncfs" will have a synchronization issue
with do_init() upon hard reboot if we drop lo->mutex during syncfs().

Vivek

> 
> And the second patch series takes care of the above issues and will be a
> little bit more work.
> 
> Thanks
> Vivek




Re: [PATCH v5 3/3] virtiofsd: Add support for FUSE_SYNCFS request without announce_submounts

2022-02-14 Thread Vivek Goyal
On Mon, Feb 14, 2022 at 02:58:20PM +0100, Greg Kurz wrote:
> This adds the missing bits to support FUSE_SYNCFS in the case submounts
> aren't announced to the client.
> 
> Iterate over all inodes and call syncfs() on the ones marked as submounts.
> Since syncfs() can block for an indefinite time, we cannot call it with
> lo->mutex held as it would prevent the server from processing other requests.
> This is thus broken down in two steps. First build a list of submounts
> with lo->mutex held, drop the mutex and finally process the list. A
> reference is taken on the inodes to ensure they don't go away when
> lo->mutex is dropped.
> 
> Signed-off-by: Greg Kurz 
> ---
>  tools/virtiofsd/passthrough_ll.c | 38 ++--
>  1 file changed, 36 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
> index e94c4e6f8635..7ce944bfe2a0 100644
> --- a/tools/virtiofsd/passthrough_ll.c
> +++ b/tools/virtiofsd/passthrough_ll.c
> @@ -3400,8 +3400,42 @@ static void lo_syncfs(fuse_req_t req, fuse_ino_t ino)
>          err = lo_do_syncfs(lo, inode);
>          lo_inode_put(lo, &inode);
>      } else {
> -        /* Requires the server to track submounts. Not implemented yet */
> -        err = ENOSYS;
> +        g_autoptr(GSList) submount_list = NULL;
> +        GSList *elem;
> +        GHashTableIter iter;
> +        gpointer key, value;
> +
> +        pthread_mutex_lock(&lo->mutex);
> +
> +        g_hash_table_iter_init(&iter, lo->inodes);
> +        while (g_hash_table_iter_next(&iter, &key, &value)) {

Going through all the inodes sounds very inefficient. If there are a large
number of inodes (say 1 million or more), and frequent syncfs requests
are coming in, this can consume a lot of CPU cycles.

Given that C virtiofsd is slowly going away, I don't want to be too
particular about it. But I would have thought to put submount
inodes into another list or hash map (using mount id as key) and just
traverse through that list instead. Given the number of submounts should
be small, it should be pretty quick to walk through that list.

> +            struct lo_inode *inode = value;
> +
> +            if (inode->is_submount) {
> +                g_atomic_int_inc(&inode->refcount);
> +                submount_list = g_slist_prepend(submount_list, inode);
> +            }
> +        }
> +
> +        pthread_mutex_unlock(&lo->mutex);
> +
> +        /* The root inode is always present and not tracked in the hash table */
> +        err = lo_do_syncfs(lo, &lo->root);
> +
> +        for (elem = submount_list; elem; elem = g_slist_next(elem)) {
> +            struct lo_inode *inode = elem->data;
> +            int r;
> +
> +            r = lo_do_syncfs(lo, inode);
> +            if (r) {
> +                /*
> +                 * Try to sync as much as possible. Only one error can be
> +                 * reported to the client though, arbitrarily the last one.
> +                 */
> +                err = r;
> +            }
> +            lo_inode_put(lo, &inode);
> +        }

One more minor nit. What happens if virtiofsd is processing the syncfs
list and then somebody hard reboots qemu and mounts virtiofs again? That
will trigger FUSE_INIT and will call lo_destroy() first.

fuse_lowlevel.c

fuse_session_process_buf_int()
{
    fuse_log(FUSE_LOG_DEBUG, "%s: reinit\n", __func__);
    se->got_destroy = 1;
    se->got_init = 0;
    if (se->op.destroy) {
        se->op.destroy(se->userdata);
    }
}

IIUC, there is no synchronization with this path. If we are running with
the thread pool enabled, it could very well happen that one thread is still
doing syncfs while another thread is executing do_init(). That sounds
like a little bit of a problem. It will be good if there is a way
to either abort syncfs() or have do_destroy() wait for all the previous
syncfs() to finish.

Greg, if you like, you could break down this work into two patch series.
The first patch series just issues syncfs() on the inode id sent with
FUSE_SYNCFS. That's an easy fix and can get merged now.

And the second patch series takes care of the above issues and will be a
little bit more work.

Thanks
Vivek




[PATCH v5 3/3] virtiofsd: Add support for FUSE_SYNCFS request without announce_submounts

2022-02-14 Thread Greg Kurz
This adds the missing bits to support FUSE_SYNCFS in the case submounts
aren't announced to the client.

Iterate over all inodes and call syncfs() on the ones marked as submounts.
Since syncfs() can block for an indefinite time, we cannot call it with
lo->mutex held as it would prevent the server from processing other requests.
This is thus broken down in two steps. First build a list of submounts
with lo->mutex held, drop the mutex and finally process the list. A
reference is taken on the inodes to ensure they don't go away when
lo->mutex is dropped.

Signed-off-by: Greg Kurz 
---
 tools/virtiofsd/passthrough_ll.c | 38 ++--
 1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
index e94c4e6f8635..7ce944bfe2a0 100644
--- a/tools/virtiofsd/passthrough_ll.c
+++ b/tools/virtiofsd/passthrough_ll.c
@@ -3400,8 +3400,42 @@ static void lo_syncfs(fuse_req_t req, fuse_ino_t ino)
         err = lo_do_syncfs(lo, inode);
         lo_inode_put(lo, &inode);
     } else {
-        /* Requires the server to track submounts. Not implemented yet */
-        err = ENOSYS;
+        g_autoptr(GSList) submount_list = NULL;
+        GSList *elem;
+        GHashTableIter iter;
+        gpointer key, value;
+
+        pthread_mutex_lock(&lo->mutex);
+
+        g_hash_table_iter_init(&iter, lo->inodes);
+        while (g_hash_table_iter_next(&iter, &key, &value)) {
+            struct lo_inode *inode = value;
+
+            if (inode->is_submount) {
+                g_atomic_int_inc(&inode->refcount);
+                submount_list = g_slist_prepend(submount_list, inode);
+            }
+        }
+
+        pthread_mutex_unlock(&lo->mutex);
+
+        /* The root inode is always present and not tracked in the hash table */
+        err = lo_do_syncfs(lo, &lo->root);
+
+        for (elem = submount_list; elem; elem = g_slist_next(elem)) {
+            struct lo_inode *inode = elem->data;
+            int r;
+
+            r = lo_do_syncfs(lo, inode);
+            if (r) {
+                /*
+                 * Try to sync as much as possible. Only one error can be
+                 * reported to the client though, arbitrarily the last one.
+                 */
+                err = r;
+            }
+            lo_inode_put(lo, &inode);
+        }
     }
 
     fuse_reply_err(req, err);
-- 
2.34.1