[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_RGW - Add back a close() op
From Daniel Gryniewicz:

Daniel Gryniewicz has uploaded a new change for review.

  https://review.gerrithub.io/294713

Change subject: FSAL_RGW - Add back a close() op
......................................................................

FSAL_RGW - Add back a close() op

Even support_ex() FSALs need the base close() op at this point.
Add it back in for FSAL_RGW.

Change-Id: If2229de97e7ad850461f953c21e10ac355342bd0
Signed-off-by: Daniel Gryniewicz
---
M src/FSAL/FSAL_RGW/handle.c
1 file changed, 17 insertions(+), 0 deletions(-)

  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/13/294713/1

--
To view, visit https://review.gerrithub.io/294713
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: If2229de97e7ad850461f953c21e10ac355342bd0
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Daniel Gryniewicz

___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Fix fileid, fsid and type decoding in NFS protocol
From Patrice LUCAS:

Patrice LUCAS has uploaded a new change for review.

  https://review.gerrithub.io/294698

Change subject: Fix fileid, fsid and type decoding in NFS protocol
......................................................................

Fix fileid, fsid and type decoding in NFS protocol

FSAL_PROXY tests showed null inode numbers. The problem came from XDR
decoding in the Fattr4_To_FSAL_attr function. The xdr_attrs_args
structure contains fileid both directly and inside attrs.
Fattr4_To_FSAL_attr expected an update of attrs, whereas XDR decoding
was updating the direct fields of xdr_attrs_args.

Change-Id: I4d21d7f584c3346efdf09a0c9498346b62b48ef0
Signed-off-by: Patrice LUCAS
---
M src/Protocols/NFS/nfs_proto_tools.c
1 file changed, 8 insertions(+), 0 deletions(-)

  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/98/294698/1

--
To view, visit https://review.gerrithub.io/294698
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I4d21d7f584c3346efdf09a0c9498346b62b48ef0
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Patrice LUCAS
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER: Use chk_verifier_stat for verifier comparison ...
From Soumya:

Soumya has uploaded a new change for review.

  https://review.gerrithub.io/294679

Change subject: FSAL_GLUSTER: Use chk_verifier_stat for verifier comparison in open2()
......................................................................

FSAL_GLUSTER: Use chk_verifier_stat for verifier comparison in open2()

In the exclusive create case, use chk_verifier_stat for the verifier
comparison instead, as the attributes are already available.

Change-Id: Id495ae1a978b4e2b84f8c7c6201a4b0f3ab8316f
Signed-off-by: Soumya Koduri
---
M src/FSAL/FSAL_GLUSTER/handle.c
1 file changed, 14 insertions(+), 12 deletions(-)

  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/79/294679/1

--
To view, visit https://review.gerrithub.io/294679
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Id495ae1a978b4e2b84f8c7c6201a4b0f3ab8316f
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Soumya
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER: avoid fetching attributes in setattr2
From Soumya:

Soumya has uploaded a new change for review.

  https://review.gerrithub.io/294677

Change subject: FSAL_GLUSTER: avoid fetching attributes in setattr2
......................................................................

FSAL_GLUSTER: avoid fetching attributes in setattr2

The attributes read after a successful setattr are currently not
consumed by md-cache, which calls getattrs again to refresh the
object's attributes. Hence avoid that read in the FSAL setattr.

Change-Id: If984a8ed4d8437b4e3d0dd0425d4588b8d59a309
Signed-off-by: Soumya Koduri
---
M src/FSAL/FSAL_GLUSTER/handle.c
1 file changed, 0 insertions(+), 7 deletions(-)

  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/77/294677/1

--
To view, visit https://review.gerrithub.io/294677
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: If984a8ed4d8437b4e3d0dd0425d4588b8d59a309
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Soumya
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER: Avoid redundant fsync operation
From Soumya:

Soumya has uploaded a new change for review.

  https://review.gerrithub.io/294676

Change subject: FSAL_GLUSTER: Avoid redundant fsync operation
......................................................................

FSAL_GLUSTER: Avoid redundant fsync operation

Currently we do an fsync operation after a write if fsal_stable is set
to the FSAL_O_SYNC flag. But this is not needed, as the glusterfs write
operation can accept the O_SYNC flag, which guarantees synchronous
writes.

Change-Id: I6a1eb4e9ca97043e969ec63c7e784005fc2bfda2
Signed-off-by: Soumya Koduri
---
M src/FSAL/FSAL_GLUSTER/handle.c
1 file changed, 0 insertions(+), 9 deletions(-)

  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/76/294676/1

--
To view, visit https://review.gerrithub.io/294676
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I6a1eb4e9ca97043e969ec63c7e784005fc2bfda2
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Soumya
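For context, the POSIX semantics the patch relies on can be sketched as follows. This is a minimal, hypothetical example (not Ganesha or FSAL_GLUSTER code): opening a file with O_SYNC makes each write(2) synchronous, so a separate fsync(2) after the write adds nothing for the durability of that write.

```c
/* Hypothetical helper illustrating why the patch can drop the fsync:
 * with O_SYNC, write(2) does not return until the data and the metadata
 * needed to retrieve it are on stable storage. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write `data` durably to `path` using O_SYNC; returns 0 on success.
 * No fsync() is needed: O_SYNC already guarantees stability. */
int write_durably(const char *path, const char *data)
{
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC | O_SYNC, 0600);
    if (fd < 0)
        return -1;

    ssize_t len = (ssize_t)strlen(data);
    ssize_t n = write(fd, data, len);   /* synchronous: durable on return */

    close(fd);
    return (n == len) ? 0 : -1;
}
```

The same reasoning applies one layer down: if the glusterfs write path honors O_SYNC, a follow-up fsync from the FSAL is pure overhead.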
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER: Clear ATTR_RDATTR_ERR mask bit in case of succ...
From Soumya:

Soumya has uploaded a new change for review.

  https://review.gerrithub.io/294678

Change subject: FSAL_GLUSTER: Clear ATTR_RDATTR_ERR mask bit in case of successful read
......................................................................

FSAL_GLUSTER: Clear ATTR_RDATTR_ERR mask bit in case of successful read

If the attributes of an entry are fetched successfully, we need to
clear the 'ATTR_RDATTR_ERR' mask bit from them. This was missing in a
few places, resulting in md-cache trying to re-invoke getattrs. This
patch fixes that.

Change-Id: Ib0459d21c9d19dbac34f70d235f7461d7dcd7a34
Signed-off-by: Soumya Koduri
---
M src/FSAL/FSAL_GLUSTER/handle.c
1 file changed, 4 insertions(+), 0 deletions(-)

  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/78/294678/1

--
To view, visit https://review.gerrithub.io/294678
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib0459d21c9d19dbac34f70d235f7461d7dcd7a34
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Soumya
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Fix readdir FSID/FILEID all the same
From Daniel Gryniewicz:

Daniel Gryniewicz has uploaded a new change for review.

  https://review.gerrithub.io/294680

Change subject: Fix readdir FSID/FILEID all the same
......................................................................

Fix readdir FSID/FILEID all the same

Set the FSID/FILEID to the found object, not the current object.

Fix found by pl...@blackmilk.fr

Change-Id: I3d6bc48bb0d80b12d0b7de86f2f84bdbf7428c60
Signed-off-by: Daniel Gryniewicz
---
M src/Protocols/NFS/nfs4_op_readdir.c
1 file changed, 2 insertions(+), 2 deletions(-)

  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/80/294680/1

--
To view, visit https://review.gerrithub.io/294680
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I3d6bc48bb0d80b12d0b7de86f2f84bdbf7428c60
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Daniel Gryniewicz
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER : spec file changes for FSAL_GLUSTER
From Jiffin Tony Thottan:

Jiffin Tony Thottan has uploaded a new change for review.

  https://review.gerrithub.io/294627

Change subject: FSAL_GLUSTER : spec file changes for FSAL_GLUSTER
......................................................................

FSAL_GLUSTER : spec file changes for FSAL_GLUSTER

NFS-Ganesha 2.4 has a dependency on GlusterFS 3.8. Update the spec file
to reflect that change.

Change-Id: Ibf2a60975e97204f31c7c27c154f1ab382e3b3f0
Signed-off-by: Jiffin Tony Thottan
---
M src/nfs-ganesha.spec-in.cmake
1 file changed, 1 insertion(+), 1 deletion(-)

  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/27/294627/1

--
To view, visit https://review.gerrithub.io/294627
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ibf2a60975e97204f31c7c27c154f1ab382e3b3f0
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Jiffin Tony Thottan
Re: [Nfs-ganesha-devel] Threading models and the thread fridge
On 9/14/16 1:03 PM, Daniel Gryniewicz wrote:
> Bill has some ideas and some outstanding (although outdated) work on
> consolidating and cleaning up the fridge. Bill, can you post a short
> description of your ideas on that?

Over a year ago, I developed a partial replacement for the thread fridge,
written at the same time as Dan was working on the cache. This was my part
of "napalming the forest". After discussion, this group decided that the
cache was more important for immediate performance improvements, that the
thread fridge re-write was too disruptive, and that we'd do the threads in
the following release. At the time, we thought this would be December.

Instead, I was tasked with limiting my effort to RDMA, which was integrated
separately with its own thread pool in ntirpc. A small amount of my patch
removing some excess locking in the dispatch loop was integrated, too.

I've re-written my code a second time to do only UDP, moving the
UDP-related threads into ntirpc using the same pool as RDMA; they are very
similar control structures. Then I was directed to discard that effort and
started a third time, to include both UDP and TCP in parallel, but have
never finished.

I still believe that the fridge and its locking and control structures are
a significant performance barrier. But let's continue getting a stable
release out. Then I'll re-write yet again. That will take 2-3 weeks of
intensive effort.

My plan was that I could have something ready to demonstrate at the Fall
Bake-a-thon. That was premised on the release having been done by now. I'm
no longer in the Storage group, spending most of my time on Open Standards
efforts, Red Hat infrastructure, and related budgets. (Sadly, lots and
lots of my time on budgets.)

> On 09/14/2016 12:42 PM, Frank Filz wrote:
>> We have a number of uses of a "worker thread" model where a pool of
>> threads looks on a queue for work to do. One question is whether every
>> one of these uses needs its own queue. The nfs_worker_thread threads
>> probably do benefit from being their own pool and queue, but do all the
>> other uses?

This is the part that I've redone.

>> Then there are some threads that wake up periodically to do work that
>> is not exactly queued (the reaper thread, for example). It's not clear
>> if these threads avoid any wakeup at all if there is not any work to be
>> done.

Not currently. A better design would be to have a task that would be
placed on a general thread as needed. That's a major redesign of Ganesha
itself, and I've not touched that code.

>> Then there are some threads that live to sit blocked on some system
>> call to wait for work to do. These threads particularly don't make
>> sense to be in a thread pool unless they block intermittently (for
>> example, the proposed threads to actually block on locks).

This is also a current problem for TCP. Each connection has its own output
thread in ntirpc that sits and waits for the system. My design
consolidates all those threads into a single TCP output thread, so that
all the system calls (and the following memory cleanup) have less
contention. I'm of the opinion that this contention causes considerable
performance problems, but have no measurements to support that opinion
(yet). We did put in some hooks for before-and-after measurements.

The ideal design would be a thread per physical interface. This would also
assist thread and processor locality, as some system designs have
processors that are "closer" to a particular interface. But until we
integrate something like DPDK, we have no method of determining how many
(non-RDMA) interfaces exist, and which TCP or UDP connections are
associated with which interface. (RDMA knows about its interfaces, as we
talk to them directly.)

>> For shutdown purposes, we need to examine whether any of these threads
>> is not able to be cancelled, and also find the best way to cancel each
>> one (for example, a thread blocking on an fcntl F_SETLKW can be
>> interrupted with a signal).
For RDMA, my design is that each system call is its own tasklet, so that
there are no threads waiting for interfaces. I'd hoped to split up the TCP
(and UDP) threads into separate sub-tasks in the same fashion, but have
never gotten that far. I agree that we need to think about shutdown, and
have not put enough effort into it.

>> I also wonder if the multi-purpose threads have too many locks. It
>> looks like fridge has two separate mutexes.

This was one of the things that I'd found, too.

>> Typically, when I implement a producer/consumer queue from scratch, I
>> protect the queue with the same mutex as is paired with the condition
>> variable used to signal the worker thread(s). Cancelling the thread can
>> be accomplished either with a separate "cancel" flag (also protected by
>> the mutex) or with a special work item (perhaps put at the head of the
>> queue instead of the tail, depending on whether you want to drain the
>> queue before shutdown or not).

I did implement from
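The producer/consumer shape Frank describes can be sketched as follows. This is a minimal, hypothetical implementation (not Ganesha's actual fridge code): one mutex guards both the queue and the shutdown flag, and that same mutex is the one paired with the condition variable that wakes the worker(s). This variant drains the queue before the worker exits.

```c
/* One mutex protects the list AND the cancel flag; the condvar is paired
 * with that same mutex, exactly as described above. */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct work_item {
    struct work_item *next;
    void (*fn)(void *);
    void *arg;
};

struct work_queue {
    pthread_mutex_t mtx;      /* guards head, tail AND shutting_down */
    pthread_cond_t cv;        /* paired with mtx */
    struct work_item *head, *tail;
    bool shutting_down;       /* the separate "cancel" flag variant */
};

void queue_init(struct work_queue *q)
{
    pthread_mutex_init(&q->mtx, NULL);
    pthread_cond_init(&q->cv, NULL);
    q->head = q->tail = NULL;
    q->shutting_down = false;
}

/* Producer: append and signal under the one mutex. */
int queue_push(struct work_queue *q, void (*fn)(void *), void *arg)
{
    struct work_item *it = malloc(sizeof(*it));
    if (!it)
        return -1;
    it->next = NULL; it->fn = fn; it->arg = arg;

    pthread_mutex_lock(&q->mtx);
    if (q->tail)
        q->tail->next = it;
    else
        q->head = it;
    q->tail = it;
    pthread_cond_signal(&q->cv);
    pthread_mutex_unlock(&q->mtx);
    return 0;
}

/* Ask workers to exit once the queue is drained. */
void queue_shutdown(struct work_queue *q)
{
    pthread_mutex_lock(&q->mtx);
    q->shutting_down = true;
    pthread_cond_broadcast(&q->cv);
    pthread_mutex_unlock(&q->mtx);
}

/* Worker loop: run items, exit only when cancelled AND drained. */
void *queue_worker(void *qp)
{
    struct work_queue *q = qp;

    pthread_mutex_lock(&q->mtx);
    for (;;) {
        while (!q->head && !q->shutting_down)
            pthread_cond_wait(&q->cv, &q->mtx);   /* one mutex, one cv */

        if (!q->head)                             /* shutting down, drained */
            break;

        struct work_item *it = q->head;
        q->head = it->next;
        if (!q->head)
            q->tail = NULL;

        pthread_mutex_unlock(&q->mtx);
        it->fn(it->arg);                          /* run work unlocked */
        free(it);
        pthread_mutex_lock(&q->mtx);
    }
    pthread_mutex_unlock(&q->mtx);
    return NULL;
}

/* Tiny helper for demonstrations. */
void bump_int(void *p) { ++*(int *)p; }
```

The alternative Frank mentions, a special work item at the head of the queue, trades the flag for a sentinel: workers exit when they dequeue it, skipping the drain.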