Re: [Nfs-ganesha-devel] XID missing in error path for RPC AUTH failure.
That sounds right. I'm uncertain whether this has regressed in the text, or maybe in the likelihood of inlining in the new dispatch model. Bill?

Matt

On Wed, Dec 13, 2017 at 9:38 AM, Pradeep wrote:
> Hello,
>
> When using krb5 exports, I noticed that TIRPC does not send the XID in the
> response - see xdr_reply_encode() for the MSG_DENIED case. It looks like
> Linux clients can't decode the message and go into an infinite loop
> retrying the same NFS operation. I tried adding the XID back (as is done
> for the normal case) and it seems to have fixed the problem. Is this the
> right thing to do?
>
> diff --git a/src/rpc_dplx_msg.c b/src/rpc_dplx_msg.c
> index 01e5a5c..a585e8a 100644
> --- a/src/rpc_dplx_msg.c
> +++ b/src/rpc_dplx_msg.c
> @@ -194,9 +194,12 @@ xdr_reply_encode(XDR *xdrs, struct rpc_msg *dmsg)
>  			__warnx(TIRPC_DEBUG_FLAG_RPC_MSG,
>  				"%s:%u DENIED AUTH",
>  				__func__, __LINE__);
> -		buf = XDR_INLINE(xdrs, 2 * BYTES_PER_XDR_UNIT);
> +		buf = XDR_INLINE(xdrs, 5 * BYTES_PER_XDR_UNIT);
>
>  		if (buf != NULL) {
> +			IXDR_PUT_INT32(buf, dmsg->rm_xid);
> +			IXDR_PUT_ENUM(buf, dmsg->rm_direction);
> +			IXDR_PUT_ENUM(buf, dmsg->rm_reply.rp_stat);
>  			IXDR_PUT_ENUM(buf, rr->rj_stat);
>  			IXDR_PUT_ENUM(buf, rr->rj_why);
>  		} else if (!xdr_putenum(xdrs, rr->rj_stat)) {

--
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel. 734-821-5101
fax. 734-769-8938
cel. 734-216-5309

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
[Nfs-ganesha-devel] XID missing in error path for RPC AUTH failure.
Hello,

When using krb5 exports, I noticed that TIRPC does not send the XID in the response - see xdr_reply_encode() for the MSG_DENIED case. It looks like Linux clients can't decode the message and go into an infinite loop retrying the same NFS operation. I tried adding the XID back (as is done for the normal case) and it seems to have fixed the problem. Is this the right thing to do?

diff --git a/src/rpc_dplx_msg.c b/src/rpc_dplx_msg.c
index 01e5a5c..a585e8a 100644
--- a/src/rpc_dplx_msg.c
+++ b/src/rpc_dplx_msg.c
@@ -194,9 +194,12 @@ xdr_reply_encode(XDR *xdrs, struct rpc_msg *dmsg)
 			__warnx(TIRPC_DEBUG_FLAG_RPC_MSG,
 				"%s:%u DENIED AUTH",
 				__func__, __LINE__);
-		buf = XDR_INLINE(xdrs, 2 * BYTES_PER_XDR_UNIT);
+		buf = XDR_INLINE(xdrs, 5 * BYTES_PER_XDR_UNIT);

 		if (buf != NULL) {
+			IXDR_PUT_INT32(buf, dmsg->rm_xid);
+			IXDR_PUT_ENUM(buf, dmsg->rm_direction);
+			IXDR_PUT_ENUM(buf, dmsg->rm_reply.rp_stat);
 			IXDR_PUT_ENUM(buf, rr->rj_stat);
 			IXDR_PUT_ENUM(buf, rr->rj_why);
 		} else if (!xdr_putenum(xdrs, rr->rj_stat)) {
Re: [Nfs-ganesha-devel] dev.20 segfault on shutdown
> I was testing code I'd written over the weekend, but it segfaulted on
> shutdown after running pynfs (pynfs itself was successful). No problems
> simply starting and pkilling without doing any work.
>
> Gradually backed things out, until I'm at the 1a75e52 V2.6-dev.20, but
> still seeing the problem on shutdown. Ran it twice to be sure. Took quite
> a bit of time to run pynfs over and over.

Ok, so I've fixed the crash, but looking at some debug output, the reason we are getting to where it could crash is that we are leaking export references.

I'm doing some code examination and finding export and obj_handle reference leaks... So far they are all in NFS v4. I hope to post some patches early tomorrow.

It would really help if things that expected everything to clean up actually checked that everything was cleaned up... destroy_fsals should never find any exports to call shutdown_export on.

Frank

> Error: couldn't complete write to the log file
> /home/bill/rdma/install/var/log/ganesha.log status=9 (Bad file descriptor)
> message was:
> 11/12/2017 19:13:01 : epoch 5a2f193a : simpson91 : ganesha.nfsd-13288[Admin]
> rpc :TIRPC :DEBUG :svc_destroy_it() 0x6198bb80 fd 19 xp_refs 1 af 0
> port 4294967295 @ svc_xprt_shutdown:364
>
> Thread 271 "ganesha.nfsd" received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fff68053700 (LWP 31096)]
> 0x7fffef8ca739 in release (exp_hdl=0x6130cec0)
>     at /home/bill/rdma/nfs-ganesha/src/FSAL/FSAL_VFS/export.c:79
> 79	LogDebug(COMPONENT_FSAL, "Releasing VFS export for %s",
> (gdb) bt
> #0  0x7fffef8ca739 in release (exp_hdl=0x6130cec0)
>     at /home/bill/rdma/nfs-ganesha/src/FSAL/FSAL_VFS/export.c:79
> #1  0x0044799d in shutdown_export (export=0x6130cec0)
>     at /home/bill/rdma/nfs-ganesha/src/FSAL/fsal_destroyer.c:152
> #2  0x00447d66 in destroy_fsals ()
>     at /home/bill/rdma/nfs-ganesha/src/FSAL/fsal_destroyer.c:194
> #3  0x0047d9c3 in do_shutdown ()
>     at /home/bill/rdma/nfs-ganesha/src/MainNFSD/nfs_admin_thread.c:511
> #4  0x0047de09 in admin_thread (UnusedArg=0x0)
>     at /home/bill/rdma/nfs-ganesha/src/MainNFSD/nfs_admin_thread.c:531
> #5  0x760b373a in start_thread (arg=0x7fff68053700)
>     at pthread_create.c:333
> #6  0x7598ae7f in clone ()
>     at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
> (gdb) quit
> A debugging session is active.
>
> Inferior 1 [process 30823] will be killed.
>
> Quit anyway? (y or n) y
> [root@simpson91 install]#
>
>
> Thread 270 "ganesha.nfsd" received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fff68087700 (LWP 6650)]
> 0x7fffef8ca739 in release (exp_hdl=0x6130cec0)
>     at /home/bill/rdma/nfs-ganesha/src/FSAL/FSAL_VFS/export.c:79
> 79	LogDebug(COMPONENT_FSAL, "Releasing VFS export for %s",
> (gdb) bt
> #0  0x7fffef8ca739 in release (exp_hdl=0x6130cec0)
>     at /home/bill/rdma/nfs-ganesha/src/FSAL/FSAL_VFS/export.c:79
> #1  0x0044799d in shutdown_export (export=0x6130cec0)
>     at /home/bill/rdma/nfs-ganesha/src/FSAL/fsal_destroyer.c:152
> #2  0x00447d66 in destroy_fsals ()
>     at /home/bill/rdma/nfs-ganesha/src/FSAL/fsal_destroyer.c:194
> #3  0x0047d9c3 in do_shutdown ()
>     at /home/bill/rdma/nfs-ganesha/src/MainNFSD/nfs_admin_thread.c:511
> #4  0x0047de09 in admin_thread (UnusedArg=0x0)
>     at /home/bill/rdma/nfs-ganesha/src/MainNFSD/nfs_admin_thread.c:531
> #5  0x760b373a in start_thread (arg=0x7fff68087700)
>     at pthread_create.c:333
> #6  0x75989e7f in clone ()
>     at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
> (gdb) quit
> A debugging session is active.
>
> Inferior 1 [process 6378] will be killed.
>
> Quit anyway? (y or n) y
> [root@simpson91 install]#
[Nfs-ganesha-devel] Please rebase patch submissions and push to Gerrit again
[Nfs-ganesha-devel] Announce Push of V2.6-dev.21
Branch next

Tag: V2.6-dev.21

Release Highlights

* new version of checkpatch
* checkpatch fixes for existing code

Signed-off-by: Frank S. Filz

Contents:

dd59241 Frank S. Filz V2.6-dev.21
889276d Frank S. Filz TEST and TOOLS: Cleanup new checkpatch errors
594d593 Frank S. Filz SUPPORT: Cleanup new checkpatch errors
403cf41 Frank S. Filz LOG: Cleanup new checkpatch errors
3abe23d Frank S. Filz HASHTABLE: Cleanup new checkpatch errors
fd29a1a Frank S. Filz DBUS: Cleanup new checkpatch errors
5cae1d1 Frank S. Filz SAL: Cleanup new checkpatch errors
d8bb201 Frank S. Filz RPCAL: Cleanup new checkpatch errors
d0bc529 Frank S. Filz RQUOTA: Cleanup new checkpatch errors
6f28db3 Frank S. Filz NLM: Cleanup new checkpatch errors
cfe1ac0 Frank S. Filz NFS: Cleanup new checkpatch errors
4c6771e Frank S. Filz NFS4: Cleanup new checkpatch errors
aa834c5 Frank S. Filz NFS3: Cleanup new checkpatch errors
3e06f82 Frank S. Filz MNT: Cleanup new checkpatch errors
c47c377 Frank S. Filz 9P: Cleanup new checkpatch errors
66a09b1 Frank S. Filz MainNFSD: Cleanup new checkpatch errors
34156be Frank S. Filz FSAL and FSAL_UP: Cleanup new checkpatch errors
afcbe17 Frank S. Filz NULL: Cleanup new checkpatch errors
afd89fa Frank S. Filz MDCACHE: Cleanup new checkpatch errors
646bf9a Frank S. Filz RGW: Cleanup new checkpatch errors
e2b3ef4 Frank S. Filz PROXY: Cleanup new checkpatch errors
7c4c9da Frank S. Filz VFS: Cleanup new checkpatch errors
bb94b29 Frank S. Filz GPFS: Clean up new checkpatch errors
83e6a16 Frank S. Filz GLUSTER: Clean up new checkpatch errors
de4b45e Frank S. Filz CEPH: Cleanup new checkpatch errors
f1321a0 Frank S. Filz Update checkpatch.pl from kernel v4.15-rc2
Re: [Nfs-ganesha-devel] test over FSAL_NULL
Okay, with this fix, stacking NULL works for me:

https://review.gerrithub.io/391463

Daniel

On 12/12/2017 11:52 AM, Daniel Gryniewicz wrote:
> Okay, I'm able to reproduce. I'm looking at this, but the problem is that
> the export being set before mdcache is called is NULL's export, not
> MDCACHE's export, so the double un-stack causes VFS to see a NULL export.
> Somewhere, the top of the export stack is being lost.
>
> Daniel

On 12/08/2017 10:59 AM, Daniel Gryniewicz wrote:
> I run NULL semi-regularly. The last time I ran it was a couple of months
> ago, so something may have crept in. I'll try again. That said, the code
> in that call path looks correct.
>
> Daniel

On 12/08/2017 05:46 AM, LUCAS Patrice wrote:
> Hi,
>
> Has anyone tested the FSAL_NULL stackable FSAL recently? Before using it
> as an example of coding a stackable FSAL, I simply tried to use FSAL_NULL
> over FSAL_VFS, and I got the following segmentation fault when running
> cthon04 basic test7 ('link and rename').
>
> Best regards,
> Patrice
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x72daf700 (LWP 22397)]
> 0x0041c23e in posix2fsal_attributes (buffstat=0x72dad590, fsalattr=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/fsal_convert.c:432
> 432	fsalattr->supported = op_ctx->fsal_export->exp_ops.fs_supported_attrs(
> Missing separate debuginfos, use: debuginfo-install glibc-2.17-157.el7_3.5.x86_64
> gssproxy-0.4.1-13.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
> krb5-libs-1.14.1-27.ocean1.el7.centos.x86_64
> libcom_err-1.42.13.wc6-8.ocean1.el7.centos.x86_64 libselinux-2.5-6.el7.x86_64
> pcre-8.32-15.el7_2.1.x86_64
> (gdb) where
> #0  0x0041c23e in posix2fsal_attributes (buffstat=0x72dad590, fsalattr=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/fsal_convert.c:432
> #1  0x0041c21c in posix2fsal_attributes_all (buffstat=0x72dad590, fsalattr=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/fsal_convert.c:422
> #2  0x73f155dc in fetch_attrs (myself=0x7fffd801baf0, my_fd=35, attrs=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/FSAL_VFS/file.c:325
> #3  0x73f1927a in vfs_getattr2 (obj_hdl=0x7fffd801baf0, attrs=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/FSAL_VFS/file.c:1595
> #4  0x741255d3 in getattrs (obj_hdl=0x7fffd800fdd0, attrib_get=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_NULL/handle.c:503
> #5  0x00531459 in mdcache_refresh_attrs (entry=0x7fffd80175e0, need_acl=false, invalidate=false)
>     at /opt/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1048
> #6  0x0052d4a1 in mdcache_refresh_attrs_no_invalidate (entry=0x7fffd80175e0)
>     at /opt/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:445
> #7  0x005310be in mdcache_rename (obj_hdl=0x7fffe4037888, olddir_hdl=0x7fffd8017618,
>     old_name=0x7fffd8002b80 "file.0", newdir_hdl=0x7fffd8017618,
>     new_name=0x7fffd800f730 "newfile.0")
>     at /opt/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:991
> #8  0x00431b26 in fsal_rename (dir_src=0x7fffd8017618, oldname=0x7fffd8002b80 "file.0",
>     dir_dest=0x7fffd8017618, newname=0x7fffd800f730 "newfile.0")
>     at /opt/nfs-ganesha/src/FSAL/fsal_helper.c:1412
> #9  0x00475947 in nfs4_op_rename (op=0x7fffd80153f0, data=0x72dadae0, resp=0x7fffd8018220)
>     at /opt/nfs-ganesha/src/Protocols/NFS/nfs4_op_rename.c:122
> #10 0x00459b84 in nfs4_Compound (arg=0x7fffd800c538, req=0x7fffd800be30, res=0x7fffd800a950)
>     at /opt/nfs-ganesha/src/Protocols/NFS/nfs4_Compound.c:752
> #11 0x0044ab75 in nfs_rpc_process_request (reqdata=0x7fffd800be30)
>     at /opt/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1338
> #12 0x0044b77a in nfs_rpc_valid_NFS (req=0x7fffd800be30)
>     at /opt/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1736
> #13 0x76c28546 in svc_vc_decode (req=0x7fffd800be30)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_vc.c:812
> #14 0x0044fb64 in nfs_rpc_decode_request (xprt=0x7fffe4000bc0, xdrs=0x7fffd8017be0)
>     at /opt/nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1625
> #15 0x76c28458 in svc_vc_recv (xprt=0x7fffe4000bc0)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_vc.c:785
> #16 0x76c24bce in svc_rqst_xprt_task (wpe=0x7fffe4000dd8)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:753
> #17 0x76c25048 in svc_rqst_epoll_events (sr_rec=0x7ef210, n_events=1)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:925
> #18 0x76c252ea in svc_rqst_epoll_loop (sr_rec=0x7ef210)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:998
> #19 0x76c2539d in svc_rqst_run_task (wpe=0x7ef210)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:1034
> #20 0x76c2e9f1 in work_pool_thread (arg=0x7fffe8c0)
>     at /opt/nfs-ganesha/src/libntirpc/src/work_pool.c:176
> #21 0x77058dc5 in start_thread () from /lib64/libpthread.so.0
> #22 0x7671a76d in clone () from /lib64/libc.so.6
> (gdb)
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: MDCACHE - Fix stacking over NULL
From Daniel Gryniewicz:

Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/391463 )

Change subject: MDCACHE - Fix stacking over NULL
..

MDCACHE - Fix stacking over NULL

Two cases of stacking over NULL were broken.

1) When comparing sub_handles, actually pass sub_handles for both, rather
   than an MDCACHE handle for the second one.
2) Don't call a sub-FSAL's export ops without a subcall.

Change-Id: I039a5558e1e0bd845bed74a9158f3c732097463e
Signed-off-by: Daniel Gryniewicz
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c
1 file changed, 12 insertions(+), 5 deletions(-)

git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/63/391463/1

--
To view, visit https://review.gerrithub.io/391463
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I039a5558e1e0bd845bed74a9158f3c732097463e
Gerrit-Change-Number: 391463
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz
Re: [Nfs-ganesha-devel] test over FSAL_NULL
Okay, I'm able to reproduce. I'm looking at this, but the problem is that the export being set before mdcache is called is NULL's export, not MDCACHE's export, so the double un-stack causes VFS to see a NULL export. Somewhere, the top of the export stack is being lost.

Daniel

On 12/08/2017 10:59 AM, Daniel Gryniewicz wrote:
> I run NULL semi-regularly. The last time I ran it was a couple of months
> ago, so something may have crept in. I'll try again. That said, the code
> in that call path looks correct.
>
> Daniel

On 12/08/2017 05:46 AM, LUCAS Patrice wrote:
> Hi,
>
> Has anyone tested the FSAL_NULL stackable FSAL recently? Before using it
> as an example of coding a stackable FSAL, I simply tried to use FSAL_NULL
> over FSAL_VFS, and I got the following segmentation fault when running
> cthon04 basic test7 ('link and rename').
>
> Best regards,
> Patrice
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x72daf700 (LWP 22397)]
> 0x0041c23e in posix2fsal_attributes (buffstat=0x72dad590, fsalattr=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/fsal_convert.c:432
> 432	fsalattr->supported = op_ctx->fsal_export->exp_ops.fs_supported_attrs(
> Missing separate debuginfos, use: debuginfo-install glibc-2.17-157.el7_3.5.x86_64
> gssproxy-0.4.1-13.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
> krb5-libs-1.14.1-27.ocean1.el7.centos.x86_64
> libcom_err-1.42.13.wc6-8.ocean1.el7.centos.x86_64 libselinux-2.5-6.el7.x86_64
> pcre-8.32-15.el7_2.1.x86_64
> (gdb) where
> #0  0x0041c23e in posix2fsal_attributes (buffstat=0x72dad590, fsalattr=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/fsal_convert.c:432
> #1  0x0041c21c in posix2fsal_attributes_all (buffstat=0x72dad590, fsalattr=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/fsal_convert.c:422
> #2  0x73f155dc in fetch_attrs (myself=0x7fffd801baf0, my_fd=35, attrs=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/FSAL_VFS/file.c:325
> #3  0x73f1927a in vfs_getattr2 (obj_hdl=0x7fffd801baf0, attrs=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/FSAL_VFS/file.c:1595
> #4  0x741255d3 in getattrs (obj_hdl=0x7fffd800fdd0, attrib_get=0x72dad780)
>     at /opt/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_NULL/handle.c:503
> #5  0x00531459 in mdcache_refresh_attrs (entry=0x7fffd80175e0, need_acl=false, invalidate=false)
>     at /opt/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1048
> #6  0x0052d4a1 in mdcache_refresh_attrs_no_invalidate (entry=0x7fffd80175e0)
>     at /opt/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:445
> #7  0x005310be in mdcache_rename (obj_hdl=0x7fffe4037888, olddir_hdl=0x7fffd8017618,
>     old_name=0x7fffd8002b80 "file.0", newdir_hdl=0x7fffd8017618,
>     new_name=0x7fffd800f730 "newfile.0")
>     at /opt/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:991
> #8  0x00431b26 in fsal_rename (dir_src=0x7fffd8017618, oldname=0x7fffd8002b80 "file.0",
>     dir_dest=0x7fffd8017618, newname=0x7fffd800f730 "newfile.0")
>     at /opt/nfs-ganesha/src/FSAL/fsal_helper.c:1412
> #9  0x00475947 in nfs4_op_rename (op=0x7fffd80153f0, data=0x72dadae0, resp=0x7fffd8018220)
>     at /opt/nfs-ganesha/src/Protocols/NFS/nfs4_op_rename.c:122
> #10 0x00459b84 in nfs4_Compound (arg=0x7fffd800c538, req=0x7fffd800be30, res=0x7fffd800a950)
>     at /opt/nfs-ganesha/src/Protocols/NFS/nfs4_Compound.c:752
> #11 0x0044ab75 in nfs_rpc_process_request (reqdata=0x7fffd800be30)
>     at /opt/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1338
> #12 0x0044b77a in nfs_rpc_valid_NFS (req=0x7fffd800be30)
>     at /opt/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1736
> #13 0x76c28546 in svc_vc_decode (req=0x7fffd800be30)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_vc.c:812
> #14 0x0044fb64 in nfs_rpc_decode_request (xprt=0x7fffe4000bc0, xdrs=0x7fffd8017be0)
>     at /opt/nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1625
> #15 0x76c28458 in svc_vc_recv (xprt=0x7fffe4000bc0)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_vc.c:785
> #16 0x76c24bce in svc_rqst_xprt_task (wpe=0x7fffe4000dd8)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:753
> #17 0x76c25048 in svc_rqst_epoll_events (sr_rec=0x7ef210, n_events=1)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:925
> #18 0x76c252ea in svc_rqst_epoll_loop (sr_rec=0x7ef210)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:998
> #19 0x76c2539d in svc_rqst_run_task (wpe=0x7ef210)
>     at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:1034
> #20 0x76c2e9f1 in work_pool_thread (arg=0x7fffe8c0)
>     at /opt/nfs-ganesha/src/libntirpc/src/work_pool.c:176
> #21 0x77058dc5 in start_thread () from /lib64/libpthread.so.0
> #22 0x7671a76d in clone () from /lib64/libc.so.6
> (gdb)
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: NFS4.1 - Allow client to specify slot count
From Daniel Gryniewicz:

Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/391440 )

Change subject: NFS4.1 - Allow client to specify slot count
..

NFS4.1 - Allow client to specify slot count

The attributes on create_session allow specifying the number of slots the
client wants, and there is a pyNFS test (SEQ8) that tests this. Use the
minimum of the Ganesha configured slot size and the client requested slot
size when setting the max number of slots for the session.

Change-Id: Ia48857fddab0a334d3c3a815a677745dc6f7d51c
Signed-off-by: Daniel Gryniewicz
---
M src/Protocols/NFS/nfs4_op_create_session.c
1 file changed, 2 insertions(+), 1 deletion(-)

git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/40/391440/1

--
To view, visit https://review.gerrithub.io/391440
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ia48857fddab0a334d3c3a815a677745dc6f7d51c
Gerrit-Change-Number: 391440
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz
[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: MDCACHE: fix bad setting of MDCACHE_TRUST_ATTRS when refresh attrs
From Kinglong Mee:

Kinglong Mee has uploaded this change for review. ( https://review.gerrithub.io/391415 )

Change subject: MDCACHE: fix bad setting of MDCACHE_TRUST_ATTRS when refresh attrs
..

MDCACHE: fix bad setting of MDCACHE_TRUST_ATTRS when refresh attrs

With two WRITEs of the same file, the second WRITE returns the same bad
attrs as the first:

Thread one                                  Thread two
mdcache_write2()
  atomic_clear MDCACHE_TRUST_ATTRS
mdcache_getattrs()
  wrlock(&entry->attr_lock)
  mdcache_refresh_attrs()
    obj_ops.getattrs()
                                            mdcache_write2()
                                              atomic_clear MDCACHE_TRUST_ATTRS
    mdc_fixup_md()
      set MDCACHE_TRUST_ATTRS
  unlock(&entry->attr_lock)
mdcache_getattrs() trusts the attrs in MDCACHE

This patch records the count of atomic_clear(MDCACHE_TRUST_ATTRS) calls
made while the attrs are being refreshed, and sets MDCACHE_TRUST_ATTRS
only if the count is still zero after the attrs are refreshed.

Change-Id: I2d47aab44e5f8e89611b22f8f2fdf81e849c8e85
Signed-off-by: Kinglong Mee
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_file.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_up.c
5 files changed, 35 insertions(+), 37 deletions(-)

git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/15/391415/1

--
To view, visit https://review.gerrithub.io/391415
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I2d47aab44e5f8e89611b22f8f2fdf81e849c8e85
Gerrit-Change-Number: 391415
Gerrit-PatchSet: 1
Gerrit-Owner: Kinglong Mee