In general, MDCACHE assumes it has op_ctx set, and I'd prefer to not
have that assumption violated, as it will complicate the code a lot.
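
For a path that genuinely has no request context, like the up-call threads
in the trace below, one way to keep that assumption intact is to establish
a context on the thread before calling into the stack.  A rough sketch of
that idea (not necessarily what the patch does; field names are the ones
visible in the backtrace, and real code would normally go through a helper
such as init_root_op_context() rather than assigning op_ctx by hand):

    static void upcall_with_ctx(struct fsal_export *export)
    {
            /* Sketch only: give the up-call thread a request context
             * before it calls into MDCACHE. */
            struct req_op_context ctx = {0};

            ctx.fsal_export = export;  /* export handed to the up-call */
            op_ctx = &ctx;

            /* ... mdcache_create_handle(), lock processing, etc. ... */

            op_ctx = NULL;
    }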

It appears that the export passed into the upcalls is already the
MDCACHE export, not the sub-export.  I've uploaded a new version of
the patch with that change. Could you try it again?
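
Concretely, that means entry points like mdcache_create_handle() can take
the export from their own argument instead of going through op_ctx.
Roughly (a sketch of the idea, not the exact diff in the patch):

    /* exp_hdl is the struct fsal_export * passed in by the up-call
     * (frames #4/#5 below); convert it directly instead of using
     * mdc_cur_export(), which dereferences op_ctx. */
    struct mdcache_fsal_export *export = mdc_export(exp_hdl);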

On Fri, Aug 19, 2016 at 4:56 PM, Marc Eshel <es...@us.ibm.com> wrote:
> I am not sure you need to set op_ctx.
> I fixed it for this path by not calling mdc_check_mapping() from
> mdcache_find_keyed() if op_ctx is NULL.
> I think the mapping should already exist for calls that are coming from
> an up-call.
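>
> Roughly, the workaround amounts to this in mdcache_find_keyed() (a
> sketch of my local change, not the reviewed patch):
>
>     /* Skip the export-mapping check when there is no request context
>      * (the up-call path); the mapping should already exist there. */
>     if (op_ctx != NULL)
>             mdc_check_mapping(entry);
>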
> Marc.
>
>
>
> From:   Daniel Gryniewicz <d...@redhat.com>
> To:     Marc Eshel/Almaden/IBM@IBMUS
> Cc:     Frank Filz <ffilz...@mindspring.com>,
> nfs-ganesha-devel@lists.sourceforge.net
> Date:   08/19/2016 06:13 AM
> Subject:        Re: MDC up call
>
>
>
> Marc, could you try with this patch: https://review.gerrithub.io/287904
>
> Daniel
>
> On 08/18/2016 06:55 PM, Marc Eshel wrote:
>> Was up-call with MDC tested?
>> It looks like it is trying to use op_ctx, which is NULL.
>> Marc.
>>
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 0x7fe867fff700 (LWP 18907)]
>> 0x0000000000532b76 in mdc_cur_export () at
>>     /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
>> 376             return mdc_export(op_ctx->fsal_export);
>> (gdb) where
>> #0  0x0000000000532b76 in mdc_cur_export () at
>>     /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
>> #1  0x00000000005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
>>     /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
>> #2  0x000000000053584c in mdcache_find_keyed (key=0x7fe867ffe470,
>>     entry=0x7fe867ffe468) at
>>     /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:636
>> #3  0x00000000005358c1 in mdcache_locate_keyed (key=0x7fe867ffe470,
>>     export=0x12d8f40, entry=0x7fe867ffe468, attrs_out=0x0) at
>>     /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:670
>> #4  0x000000000052feac in mdcache_create_handle (exp_hdl=0x12d8f40,
>>     hdl_desc=0x7fe880001088, handle=0x7fe867ffe4e8, attrs_out=0x0) at
>>     /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1629
>> #5  0x0000000000433f36 in lock_avail (export=0x12d8f40,
>>     file=0x7fe880001088, owner=0x7fe87c302dc0, lock_param=0x7fe8800010a0) at
>>     /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_top.c:172
>> #6  0x0000000000438142 in queue_lock_avail (ctx=0x7fe880001100) at
>>     /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_async.c:243
>> #7  0x000000000050156f in fridgethr_start_routine (arg=0x7fe880001100) at
>>     /nas/ganesha/new-ganesha/src/support/fridgethr.c:550
>> #8  0x00007fea288a0df3 in start_thread (arg=0x7fe867fff700) at
>>     pthread_create.c:308
>> #9  0x00007fea27f603dd in clone () at
>>     ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>> (gdb) up
>> #1  0x00000000005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
>>     /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
>> 210             struct mdcache_fsal_export *export = mdc_cur_export();
>> (gdb) p op_ctx
>> $1 = (struct req_op_context *) 0x0
>>
>>
>>
>> From:   Marc Eshel/Almaden/IBM@IBMUS
>> To:     "Frank Filz" <ffilz...@mindspring.com>
>> Cc:     nfs-ganesha-devel@lists.sourceforge.net
>> Date:   08/18/2016 09:21 AM
>> Subject:        Re: [Nfs-ganesha-devel] multi fd support
>>
>>
>>
>> Using NFSv4, I get a read lock on the same file from two different NFS
>> clients. The server gets the two locks using the two different owners
>> (states). When I unlock the lock on one client, and that results in
>> closing the file, I get fsal_close() with no owner id, so I am forced to
>> release all locks, which is wrong.
>> Marc.
>>
>>
>>
>> From:   "Frank Filz" <ffilz...@mindspring.com>
>> To:     Marc Eshel/Almaden/IBM@IBMUS
>> Cc:     <nfs-ganesha-devel@lists.sourceforge.net>
>> Date:   08/17/2016 10:04 PM
>> Subject:        RE: multi fd support
>>
>>
>>
>>> Hi Frank,
>>> Don't we need fsal_close() to call close2()?
>>> We need the owner so we can release only the locks for this fd before
>>> closing it.
>>> Marc.
>>
>> With support_ex enabled, fsal_close is only called when the
>> fsal_obj_handle is being disposed of or when the LRU thread is closing
>> open file descriptors (which will now only be those open file descriptors
>> not associated with state), and its purpose is only to close the
>> global/anonymous file descriptor. There should be no locks associated
>> with the global file descriptor.
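>>
>> In other words, under support_ex the FSAL's plain close() method (what
>> fsal_close ends up invoking) reduces to roughly this (a hypothetical
>> sketch, my_* names invented, not code from any real FSAL):
>>
>>     struct my_obj {
>>             struct fsal_obj_handle handle;
>>             int global_fd;          /* global/anonymous fd, -1 if closed */
>>     };
>>
>>     /* Only the global fd is torn down here; any per-owner lock cleanup
>>      * belongs with the state-aware calls (close2() and friends). */
>>     static fsal_status_t my_close(struct fsal_obj_handle *obj_hdl)
>>     {
>>             struct my_obj *obj =
>>                     container_of(obj_hdl, struct my_obj, handle);
>>
>>             if (obj->global_fd >= 0) {
>>                     close(obj->global_fd);
>>                     obj->global_fd = -1;
>>             }
>>
>>             return fsalstat(ERR_FSAL_NO_ERROR, 0);
>>     }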
>>
>> A few notes for you:
>>
>> 1. Not having a delegation-aware FSAL to work on, I did not explore all
>> the implications of delegations with support_ex. A delegation probably
>> should inherit the file descriptor from the initial open state, but maybe
>> it needs its own.
>>
>> 2. For NFS v4 locks, the support_ex API SHOULD allow you to just have an
>> open file descriptor associated with the open state and not have to have
>> one per lock state (per lock owner), since your locks already have owners
>> associated without having to have separate file descriptors. For NFS v3
>> locks, of course, there is (currently) no way to tie to an open state
>> (even if there is an NLM_SHARE from the same process); I would like to
>> eventually look for ties and create them if possible. Of course, if it
>> benefits you to have an open fd per lock owner, that's fine too. And
>> actually, you can even fall back to using the global file descriptor (and
>> note that now the FSAL actually gets to control when that's opened or
>> closed). A rough sketch of these options follows note 3 below.
>>
>> 3. I'm not sure you caught that you need to protect the global file
>> descriptor with the fsal_obj_handle->lock since the content_lock is no
>> more...
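>>
>> For 2 and 3 together, a rough sketch of what I mean (hypothetical again,
>> continuing the my_obj sketch above; the lock member name follows
>> fsal_obj_handle->lock):
>>
>>     struct my_state {
>>             struct state_t state;
>>             int fd;        /* fd tied to this open/lock state, -1 if none */
>>     };
>>
>>     static fsal_status_t my_lock_op(struct my_obj *obj,
>>                                     struct my_state *sta, struct flock *fl)
>>     {
>>             int rc;
>>
>>             if (sta != NULL && sta->fd >= 0) {
>>                     /* per-state fd: the owner is implied by the state */
>>                     rc = fcntl(sta->fd, F_SETLK, fl);
>>             } else {
>>                     /* falling back to the global fd, which is now
>>                      * protected by the obj handle lock rather than the
>>                      * old content_lock */
>>                     pthread_rwlock_rdlock(&obj->handle.lock);
>>                     rc = fcntl(obj->global_fd, F_SETLK, fl);
>>                     pthread_rwlock_unlock(&obj->handle.lock);
>>             }
>>
>>             return rc == 0 ? fsalstat(ERR_FSAL_NO_ERROR, 0)
>>                            : fsalstat(posix2fsal_error(errno), errno);
>>     }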
>>
>> I'm on vacation the rest of the week, so I may not be able to respond
>> until next week.
>>
>> Frank
>>
