[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Set key len.

2016-08-21 Thread GerritHub

es...@us.ibm.com has uploaded a new change for review.

  https://review.gerrithub.io/288016

Change subject: Set key len.
......................................................................

Set key len.

Change-Id: I5b932aaf90a97abf5306c1934b2ffb816378fec2
Signed-off-by: Marc Eshel 
---
M src/SAL/nlm_state.c
1 file changed, 1 insertion(+), 0 deletions(-)
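
The mail above carries only the subject and diffstat, not the diff itself. In nfs-ganesha, a one-line "set key len" fix usually means a hash-table key descriptor was built without its length field; the following is a hypothetical sketch of that pattern (simplified struct, invented function name), not the actual patch:

/* Hypothetical sketch only -- the real one-line diff is not shown in this
 * mail.  nfs-ganesha describes hash-table keys with a buffer descriptor
 * carrying both an address and a length; a "set key len" style fix adds the
 * missing length assignment before the key is used. */
#include <stddef.h>

struct buffdesc {              /* simplified stand-in for the key descriptor */
	void *addr;
	size_t len;
};

static void build_state_key(struct buffdesc *key, void *obj, size_t obj_len)
{
	key->addr = obj;
	key->len = obj_len;    /* the kind of assignment such a patch adds */
}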


  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/16/288016/1
-- 
To view, visit https://review.gerrithub.io/288016
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I5b932aaf90a97abf5306c1934b2ffb816378fec2
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: es...@us.ibm.com

--
_______________________________________________
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


Re: [Nfs-ganesha-devel] MDC up call

2016-08-21 Thread Marc Eshel
This time it did work.
Marc.



From:   Daniel Gryniewicz 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: Frank Filz , NFS Ganesha Developers 

Date:   08/21/2016 02:45 PM
Subject:Re: MDC up call



In general, MDCACHE assumes it has op_ctx set, and I'd prefer to not
have that assumption violated, as it will complicate the code a lot.

It appears that the export passed into the upcalls is already the
MDCACHE export, not the sub-export.  I've uploaded a new version of
the patch with that change.  Could you try it again?
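
A rough idea of what "MDCACHE assumes it has op_ctx set" implies for the up-call path: run the up-call body under a request context whose fsal_export is the MDCACHE export that was passed in. The sketch below is hypothetical and heavily simplified (reduced types, invented helper name); the real change is in the review linked above.

/* Hypothetical, simplified sketch: give the up-call thread an op_ctx whose
 * fsal_export is the MDCACHE export handed to the up-call, so
 * mdc_cur_export() still finds a context.  Types are reduced to the fields
 * the sketch needs. */
struct fsal_export;                        /* opaque here */

struct req_op_context {
	struct fsal_export *fsal_export;   /* many other fields omitted */
};

/* In nfs-ganesha, op_ctx is a thread-local pointer to the current context. */
static __thread struct req_op_context *op_ctx;

static void run_upcall_with_context(struct fsal_export *mdcache_export,
				    void (*upcall_body)(void *), void *arg)
{
	struct req_op_context ctx = { .fsal_export = mdcache_export };
	struct req_op_context *saved = op_ctx;

	op_ctx = &ctx;     /* MDCACHE may now dereference op_ctx->fsal_export */
	upcall_body(arg);
	op_ctx = saved;
}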

On Fri, Aug 19, 2016 at 4:56 PM, Marc Eshel  wrote:
> I am not sure you need to set op_ctx.
> I fixed it for this path by not calling mdc_check_mapping() from
> mdcache_find_keyed() if op_ctx is NULL.
> I think the mapping should already exist for calls that are coming from an
> up-call.
> Marc.
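
The workaround described in the quoted text above amounts to a NULL guard around the mapping check in mdcache_find_keyed(). A simplified sketch (not the actual mdcache_helpers.c code; lookup_entry_by_key() is a placeholder for the existing hash lookup):

/* Sketch of the workaround described above: only check the entry-to-export
 * mapping when a request context exists, since on the up-call path the
 * mapping should already be in place. */
static fsal_status_t mdcache_find_keyed_sketch(mdcache_key_t *key,
					       mdcache_entry_t **entry)
{
	fsal_status_t status = lookup_entry_by_key(key, entry);  /* placeholder */

	if (FSAL_IS_ERROR(status))
		return status;

	if (op_ctx != NULL)
		mdc_check_mapping(*entry);  /* dereferences op_ctx->fsal_export */

	return status;
}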
>
>
>
> From:   Daniel Gryniewicz 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: Frank Filz ,
> nfs-ganesha-devel@lists.sourceforge.net
> Date:   08/19/2016 06:13 AM
> Subject:Re: MDC up call
>
>
>
> Marc, could you try with this patch: https://review.gerrithub.io/287904
>
> Daniel
>
> On 08/18/2016 06:55 PM, Marc Eshel wrote:
>> Was up-call with MDC tested?
>> It looks like it is trying to use op_ctx which is NULL.
>> Marc.
>>
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 0x7fe867fff700 (LWP 18907)]
>> 0x00532b76 in mdc_cur_export () at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
>> 376 return mdc_export(op_ctx->fsal_export);
>> (gdb) where
>> #0  0x00532b76 in mdc_cur_export () at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
>> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
>> #2  0x0053584c in mdcache_find_keyed (key=0x7fe867ffe470,
>> entry=0x7fe867ffe468) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:636
>> #3  0x005358c1 in mdcache_locate_keyed (key=0x7fe867ffe470,
>> export=0x12d8f40, entry=0x7fe867ffe468, attrs_out=0x0) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:670
>> #4  0x0052feac in mdcache_create_handle (exp_hdl=0x12d8f40,
>> hdl_desc=0x7fe880001088, handle=0x7fe867ffe4e8, attrs_out=0x0) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1629
>> #5  0x00433f36 in lock_avail (export=0x12d8f40,
>> file=0x7fe880001088, owner=0x7fe87c302dc0, lock_param=0x7fe8800010a0) at
>> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_top.c:172
>> #6  0x00438142 in queue_lock_avail (ctx=0x7fe880001100) at
>> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_async.c:243
>> #7  0x0050156f in fridgethr_start_routine (arg=0x7fe880001100) at
>> /nas/ganesha/new-ganesha/src/support/fridgethr.c:550
>> #8  0x7fea288a0df3 in start_thread (arg=0x7fe867fff700) at
>> pthread_create.c:308
>> #9  0x7fea27f603dd in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>> (gdb) up
>> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
>> 210 struct mdcache_fsal_export *export = mdc_cur_export();
>> (gdb) p op_ctx
>> $1 = (struct req_op_context *) 0x0
>>
>>
>>
>> From:   Marc Eshel/Almaden/IBM@IBMUS
>> To: "Frank Filz" 
>> Cc: nfs-ganesha-devel@lists.sourceforge.net
>> Date:   08/18/2016 09:21 AM
>> Subject:Re: [Nfs-ganesha-devel] multi fd support
>>
>>
>>
>> Using NFSv4 I get a read lock on the same file from two different NFS
>> clients. The server gets the two locks using the two different owners
>> (states). When I unlock the lock on one client, which results in closing
>> the file, I get fsal_close() with no owner ID, so I am forced to release
>> all locks, which is wrong.
>> Marc.
>>
>>
>>
>> From:   "Frank Filz" 
>> To: Marc Eshel/Almaden/IBM@IBMUS
>> Cc: 
>> Date:   08/17/2016 10:04 PM
>> Subject:RE: multi fd support
>>
>>
>>
>>> Hi Frank,
>>> Don't we need fsal_close() to call close2()?
>>> We need the owner so we can release only the locks for this fd before
>>> closing it.
>>> Marc.
>>
>> With support_ex enabled, fsal_close is only called when the
>> fsal_obj_handle is being disposed of or when the LRU thread is closing
>> open file descriptors (which will now only be those open file descriptors
>> not associated with state), and its purpose is only to close the
>> global/anonymous file descriptor. There should be no locks associated
>> with the global file descriptor.

Re: [Nfs-ganesha-devel] MDC up call

2016-08-21 Thread Daniel Gryniewicz
In general, MDCACHE assumes it has op_ctx set, and I'd prefer to not
have that assumption violated, as it will complicate the code a lot.

It appears that the export passed into the upcalls is already the
MDCACHE export, not the sub-export.  I've uploaded a new version of
the patch with that change.  Could you try it again?

On Fri, Aug 19, 2016 at 4:56 PM, Marc Eshel  wrote:
> I am not sure you need to set op_ctx.
> I fixed it for this path by not calling mdc_check_mapping() from
> mdcache_find_keyed() if op_ctx is NULL.
> I think the mapping should already exist for calls that are coming from an
> up-call.
> Marc.
>
>
>
> From:   Daniel Gryniewicz 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: Frank Filz ,
> nfs-ganesha-devel@lists.sourceforge.net
> Date:   08/19/2016 06:13 AM
> Subject:Re: MDC up call
>
>
>
> Marc, could you try with this patch: https://review.gerrithub.io/287904
>
> Daniel
>
> On 08/18/2016 06:55 PM, Marc Eshel wrote:
>> Was up-call with MDC tested?
>> It looks like it is trying to use op_ctx which is NULL.
>> Marc.
>>
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 0x7fe867fff700 (LWP 18907)]
>> 0x00532b76 in mdc_cur_export () at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
>> 376 return mdc_export(op_ctx->fsal_export);
>> (gdb) where
>> #0  0x00532b76 in mdc_cur_export () at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
>> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
>> #2  0x0053584c in mdcache_find_keyed (key=0x7fe867ffe470,
>> entry=0x7fe867ffe468) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:636
>> #3  0x005358c1 in mdcache_locate_keyed (key=0x7fe867ffe470,
>> export=0x12d8f40, entry=0x7fe867ffe468, attrs_out=0x0) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:670
>> #4  0x0052feac in mdcache_create_handle (exp_hdl=0x12d8f40,
>> hdl_desc=0x7fe880001088, handle=0x7fe867ffe4e8, attrs_out=0x0) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1629
>> #5  0x00433f36 in lock_avail (export=0x12d8f40,
>> file=0x7fe880001088, owner=0x7fe87c302dc0, lock_param=0x7fe8800010a0) at
>> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_top.c:172
>> #6  0x00438142 in queue_lock_avail (ctx=0x7fe880001100) at
>> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_async.c:243
>> #7  0x0050156f in fridgethr_start_routine (arg=0x7fe880001100) at
>> /nas/ganesha/new-ganesha/src/support/fridgethr.c:550
>> #8  0x7fea288a0df3 in start_thread (arg=0x7fe867fff700) at
>> pthread_create.c:308
>> #9  0x7fea27f603dd in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>> (gdb) up
>> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
>> /nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
>> 210 struct mdcache_fsal_export *export = mdc_cur_export();
>> (gdb) p op_ctx
>> $1 = (struct req_op_context *) 0x0
>>
>>
>>
>> From:   Marc Eshel/Almaden/IBM@IBMUS
>> To: "Frank Filz" 
>> Cc: nfs-ganesha-devel@lists.sourceforge.net
>> Date:   08/18/2016 09:21 AM
>> Subject:Re: [Nfs-ganesha-devel] multi fd support
>>
>>
>>
>> Using NFSv4 I get a read lock on the same file from two different NFS
>> clients. The server gets the two locks using the two different owners
>> (states). When I unlock the lock on one client, which results in closing
>> the file, I get fsal_close() with no owner ID, so I am forced to release
>> all locks, which is wrong.
>> Marc.
>>
>>
>>
>> From:   "Frank Filz" 
>> To: Marc Eshel/Almaden/IBM@IBMUS
>> Cc: 
>> Date:   08/17/2016 10:04 PM
>> Subject:RE: multi fd support
>>
>>
>>
>>> Hi Frank,
>>> Don't we need fsal_close() to call close2()?
>>> We need the owner so we can release only the locks for this fd before
>>> closing it.
>>> Marc.
>>
>> With support_ex enabled, fsal_close is only called when the
>> fsal_obj_handle is being disposed of or when the LRU thread is closing
>> open file descriptors (which will now only be those open file descriptors
>> not associated with state), and its purpose is only to close the
>> global/anonymous file descriptor. There should be no locks associated
>> with the global file descriptor.
>>
>> A few notes for you:
>>
>> 1. Not having a delegation-aware FSAL to work on, I did not explore all
>> the implications of delegations with support_ex. A delegation probably
>> should inherit the file descriptor from the initial open state, but maybe
>> it needs its own.
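
To make the fsal_close()/close2() distinction above concrete, here is a toy, FSAL-agnostic sketch of the support_ex layout: each open/lock state owns its own file descriptor, and the object handle keeps a separate global/anonymous descriptor that never carries locks. All names and structures below are illustrative, not nfs-ganesha's actual API.

/* Illustrative only: a toy model of the support_ex fd layout described
 * above.  Each state carries its own fd; the object handle keeps one
 * global/anonymous fd for stateless I/O. */
#include <unistd.h>

struct my_state {
	int fd;                 /* fd owned by this open/lock state */
};

struct my_obj_handle {
	int global_fd;          /* anonymous fd, never used for locks */
};

/* close2(): close only the fd belonging to this state; locks on it are
 * dropped with the fd, and other states' locks are untouched. */
static void my_close2(struct my_obj_handle *obj, struct my_state *state)
{
	(void)obj;
	if (state->fd >= 0) {
		close(state->fd);
		state->fd = -1;
	}
}

/* fsal_close()/close(): under support_ex this only closes the
 * global/anonymous fd, which should carry no locks at all. */
static void my_close(struct my_obj_handle *obj)
{
	if (obj->global_fd >= 0) {
		close(obj->global_fd);
		obj->global_fd = -1;
	}
}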