Re: [Nfs-ganesha-devel] How to disable MDCACHE completely in ganesha 2.5.1 from the configuration?

2017-08-11 Thread Frank Filz
What code level?



Your own FSAL?



Unfortunately, mdcache cannot be completely disabled because the handle cache 
is required.



Are you also setting fileid in the fsal_obj_handle? The fileid, fsid, and type 
attributes are taken from the fsal_obj_handle rather than struct attrlist.



This allows the several places that only need these immutable attributes to 
access them without having to acquire the attribute_lock.
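For a third-party FSAL this means the handle itself must carry these values, not just the attrlist. Below is a minimal sketch of that idea using simplified stand-in structs rather than the real nfs-ganesha types (field names such as fileid follow fsal_api.h, but verify them against your code level; ATTR_FILEID here is a stand-in bit):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the real nfs-ganesha types (illustrative only). */
typedef enum { REGULAR_FILE, DIRECTORY } object_file_type_t;

struct fsal_obj_handle {
	object_file_type_t type; /* immutable; readable without attribute_lock */
	uint64_t fileid;         /* immutable; readable without attribute_lock */
};

struct attrlist {
	uint64_t valid_mask;
	uint64_t fileid;
};

#define ATTR_FILEID 0x1ULL /* stand-in; ATTRS_POSIX ORs many such bits */

/* When constructing an object, populate the handle fields AND the attrlist,
 * so lock-free readers of the handle and attrlist consumers agree. */
static void init_handle(struct fsal_obj_handle *hdl, struct attrlist *attrs,
			uint64_t fileid)
{
	hdl->type = REGULAR_FILE;
	hdl->fileid = fileid;   /* served to readers that bypass the attrlist */
	attrs->valid_mask |= ATTR_FILEID;
	attrs->fileid = fileid;
}
```

If only the attrlist copy is set, any path that reads the handle directly will see a zero fileid even though the FSAL log (which prints the attrlist) looks correct.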



Frank



From: Jyoti Sharma [mailto:jyoti.mic...@gmail.com]
Sent: Friday, August 11, 2017 11:56 AM
To: nfs-ganesha-devel@lists.sourceforge.net
Subject: Re: [Nfs-ganesha-devel] How to disable MDCACHE completely in ganesha 
2.5.1 from the configuration?




The observation that I am trying to troubleshoot is this:

I am setting fsalattr->valid_mask |= ATTRS_POSIX; and fsalattr->fileid to a 
valid non-zero value. I print these in the FSAL log and they appear as expected 
before going on the wire.
But in the pcap trace I see that the value sent is zero. On the client side, the 
trace shows "nfs_get_root: get root inode failed".

Any direction for troubleshooting is appreciated.



Regards.



On Sat, Aug 12, 2017 at 12:11 AM, Jyoti Sharma wrote:

Hi,



Is there an option to disable caching from the configuration files?

I am debugging an issue and want to rule out that caching is causing trouble.

Thanks.





--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Remove non-support_ex setattrs FSAL method

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373993 )


Change subject: Remove non-support_ex setattrs FSAL method
..

Remove non-support_ex setattrs FSAL method

Change-Id: I7c986c444f17c83aecd730569f105f6e649b1ca3
Signed-off-by: Frank S. Filz 
---
M src/FSAL/FSAL_PROXY/handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/handle.c
M src/FSAL/default_methods.c
M src/include/fsal_api.h
5 files changed, 0 insertions(+), 198 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/93/373993/1
-- 
To view, visit https://review.gerrithub.io/373993
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I7c986c444f17c83aecd730569f105f6e649b1ca3
Gerrit-Change-Number: 373993
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Remove the non-support_ex read and write methods

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373992 )


Change subject: Remove the non-support_ex read and write methods
..

Remove the non-support_ex read and write methods

Change-Id: I24b8c9d292d2869540df0c22d420541283f16662
Signed-off-by: Frank S. Filz 
---
M src/FSAL/FSAL_PROXY/handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_file.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h
M src/FSAL/Stackable_FSALs/FSAL_NULL/file.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/handle.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/nullfs_methods.h
M src/FSAL/default_methods.c
M src/include/fsal_api.h
9 files changed, 0 insertions(+), 501 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/92/373992/1
-- 
To view, visit https://review.gerrithub.io/373992
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I24b8c9d292d2869540df0c22d420541283f16662
Gerrit-Change-Number: 373992
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Remove non-support_ex FSAL share_op method

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373996 )


Change subject: Remove non-support_ex FSAL share_op method
..

Remove non-support_ex FSAL share_op method

Change-Id: I042a4b3352926fc5484f2aac9fcc0c8b2ff0cf14
Signed-off-by: Frank S. Filz 
---
M src/FSAL/FSAL_GPFS/CMakeLists.txt
M src/FSAL/FSAL_GPFS/fsal_internal.h
D src/FSAL/FSAL_GPFS/fsal_share.c
M src/FSAL/FSAL_GPFS/handle.c
M src/FSAL/FSAL_GPFS/main.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_file.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h
M src/FSAL/Stackable_FSALs/FSAL_NULL/nullfs_methods.h
M src/FSAL/commonlib.c
M src/FSAL/default_methods.c
M src/FSAL/fsal_config.c
M src/include/fsal_api.h
M src/include/fsal_types.h
14 files changed, 0 insertions(+), 169 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/96/373996/1
-- 
To view, visit https://review.gerrithub.io/373996
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I042a4b3352926fc5484f2aac9fcc0c8b2ff0cf14
Gerrit-Change-Number: 373996
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Replace calls to state_share_anonymous_io_start with state_d...

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373988 )


Change subject: Replace calls to state_share_anonymous_io_start with 
state_deleg_conflict
..

Replace calls to state_share_anonymous_io_start with state_deleg_conflict

Anonymous I/O interaction with share reservations is now handled
within the FSALs (in fact, while an anonymous I/O is in progress,
we can't even grant share reservations because the fsal_obj_handle
lock is held for the duration of the I/O).

We still need to check for delegation conflicts; HOWEVER, it should
be noted that since support_ex, delegations really don't work.

Also strip out the remaining share management code.

Change-Id: I54f5a2ad56ce6feb0bcbd52950c741174c1c4b93
Signed-off-by: Frank S. Filz 
---
M src/Protocols/NFS/nfs3_read.c
M src/Protocols/NFS/nfs3_setattr.c
M src/Protocols/NFS/nfs3_write.c
M src/Protocols/NFS/nfs4_op_read.c
M src/Protocols/NFS/nfs4_op_setattr.c
M src/Protocols/NFS/nfs4_op_write.c
M src/SAL/state_share.c
M src/include/sal_functions.h
8 files changed, 22 insertions(+), 289 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/88/373988/1
-- 
To view, visit https://review.gerrithub.io/373988
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I54f5a2ad56ce6feb0bcbd52950c741174c1c4b93
Gerrit-Change-Number: 373988
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Assume support_ex in NLM_SHARE/NLM_UNSHARE handling

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373987 )


Change subject: Assume support_ex in NLM_SHARE/NLM_UNSHARE handling
..

Assume support_ex in NLM_SHARE/NLM_UNSHARE handling

Change-Id: I3a6f021d344ddafd95f1309c8189e91a2faf9aa1
Signed-off-by: Frank S. Filz 
---
M src/Protocols/NLM/nlm_Share.c
M src/Protocols/NLM/nlm_Unshare.c
M src/SAL/nlm_state.c
M src/SAL/state_lock.c
M src/SAL/state_share.c
M src/include/sal_functions.h
6 files changed, 34 insertions(+), 415 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/87/373987/1
-- 
To view, visit https://review.gerrithub.io/373987
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I3a6f021d344ddafd95f1309c8189e91a2faf9aa1
Gerrit-Change-Number: 373987
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Remove non-support_ex FSAL open, reopen, and status methods

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373998 )


Change subject: Remove non-support_ex FSAL open, reopen, and status methods
..

Remove non-support_ex FSAL open, reopen, and status methods

Change-Id: Ib7f6b8ec0a66e753cb40744123c136e7d8153fdb
Signed-off-by: Frank S. Filz 
---
M src/FSAL/FSAL_GPFS/file.c
M src/FSAL/FSAL_GPFS/gpfs_methods.h
M src/FSAL/FSAL_GPFS/handle.c
M src/FSAL/FSAL_PROXY/handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_file.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h
M src/FSAL/Stackable_FSALs/FSAL_NULL/file.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/handle.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/nullfs_methods.h
M src/FSAL/default_methods.c
M src/FSAL/fsal_config.c
M src/include/fsal_api.h
M src/include/fsal_types.h
14 files changed, 2 insertions(+), 275 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/98/373998/1
-- 
To view, visit https://review.gerrithub.io/373998
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib7f6b8ec0a66e753cb40744123c136e7d8153fdb
Gerrit-Change-Number: 373998
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Remove support_ex FSAL method

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373991 )


Change subject: Remove support_ex FSAL method
..

Remove support_ex FSAL method

Change-Id: I09e17c305ba4c90e2699251940649a91882e8c9f
Signed-off-by: Frank S. Filz 
---
M src/FSAL/FSAL_CEPH/main.c
M src/FSAL/FSAL_GLUSTER/main.c
M src/FSAL/FSAL_GPFS/main.c
M src/FSAL/FSAL_MEM/mem_main.c
M src/FSAL/FSAL_PROXY/main.c
M src/FSAL/FSAL_PSEUDO/main.c
M src/FSAL/FSAL_RGW/main.c
M src/FSAL/FSAL_VFS/vfs/main.c
M src/FSAL/FSAL_VFS/xfs/main.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_main.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/main.c
M src/FSAL/default_methods.c
M src/include/fsal_api.h
14 files changed, 0 insertions(+), 169 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/91/373991/1
-- 
To view, visit https://review.gerrithub.io/373991
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I09e17c305ba4c90e2699251940649a91882e8c9f
Gerrit-Change-Number: 373991
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Remove non-support_ex FSAL commit method

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373997 )


Change subject: Remove non-support_ex FSAL commit method
..

Remove non-support_ex FSAL commit method

Change-Id: If40959b04cfb39ef56fabf082a8aa8367bb9c7ba
Signed-off-by: Frank S. Filz 
---
M src/FSAL/FSAL_PROXY/handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_file.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h
M src/FSAL/Stackable_FSALs/FSAL_NULL/file.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/handle.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/nullfs_methods.h
M src/FSAL/default_methods.c
M src/include/fsal_api.h
9 files changed, 0 insertions(+), 123 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/97/373997/1
-- 
To view, visit https://review.gerrithub.io/373997
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: If40959b04cfb39ef56fabf082a8aa8367bb9c7ba
Gerrit-Change-Number: 373997
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Remove non-support_ex create FSAL method

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373994 )


Change subject: Remove non-support_ex create FSAL method
..

Remove non-support_ex create FSAL method

Change-Id: I01405fe6b788e16f7b24f89b9ddd72ed037e623a
Signed-off-by: Frank S. Filz 
---
M src/FSAL/FSAL_GLUSTER/handle.c
M src/FSAL/FSAL_GPFS/gpfs_methods.h
M src/FSAL/FSAL_GPFS/handle.c
M src/FSAL/FSAL_MEM/mem_handle.c
M src/FSAL/FSAL_PROXY/handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/handle.c
M src/FSAL/default_methods.c
M src/include/fsal_api.h
9 files changed, 0 insertions(+), 395 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/94/373994/1
-- 
To view, visit https://review.gerrithub.io/373994
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I01405fe6b788e16f7b24f89b9ddd72ed037e623a
Gerrit-Change-Number: 373994
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_RGW depends on FSAL status method that it doesn't imple...

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373979 )


Change subject: FSAL_RGW depends on FSAL status method that it doesn't implement
..

FSAL_RGW depends on FSAL status method that it doesn't implement

This needs to be fixed somehow... For now, just hard-code the
effective result of calling fsal_is_open, since that function will
be removed along with the status op.

Change-Id: Ic7b37c8026e5fb2aa1698f7eda35c58f2ed12c3f
Signed-off-by: Frank S. Filz 
---
M src/FSAL/FSAL_RGW/handle.c
1 file changed, 1 insertion(+), 1 deletion(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/79/373979/1
-- 
To view, visit https://review.gerrithub.io/373979
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic7b37c8026e5fb2aa1698f7eda35c58f2ed12c3f
Gerrit-Change-Number: 373979
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Strip legacy state_share functions used by NFS4

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373986 )


Change subject: Strip legacy state_share functions used by NFS4
..

Strip legacy state_share functions used by NFS4

state_share_add
state_share_remove
state_share_upgrade
state_share_downgrade

This functionality is now handled inside the support_ex FSALs.

Change-Id: Iab3ec0848624757c0f93944aac2a781f7e1ca601
Signed-off-by: Frank S. Filz 
---
M src/SAL/state_share.c
M src/include/sal_functions.h
2 files changed, 0 insertions(+), 374 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/86/373986/1
-- 
To view, visit https://review.gerrithub.io/373986
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Iab3ec0848624757c0f93944aac2a781f7e1ca601
Gerrit-Change-Number: 373986
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Remove share counters from SAL - WARNING - delegations have ...

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373989 )


Change subject: Remove share counters from SAL - WARNING - delegations have 
been broken
..

Remove share counters from SAL - WARNING - delegations have been broken

These counters are no longer incremented at all. They were ONLY
incremented during anonymous I/O with support_ex.

Due to the lack of use of these counters, delegation conflict
checking has actually been broken. This patch just highlights
that fact.

Change-Id: Id87f54ffe86911604bbcf270ae095c385a04fc25
Signed-off-by: Frank S. Filz 
---
M src/SAL/state_deleg.c
M src/include/sal_data.h
2 files changed, 7 insertions(+), 23 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/89/373989/1
-- 
To view, visit https://review.gerrithub.io/373989
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Id87f54ffe86911604bbcf270ae095c385a04fc25
Gerrit-Change-Number: 373989
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Disable do_lease_op - FSAL lock_op method is not implemented...

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373984 )


Change subject: Disable do_lease_op - FSAL lock_op method is not implemented by 
anyone
..

Disable do_lease_op - FSAL lock_op method is not implemented by anyone

We eventually need a new FSAL lease_op method to make delegations
usable.

Change-Id: I97528bcf148c25d4fb7509c1cd02943e6f1dcc99
Signed-off-by: Frank S. Filz 
---
M src/SAL/state_deleg.c
1 file changed, 5 insertions(+), 1 deletion(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/84/373984/1
-- 
To view, visit https://review.gerrithub.io/373984
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I97528bcf148c25d4fb7509c1cd02943e6f1dcc99
Gerrit-Change-Number: 373984
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Assume support_ex in SAL/nfs4_state.c

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373983 )


Change subject: Assume support_ex in SAL/nfs4_state.c
..

Assume support_ex in SAL/nfs4_state.c

Change-Id: Ic3cf431ccb02f30774ce7d402b50a3ce642f05da
Signed-off-by: Frank S. Filz 
---
M src/SAL/nfs4_state.c
1 file changed, 6 insertions(+), 52 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/83/373983/1
-- 
To view, visit https://review.gerrithub.io/373983
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic3cf431ccb02f30774ce7d402b50a3ce642f05da
Gerrit-Change-Number: 373983
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Always assume support_ex in NFS3 protocol functions

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373981 )


Change subject: Always assume support_ex in NFS3 protocol functions
..

Always assume support_ex in NFS3 protocol functions

Change-Id: Id9fc3e0c6ce76b377a55ba96d086e825c3312685
Signed-off-by: Frank S. Filz 
---
M src/Protocols/NFS/nfs3_create.c
M src/Protocols/NFS/nfs3_read.c
M src/Protocols/NFS/nfs3_write.c
3 files changed, 42 insertions(+), 117 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/81/373981/1
-- 
To view, visit https://review.gerrithub.io/373981
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Id9fc3e0c6ce76b377a55ba96d086e825c3312685
Gerrit-Change-Number: 373981
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Assume support_ex in SAL/state_lock.c

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373985 )


Change subject: Assume support_ex in SAL/state_lock.c
..

Assume support_ex in SAL/state_lock.c

Since support_ex is assumed, there can be no FSALs that don't
support lock owners.

Change-Id: I580d8615ba33e71488956960c9f4bd4f553d511e
Signed-off-by: Frank S. Filz 
---
M src/SAL/state_lock.c
M src/include/sal_functions.h
2 files changed, 31 insertions(+), 237 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/85/373985/1
-- 
To view, visit https://review.gerrithub.io/373985
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I580d8615ba33e71488956960c9f4bd4f553d511e
Gerrit-Change-Number: 373985
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Always assume support_ex in 9p

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373980 )


Change subject: Always assume support_ex in 9p
..

Always assume support_ex in 9p

Change-Id: I1505e5e606d2a360e3c58833d692aa70883fe00f
Signed-off-by: Frank S. Filz 
---
M src/Protocols/9P/9p_lcreate.c
M src/Protocols/9P/9p_lopen.c
M src/Protocols/9P/9p_proto_tools.c
M src/Protocols/9P/9p_read.c
M src/Protocols/9P/9p_remove.c
M src/Protocols/9P/9p_write.c
6 files changed, 71 insertions(+), 149 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/80/373980/1
-- 
To view, visit https://review.gerrithub.io/373980
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I1505e5e606d2a360e3c58833d692aa70883fe00f
Gerrit-Change-Number: 373980
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Assume support_ex in NFS4 protocol functions

2017-08-11 Thread GerritHub
From Frank Filz:

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/373982 )


Change subject: Assume support_ex in NFS4 protocol functions
..

Assume support_ex in NFS4 protocol functions

Change-Id: I0b6b8136f47ac47b03ab1f12436a5d4a428c5f02
Signed-off-by: Frank S. Filz 
---
M src/Protocols/NFS/nfs4_op_close.c
M src/Protocols/NFS/nfs4_op_open.c
M src/Protocols/NFS/nfs4_op_open_downgrade.c
M src/Protocols/NFS/nfs4_op_read.c
M src/Protocols/NFS/nfs4_op_write.c
5 files changed, 30 insertions(+), 724 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/82/373982/1
-- 
To view, visit https://review.gerrithub.io/373982
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I0b6b8136f47ac47b03ab1f12436a5d4a428c5f02
Gerrit-Change-Number: 373982
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


Re: [Nfs-ganesha-devel] How to disable MDCACHE completely in ganesha 2.5.1 from the configuration?

2017-08-11 Thread Jyoti Sharma
The observation that I am trying to troubleshoot is this:

I am setting fsalattr->valid_mask |= ATTRS_POSIX; and fsalattr->fileid to a
valid non-zero value. I print these in the FSAL log and they appear as expected
before going on the wire.
But in the pcap trace I see that the value sent is zero. On the client side, the
trace shows "nfs_get_root: get root inode failed".

Any direction for troubleshooting is appreciated.

Regards.

On Sat, Aug 12, 2017 at 12:11 AM, Jyoti Sharma wrote:

> Hi,
>
> Is there an option to disable caching from the configuration files?
>
> I am debugging an issue and want to rule out that caching is causing
> trouble.
>
> Thanks.
>


[Nfs-ganesha-devel] How to disable MDCACHE completely in ganesha 2.5.1 from the configuration?

2017-08-11 Thread Jyoti Sharma
Hi,

Is there an option to disable caching from the configuration files?

I am debugging an issue and want to rule out that caching is causing
trouble.

Thanks.


Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-11 Thread Daniel Gryniewicz

On 08/11/2017 12:10 PM, Frank Filz wrote:

Right, this is reaping.  I was thinking it was the lane thread.  Reaping only
looks at the single LRU of each queue.  We should probably look at some
small number of each lane, like 2 or 3.
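The change being suggested can be sketched as follows. This is a simplified illustration with stand-in structs, not the actual mdcache_lru.c code, and MAX_PROBE is a hypothetical knob: instead of giving up when the single LRU entry of a lane is unreapable, probe the first few entries of each lane.

```c
#include <assert.h>
#include <stddef.h>

#define LANES 3
#define MAX_PROBE 2	/* hypothetical: "some small number, like 2 or 3" */

struct entry {
	int refcnt;		/* reapable when only the queue's ref remains */
	struct entry *next;	/* toward the MRU end of the lane */
};

struct lane {
	struct entry *lru;	/* least-recently-used end of the queue */
};

/* Probe up to MAX_PROBE entries from the LRU end of each lane instead of
 * only the single LRU entry. */
static struct entry *try_reap(struct lane lanes[LANES])
{
	for (int l = 0; l < LANES; l++) {
		struct entry *e = lanes[l].lru;

		for (int i = 0; i < MAX_PROBE && e != NULL; i++, e = e->next) {
			if (e->refcnt == 1)
				return e;	/* caller unlinks and recycles */
		}
	}
	return NULL;
}
```

With only the single LRU entry probed per lane, one pinned or stateful entry sitting at the LRU position blocks reaping of everything behind it; probing a couple of entries per lane sidesteps that without a full scan.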

Frank, this, in combination with the PIN lane, is probably the issue.


Yea, that would be a problem. Also, consider that v4 will make much less use of 
the global fd under support_ex, so there will be fewer open files to trigger 
lru_run_lane events.

One option would be to promote anything that gets state on it to MRU; persistent state 
should (mostly) only happen if the object is going to be referenced in subsequent NFS 
requests anyway. The one exception would be "touch" creating a file in a single 
compound (I don't know if the client actually succeeds in optimizing that into a 
single compound).


I'm not worried about touch; that's open, write, close, so the refcount 
drops back to 1 and it can be reaped.  I'm more worried about opening a 
small file in an editor (open + 1 read, potentially a single compound) 
and then idling.



Also, the vector operations those research folks have been working on would 
create a scenario where a file is accessed with state and done in a single 
compound, which may not deserve promotion to MRU of L1. (One way to handle 
that is to check, on the put-ref that decrements the refcount to 1, whether 
there is still state on the file at that point; if so, promote to MRU of L1.)
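That put-ref idea could look roughly like the sketch below. This is a hedged illustration with stand-in types, not the real mdcache code; the real implementation would splice the entry within the lane queues under the lane lock rather than just flipping a field.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in positions within the LRU; the real code tracks queue membership. */
enum lru_pos { LRU_L1_LRU, LRU_L1_MRU, LRU_L2 };

struct mdc_entry {
	int refcnt;
	bool has_state;		/* persistent NFSv4/NLM state attached */
	enum lru_pos pos;
};

/* On the release that drops the refcount to 1, keep stateful entries warm
 * by moving them to the MRU end of L1 so they are not reaped. */
static void put_ref(struct mdc_entry *e)
{
	if (--e->refcnt == 1 && e->has_state)
		e->pos = LRU_L1_MRU;
}
```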


We'll have to make other changes if this happens, I suspect.



Other than objects with persistent state, we SHOULD only have #threads * 2 (or 
maybe 3 at most) active references, so as long as we set up the number of lanes 
and everything right, we should still be able to reap.


I'm not quite sure I believe this.  I'd have to go through things in my 
mind.


Daniel



[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Manpage - Fix installing manpages in RPM

2017-08-11 Thread GerritHub
From Daniel Gryniewicz:

Daniel Gryniewicz has uploaded this change for review. ( 
https://review.gerrithub.io/373956


Change subject: Manpage - Fix installing manpages in RPM
..

Manpage - Fix installing manpages in RPM

Move the manpage install into the files section, out of the install
section.

Also, update a few entries in the manpages.

Change-Id: Ib7d5063f9224125e69d4999624a5d3d0f777fecd
Signed-off-by: Daniel Gryniewicz 
---
M src/doc/man/ganesha-cache-config.rst
M src/doc/man/ganesha-core-config.rst
M src/nfs-ganesha.spec-in.cmake
3 files changed, 14 insertions(+), 11 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/56/373956/1
-- 
To view, visit https://review.gerrithub.io/373956
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib7d5063f9224125e69d4999624a5d3d0f777fecd
Gerrit-Change-Number: 373956
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz 


Re: [Nfs-ganesha-devel] Weekly conference call timing

2017-08-11 Thread Frank Filz
It seems like we have converged on the hour earlier time slot on Tuesdays,
so let's start next week.

Thanks

Frank

> -Original Message-
> From: Frank Filz [mailto:ffilz...@mindspring.com]
> Sent: Wednesday, August 9, 2017 12:48 PM
> To: nfs-ganesha-devel@lists.sourceforge.net
> Subject: [Nfs-ganesha-devel] Weekly conference call timing
> 
> My daughter will be starting a new preschool, possibly as early as August
> 22nd. Unfortunately since it's Monday, Tuesday, Wednesday and I will need
> to drop her off at 9:00 AM Pacific Time, which is right in the middle of
our
> current time slot...
> 
> We could keep the time slot and move to Thursday (or even Friday), or I
> could make it work to do it an hour earlier.
> 
> I'd like to make this work for the largest number of people, so if you
could
> give me an idea of what times DON'T work for you that would be helpful.
> 
> 7:30 AM to 8:30 AM Pacific Time would be:
> 10:30 AM to 11:30 AM Eastern Time
> 4:30 PM to 5:30 PM Paris Time
> 8:00 PM to 9:00 PM Bangalore Time (and 9:00 PM to 10:00 PM when we
> switch back to standard time)
> 
> If there are other time zones we have folks joining from, please let me
know.
> 
> Thanks
> 
> Frank
> 
> 




Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-11 Thread Frank Filz
> Right, this is reaping.  I was thinking it was the lane thread.  Reaping only
> looks at the single LRU of each queue.  We should probably look at some
> small number of each lane, like 2 or 3.
> 
> Frank, this, in combination with the PIN lane, is probably the issue.

Yea, that would be a problem. Also, consider that v4 will make much less use of the 
global fd under support_ex, and thus there will be fewer open files to trigger 
lru_run_lane events.

One option would be to promote anything that got state on it to MRU. Persistent 
state should (mostly) only happen if the object is going to be referenced in 
subsequent NFS requests anyway (the one exception would be a "touch" that creates 
a file in a single compound; I don't know if the client actually succeeds in 
optimizing that into a single compound). Also, the vector operations those 
research folks have been working on would create a scenario where a file is 
accessed with state and done in a single compound, which may not deserve 
promotion to MRU of L1 (but one way to handle that is to check, on the put ref 
that decrements to refcount 1, whether there is state on the file at that point, 
and if so promote to MRU of L1).

Other than objects with persistent state, we SHOULD only have #threads * 2 (or 
maybe 3 at most) active references, so as long as we set up the number of lanes 
and everything right, we should still be able to reap.

Frank

> Daniel
> 
> On 08/11/2017 11:21 AM, Pradeep wrote:
> > Hi Daniel,
> >
> > I'm testing with 2.5.1. I haven't changed those parameters. Those
> > parameters only affect once you are in lru_run_lane(), right? Since
> > the FDs are lower than low-watermark, it never calls lru_run_lane().
> >
> > Thanks,
> > Pradeep
> >
> > On Fri, Aug 11, 2017 at 5:43 AM, Daniel Gryniewicz  > > wrote:
> >
> > Have you set Reaper_Work?  Have you changed LRU_N_Q_LANES?  (and
> > which version of Ganesha?)
> >
> > Daniel
> >
> > On 08/10/2017 07:12 PM, Pradeep wrote:
> >
> > Debugged this a little more. It appears that the entries that
> > can be reaped are not at the LRU position (head) of the L1
> > queue. So those can be free'd later by lru_run(). I don't see it
> > happening either for some reason.
> >
> > (gdb) p LRU[1].L1
> > $29 = {q = {next = 0x7fb459e71960, prev = 0x7fb3ec3c0d30}, id =
> > LRU_ENTRY_L1, size = 260379}
> >
> > head of the list is an entry with refcnt 2; but there are
> > several entries with refcnt 1.
> >
> > (gdb) p *(mdcache_lru_t *)0x7fb459e71960
> > $30 = {q = {next = 0x7fb43ddea8a0, prev = 0x7d68a0 },
> > qid = LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 2}
> > (gdb) p *(mdcache_lru_t *)0x7fb43ddea8a0
> > $31 = {q = {next = 0x7fb3f041f9a0, prev = 0x7fb459e71960}, qid =
> > LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}
> > (gdb) p *(mdcache_lru_t *)0x7fb3f041f9a0
> > $32 = {q = {next = 0x7fb466960200, prev = 0x7fb43ddea8a0}, qid =
> > LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}
> > (gdb) p *(mdcache_lru_t *)0x7fb466960200
> > $33 = {q = {next = 0x7fb451e20570, prev = 0x7fb3f041f9a0}, qid =
> > LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
> >
> > The entries with refcnt 1 are moved to L2 by the background
> > thread (lru_run). However it does this only if the open file count
> > is greater than low water mark. In my case, the open_fd_count is
> > not high; so lru_run() doesn't call lru_run_lane() to demote
> > those entries to L2. What is the best approach to handle this
> > scenario?
> >
> > Thanks,
> > Pradeep
> >
> >
> >
> > On Mon, Aug 7, 2017 at 6:08 AM, Daniel Gryniewicz
> > 
> > >> wrote:
> >
> >  It never has been.  In cache_inode, a pin-ref kept it from
> > being
> >  reaped, now any ref beyond 1 keeps it.
> >
> >  On Fri, Aug 4, 2017 at 1:31 PM, Frank Filz
> > 
> >   > >> wrote:
> >   >> I'm hitting a case where mdcache keeps growing well
> > beyond the
> >  high water
> >   >> mark. Here is a snapshot of the lru_state:
> >   >>
> >   >> 1 = {entries_hiwat = 10, entries_used = 2306063,
> > chunks_hiwat =
> >   > 10,
> >   >> chunks_used = 16462,
> >   >>
> >   >> It has grown to 2.3 million entries and each entry is
> > ~1.6K.
> >   >>
> >   >> I looked at the first entry in lane 0, L1 queue:
> >   >>
> 

Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-11 Thread Daniel Gryniewicz
Right, this is reaping.  I was thinking it was the lane thread.  Reaping 
only looks at the single LRU of each queue.  We should probably look at 
some small number of each lane, like 2 or 3.


Frank, this, in combination with the PIN lane, is probably the issue.

Daniel

On 08/11/2017 11:21 AM, Pradeep wrote:

Hi Daniel,

I'm testing with 2.5.1. I haven't changed those parameters. Those 
parameters only affect once you are in lru_run_lane(), right? Since the 
FDs are lower than low-watermark, it never calls lru_run_lane().


Thanks,
Pradeep

On Fri, Aug 11, 2017 at 5:43 AM, Daniel Gryniewicz > wrote:


Have you set Reaper_Work?  Have you changed LRU_N_Q_LANES?  (and
which version of Ganesha?)

Daniel

On 08/10/2017 07:12 PM, Pradeep wrote:

Debugged this a little more. It appears that the entries that
can be reaped are not at the LRU position (head) of the L1
queue. So those can be free'd later by lru_run(). I don't see it
happening either for some reason.

(gdb) p LRU[1].L1
$29 = {q = {next = 0x7fb459e71960, prev = 0x7fb3ec3c0d30}, id =
LRU_ENTRY_L1, size = 260379}

head of the list is an entry with refcnt 2; but there are
several entries with refcnt 1.

(gdb) p *(mdcache_lru_t *)0x7fb459e71960
$30 = {q = {next = 0x7fb43ddea8a0, prev = 0x7d68a0 },
qid = LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 2}
(gdb) p *(mdcache_lru_t *)0x7fb43ddea8a0
$31 = {q = {next = 0x7fb3f041f9a0, prev = 0x7fb459e71960}, qid =
LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}
(gdb) p *(mdcache_lru_t *)0x7fb3f041f9a0
$32 = {q = {next = 0x7fb466960200, prev = 0x7fb43ddea8a0}, qid =
LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}
(gdb) p *(mdcache_lru_t *)0x7fb466960200
$33 = {q = {next = 0x7fb451e20570, prev = 0x7fb3f041f9a0}, qid =
LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}

The entries with refcnt 1 are moved to L2 by the background
thread (lru_run). However it does this only if the open file count
is greater than low water mark. In my case, the open_fd_count is
not high; so lru_run() doesn't call lru_run_lane() to demote
those entries to L2. What is the best approach to handle this
scenario?

Thanks,
Pradeep



On Mon, Aug 7, 2017 at 6:08 AM, Daniel Gryniewicz

>> wrote:

 It never has been.  In cache_inode, a pin-ref kept it from
being
 reaped, now any ref beyond 1 keeps it.

 On Fri, Aug 4, 2017 at 1:31 PM, Frank Filz

 >> wrote:
  >> I'm hitting a case where mdcache keeps growing well
beyond the
 high water
  >> mark. Here is a snapshot of the lru_state:
  >>
  >> 1 = {entries_hiwat = 10, entries_used = 2306063,
chunks_hiwat =
  > 10,
  >> chunks_used = 16462,
  >>
  >> It has grown to 2.3 million entries and each entry is
~1.6K.
  >>
  >> I looked at the first entry in lane 0, L1 queue:
  >>
  >> (gdb) p LRU[0].L1
  >> $9 = {q = {next = 0x7fad64256f00, prev =
0x7faf21a1bc00}, id =
  >> LRU_ENTRY_L1, size = 254628}
  >> (gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
  >> $10 = (mdcache_entry_t *) 0x7fad64256b00
  >> (gdb) p $10->lru
  >> $11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0
}, qid =
  >> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
  >> (gdb) p $10->fh_hk.inavl
  >> $13 = true
  >
  > The refcount 2 prevents reaping.
  >
  > There could be a refcount leak.
  >
  > Hmm, though, I thought the entries_hwmark was a hard
limit, guess
 not...
  >
  > Frank
  >
  >> Lane 1:
  >> (gdb) p LRU[1].L1
  >> $18 = {q = {next = 0x7fad625c0300, prev =
0x7faec08c5100}, id =
  >> LRU_ENTRY_L1, size = 253006}
  >> (gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
  >> $21 = (mdcache_entry_t *) 0x7fad625bff00
  >> (gdb) p $21->lru
  >> $22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0
}, qid =
  >> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
  >>
  >> (gdb) p $21->fh_hk.inavl
  >> $24 

Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-11 Thread Matt Benjamin
It's not supposed to, as presently defined, right (scan resistance)?

Matt

On Fri, Aug 11, 2017 at 11:48 AM, Daniel Gryniewicz  wrote:
> On 08/11/2017 09:21 AM, Frank Filz wrote:
>>>
>>> That seems overkill to me.  How many strategies would we support (and
>>> test)?
>>>
>>> Part of the problem is that we've drastically changed how FDs are
>>> handled.
>>> We need to rethink how LRU should work in that context, I think.
>>
>>
>> I wonder also if taking pinning out of the equation (which moved cache
>> objects that had persistent state on them into an entirely separate queue)
>> has had an effect.
>
>
> Could be.
>
>> Hopefully those objects get quickly promoted to MRU of L1
>> (since they should have multiple NFS requests against them).
>
>
> Hmmm... This raises an interesting point.  Yes, more operations should
> happen, but the primary ref for the handle (taken by NFS4_OP_PUTFH) will be
> once per compound, not once per op.  So it would take multiple compounds to
> advance to the MRU of L1.  Not a problem for multiple reads or writes, but
> if a file is opened and read/written once, and then left alone, it won't
> advance to the MRU of L1.
>
> Daniel
>



Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-11 Thread Daniel Gryniewicz

On 08/11/2017 09:21 AM, Frank Filz wrote:

That seems overkill to me.  How many strategies would we support (and
test)?

Part of the problem is that we've drastically changed how FDs are handled.
We need to rethink how LRU should work in that context, I think.


I wonder also if taking pinning out of the equation (which moved cache
objects that had persistent state on them into an entirely separate queue)
has had an effect.


Could be.


Hopefully those objects get quickly promoted to MRU of L1
(since they should have multiple NFS requests against them).


Hmmm... This raises an interesting point.  Yes, more operations should 
happen, but the primary ref for the handle (taken by NFS4_OP_PUTFH) will 
be once per compound, not once per op.  So it would take multiple 
compounds to advance to the MRU of L1.  Not a problem for multiple reads 
or writes, but if a file is opened and read/written once, and then left 
alone, it won't advance to the MRU of L1.


Daniel




Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-11 Thread Pradeep
Hi Daniel,

I'm testing with 2.5.1. I haven't changed those parameters. Those
parameters only affect once you are in lru_run_lane(), right? Since the FDs
are lower than low-watermark, it never calls lru_run_lane().

Thanks,
Pradeep

On Fri, Aug 11, 2017 at 5:43 AM, Daniel Gryniewicz  wrote:

> Have you set Reaper_Work?  Have you changed LRU_N_Q_LANES?  (and which
> version of Ganesha?)
>
> Daniel
>
> On 08/10/2017 07:12 PM, Pradeep wrote:
>
>> Debugged this a little more. It appears that the entries that can be
>> reaped are not at the LRU position (head) of the L1 queue. So those can be
>> free'd later by lru_run(). I don't see it happening either for some reason.
>>
>> (gdb) p LRU[1].L1
>> $29 = {q = {next = 0x7fb459e71960, prev = 0x7fb3ec3c0d30}, id =
>> LRU_ENTRY_L1, size = 260379}
>>
>> head of the list is an entry with refcnt 2; but there are several entries
>> with refcnt 1.
>>
>> (gdb) p *(mdcache_lru_t *)0x7fb459e71960
>> $30 = {q = {next = 0x7fb43ddea8a0, prev = 0x7d68a0 }, qid =
>> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 2}
>> (gdb) p *(mdcache_lru_t *)0x7fb43ddea8a0
>> $31 = {q = {next = 0x7fb3f041f9a0, prev = 0x7fb459e71960}, qid =
>> LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}
>> (gdb) p *(mdcache_lru_t *)0x7fb3f041f9a0
>> $32 = {q = {next = 0x7fb466960200, prev = 0x7fb43ddea8a0}, qid =
>> LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}
>> (gdb) p *(mdcache_lru_t *)0x7fb466960200
>> $33 = {q = {next = 0x7fb451e20570, prev = 0x7fb3f041f9a0}, qid =
>> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
>>
>> The entries with refcnt 1 are moved to L2 by the background thread
>> (lru_run). However it does this only if the open file count is greater than
>> low water mark. In my case, the open_fd_count is not high; so lru_run()
>> doesn't call lru_run_lane() to demote those entries to L2. What is the best
>> approach to handle this scenario?
>>
>> Thanks,
>> Pradeep
>>
>>
>>
>> On Mon, Aug 7, 2017 at 6:08 AM, Daniel Gryniewicz > > wrote:
>>
>> It never has been.  In cache_inode, a pin-ref kept it from being
>> reaped, now any ref beyond 1 keeps it.
>>
>> On Fri, Aug 4, 2017 at 1:31 PM, Frank Filz > > wrote:
>>  >> I'm hitting a case where mdcache keeps growing well beyond the
>> high water
>>  >> mark. Here is a snapshot of the lru_state:
>>  >>
>>  >> 1 = {entries_hiwat = 10, entries_used = 2306063, chunks_hiwat
>> =
>>  > 10,
>>  >> chunks_used = 16462,
>>  >>
>>  >> It has grown to 2.3 million entries and each entry is ~1.6K.
>>  >>
>>  >> I looked at the first entry in lane 0, L1 queue:
>>  >>
>>  >> (gdb) p LRU[0].L1
>>  >> $9 = {q = {next = 0x7fad64256f00, prev = 0x7faf21a1bc00}, id =
>>  >> LRU_ENTRY_L1, size = 254628}
>>  >> (gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
>>  >> $10 = (mdcache_entry_t *) 0x7fad64256b00
>>  >> (gdb) p $10->lru
>>  >> $11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0 }, qid =
>>  >> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
>>  >> (gdb) p $10->fh_hk.inavl
>>  >> $13 = true
>>  >
>>  > The refcount 2 prevents reaping.
>>  >
>>  > There could be a refcount leak.
>>  >
>>  > Hmm, though, I thought the entries_hwmark was a hard limit, guess
>> not...
>>  >
>>  > Frank
>>  >
>>  >> Lane 1:
>>  >> (gdb) p LRU[1].L1
>>  >> $18 = {q = {next = 0x7fad625c0300, prev = 0x7faec08c5100}, id =
>>  >> LRU_ENTRY_L1, size = 253006}
>>  >> (gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
>>  >> $21 = (mdcache_entry_t *) 0x7fad625bff00
>>  >> (gdb) p $21->lru
>>  >> $22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0 },
>> qid =
>>  >> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
>>  >>
>>  >> (gdb) p $21->fh_hk.inavl
>>  >> $24 = true
>>  >>
>>  >> As per LRU_ENTRY_RECLAIMABLE(), these entry should be
>> reclaimable. Not
>>  >> sure why it is not able to claim it. Any ideas?
>>  >>
>>  >> Thanks,
>>  >> Pradeep
>>  >>
>>  >>
>>  >

Re: [Nfs-ganesha-devel] crash in makefd_xprt()

2017-08-11 Thread William Allen Simpson

On 8/11/17 8:35 AM, Matt Benjamin wrote:

On Fri, Aug 11, 2017 at 8:26 AM, William Allen Simpson
 wrote:

On 8/11/17 2:29 AM, Malahal Naineni wrote:


Following confirms that Thread1 (TCP) is trying to use the same "rec" as
Thread42 (UDP), it is easy to reproduce on the customer system!


There are 2 duplicated fd indexed trees, not well coordinated.  My 2015
code to fix this went in Feb/Mar timeframe for Ganesha v2.5/ntirpc 1.5.


I didn't recall this reached 2.5, independent of the current rework.
(offhand, what branch shows the tree consolidation in 2015?)  


Remove now obsolete rpc_dplx
https://github.com/nfs-ganesha/ntirpc/commit/8f46e5063db28579a2b6b050400684c16d972a87

And the extensive series of patches before that, making it possible to
remove that extra fd tree.

Originally, that was one big patch from circa April 2015 (or earlier),
but it was split into 30+ smaller patches to make it easier to review.



In any
case though, perhaps we should start from pulling up the ntirpc
experimentally.


Not likely to work.  The API changed a lot.



Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-11 Thread Matt Benjamin
initially, just a couple--but the strategizing step forces an internal
api to develop.

Matt

On Fri, Aug 11, 2017 at 8:49 AM, Daniel Gryniewicz  wrote:
> That seems overkill to me.  How many strategies would we support (and test)?
>
> Part of the problem is that we've drastically changed how FDs are handled.
> We need to rethink how LRU should work in that context, I think.
>
> Daniel
>
>
> On 08/10/2017 07:59 PM, Matt Benjamin wrote:
>>
>> I think the particular thresholds of opens and inode count are
>> interacting in a way we'd like to change.  I think it might make sense
>> to delegate the various decision points to maybe a vector of strategy
>> functions, letting more varied approaches compete?
>>
>> Matt
>>
>> On Thu, Aug 10, 2017 at 7:12 PM, Pradeep  wrote:
>>>
>>> Debugged this a little more. It appears that the entries that can be
>>> reaped
>>> are not at the LRU position (head) of the L1 queue. So those can be
>>> free'd
>>> later by lru_run(). I don't see it happening either for some reason.
>>>
>>> (gdb) p LRU[1].L1
>>> $29 = {q = {next = 0x7fb459e71960, prev = 0x7fb3ec3c0d30}, id =
>>> LRU_ENTRY_L1, size = 260379}
>>>
>>> head of the list is an entry with refcnt 2; but there are several entries
>>> with refcnt 1.
>>>
>>> (gdb) p *(mdcache_lru_t *)0x7fb459e71960
>>> $30 = {q = {next = 0x7fb43ddea8a0, prev = 0x7d68a0 }, qid =
>>> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 2}
>>> (gdb) p *(mdcache_lru_t *)0x7fb43ddea8a0
>>> $31 = {q = {next = 0x7fb3f041f9a0, prev = 0x7fb459e71960}, qid =
>>> LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}
>>> (gdb) p *(mdcache_lru_t *)0x7fb3f041f9a0
>>> $32 = {q = {next = 0x7fb466960200, prev = 0x7fb43ddea8a0}, qid =
>>> LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}
>>> (gdb) p *(mdcache_lru_t *)0x7fb466960200
>>> $33 = {q = {next = 0x7fb451e20570, prev = 0x7fb3f041f9a0}, qid =
>>> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
>>>
>>> The entries with refcnt 1 are moved to L2 by the background thread
>>> (lru_run). However it does this only if the open file count is greater than
>>> low water mark. In my case, the open_fd_count is not high; so lru_run()
>>> doesn't call lru_run_lane() to demote those entries to L2. What is the
>>> best
>>> approach to handle this scenario?
>>>
>>> Thanks,
>>> Pradeep
>>>
>>>
>>>
>>> On Mon, Aug 7, 2017 at 6:08 AM, Daniel Gryniewicz 
>>> wrote:


 It never has been.  In cache_inode, a pin-ref kept it from being
 reaped, now any ref beyond 1 keeps it.

 On Fri, Aug 4, 2017 at 1:31 PM, Frank Filz 
 wrote:
>>
>> I'm hitting a case where mdcache keeps growing well beyond the high
>> water
>> mark. Here is a snapshot of the lru_state:
>>
>> 1 = {entries_hiwat = 10, entries_used = 2306063, chunks_hiwat =
>
> 10,
>>
>> chunks_used = 16462,
>>
>> It has grown to 2.3 million entries and each entry is ~1.6K.
>>
>> I looked at the first entry in lane 0, L1 queue:
>>
>> (gdb) p LRU[0].L1
>> $9 = {q = {next = 0x7fad64256f00, prev = 0x7faf21a1bc00}, id =
>> LRU_ENTRY_L1, size = 254628}
>> (gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
>> $10 = (mdcache_entry_t *) 0x7fad64256b00
>> (gdb) p $10->lru
>> $11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0 }, qid =
>> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
>> (gdb) p $10->fh_hk.inavl
>> $13 = true
>
>
> The refcount 2 prevents reaping.
>
> There could be a refcount leak.
>
> Hmm, though, I thought the entries_hwmark was a hard limit, guess
> not...
>
> Frank
>
>> Lane 1:
>> (gdb) p LRU[1].L1
>> $18 = {q = {next = 0x7fad625c0300, prev = 0x7faec08c5100}, id =
>> LRU_ENTRY_L1, size = 253006}
>> (gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
>> $21 = (mdcache_entry_t *) 0x7fad625bff00
>> (gdb) p $21->lru
>> $22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0 }, qid =
>> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
>>
>> (gdb) p $21->fh_hk.inavl
>> $24 = true
>>
>> As per LRU_ENTRY_RECLAIMABLE(), these entry should be reclaimable. Not
>> sure why it is not able to claim it. Any ideas?
>>
>> Thanks,
>> Pradeep
>>
>>

Re: [Nfs-ganesha-devel] mdcache growing beyond limits.

2017-08-11 Thread Daniel Gryniewicz
Have you set Reaper_Work?  Have you changed LRU_N_Q_LANES?  (and which 
version of Ganesha?)


Daniel

On 08/10/2017 07:12 PM, Pradeep wrote:
Debugged this a little more. It appears that the entries that can be 
reaped are not at the LRU position (head) of the L1 queue. So those can 
be free'd later by lru_run(). I don't see it happening either for some 
reason.


(gdb) p LRU[1].L1
$29 = {q = {next = 0x7fb459e71960, prev = 0x7fb3ec3c0d30}, id = 
LRU_ENTRY_L1, size = 260379}


head of the list is an entry with refcnt 2; but there are several 
entries with refcnt 1.


(gdb) p *(mdcache_lru_t *)0x7fb459e71960
$30 = {q = {next = 0x7fb43ddea8a0, prev = 0x7d68a0 }, qid = 
LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 2}

(gdb) p *(mdcache_lru_t *)0x7fb43ddea8a0
$31 = {q = {next = 0x7fb3f041f9a0, prev = 0x7fb459e71960}, qid = 
LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}

(gdb) p *(mdcache_lru_t *)0x7fb3f041f9a0
$32 = {q = {next = 0x7fb466960200, prev = 0x7fb43ddea8a0}, qid = 
LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 1, cf = 0}

(gdb) p *(mdcache_lru_t *)0x7fb466960200
$33 = {q = {next = 0x7fb451e20570, prev = 0x7fb3f041f9a0}, qid = 
LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}


The entries with refcnt 1 are moved to L2 by the background thread 
(lru_run). However it does this only if the open file count is greater 
than low water mark. In my case, the open_fd_count is not high; so 
lru_run() doesn't call lru_run_lane() to demote those entries to L2. 
What is the best approach to handle this scenario?


Thanks,
Pradeep



On Mon, Aug 7, 2017 at 6:08 AM, Daniel Gryniewicz > wrote:


It never has been.  In cache_inode, a pin-ref kept it from being
reaped, now any ref beyond 1 keeps it.

On Fri, Aug 4, 2017 at 1:31 PM, Frank Filz > wrote:
 >> I'm hitting a case where mdcache keeps growing well beyond the
high water
 >> mark. Here is a snapshot of the lru_state:
 >>
 >> 1 = {entries_hiwat = 10, entries_used = 2306063, chunks_hiwat =
 > 10,
 >> chunks_used = 16462,
 >>
 >> It has grown to 2.3 million entries and each entry is ~1.6K.
 >>
 >> I looked at the first entry in lane 0, L1 queue:
 >>
 >> (gdb) p LRU[0].L1
 >> $9 = {q = {next = 0x7fad64256f00, prev = 0x7faf21a1bc00}, id =
 >> LRU_ENTRY_L1, size = 254628}
 >> (gdb) p (mdcache_entry_t *)(0x7fad64256f00-1024)
 >> $10 = (mdcache_entry_t *) 0x7fad64256b00
 >> (gdb) p $10->lru
 >> $11 = {q = {next = 0x7fad65ea0f00, prev = 0x7d67c0 }, qid =
 >> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 0, cf = 0}
 >> (gdb) p $10->fh_hk.inavl
 >> $13 = true
 >
 > The refcount 2 prevents reaping.
 >
 > There could be a refcount leak.
 >
 > Hmm, though, I thought the entries_hwmark was a hard limit, guess
not...
 >
 > Frank
 >
 >> Lane 1:
 >> (gdb) p LRU[1].L1
 >> $18 = {q = {next = 0x7fad625c0300, prev = 0x7faec08c5100}, id =
 >> LRU_ENTRY_L1, size = 253006}
 >> (gdb) p (mdcache_entry_t *)(0x7fad625c0300 - 1024)
 >> $21 = (mdcache_entry_t *) 0x7fad625bff00
 >> (gdb) p $21->lru
 >> $22 = {q = {next = 0x7fad66fce600, prev = 0x7d68a0 }, qid =
 >> LRU_ENTRY_L1, refcnt = 2, flags = 0, lane = 1, cf = 1}
 >>
 >> (gdb) p $21->fh_hk.inavl
 >> $24 = true
 >>
 >> As per LRU_ENTRY_RECLAIMABLE(), these entry should be
reclaimable. Not
 >> sure why it is not able to claim it. Any ideas?
 >>
 >> Thanks,
 >> Pradeep
 >>
 >>
 >



Re: [Nfs-ganesha-devel] crash in makefd_xprt()

2017-08-11 Thread Matt Benjamin
I didn't recall this reached 2.5, independent of the current rework.
(offhand, what branch shows the tree consolidation in 2015?)  In any
case though, perhaps we should start from pulling up the ntirpc
experimentally.

Matt

On Fri, Aug 11, 2017 at 8:26 AM, William Allen Simpson
 wrote:
> On 8/11/17 2:29 AM, Malahal Naineni wrote:
>>
>> Following confirms that Thread1 (TCP) is trying to use the same "rec" as
>> Thread42 (UDP), it is easy to reproduce on the customer system!
>>
> There are 2 duplicated fd indexed trees, not well coordinated.  My 2015
> code to fix this went in Feb/Mar timeframe for Ganesha v2.5/ntirpc 1.5.



Re: [Nfs-ganesha-devel] crash in makefd_xprt()

2017-08-11 Thread William Allen Simpson

On 8/11/17 2:29 AM, Malahal Naineni wrote:

Following confirms that Thread1 (TCP) is trying to use the same "rec" as 
Thread42 (UDP), it is easy to reproduce on the customer system!


There are 2 duplicated fd indexed trees, not well coordinated.  My 2015
code to fix this went in Feb/Mar timeframe for Ganesha v2.5/ntirpc 1.5.
