Re: [Nfs-ganesha-devel] Announce Push of V2.6-dev.17

2017-11-14 Thread Marc Eshel
I did update the submodules but did not install libntirpc; where do you 
get it?
Thanks, Marc.



From:   Malahal Naineni 
To: Marc Eshel 
Cc: Frank Filz , 
nfs-ganesha-devel@lists.sourceforge.net
Date:   11/14/2017 11:37 AM
Subject:Re: [Nfs-ganesha-devel] Announce Push of V2.6-dev.17



Marc, I just built and loaded V2.6-dev.17. I am able to mount and do "ls" 
from the client. Did you do a "git submodule update" and install 
libntirpc as a separate RPM as well?

Regards, Malahal.
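
For reference, the submodule step being discussed usually looks like the
following (a sketch; paths and the packaging step depend on your checkout and
build layout):

```shell
# Inside an nfs-ganesha checkout:
git checkout V2.6-dev.17
git submodule update --init --recursive   # pulls the matching libntirpc

# If you package libntirpc separately, rebuild that RPM from the updated
# submodule as well, so the installed library matches the new Ganesha.
```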

On Tue, Nov 14, 2017 at 11:06 PM, Marc Eshel  wrote:
I skipped a couple of dev releases, but now I get this when I try to
mount.

(gdb) c
Continuing.
[New Thread 0x71b3b700 (LWP 18716)]
[New Thread 0x7ffe411ce700 (LWP 18717)]

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x71b3b700 (LWP 18716)]
0x75e13989 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56        return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);



From:   "Frank Filz" 
To: 
Date:   11/12/2017 09:36 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.6-dev.17



Branch next

Tag:V2.6-dev.17

Sorry for the delay on this; I forgot to send it before I left the office
on Friday...

NOTE: This merge includes an ntirpc pullup, please update your submodule.

Release Highlights

* ntirpc pullup

* SAL: Various cleanup of state recovery bits

* SAL: allow grace period to be lifted early if all clients have sent
RECLAIM_COMPLETE

* CEPH: do an inode lookup vs. MDS when the Inode is not in cache

* 9P lock: acquire state_lock properly

* Set thread names in FSAL_PROXY and ntirpc initiated threads

* Allow configuration of NFSv4 minor versions.

* Lower message log level for a non-existent user

* Fix cmake failure when /etc/os-release is not present

* GLUSTER: glusterfs_create_export() SEGV for typo ganesha.conf

* handle hosts via libcidr to unify IPv4/IPv6 host/network clients

* Add some detail to config documentation

* NFSv4.1+ return special invalid stateid on close per Section 8.2.3

* Give temp fd in fsal_reopen_obj when verification fails for an fd's
openflags

* GPFS: Set an FD's 'openflags=FSAL_O_CLOSED' when fd=-1 is set

* Various RPC callback and timeout fixes

Signed-off-by: Frank S. Filz 

Contents:

d8e89f7 Frank S. Filz V2.6-dev.17
542ea90 William Allen Simpson Pull up NTIRPC through #91
645f410 Madhu Thorat [GPFS] Set a FDs 'openflags=FSAL_O_CLOSED' when fd=-1
is set
482672a Madhu Thorat Give temp fd in fsal_reopen_obj when verification
fails
for a fd's openflags
05ade07 Frank S. Filz NFSv4.1+ return special invalid stateid on close per
Section 8.2.3
9bd00bd Frank S. Filz Add some detail to config documentation
5ca449d Jan-Martin Rämer handle hosts via libcidr to unify IPv4/IPv4
host/network clients
0819dc4 Kaleb S. KEITHLEY fsal_gluster: glusterfs_create_export() SEGV for
typo ganesha.conf
a5da1a0 Malahal Naineni Fix cmake failure when /etc/os-release is not
present
ed4bace Malahal Naineni Lower message log level for a non-existent user
abcd932 Malahal Naineni Allow configuration of NFSv4 minor versions.
1a9f1e0 Dominique Martinet FSAL_PROXY: set thread names for logging
3b857f1 Dominique Martinet 9P lock: aquire state_lock properly
302ab52 Jeff Layton SAL: allow grace period to be lifted early if all
clients have sent RECLAIM_COMPLETE
476c206 Jeff Layton FSAL_CEPH: do an inode lookup vs. MDS when the Inode
is
not in cache
08a953a Jeff Layton recovery_fs: ensure we free the cid_recov_tag when
removing the entry
0f34e77 Jeff Layton recovery_fs: remove unnecessary conditionals from
fs_read_recov_clids_impl
875495c Jeff Layton NFSv4: remove stable-storage client record on
DESTROY_CLIENTID
35ba7dd Jeff Layton NFSv4: make cid_allow_reclaim a bool
e016e58 Jeff Layton SAL: fix locking around clnt->cid_recov_tag
3e228c8 Jeff Layton SAL: remove check_clid recovery operation
e132294 Jeff Layton SAL: clean up nfs4_check_deleg_reclaim a bit


---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus




--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org!
http://sdm.link/slashdot


___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


Re: [Nfs-ganesha-devel] Announce Push of V2.6-dev.10

2017-09-20 Thread Marc Eshel
Sorry, the problem is gone after reboot.
Thanks, Marc.



From:   William Allen Simpson 
To: Marc Eshel , Frank Filz 

Cc: nfs-ganesha-devel@lists.sourceforge.net
Date:   09/20/2017 03:39 PM
Subject:Re: [Nfs-ganesha-devel] Announce Push of V2.6-dev.10



It's harder, as that message block appears twice in the code.

I've pushed a patch to linuxbox2 ntirpc branch was16:

 svc_vc_wait to distinguish MSG_WAITALL

Try that with Default_Log_Level = FULL_DEBUG;

We'll get to the bottom of this.
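
For reference, that log level is set in the LOG block of ganesha.conf; a
minimal fragment (per-component overrides also exist in that block):

```
LOG {
    Default_Log_Level = FULL_DEBUG;
}
```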








Re: [Nfs-ganesha-devel] Announce Push of V2.6-dev.10

2017-09-20 Thread Marc Eshel
Just updated to V2.6-dev.10 after skipping a couple of versions, and now I
cannot mount; I see this message in the log.

20/09/2017 10:52:30 : epoch 59c2aac5 : bear105 : 
ganesha.nfsd-25633[0x7f7a81da1ee0] rpc :TIRPC :EVENT :svc_vc_recv: 
0x7f7a6c000ab0 fd 18 recv errno 0 (will set dead)

Marc.



From:   "Frank Filz" 
To: 
Date:   09/19/2017 12:16 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.6-dev.10



Branch next

Tag:V2.6-dev.10

NOTE: This release contains an ntirpc pullup, please update your 
submodules

Release Highlights

* ntirpc pullup

Signed-off-by: Frank S. Filz 

Contents:

2a86360 Frank S. Filz V2.6-dev.10
32db433 William Allen Simpson Pull up NTIRPC #75














Re: [Nfs-ganesha-devel] Proposed backports for 2.5.2

2017-08-09 Thread Marc Eshel
Can you also add this? It is a minor change in the value of
FATTR4_XATTR_SUPPORT.
Thanks, Marc.

commit ac16ad42a514de218debc9b21e92cf9ca353ccc4
Author: Marc Eshel 
Date:   Sun Jul 30 12:50:14 2017 -0700

update FATTR4_XATTR_SUPPORT
 
update FATTR4_XATTR_SUPPORT to 82 based on latest 
draft-ietf-nfsv4-xattrs-06.txt
Push to version 2.5
 
Change-Id: Id9afe2cedc9de080e5008a4a8bb502f47ebd0116
Signed-off-by: Marc Eshel 



From:   "Frank Filz" 
To: , "'Matt Benjamin'" 
Cc: 'nfs-ganesha-devel' 
Date:   08/09/2017 09:05 AM
Subject:Re: [Nfs-ganesha-devel] Proposed backports for 2.5.2



> On 08/09/2017 11:29 AM, Frank Filz wrote:
> > Candidates not merged into 2.6 yet:
> >
> > Fix rgw_mount2 check when RGW not installed Daniel
> Gryniewicz
> > CMake - Have 'make dist' generate the correct tarball name Daniel
> > Gryniewicz
> > Build libntirpc package when not using system ntirpc Daniel
> > Gryniewicz
> 
> These could be useful to upstream Ceph for building.  However, it's not
> vital.
> For now, they have to have a separate branch with patches to do this.

If it helps upstream Ceph for building, that seems like a good reason to 
take them.

Frank












Re: [Nfs-ganesha-devel] Announce Push of V2.5-rc4

2017-05-05 Thread Marc Eshel
Another bug that was just introduced on the up-call path; this time op_ctx
is good but op_ctx->ctx_export is NULL.
Marc.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fba8356b280 (LWP 10471)]
0x0053eb6e in mdc_check_mapping (entry=0x7fb874001480) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:258
258 (int32_t) op_ctx->ctx_export->export_id)
(gdb) where
#0  0x0053eb6e in mdc_check_mapping (entry=0x7fb874001480) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:258
#1  0x0054077e in mdcache_find_keyed (key=0x7fba83569ce0, 
entry=0x7fba83569db8) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:856
#2  0x005324e3 in mdc_up_invalidate (export=0x1829030, 
handle=0x7fba83569f50, flags=15) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_up.c:59
#3  0x7fba8357b757 in GPFSFSAL_UP_Thread (Arg=0x1827b40) at 
/nas/ganesha/new-ganesha/src/FSAL/FSAL_GPFS/fsal_up.c:318
#4  0x7fba866f3df3 in start_thread (arg=0x7fba8356b280) at 
pthread_create.c:308
#5  0x7fba85db43dd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) p op_ctx
$1 = (struct req_op_context *) 0x7fba83569d00
(gdb) p op_ctx->ctx_export
$2 = (struct gsh_export *) 0x0



From:   "Frank Filz" 
To: 
Date:   05/05/2017 04:47 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.5-rc4



Branch next

Tag:V2.5-rc4

NOTE: This merge has an ntirpc pullup - please update your submodule

Release Highlights

* fix various races and issues with exports and mdcache, esp. unexport

* make sure rename and link check handles for different exports (XDEV)

* Fix infinite loop in avl_dirent_set_deleted

* Handle junctions in rename/link/unlink

* Logging: limit the number of logrotated files

* Reset stat counters using dbus interface

* gpfs: Remove unused REOPEN_BY_FD

* dispatch: call SVC_STAT before svcerr_*

* ntirpc pullup:  take RPC and XDR decode length fixes

Signed-off-by: Frank S. Filz 

Contents:

d876449 Frank S. Filz V2.5-rc4
43863cf Matt Benjamin dispatch: call SVC_STAT before svcerr_*
1437aa0 Matt Benjamin ntirpc pullup:  take RPC and XDR decode length fixes
15d1e66 Malahal Naineni gpfs: Remove unused REOPEN_BY_FD
9b4739f Sachin Punadikar Reset stat counters using dbus interface
052e82a Jiffin Tony Thottan Logging : Limit the no of logrotated files
d78535a Daniel Gryniewicz Handle junctions in rename/link/unlink
b4f8214 Frank S. Filz Make NFS v4 and 9P rename and link check for XDEV
across exports
e259e81 Frank S. Filz Use saved_export when cleaning up saved_object
(SavedFH).
1c70180 Frank S. Filz Fix infinite loop in avl_dirent_set_deleted
ceb4aed Frank S. Filz Fixup unexport/lru_run_lane/mdcache_lru_clean races
dc83243 Frank S. Filz Fix race between mdcache_unexport and
mdc_check_mapping
5374f6f Frank S. Filz Don't double deconstruct new entry in
mdcache_new_entry












Re: [Nfs-ganesha-devel] Announce Push of V2.5-rc4

2017-05-05 Thread Marc Eshel
Frank,
Do I have to re-post my patches or can they go into rc5 ?
Marc.



From:   "Frank Filz" 
To: 
Date:   05/05/2017 04:47 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.5-rc4













Re: [Nfs-ganesha-devel] Announce Push of V2.5-rc3

2017-05-04 Thread Marc Eshel
Hi Frank,

You recently added 
commit 65599c645a81edbe8b953cc29ac29978671a11be
Author: Frank S. Filz 
Date:   Wed Mar 1 17:28:18 2017 -0800

Fill in FATTR4_SUPPORTED_ATTRS from FSAL fs_supported_attrs 


but when posix2fsal_attributes() is called from the up thread, op_ctx is
NULL.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x73442280 (LWP 10492)]
0x0041d36f in posix2fsal_attributes (buffstat=0x73441020, 
fsalattr=0x73440e40)
at /nas/ganesha/new-ganesha/src/FSAL/fsal_convert.c:422
422 fsalattr->supported = 
op_ctx->fsal_export->exp_ops.fs_supported_attrs(
(gdb) where
#0  0x0041d36f in posix2fsal_attributes (buffstat=0x73441020, 
fsalattr=0x73440e40)
at /nas/ganesha/new-ganesha/src/FSAL/fsal_convert.c:422
#1  0x734527e2 in GPFSFSAL_UP_Thread (Arg=0x7ecb40) at 
/nas/ganesha/new-ganesha/src/FSAL/FSAL_GPFS/fsal_up.c:342
#2  0x765cadf3 in start_thread (arg=0x73442280) at 
pthread_create.c:308
#3  0x75c8b3dd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) p op_ctx
$1 = (struct req_op_context *) 0x0






From:   "Frank Filz" 
To: 
Date:   04/28/2017 03:47 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.5-rc3



Branch next

Tag:V2.5-rc3

Release Highlights

* fix nlm and nsm refcounts

* make state_owner hash table work like other hash tables

* Fix F_GETLK/SETLK/SETLKW having F_GETLK64/SETLK64/SETLKW64 value

* Don't call state_share_remove with support_ex

* fixup use of handles into wire, host, key, and fsal object (3 patches)

* FSAL_MEM - Check for valid attrs. CID #161621

* Fix NFSv4 messages with NFS_V4 component

* FSAL_GLUSTER - initialize buffxstat. CID 161510

* Fix borked MDCACHE LTTng tracepoints

* Fix CMake configuration so ganesha_conf is installed correctly.

* fix some things in FSAL_GPFS

* logging (non-root): move log files from /var/log to /var/log/ganesha/

Signed-off-by: Frank S. Filz 

Contents:

1c71400 Frank S. Filz V2.5-rc3
58877b0 Kaleb S. KEITHLEY logging (non-root): move log files from /var/log
to /var/log/ganesha/
afa662e Swen Schillig [FSAL_GPFS] Fix root_fd gathering on export.
5eac292 Swen Schillig [FSAL_GPFS] Remove dead code.
36b5776 Swen Schillig [FSAL_GPFS] Remove duplicate code.
4fa6683 Wyllys Ingersoll Fix CMake configuration so ganesha_conf is
installed correctly.
06a274f Daniel Gryniewicz Fix borked MDCACHE LTTng tracepoints
5d8126b Daniel Gryniewicz FSAL_GLUSTER - initialize buffxstat. CID 161510
872f174 Malahal Naineni Fix NFSv4 messages with NFS_V4 component
870bd0a Malahal Naineni Replace mdcache_key_t in struct fsdir by 
host-handle
292f3b0 Daniel Gryniewicz Clean up handle method naming in FSAL API
ab9d2f1 Daniel Gryniewicz FSAL_MEM - Check for valid attrs. CID #161621
ffd225b Malahal Naineni Add handle_to_key export operation.
c362183 Malahal Naineni Don't call state_share_remove with support_ex
4fe2a3a Malahal Naineni Fix F_GETLK/SETLK/SETLKW having
F_GETLK64/SETLK64/SETLKW64 value
b049eb9 Malahal Naineni Convert state_owner hash table to behave like 
others
52e0e12 Malahal Naineni Fix nlm state refcount going up from zero
acb632c Malahal Naineni Fix nsm client refcount going up from zero
feb12d2 Malahal Naineni Fix nlm client refcount going up from zero












Re: [Nfs-ganesha-devel] seek2 useless ?

2017-02-16 Thread Marc Eshel
GPFS has a few 2.4 functions implemented, including SEEK.
Marc.



From:   Frank Filz 
To: patrice.lu...@cea.fr
Cc: MARTINET Dominique , 
nfs-ganesha-devel@lists.sourceforge.net, LEIBOVICI Thomas 
, DENIEL Philippe 158570 
Date:   02/16/2017 10:54 AM
Subject:Re: [Nfs-ganesha-devel] seek2 useless ?



On 02/16/2017 02:41 AM, LUCAS Patrice wrote:
> Hi Frank,
>
>
> After a quick search, it seems that seek2 is never called and
> implemented only by the two stackable fsals, mdcache and nullfs.
>
>
> Can you confirm that it is useless to implement seek2 in the
> support_ex version of FSAL_PROXY ?
>
>
> Best regards,
>
Seek2 is there for NFS v4.2; it probably won't get much implementation
until we have more 4.2 support.


Frank










[Nfs-ganesha-devel] gpfs.h update

2017-02-07 Thread Marc Eshel
I am trying to update gpfs.h, but since it does not comply with the
Ganesha code format it is rejected. I have to keep it identical to the one
used by the GPFS source code; is there a way to force it in?
Thanks, Marc.




Re: [Nfs-ganesha-devel] Fwd: Ganesha logo / hex sticker

2017-02-07 Thread Marc Eshel
I see that this mailing list will not let you share attachments.
Marc.



From:   Marc Eshel/Almaden/IBM@IBMUS
To: "Kaleb S. KEITHLEY" 
Cc: nfs-ganesha-devel@lists.sourceforge.net
Date:   02/07/2017 10:45 AM
Subject:Re: [Nfs-ganesha-devel] Fwd: Ganesha logo / hex sticker



I could not see your logo; for some reason it got detached. But here is the 
one I used when I worked on Ganesha.






From:   "Kaleb S. KEITHLEY" 
To: nfs-ganesha-devel@lists.sourceforge.net
Date:   02/07/2017 08:38 AM
Subject:[Nfs-ganesha-devel] Fwd: Ganesha logo / hex sticker






 Forwarded Message 
Subject: Ganesha logo / hex sticker
Date: Tue, 07 Feb 2017 12:25:14 +
From: Tuomas Kuosmanen 
To:  Amye Scavarda 
CC:  Kaleb Keithley , Vijay Bellur 




Hello all :-)

There was a discussion of a hexagonal Ganesha project sticker during 
Devconf, and I realized the logo I got from Amye needed some work, 
because it was a very wide shape, quite low resolution and not in vector 
format.  Thus I have a proposed new logo for Ganesha project... and a 
sticker with it, later on, of course.

I also bumped into Kaleb in Brno and we discussed this. (Cc:ed, hello! :-)

Now, I sadly admit my great fondness of Indian cuisine doesn't help me 
very much with Indian culture, and particularly the symbology in 
Hinduism. Since the logo for Ganesha is an elephant, which also happens 
to be a Hindu deity, I am aware there might be some pitfalls in 
designing an elephant-based logo for the Ganesha project... For this 
reason I also Cc:ed Vijay into this mail thread, hoping that he might be 
able to give more feedback and maybe circulate this through some other 
colleagues of Indian heritage if necessary. I want to get the design 
right, so it is not silly nor offensive to anyone.

The attached logo is gray because I have not looked into the colors yet. 
The trunk of the elephant is supposedly forming a "G" (and it looks a 
bit like a muscular hand too, I guess)

Let me know what you think!

//Tuomas

ganesha-logo-mockup.png

















[Nfs-ganesha-devel] Ganesha and RDMA

2017-01-25 Thread Marc Eshel
Can someone answer a few questions about RDMA?
Which Ganesha version supports RDMA, with which hardware, NFS protocols,
and FSALs?
Thanks, Marc.





Re: [Nfs-ganesha-devel] Announce Push of V2.4.0.4

2016-10-28 Thread Marc Eshel
Not sure I follow, but yes it fixed my problem.
Thanks, Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS, 
Cc: 
Date:   10/28/2016 10:47 AM
Subject:RE: [Nfs-ganesha-devel] Announce Push of V2.4.0.4



Ok, I just tried NFS v3, and discovered Ganesha wasn't registering with
Portmapper.

The problem turns out to be an issue with how cmake works... Somehow the
NO_PORTMAPPER option was getting cached as on in cmake, and once it was
properly plumbed in, it suddenly took effect even though it was never
requested.

A git clean -dfx, followed by rerunning cmake and make install, should
clear up the problem.

Frank

> -Original Message-
> From: Marc Eshel [mailto:es...@us.ibm.com]
> Sent: Friday, October 28, 2016 9:13 AM
> To: d...@redhat.com
> Cc: nfs-ganesha-devel@lists.sourceforge.net
> Subject: Re: [Nfs-ganesha-devel] Announce Push of V2.4.0.4
> 
> I just see the error in the trace, but I don't know what Ganesha does with
> it. If it is not the reason for v3 not working, I will have to look
> somewhere else.
> Thanks, Marc.
> 
> 
> 
> From:   Daniel Gryniewicz 
> To: nfs-ganesha-devel@lists.sourceforge.net
> Date:   10/28/2016 06:59 AM
> Subject:Re: [Nfs-ganesha-devel] Announce Push of V2.4.0.4
> 
> 
> 
> On 10/27/2016 08:58 PM, Marc Eshel wrote:
> > Hi Frank,
> > I see two problems with this version.
> > 1. It looks like I cannot use an NFSv3 mount.
> > 2. If I export the root of the GPFS fs, the export create gets an error,
> > I believe because of the 971670b6df6546360514b8ecb50bc0db843b39d6 patch
> > to ensure that a handle has a parent pointer, which is trying to get the
> > handle of the parent of the root, which is a different file system.
> 
> Unlikely, because this change never returns an error.  It eats any errors,
> under the assumption that it's reached the top of the filesystem.  Could
> you send me the actual errors involved so I can look?
> 
> Daniel
> 
> 
>



---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus






--
The Command Line: Reinvented for Modern Developers
Did the resurgence of CLI tooling catch you by surprise?
Reconnect with the command line and become more productive. 
Learn the new .NET and ASP.NET CLI. Get your free copy!
http://sdm.link/telerik
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


Re: [Nfs-ganesha-devel] Announce Push of V2.4.0.4

2016-10-28 Thread Marc Eshel
I just see the error in the trace, but I don't know what Ganesha does with 
it. If it is not the reason for v3 not working, I will have to look 
somewhere else.
Thanks, Marc.



From:   Daniel Gryniewicz 
To: nfs-ganesha-devel@lists.sourceforge.net
Date:   10/28/2016 06:59 AM
Subject:Re: [Nfs-ganesha-devel] Announce Push of V2.4.0.4



On 10/27/2016 08:58 PM, Marc Eshel wrote:
> Hi Frank,
> I see two problems with this version.
> 1. It looks like I cannot use an NFSv3 mount.
> 2. If I export the root of the GPFS fs, the export create gets an error, I
> believe because of the 971670b6df6546360514b8ecb50bc0db843b39d6 patch to
> ensure that a handle has a parent pointer, which is trying to get the
> handle of the parent of the root, which is a different file system.

Unlikely, because this change never returns an error.  It eats any 
errors, under the assumption that it's reached the top of the 
filesystem.  Could you send me the actual errors involved so I can look?

Daniel










Re: [Nfs-ganesha-devel] Announce Push of V2.4.0.4

2016-10-27 Thread Marc Eshel
Hi Frank,
I see two problems with this version.
1. It looks like I cannot use an NFSv3 mount.
2. If I export the root of the GPFS fs, the export create gets an error, I
believe because of the 971670b6df6546360514b8ecb50bc0db843b39d6 patch to
ensure that a handle has a parent pointer, which is trying to get the
handle of the parent of the root, which is a different file system.
Marc.



From:   "Frank Filz" 
To: "'NFS Ganesha Developers'" 

Date:   10/25/2016 02:37 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4.0.4



Branch next

Tag:V2.4.0.4

Release Highlights

* Fix READDIR cookie traversal during deletes

* Fix a couple of MDCACHE stacking issues

* MDCACHE - invalidate attrs on LAYOUTCOMMIT

* MDCACHE - ensure that a handle has a parent pointer

* Plum through NO_PORTMAPPER and NO_TCP_REDISTER

* fsal_open2: only fail directories on createmode != FSAL_NO_CREATE

* NFS4.1: Fix FREE_STATEID/LOCK race

* RGW: initialize rc in create_export()

* Fix display_lock_cookie_key to pass right args to display_lock_cookie

* Remove some obsolete attrs code from FSAL_GLUSTER

* Separate struct attrlist mask into request_mask and valid_mask

* FSAL_GLUSTER: Invalidate cache entry on pNFS/DS Writes

Signed-off-by: Frank S. Filz 

Contents:

e55386f Frank S. Filz V2.4.0.4
42d5dd5 Soumya Koduri FSAL_GLUSTER: Invalidate cache entry on pNFS/DS 
Writes
89a7bf4 Frank S. Filz Separate struct attrlist mask into request_mask and
valid_mask
fbb95f2 Frank S. Filz FSAL_GLUSTER: Remove glusterfs_fetch_attrs and
attributes from obj_handle
9f90de5 Frank S. Filz FSAL_GLUSTER: Remove obsolete setattrs method
32dd99e Malahal Naineni Fix display_lock_cookie_key to pass right args to
display_lock_cookie
88e167d Ken Dreyer RGW: initialize rc in create_export()
ff45c8f Dominique Martinet NFS4.1: Fix FREE_STATEID/LOCK race
79b25bd Dominique Martinet fsal_open2: only fail directories on createmode
!= FSAL_NO_CREATE
9b37a22 Daniel Gryniewicz Plum through NO_PORTMAPPER and NO_TCP_REDISTER
971670b Daniel Gryniewicz MDCACHE - ensure that a handle has a parent
pointer
bdac833 Daniel Gryniewicz MDCACHE - invalidate attrs on LAYOUTCOMMIT
f089f5f Daniel Gryniewicz Handle stacked exports on DS
de36971 Daniel Gryniewicz Attach MDCACHE export to FSAL
40f8d9c Daniel Gryniewicz Fix READDIR cookie traversal during deletes












Re: [Nfs-ganesha-devel] nfs testing for SAP - status - new traces / new approach

2016-10-20 Thread Marc Eshel
We are not able to get IO bigger than 256K with Ganesha; the same client 
against kNFS can get 1M.
Is there something in Ganesha that limits the IO size? See the attached 
email (maxsize = 256 * 1024; /* XXX */). Is that a problem?
Marc. 



From:   Sven Oehme/Almaden/IBM
To: Olaf Weiser/Germany/IBM@IBMDE
Cc: Malahal Naineni/Beaverton/IBM@IBMUS, dhil...@us.ibm.com, 
fschm...@us.ibm.com, gfsch...@us.ibm.com, Marc Eshel/Almaden/IBM@IBMUS, 
robg...@us.ibm.com
Date:   10/20/2016 05:32 PM
Subject:Re: nfs testing for SAP - status - new traces / new 
approach


Marc will send an email on what I found in the Ganesha code. It seems that 
the max RPC size is hard-limited to 256K:
/*
 * Find the appropriate buffer size
 */
u_int /*ARGSUSED*/
__rpc_get_t_size(int af, int proto, int size)
{
	int maxsize, defsize;

	maxsize = 256 * 1024; /* XXX */
	switch (proto) {
	case IPPROTO_TCP:
		defsize = 64 * 1024; /* XXX */
		break;
	case IPPROTO_UDP:
		defsize = UDPMSGSIZE;
		break;
	default:
		defsize = RPC_MAXDATASIZE;
		break;
	}
	if (size == 0)
		return defsize;

	/* Check whether the value is within the upper max limit */
	return (size > maxsize ? (u_int) maxsize : (u_int) size);
}

in: src/rpc_generic.c


--
Sven Oehme 
Scalable Storage Research 
email: oeh...@us.ibm.com 
Phone: +1 (408) 824-8904 
IBM Almaden Research Lab 
--








Re: [Nfs-ganesha-devel] Support_ex Oops...

2016-09-24 Thread Marc Eshel
I am a little confused; can you end up with 2 objects if you use openat() 
like FSAL_VFS does?
Why does FSAL_VFS have a merge() method implementation?
If I change FSAL_GPFS to use openat(), do I need to implement merge()?
Marc.



From:   "Frank Filz" 
To: "'nfs-ganesha-devel'" 
Date:   09/22/2016 04:33 PM
Subject:[Nfs-ganesha-devel] Support_ex Oops...



Crud, while working on the documentation for support_ex, I just came across
a method other folks implementing support_ex missed...

There is the possibility that two open2() calls to open by name will race,
resulting in two fsal_obj_handles being created. When MDCACHE attempts
to cache the 2nd entry, it will detect a collision, and then it needs to
fix things up so the 2nd object handle can be released.

The FSAL is expected to implement the merge() method to accomplish merging
the share reservations.

FSAL_GLUSTER, FSAL_GPFS, and FSAL_RGW do not implement this method...

Which means the default that does nothing will be called. This will result
in the 2nd share reservation being dropped...

This is not good...

Frank













Re: [Nfs-ganesha-devel] Announce Push of V2.4-rc7

2016-09-21 Thread Marc Eshel
Hi Frank,
pNFS seems to work for some basic tests.
Thanks, Marc.



From:   "Frank Filz" 
To: "'nfs-ganesha-devel'" 
Date:   09/21/2016 03:14 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4-rc7



Branch next

Tag:V2.4-rc7

NOTE: This is the final merge before 2.4.0 barring some show-stopper issue.

Release Highlights

* Fixes to compile under c++

* gtest infrastructure

* clean MDCACHE entry export cache

Signed-off-by: Frank S. Filz 

Contents:

dc4dcd3 Frank S. Filz V2.4-rc7
761ec19 Daniel Gryniewicz Clear an entry's first_export cache
9794d85 Daniel Gryniewicz c++ fixes for remaining fsals
c447ff1 Matt Benjamin gtest: add initial libganesha fsal test driver
604c268 Matt Benjamin fsal.h: fix binding in *createmode_to_fsal
be7e88c Matt Benjamin nfs_exports.h: s/export/exp/;
ebfedc7 Matt Benjamin fsal_up:  s/export/up_export/;
66acc48 Matt Benjamin SAL and FSAL: move enum state_type decl
070bcca Matt Benjamin fsal_types.h: narrowing
b12760e Matt Benjamin gsh_rpc: narrowing (maybe)
2ad65e5 Matt Benjamin wait_queue.h: avoid narrowing cast
373fed8 Matt Benjamin correct casts in abstract_mem.h
464ed4f Matt Benjamin Log: Constness
0ab81b4 Matt Benjamin log.h: s/private/private_data
efc145d Matt Benjamin log: avoid truncation in display_buffer_len
a28fba4 Matt Benjamin testing: s/-std=c++14/-std=gnu++14/ to enable typeof
3378be8 Matt Benjamin c++ headers: avoid export identifiers in 
nfs_exports.h
c582760 Matt Benjamin testing: support debug level and log path
9e206ff Matt Benjamin testing: fix test driver linkage to fsalpseudo
af08348 Matt Benjamin testing: Fsalcore, nfs_libmain dup conf
d10f4a9 Matt Benjamin testing:  add nfs_libmain, program_options
a3c4b35 Matt Benjamin FSAL_GPFS:  fix some c++ identifier errors
744af39 Matt Benjamin c++ header compilation:  remove export token, others
e848afa Matt Benjamin nfsv41.h:  remove incorrect forward decls
92ad507 Matt Benjamin testing: add missing src/gtest/CMakeLists.txt
430e92b Matt Benjamin testing: introduce gmock and gtest framework
fd48848 Matt Benjamin testing: c++, prune src/test/CMakeLists.txt
5a32104 Matt Benjamin nfs init: factor malloc check into 
nfs_check_malloc()
9c40347 Matt Benjamin testing:  add nfs_lib.c












Re: [Nfs-ganesha-devel] NLM async locking

2016-09-14 Thread Marc Eshel
Hi Frank,
Are you going to push the patch you asked me to test ?
Thanks, Marc.





From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: "'nfs-ganesha-devel'" 
Date:   09/07/2016 04:18 PM
Subject:RE: [Nfs-ganesha-devel] NLM async locking



I changed it so we don't take the lock off the list when it finds the
blocked lock entry in the blocked lock list.

It then removes it from the list only if sending the async grant succeeds.

It probably needs some tweaking, but it SHOULD help.

Frank

> -----Original Message-
> From: Marc Eshel [mailto:es...@us.ibm.com]
> Sent: Wednesday, September 7, 2016 4:11 PM
> To: Frank Filz 
> Cc: 'nfs-ganesha-devel' 
> Subject: RE: [Nfs-ganesha-devel] NLM async locking
> 
> Just looking at the code I don't see where you retry the lock request from
> the FSAL, which is required to add it back to the FSAL queue.
> Am I missing something?
> Marc.
> 
> 
> 
> From:   "Frank Filz" 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: "'nfs-ganesha-devel'" 
> Date:   09/07/2016 03:57 PM
> Subject:RE: [Nfs-ganesha-devel] NLM async locking
> 
> 
> 
> Marc,
> 
> Could you try the top commit in this branch:
> 
> https://github.com/ffilz/nfs-ganesha/commits/async
> 
> It may not be the complete solution, but I think it will help your
> scenario.
> 
> I need to do more work on async blocking locks...
> 
> > And it looks like without async blocking lock support, Ganesha doesn't
> > handle the case where a lock blocks on a conflicting lock from outside
> > the Ganesha instance. I will be looking at implementing my thread pool
> > idea that I modeled in the multilock tool.
> 
> Frank
> 
> > -Original Message-
> > From: Frank Filz [mailto:ffilz...@mindspring.com]
> > Sent: Wednesday, September 7, 2016 9:42 AM
> > To: 'Marc Eshel' 
> > Cc: 'nfs-ganesha-devel' 
> > Subject: Re: [Nfs-ganesha-devel] NLM async locking
> >
> > Ok, I'm not sure this ever worked right...
> >
> > With the lock available upcall, we never put the lock back on the
> > blocked lock list if an attempt to acquire the lock from the FSAL
> > fails...
> >
> > So the way the lock available upcall is supposed to work:
> >
> > Client requests conflicting lock
> > Blocked lock gets registered by FSAL
> > SAL puts lock on blocked lock list
> > Time passes
> > FSAL makes lock available upcall
> > SAL finds the blocked lock entry in the blocked lock list
> > SAL makes a call to FSAL to attempt to acquire the lock
> > Assume that fails (in the example, because multiple conflicting locks
> > got notified)
> > SAL puts the lock BACK on the blocked lock list (this step is missing)
> > and all is well
> > Time passes
> > FSAL makes lock available upcall
> > SAL finds the blocked lock entry in the blocked lock list
> > SAL makes a call to FSAL to attempt to acquire the lock
> > Lock is granted by FSAL
> > SAL makes async call back to client
> > If THAT fails, SAL releases the lock from the FSAL and disposes of the
> > lock entry and all is well
> > If THAT succeeds, the lock is completely granted and all is well
> >
> > I also see that if the client retries the lock before it is granted, we
> > don't remove the lock entry from the blocked lock list... I don't think
> > that will ever cause a problem but we should clean that up also...
> >
> > Let me try a patch to fix...
> >
> > Frank
> >
> > > -Original Message-
> > > From: Marc Eshel [mailto:es...@us.ibm.com]
> > > Sent: Tuesday, September 6, 2016 9:34 PM
> > > To: Frank Filz 
> > > Cc: 'nfs-ganesha-devel' 
> > > Subject: RE: NLM async locking
> > >
> > > Did you get a chance to look at this problem?
> > > Marc.
> > >
> > >
> > >
> > > From:   "Frank Filz" 
> > > To: Marc Eshel/Almaden/IBM@IBMUS
> > > Cc: "'nfs-ganesha-devel'"
> 
> > > Date:   08/29/2016 02:37 PM
> > > Subject:RE: NLM async locking
> > >
> > >
> > >
> > > > I see the following failure:
> > > > 1. Get conflicting locks from 3 clients
> > > > cli 1 gets 0-100
> > > > cli 2 is blocked on 0-1000
> > > > cli 3 is blocked on 0-1
> > > > 2. cli 1 unlocks
> > > > up-call for cli 2 and 3 to retry
&

Re: [Nfs-ganesha-devel] open2 implementations

2016-09-12 Thread Marc Eshel
Yes, FSAL_GPFS could use openat, but I want to have control over all the 
calls in GPFS before it calls the VFS. Do you see any problem with the 
code?
Thanks, Marc.



From:   "Frank Filz" 
To: "'nfs-ganesha-devel'" , 
Marc Eshel/Almaden/IBM@IBMUS
Date:   09/12/2016 11:42 AM
Subject:open2 implementations



Marc especially...

I see several open2 implementations seem to be doing a lookup and then an
open2 by handle.

FSAL_CEPH needed to do this because it doesn't have openat to open a file
given a directory handle and a filename within that directory.

FSAL_VFS can do openat, so it doesn't need to do a lookup...

I'm pretty sure FSAL_GPFS could just use openat...

I'm not sure about FSAL_GLUSTER

I'm guessing FSAL_RGW is in the same boat as FSAL_CEPH.

And FSAL_PROXY and FSAL_ZFS haven't been converted...

Frank











Re: [Nfs-ganesha-devel] NLM async locking

2016-09-07 Thread Marc Eshel
I see, I tried your fix and it did work for the case that I had problems 
with.
Thanks, Marc.



From:   Frank Filz 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: nfs-ganesha-devel 
Date:   09/07/2016 09:14 PM
Subject:Re: [Nfs-ganesha-devel] NLM async locking



The request is resubmitted to the FSAL in the cookie processing (sorry, I 
don't have the code handy. Follow the whole chain; it's several steps...)

Frank

Sent from my iPhone

> On Sep 7, 2016, at 8:35 PM, Marc Eshel  wrote:
> 
> It will not help if the request is not resubmitted to the FSAL.
> Marc.
> 
> 
> 
> From:   "Frank Filz" 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: "'nfs-ganesha-devel'" 
> Date:   09/07/2016 04:18 PM
> Subject:RE: [Nfs-ganesha-devel] NLM async locking
> 
> 
> 
> I changed it so we don't take the lock off the list when it finds the
> blocked lock entry in the blocked lock list.
> 
> It then removes it from the list only if sending the async grant succeeds.
> 
> It probably needs some tweaking, but it SHOULD help.
> 
> Frank
> 
>> -Original Message-
>> From: Marc Eshel [mailto:es...@us.ibm.com]
>> Sent: Wednesday, September 7, 2016 4:11 PM
>> To: Frank Filz 
>> Cc: 'nfs-ganesha-devel' 
>> Subject: RE: [Nfs-ganesha-devel] NLM async locking
>> 
>> Just looking at the code I don't see where you retry the lock request from
>> the FSAL, which is required to add it back to the FSAL queue.
>> Am I missing something?
>> Marc.
>> 
>> 
>> 
>> From:   "Frank Filz" 
>> To: Marc Eshel/Almaden/IBM@IBMUS








Re: [Nfs-ganesha-devel] NLM async locking

2016-09-07 Thread Marc Eshel
It will not help if the request is not resubmitted to the FSAL.
Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: "'nfs-ganesha-devel'" 
Date:   09/07/2016 04:18 PM
Subject:RE: [Nfs-ganesha-devel] NLM async locking



I changed it so we don't take the lock off the list when it finds the
blocked lock entry in the blocked lock list.

It then removes it from the list only if sending the async grant succeeds.

It probably needs some tweaking, but it SHOULD help.

Frank

> -----Original Message-
> From: Marc Eshel [mailto:es...@us.ibm.com]
> Sent: Wednesday, September 7, 2016 4:11 PM
> To: Frank Filz 
> Cc: 'nfs-ganesha-devel' 
> Subject: RE: [Nfs-ganesha-devel] NLM async locking
> 
> Just looking at the code I don't see where you retry the lock request from
> the FSAL, which is required to add it back to the FSAL queue.
> Am I missing something?
> Marc.
> 
> 
> 
> From:   "Frank Filz" 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: "'nfs-ganesha-devel'" 
> Date:   09/07/2016 03:57 PM
> Subject:RE: [Nfs-ganesha-devel] NLM async locking
> 
> 
> 
> Marc,
> 
> Could you try the top commit in this branch:
> 
> https://github.com/ffilz/nfs-ganesha/commits/async
> 
> It may not be the complete solution, but I think it will help your
> scenario.
> 
> I need to do more work on async blocking locks...
> 
> And it looks like without async blocking lock support, Ganesha doesn't
> handle the case where a lock blocks on a conflicting lock from outside
> the Ganesha instance. I will be looking at implementing my thread pool
> idea that I modeled in the multilock tool.
> 
> Frank
> 
> > -Original Message-
> > From: Frank Filz [mailto:ffilz...@mindspring.com]
> > Sent: Wednesday, September 7, 2016 9:42 AM
> > To: 'Marc Eshel' 
> > Cc: 'nfs-ganesha-devel' 
> > Subject: Re: [Nfs-ganesha-devel] NLM async locking
> >
> > Ok, I'm not sure this ever worked right...
> >
> > With the lock available upcall, we never put the lock back on the
> > blocked lock list if an attempt to acquire the lock from the FSAL
> > fails...
> >
> > So the way the lock available upcall is supposed to work:
> >
> > Client requests conflicting lock
> > Blocked lock gets registered by FSAL
> > SAL puts lock on blocked lock list
> > Time passes
> > FSAL makes lock available upcall
> > SAL finds the blocked lock entry in the blocked lock list
> > SAL makes a call to FSAL to attempt to acquire the lock
> > Assume that fails (in the example, because multiple conflicting locks
> > got notified)
> > SAL puts the lock BACK on the blocked lock list (this step is missing)
> > and all is well
> > Time passes
> > FSAL makes lock available upcall
> > SAL finds the blocked lock entry in the blocked lock list
> > SAL makes a call to FSAL to attempt to acquire the lock
> > Lock is granted by FSAL
> > SAL makes async call back to client
> > If THAT fails, SAL releases the lock from the FSAL and disposes of the
> > lock entry and all is well
> > If THAT succeeds, the lock is completely granted and all is well
> >
> > I also see that if the client retries the lock before it is granted, we
> > don't remove the lock entry from the blocked lock list... I don't think
> > that will ever cause a problem but we should clean that up also...
> >
> > Let me try a patch to fix...
> >
> > Frank
> >
> > > -Original Message-
> > > From: Marc Eshel [mailto:es...@us.ibm.com]
> > > Sent: Tuesday, September 6, 2016 9:34 PM
> > > To: Frank Filz 
> > > Cc: 'nfs-ganesha-devel' 
> > > Subject: RE: NLM async locking
> > >
> > > Did you get a chance to look at this problem?
> > > Marc.
> > >
> > >
> > >
> > > From:   "Frank Filz" 
> > > To: Marc Eshel/Almaden/IBM@IBMUS
> > > Cc: "'nfs-ganesha-devel'"
> 
> > > Date:   08/29/2016 02:37 PM
> > > Subject:RE: NLM async locking
> > >
> > >
> > >
> > > > I see the following failure:
> > > > 1. Get conflicting locks from 3 clients
> > > > cli 1 gets 0-100
> > > > cli 2 is blocked on 0-1000
> > > > cli 3 is blocked on 0-1
> > > > 2. cli 1 unlocks
> > > > up-call for cli 2 and 3 to retry
&

Re: [Nfs-ganesha-devel] NLM async locking

2016-09-07 Thread Marc Eshel
Just looking at the code I don't see where you retry the lock request from 
the FSAL, which is required to add it back to the FSAL queue.
Am I missing something?
Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: "'nfs-ganesha-devel'" 
Date:   09/07/2016 03:57 PM
Subject:RE: [Nfs-ganesha-devel] NLM async locking



Marc,

Could you try the top commit in this branch:

https://github.com/ffilz/nfs-ganesha/commits/async

It may not be the complete solution, but I think it will help your 
scenario.

I need to do more work on async blocking locks...

And it looks like without async blocking lock support, Ganesha doesn't
handle the case where a lock blocks on a conflicting lock from outside the
Ganesha instance. I will be looking at implementing my thread pool idea 
that
I modeled in the multilock tool.

Frank

> -Original Message-
> From: Frank Filz [mailto:ffilz...@mindspring.com]
> Sent: Wednesday, September 7, 2016 9:42 AM
> To: 'Marc Eshel' 
> Cc: 'nfs-ganesha-devel' 
> Subject: Re: [Nfs-ganesha-devel] NLM async locking
> 
> Ok, I'm not sure this ever worked right...
> 
> With the lock available upcall, we never put the lock back on the
> blocked lock list if an attempt to acquire the lock from the FSAL
> fails...
> 
> So the way the lock available upcall is supposed to work:
> 
> Client requests conflicting lock
> Blocked lock gets registered by FSAL
> SAL puts lock on blocked lock list
> Time passes
> FSAL makes lock available upcall
> SAL finds the blocked lock entry in the blocked lock list
> SAL makes a call to FSAL to attempt to acquire the lock
> Assume that fails (in the example, because multiple conflicting locks got
> notified)
> SAL puts the lock BACK on the blocked lock list (this step is missing)
> and all is well
> Time passes
> FSAL makes lock available upcall
> SAL finds the blocked lock entry in the blocked lock list
> SAL makes a call to FSAL to attempt to acquire the lock
> Lock is granted by FSAL
> SAL makes async call back to client
> If THAT fails, SAL releases the lock from the FSAL and disposes of the
> lock entry and all is well
> If THAT succeeds, the lock is completely granted and all is well
> 
> I also see that if the client retries the lock before it is granted, we
> don't remove the lock entry from the blocked lock list... I don't think
> that will ever cause a problem but we should clean that up also...
> 
> Let me try a patch to fix...
> 
> Frank
> 
> > -Original Message-
> > From: Marc Eshel [mailto:es...@us.ibm.com]
> > Sent: Tuesday, September 6, 2016 9:34 PM
> > To: Frank Filz 
> > Cc: 'nfs-ganesha-devel' 
> > Subject: RE: NLM async locking
> >
> > Did you get a chance to look at this problem?
> > Marc.
> >
> >
> >
> > From:   "Frank Filz" 
> > To: Marc Eshel/Almaden/IBM@IBMUS
> > Cc: "'nfs-ganesha-devel'" 

> > Date:   08/29/2016 02:37 PM
> > Subject:RE: NLM async locking
> >
> >
> >
> > > I see the following failure:
> > > 1. Get conflicting locks from 3 clients
> > > cli 1 gets 0-100
> > > cli 2 is blocked on 0-1000
> > > cli 3 is blocked on 0-1
> > > 2. cli 1 unlocks
> > > up-call for cli 2 and 3 to retry
> > > cli 2 gets 0-1000
> > > cli 3 is blocked on 0-1000
> > > 3. cli 2 unlocks
> > > up-call for cli 3 but Ganesha fails
> > >
> > > /* We must be out of sync with FSAL, this is fatal */
> > > LogLockDesc(COMPONENT_STATE, NIV_MAJ, "Blocked Lock Not Found for",
> > > obj, owner, lock);
> > > LogFatal(COMPONENT_STATE, "Locks out of sync with FSAL");
> > >
> > > I think the problem is in step 2: after cli 3 failed for the second
> > > time, it is not put back in the queue, the sbd_list.
> > >
> > > Can you please confirm? This logic is very complicated.
> >
> > That sounds like a likely problem. I'd have to dig into the code to
> > see why... May take me a day or two to investigate.
> >
> > Frank
> >
> >
> >
> >
> >
> >
> 
> 
> 
> 
> 
>











Re: [Nfs-ganesha-devel] NLM async locking

2016-09-06 Thread Marc Eshel
Did you get a chance to look at this problem?
Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: "'nfs-ganesha-devel'" 
Date:   08/29/2016 02:37 PM
Subject:RE: NLM async locking



> I see the following failure:
> 1. Get conflicting locks from 3 clients
> cli 1 gets 0-100
> cli 2 is blocked on 0-1000
> cli 3 is blocked on 0-1
> 2. cli 1 unlocks
> up-call for cli 2 and 3 to retry
> cli 2 gets 0-1000
> cli 3 is blocked on 0-1000
> 3. cli 2 unlocks
> up-call for cli 3 but Ganesha fails
> 
> /* We must be out of sync with FSAL, this is fatal */
> LogLockDesc(COMPONENT_STATE, NIV_MAJ, "Blocked Lock Not Found for",
> obj, owner, lock);
> LogFatal(COMPONENT_STATE, "Locks out of sync with FSAL");
> 
> I think the problem is in step 2: after cli 3 failed for the second time,
> it is not put back in the queue, the sbd_list.
> 
> Can you please confirm? This logic is very complicated.

That sounds like a likely problem. I'd have to dig into the code to see
why... May take me a day or two to investigate.

Frank











Re: [Nfs-ganesha-devel] Announce Push of V2.4-rc2

2016-09-02 Thread Marc Eshel
Any idea what is wrong?

[ 63%] Building C object Protocols/9P/CMakeFiles/9p.dir/9p_rerror.c.o
Linking C static library lib9p.a
[ 63%] Built target 9p
[ 64%] Building C object 
Protocols/NLM/CMakeFiles/sm_notify.ganesha.dir/sm_notify.c.o
Linking C executable sm_notify.ganesha
/bin/ld: cannot find -lntirpc
collect2: error: ld returned 1 exit status
make[2]: *** [Protocols/NLM/sm_notify.ganesha] Error 1
make[1]: *** [Protocols/NLM/CMakeFiles/sm_notify.ganesha.dir/all] Error 2
make: *** [all] Error 2



From:   "Frank Filz" 
To: "'nfs-ganesha-devel'" 
Date:   09/02/2016 04:26 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4-rc2



Branch next

Tag:V2.4-rc2

Release Highlights

* Valgrind fixes

* GPFS fixes

* netgroup cache fixes

* FSAL_CEPH fix to allow set group owner

* Build fixes

* Bump FSAL API major version

Signed-off-by: Frank S. Filz 

Contents:

addb7ff Frank S. Filz V2.4-rc2
5d4ee87 Frank S. Filz Bump FSAL_API major version
965364d Frank S. Filz CEPH: Fix to change owner group in setattr instead 
of
owner...
a5a177d Kaleb S. KEITHLEY cmake: find libhandle.so on Debian Stretch and
Ubuntu Xenial
8d6e870 Kaleb S. KEITHLEY FSAL_{CEPH,GLUSTER}: #include "config.h" to get
LINUX
a0aa7e4 Marc Eshel GPFS_FSAL: fix few CIDs
a1813e9 Marc Eshel FSAL_GPFS: more fixes and cleanup for multi-fd
7f82e3e Malahal Naineni GPFS: remove unused code
d9dc8eb Malahal Naineni Make sure that netgroup cache entry exists before
removing.
dd01f62 Malahal Naineni Fixed couple invalid memory accesses reported by
valgrind.












Re: [Nfs-ganesha-devel] NLM async locking

2016-08-29 Thread Marc Eshel
I can look at your fix, but the problem here has nothing to do with cancel. 
These locks should work without client involvement.
Can you point me again to ganltc?
Marc.



From:   Malahal Naineni 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: Frank Filz , nfs-ganesha-devel 

Date:   08/29/2016 06:25 PM
Subject:Re: [Nfs-ganesha-devel] NLM async locking



Marc, there is a known issue with that failure message, but I thought the
client needs a cancel request. See if my hack fixes it; I never got a
chance to fix it properly, so it is not upstream yet.

See 924e7464f in the ganltc repo and see if that helps.

Regards, Malahal.

On Mon, Aug 29, 2016 at 3:50 PM, Marc Eshel  wrote:
> Hi Frank,
>
> I see the following failure:
> 1. Get conflicting locks from 3 clients
> cli 1 gets 0-100
> cli 2 is blocked on 0-1000
> cli 3 is blocked on 0-1
> 2. cli 1 unlocks
> up-call for cli 2 and 3 to retry
> cli 2 gets 0-1000
> cli 3 is blocked on 0-1000
> 3. cli 2 unlocks
> up-call for cli 3 but Ganesha fails
>
> /* We must be out of sync with FSAL, this is fatal */
> LogLockDesc(COMPONENT_STATE, NIV_MAJ, "Blocked Lock Not Found for",
> obj, owner, lock);
> LogFatal(COMPONENT_STATE, "Locks out of sync with FSAL");
> 
> I think the problem is in step 2: after cli 3 failed for the second time
> it is not put back in the queue, the sbd_list.
> 
> Can you please confirm? This logic is very complicated.
>
> Thanks, Marc.
>
>








[Nfs-ganesha-devel] NLM async locking

2016-08-29 Thread Marc Eshel
Hi Frank,

I see the following failure:
1. Get conflicting locks from 3 clients
cli 1 gets 0-100
cli 2 is blocked on 0-1000
cli 3 is blocked on 0-1
2. cli 1 unlocks
up-call for cli 2 and 3 to retry
cli 2 gets 0-1000
cli 3 is blocked on 0-1000
3. cli 2 unlocks 
up-call for cli 3 but Ganesha fails

/* We must be out of sync with FSAL, this is fatal */
LogLockDesc(COMPONENT_STATE, NIV_MAJ, "Blocked Lock Not Found for",
            obj, owner, lock);
LogFatal(COMPONENT_STATE, "Locks out of sync with FSAL");

I think the problem is in step 2: after cli 3 failed for the second time 
it is not put back in the queue, the sbd_list.

Can you please confirm? This logic is very complicated.

Thanks, Marc.




Re: [Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Add blocking locks to multi-fd.

2016-08-26 Thread Marc Eshel
Please merge them, I will do more testing next week.
Thanks, Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: "'NFS Ganesha Developers'" 

Date:   08/26/2016 09:58 AM
Subject:RE: Change in ffilz/nfs-ganesha[next]: Add blocking locks 
to multi-fd.



Do you feel like your patches are ready to go yet, or do you need more
work/testing before I merge them?

Thanks

Frank











Re: [Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Add blocking locks to multi-fd.

2016-08-25 Thread Marc Eshel
"Frank Filz"  wrote on 08/25/2016 02:23:39 PM:

> From: "Frank Filz" 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: "'NFS Ganesha Developers'" 
> Date: 08/25/2016 02:23 PM
> Subject: RE: Change in ffilz/nfs-ganesha[next]: Add blocking locks 
> to multi-fd.
> 
> > > Since you already have the capability to pass a lockowner key to
> > > the kernel, you don't necessarily need a separate fd for each lock
> > > owner, so instead of having an open fd associated with each lock
> > > stateid, you could just use the fd associated with the open
> > > stateid.
> > 
> > How does another fd help here? Whenever any of the fds is closed,
> > all POSIX locks will be released.
> 
> FSAL_VFS uses Open File Description locks. Within the context of a
> process, each fd that results from an open (or open_by_handle) system
> call has a separate file description, and thus is a separate lock
> owner. And OFD locks only get released if all file descriptors that
> refer to the file description get closed. Ganesha should have a
> separate file description for every file descriptor (file descriptions
> become shared among file descriptors primarily due to dup() or fork()
> system calls).

So the FSAL_VFS lock is acquired under a different process than the 
Ganesha process? 

> 
> Frank
> 
> 
> 
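Frank's point about Open File Description locks above can be demonstrated with a small standalone sketch (Linux-specific; the temp-file scaffolding is made up for the demo). Two opens of the same file within one process give two file descriptions, so an OFD write lock taken through one conflicts with a request through the other, and closing one fd does not drop the other's locks:

```c
#define _GNU_SOURCE		/* F_OFD_SETLK on glibc */
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Returns 1 if a second, independent open of the same file conflicts
 * with an OFD write lock taken through the first open; 0 otherwise;
 * -1 on setup failure.  The file name is demo scaffolding only. */
static int ofd_demo(void)
{
	char path[] = "/tmp/ofd-demo-XXXXXX";
	int fd1 = mkstemp(path);
	int fd2 = open(path, O_RDWR);
	struct flock fl;
	int conflicted;

	if (fd1 == -1 || fd2 == -1)
		return -1;

	memset(&fl, 0, sizeof(fl));
	fl.l_type = F_WRLCK;
	fl.l_whence = SEEK_SET;
	fl.l_start = 0;
	fl.l_len = 100;
	fl.l_pid = 0;		/* must be 0 for OFD locks */

	/* Lock bytes 0-99 via the first file description. */
	if (fcntl(fd1, F_OFD_SETLK, &fl) == -1)
		return -1;

	/* fd2 has its own file description, hence its own lock owner,
	 * even inside this one process: the same request conflicts. */
	conflicted = (fcntl(fd2, F_OFD_SETLK, &fl) == -1 &&
		      (errno == EAGAIN || errno == EACCES));

	close(fd2);		/* releases only fd2's (no) locks */
	close(fd1);
	unlink(path);
	return conflicted;
}
```

With classic per-process F_SETLK locks, the second request would simply succeed and merge under the single process owner, which is exactly why FSAL_VFS wants a separate file description per lock owner.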





Re: [Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Add blocking locks to multi-fd.

2016-08-25 Thread Marc Eshel
"Frank Filz"  wrote on 08/25/2016 11:48:45 AM:

> From: "Frank Filz" 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: "'NFS Ganesha Developers'" 
> Date: 08/25/2016 11:49 AM
> Subject: RE: Change in ffilz/nfs-ganesha[next]: Add blocking locks 
> to multi-fd.
> 
> > "Frank Filz"  wrote on 08/25/2016 11:01:37 
AM:
> > 
> > > From: "Frank Filz" 
> > > To: Marc Eshel/Almaden/IBM@IBMUS
> > > Cc: "'NFS Ganesha Developers'"
> > > 
> > > Date: 08/25/2016 11:01 AM
> > > Subject: RE: Change in ffilz/nfs-ganesha[next]: Add blocking locks 
to
> > > multi-fd.
> > >
> > > > I am not sure of the logic in this section of code in
> > > > FSAL/commonlib.c fsal_find_fd(), but the lock fails because the
> > > > openflags is zero before and after the & FSAL_O_RDWR; the lock
> > > > is a write lock and the open was not done for write. If I change
> > > > it to openflags = related_fd->openflags | FSAL_O_RDWR;
> > > > everything works, but I am not sure if it is the correct fix.
> > >
> > > Do you actually want a separate open fd for locks? FSAL_VFS needs
> > > it for lockowner purposes, but you may not need it. If you really
> > > don't need it, and can live with the open state fd (or the global
> > > fd for v3 locks), you could just pass false.
> > 
> > Not sure what you mean by separate open fd; I need a lockowner for
> > every lock by a different client. And what do you mean by pass
> > false? Pass false from
> 
> So with NFS v4 you have an open stateid and a lock stateid, related for
> locks is supposed to be the open stateid.
> 
> Since you already have the capability to pass a lockowner key to the
> kernel, you don't necessarily need a separate fd for each lock owner,
> so instead of having an open fd associated with each lock stateid,
> you could just use the fd associated with the open stateid.

How does another fd help here? Whenever any of the fds is closed, all 
POSIX locks will be released.
 
> 
> > which call?
> 
> You would pass false instead of true to find_fd, called from
> lock_op2, if this scheme would work.

Yes, this fixed my problem.

> 
> > >
> > > But I'm not quite sure why you're hitting the issue; I thought
> > > POSIX required O_RDWR for write locks? If so, shouldn't your OPEN
> > > state be open read/write?
> > 
> > The open in this code, status = open_func(obj_hdl, openflags,
> > state_fd);, is using whatever openflags is set to, and in this case
> > it is zero.
> > 
> > Where was related_fd->openflags supposed to be set?
> 
> Related_fd should have come from the open stateid. Hmm, do you have a
> delegation? That may be the issue: since I didn't have an FSAL capable
> of delegations, I never figured out how delegations fit into things,
> and thus there is no open_fd for a delegation stateid...
> 
> Frank
> 
> > >
> > > Frank
> > >
> > > > if (open_for_locks) {
> > > >         if (state_fd->openflags != FSAL_O_CLOSED) {
> > > >                 LogCrit(COMPONENT_FSAL,
> > > >                         "Conflicting open, can not re-open fd with locks");
> > > >                 return fsalstat(posix2fsal_error(EINVAL), EINVAL);
> > > >         }
> > > >
> > > >         /* This is being opened for locks, we will not be able to
> > > >          * re-open so open for read/write unless openstate indicates
> > > >          * something different.
> > > >          */
> > > >         if (state->state_data.lock.openstate != NULL) {
> > > >                 struct fsal_fd *related_fd = (struct fsal_fd *)
> > > >                         (state->state_data.lock.openstate + 1);
> > > >
> > > >                 openflags = related_fd->openflags & FSAL_O_RDWR;
> > > >         } else {
> > > >                 /* No associated open, open read/write. */
> > > >                 openflags = FSAL_O_RDWR;
> > > >         }
> > >
> > >
> > >
> 
> 
> 
> 





Re: [Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Add blocking locks to multi-fd.

2016-08-25 Thread Marc Eshel
"Frank Filz"  wrote on 08/25/2016 11:01:37 AM:

> From: "Frank Filz" 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: "'NFS Ganesha Developers'" 
> Date: 08/25/2016 11:01 AM
> Subject: RE: Change in ffilz/nfs-ganesha[next]: Add blocking locks 
> to multi-fd.
> 
> > I am not sure of the logic in this section of code in FSAL/commonlib.c
> > fsal_find_fd(), but the lock fails because the openflags is zero before
> > and after the & FSAL_O_RDWR; the lock is a write lock and the open was
> > not done for write. If I change it to openflags = related_fd->openflags
> > | FSAL_O_RDWR; everything works, but I am not sure if it is the correct
> > fix.
> 
> Do you actually want a separate open fd for locks? FSAL_VFS needs it
> for lockowner purposes, but you may not need it. If you really don't
> need it, and can live with the open state fd (or the global fd for v3
> locks), you could just pass false.

Not sure what you mean by separate open fd; I need a lockowner for every 
lock by a different client. And what do you mean by pass false? Pass 
false from which call?

> 
> But I'm not quite sure why you're hitting the issue; I thought POSIX
> required O_RDWR for write locks? If so, shouldn't your OPEN state be
> open read/write?

The open in this code, status = open_func(obj_hdl, openflags, state_fd);, 
is using whatever openflags is set to, and in this case it is zero.

Where was related_fd->openflags supposed to be set?

> 
> Frank
> 
> > if (open_for_locks) {
> >         if (state_fd->openflags != FSAL_O_CLOSED) {
> >                 LogCrit(COMPONENT_FSAL,
> >                         "Conflicting open, can not re-open fd with locks");
> >                 return fsalstat(posix2fsal_error(EINVAL), EINVAL);
> >         }
> > 
> >         /* This is being opened for locks, we will not be able to
> >          * re-open so open for read/write unless openstate indicates
> >          * something different.
> >          */
> >         if (state->state_data.lock.openstate != NULL) {
> >                 struct fsal_fd *related_fd = (struct fsal_fd *)
> >                         (state->state_data.lock.openstate + 1);
> > 
> >                 openflags = related_fd->openflags & FSAL_O_RDWR;
> >         } else {
> >                 /* No associated open, open read/write. */
> >                 openflags = FSAL_O_RDWR;
> >         }
> 
> 
> 





Re: [Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Add blocking locks to multi-fd.

2016-08-25 Thread Marc Eshel
Hi Frank, 

I am not sure of the logic in this section of code in FSAL/commonlib.c 
fsal_find_fd(), but the lock fails because the openflags is zero before 
and after the & FSAL_O_RDWR; the lock is a write lock and the open was 
not done for write.
If I change it to openflags = related_fd->openflags | FSAL_O_RDWR;
everything works, but I am not sure if it is the correct fix.


if (open_for_locks) {
        if (state_fd->openflags != FSAL_O_CLOSED) {
                LogCrit(COMPONENT_FSAL,
                        "Conflicting open, can not re-open fd with locks");
                return fsalstat(posix2fsal_error(EINVAL), EINVAL);
        }

        /* This is being opened for locks, we will not be able to
         * re-open so open for read/write unless openstate indicates
         * something different.
         */
        if (state->state_data.lock.openstate != NULL) {
                struct fsal_fd *related_fd = (struct fsal_fd *)
                        (state->state_data.lock.openstate + 1);

                openflags = related_fd->openflags & FSAL_O_RDWR;
        } else {
                /* No associated open, open read/write. */
                openflags = FSAL_O_RDWR;
        }
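The effect described above can be reduced to the masking itself: if the related fd carries no open flags (FSAL_O_CLOSED), `&` leaves them closed, while the `|` workaround forces read/write regardless of the open state. A toy illustration, with made-up flag values standing in for the real FSAL_O_* constants:

```c
/* Hypothetical flag values mirroring the FSAL_O_* discussion; the
 * real definitions live in the Ganesha headers. */
enum fsal_openflags {
	FSAL_O_CLOSED = 0,
	FSAL_O_READ   = 0x1,
	FSAL_O_WRITE  = 0x2,
	FSAL_O_RDWR   = FSAL_O_READ | FSAL_O_WRITE
};

/* The current code: masking keeps only bits already set, so a
 * never-opened related fd stays FSAL_O_CLOSED and the later
 * open_func() call fails. */
static enum fsal_openflags flags_by_mask(enum fsal_openflags related)
{
	return related & FSAL_O_RDWR;
}

/* The workaround: OR forces read/write no matter what the related
 * open state says. */
static enum fsal_openflags flags_by_or(enum fsal_openflags related)
{
	return related | FSAL_O_RDWR;
}
```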




Re: [Nfs-ganesha-devel] MDC up call

2016-08-21 Thread Marc Eshel
This time it did work.
Marc.



From:   Daniel Gryniewicz 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: Frank Filz , NFS Ganesha Developers 

Date:   08/21/2016 02:45 PM
Subject:Re: MDC up call



In general, MDCACHE assumes it has op_ctx set, and I'd prefer to not
have that assumption violated, as it will complicate the code a lot.

It appears that the export passed into the upcalls is already the
MDCACHE export, not the sub-export.  I've uploaded a new version of
the patch with that change.  Could you try it again?

On Fri, Aug 19, 2016 at 4:56 PM, Marc Eshel  wrote:
> I am not sure you need to set op_ctx.
> I fixed it for this path by not calling mdc_check_mapping() from
> mdcache_find_keyed() if op_ctx is NULL.
> I think the mapping should already exist for calls that are coming from
> an up-call.
> Marc.
>
>
>
> From:   Daniel Gryniewicz 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: Frank Filz ,
> nfs-ganesha-devel@lists.sourceforge.net
> Date:   08/19/2016 06:13 AM
> Subject:Re: MDC up call
>
>
>
> Marc, could you try with this patch: https://review.gerrithub.io/287904
>
> Daniel
>
> On 08/18/2016 06:55 PM, Marc Eshel wrote:
>> Was up-call with MDC tested?
>> It looks like it is trying to use op_ctx which is NULL.
>> Marc.
>>
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 0x7fe867fff700 (LWP 18907)]
>> 0x00532b76 in mdc_cur_export () at
>>
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
>> 376 return mdc_export(op_ctx->fsal_export);
>> (gdb) where
>> #0  0x00532b76 in mdc_cur_export () at
>>
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
>> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
>>
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
>> #2  0x0053584c in mdcache_find_keyed (key=0x7fe867ffe470,
>> entry=0x7fe867ffe468) at
>>
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:636
>> #3  0x005358c1 in mdcache_locate_keyed (key=0x7fe867ffe470,
>> export=0x12d8f40, entry=0x7fe867ffe468, attrs_out=0x0)
>> at
>>
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:670
>> #4  0x0052feac in mdcache_create_handle (exp_hdl=0x12d8f40,
>> hdl_desc=0x7fe880001088, handle=0x7fe867ffe4e8, attrs_out=0x0)
>> at
>>
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1629
>> #5  0x00433f36 in lock_avail (export=0x12d8f40,
>> file=0x7fe880001088, owner=0x7fe87c302dc0, lock_param=0x7fe8800010a0) 
at
>> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_top.c:172
>> #6  0x00438142 in queue_lock_avail (ctx=0x7fe880001100) at
>> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_async.c:243
>> #7  0x0050156f in fridgethr_start_routine (arg=0x7fe880001100)
> at
>> /nas/ganesha/new-ganesha/src/support/fridgethr.c:550
>> #8  0x7fea288a0df3 in start_thread (arg=0x7fe867fff700) at
>> pthread_create.c:308
>> #9  0x7fea27f603dd in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>> (gdb) up
>> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
>>
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
>> 210 struct mdcache_fsal_export *export = mdc_cur_export();
>> (gdb) p op_ctx
>> $1 = (struct req_op_context *) 0x0
>>
>>
>>
>> From:   Marc Eshel/Almaden/IBM@IBMUS
>> To: "Frank Filz" 
>> Cc: nfs-ganesha-devel@lists.sourceforge.net
>> Date:   08/18/2016 09:21 AM
>> Subject:Re: [Nfs-ganesha-devel] multi fd support
>>
>>
>>
>> Using NFSv4, I get a read lock on the same file from two different NFS
>> clients. The server gets the two locks using two different owners
>> (states). When I unlock the lock on one client, which results in
>> closing the file, I get fsal_close() with no owner id, so I am forced
>> to release all locks, which is wrong.
>> Marc.
>>
>>
>>
>> From:   "Frank Filz" 
>> To: Marc Eshel/Almaden/IBM@IBMUS
>> Cc: 
>> Date:   08/17/2016 10:04 PM
>> Subject:RE: multi fd support
>>
>>
>>
>>> Hi Frank,
>>> Don't we need fsal_close() to call close2() ?
>>> We need the owner so we can release only the locks for this fd before
>>> closing it.
>>> Marc.
>>
>>

Re: [Nfs-ganesha-devel] multi fd support

2016-08-19 Thread Marc Eshel
Frank,
This patch, which clears state_owner only after the call to close2, fixes 
my problem.
Marc.

diff --git a/src/SAL/nfs4_state.c b/src/SAL/nfs4_state.c
index 842c2bf..dadd790 100644
--- a/src/SAL/nfs4_state.c
+++ b/src/SAL/nfs4_state.c
@@ -380,7 +380,6 @@ void state_del_locked(state_t *state)
PTHREAD_MUTEX_lock(&state->state_mutex);
 
glist_del(&state->state_owner_list);
-   state->state_owner = NULL;
 
/* If we are dropping the last open state from an open
 * owner, we will want to retain a refcount and let the
@@ -463,6 +462,7 @@ void state_del_locked(state_t *state)
PTHREAD_MUTEX_lock(&state->state_mutex);
glist_del(&state->state_export_list);
state->state_export = NULL;
+   state->state_owner = NULL;
PTHREAD_MUTEX_unlock(&state->state_mutex);
PTHREAD_RWLOCK_unlock(&export->lock);
    put_gsh_export(export);



From:   Marc Eshel/Almaden/IBM@IBMUS
To: "Frank Filz" 
Cc: nfs-ganesha-devel@lists.sourceforge.net
Date:   08/18/2016 09:21 AM
Subject:Re: [Nfs-ganesha-devel] multi fd support



Using NFSv4, I get a read lock on the same file from two different NFS 
clients. The server gets the two locks using two different owners 
(states). When I unlock the lock on one client, which results in closing 
the file, I get fsal_close() with no owner id, so I am forced to release 
all locks, which is wrong.
Marc. 



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: 
Date:   08/17/2016 10:04 PM
Subject:RE: multi fd support



> Hi Frank,
> Don't we need fsal_close() to call close2() ?
> We need the owner so we can release only the locks for this fd before
> closing it.
> Marc.

With support_ex enabled, fsal_close is only called when the fsal_obj_handle
is being disposed of or when the LRU thread is closing open file descriptors
(which will now only be those open file descriptors not associated with
state), and its purpose is only to close the global/anonymous file
descriptor. There should be no locks associated with the global file
descriptor.

A few notes for you:

1. Not having a delegation-aware FSAL to work on, I did not explore all the
implications of delegations with support_ex. A delegation probably should
inherit the file descriptor from the initial open state, but maybe it needs
its own.

2. For NFS v4 locks, the support_ex API SHOULD allow you to just have an
open file descriptor associated with the open state and not have to have
one per lock state (per lock owner), since your locks already have owners
associated without having to have separate file descriptors. For NFS v3
locks, of course, there is (currently) no way to tie to an open state (even
if there is an NLM_SHARE from the same process). I would like to eventually
look for ties and create them if possible. Of course, if it benefits you to
have an open fd per lock owner, that's fine too. And actually, you can even
fall back to using the global file descriptor (and note that now the FSAL
actually gets to control when that's opened or closed).

3. I'm not sure you caught that you need to protect the global file
descriptor with the fsal_obj_handle->lock since the content_lock is no
more...

I'm on vacation the rest of the week so I may not be able to respond until
next week.

Frank
















Re: [Nfs-ganesha-devel] MDC up call

2016-08-19 Thread Marc Eshel
I am not sure you need to set op_ctx.
I fixed it for this path by not calling mdc_check_mapping() from 
mdcache_find_keyed() if op_ctx is NULL.
I think the mapping should already exist for calls that are coming from 
an up-call.
Marc.
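The guard being described can be sketched as a minimal model. The names mirror the backtrace (op_ctx, mdc_check_mapping, mdcache_find_keyed), but everything here is a stand-in to show the control flow, not the actual upstream fix:

```c
#include <stddef.h>

/* Stand-in for the real thread-local request context.  In Ganesha,
 * mdc_check_mapping() dereferences op_ctx->fsal_export, which is
 * what crashed when an up-call thread ran with op_ctx == NULL. */
struct req_op_context { int unused; };

static __thread struct req_op_context *op_ctx;

static int mapping_checks;	/* counts calls that would deref op_ctx */

static void mdc_check_mapping_model(void)
{
	mapping_checks++;
}

/* Model of the find-keyed path: up-call threads carry no request
 * context, so skip the mapping check there; the mapping should
 * already exist for entries reached via an up-call. */
static void mdcache_find_keyed_model(void)
{
	if (op_ctx != NULL)
		mdc_check_mapping_model();
}
```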



From:   Daniel Gryniewicz 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: Frank Filz , 
nfs-ganesha-devel@lists.sourceforge.net
Date:   08/19/2016 06:13 AM
Subject:Re: MDC up call



Marc, could you try with this patch: https://review.gerrithub.io/287904

Daniel

On 08/18/2016 06:55 PM, Marc Eshel wrote:
> Was up-call with MDC tested?
> It looks like it is trying to use op_ctx which is NULL.
> Marc.
>
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fe867fff700 (LWP 18907)]
> 0x00532b76 in mdc_cur_export () at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
> 376 return mdc_export(op_ctx->fsal_export);
> (gdb) where
> #0  0x00532b76 in mdc_cur_export () at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
> #2  0x0053584c in mdcache_find_keyed (key=0x7fe867ffe470,
> entry=0x7fe867ffe468) at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:636
> #3  0x005358c1 in mdcache_locate_keyed (key=0x7fe867ffe470,
> export=0x12d8f40, entry=0x7fe867ffe468, attrs_out=0x0)
> at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:670
> #4  0x0052feac in mdcache_create_handle (exp_hdl=0x12d8f40,
> hdl_desc=0x7fe880001088, handle=0x7fe867ffe4e8, attrs_out=0x0)
> at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1629
> #5  0x00433f36 in lock_avail (export=0x12d8f40,
> file=0x7fe880001088, owner=0x7fe87c302dc0, lock_param=0x7fe8800010a0) at
> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_top.c:172
> #6  0x00438142 in queue_lock_avail (ctx=0x7fe880001100) at
> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_async.c:243
> #7  0x0050156f in fridgethr_start_routine (arg=0x7fe880001100) 
at
> /nas/ganesha/new-ganesha/src/support/fridgethr.c:550
> #8  0x7fea288a0df3 in start_thread (arg=0x7fe867fff700) at
> pthread_create.c:308
> #9  0x7fea27f603dd in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
> (gdb) up
> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
> 210 struct mdcache_fsal_export *export = mdc_cur_export();
> (gdb) p op_ctx
> $1 = (struct req_op_context *) 0x0
>
>
>
> From:   Marc Eshel/Almaden/IBM@IBMUS
> To: "Frank Filz" 
> Cc: nfs-ganesha-devel@lists.sourceforge.net
> Date:   08/18/2016 09:21 AM
> Subject:Re: [Nfs-ganesha-devel] multi fd support
>
>
>
> Using NFSv4, I get a read lock on the same file from two different NFS
> clients. The server gets the two locks using two different owners
> (states). When I unlock the lock on one client, which results in closing
> the file, I get fsal_close() with no owner id, so I am forced to release
> all locks, which is wrong.
> Marc.
>
>
>
> From:   "Frank Filz" 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: 
> Date:   08/17/2016 10:04 PM
> Subject:RE: multi fd support
>
>
>
>> Hi Frank,
>> Don't we need fsal_close() to call close2() ?
>> We need the owner so we can release only the locks for this fd before
>> closing it.
>> Marc.
>
> With support_ex enabled, fsal_close is only called when the
> fsal_obj_handle
> is being disposed of or when the LRU thread is closing open file
> descriptors
> (which will now only be those open file descriptors not associated with
> state), and it's purpose is only to close the global/anonymous file
> descriptor. There should be no locks associated with the global file
> descriptor.
>
> A few notes for you:
>
> 1. Not having a delegation aware FSAL to work on, I did not explore all
> the
> implications of delegations with support_ex. A delegation probably 
should
> inherit the file descriptor from the initial open state, but maybe it
> needs
> it's own.
>
> 2. For NFS v4 locks, the support_ex API SHOULD allow you to just have an
> open file descriptor associated with the open state and not have to have
> one
> per lock state (per lock owner) since your locks already have owners
> associated without having to have separate file descriptors. For NFS v3
> lock

Re: [Nfs-ganesha-devel] MDC up call

2016-08-19 Thread Marc Eshel
It did not work; sub_export looks bad.

[New Thread 0x7f44dbfff700 (LWP 14104)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f44dbfff700 (LWP 14104)]
0x0052a711 in mdc_up_lock_avail (sub_export=0x956ff0, 
file=0x7f4524000958, owner=0x7f45201829f0, lock_param=0x7f4524000970)
at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_up.c:372
372 rc = export->super_up_ops.lock_avail(sub_export, file, 
owner,
(gdb) where
#0  0x0052a711 in mdc_up_lock_avail (sub_export=0x956ff0, 
file=0x7f4524000958, owner=0x7f45201829f0, lock_param=0x7f4524000970)
at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_up.c:372
#1  0x00438202 in queue_lock_avail (ctx=0x7f45240009d0) at 
/nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_async.c:243
#2  0x0050161f in fridgethr_start_routine (arg=0x7f45240009d0) at 
/nas/ganesha/new-ganesha/src/support/fridgethr.c:550
#3  0x7f46c4731df3 in start_thread (arg=0x7f44dbfff700) at 
pthread_create.c:308
#4  0x7f46c3df13dd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:113



From:   Daniel Gryniewicz 
To:     Marc Eshel/Almaden/IBM@IBMUS
Cc: Frank Filz , 
nfs-ganesha-devel@lists.sourceforge.net
Date:   08/19/2016 06:13 AM
Subject:Re: MDC up call



Marc, could you try with this patch: https://review.gerrithub.io/287904

Daniel

On 08/18/2016 06:55 PM, Marc Eshel wrote:
> Was up-call with MDC tested?
> It looks like it is trying to use op_ctx which is NULL.
> Marc.
>
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fe867fff700 (LWP 18907)]
> 0x00532b76 in mdc_cur_export () at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
> 376 return mdc_export(op_ctx->fsal_export);
> (gdb) where
> #0  0x00532b76 in mdc_cur_export () at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
> #2  0x0053584c in mdcache_find_keyed (key=0x7fe867ffe470,
> entry=0x7fe867ffe468) at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:636
> #3  0x005358c1 in mdcache_locate_keyed (key=0x7fe867ffe470,
> export=0x12d8f40, entry=0x7fe867ffe468, attrs_out=0x0)
> at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:670
> #4  0x0052feac in mdcache_create_handle (exp_hdl=0x12d8f40,
> hdl_desc=0x7fe880001088, handle=0x7fe867ffe4e8, attrs_out=0x0)
> at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1629
> #5  0x00433f36 in lock_avail (export=0x12d8f40,
> file=0x7fe880001088, owner=0x7fe87c302dc0, lock_param=0x7fe8800010a0) at
> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_top.c:172
> #6  0x00438142 in queue_lock_avail (ctx=0x7fe880001100) at
> /nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_async.c:243
> #7  0x0050156f in fridgethr_start_routine (arg=0x7fe880001100) 
at
> /nas/ganesha/new-ganesha/src/support/fridgethr.c:550
> #8  0x7fea288a0df3 in start_thread (arg=0x7fe867fff700) at
> pthread_create.c:308
> #9  0x7fea27f603dd in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
> (gdb) up
> #1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at
> 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
> 210 struct mdcache_fsal_export *export = mdc_cur_export();
> (gdb) p op_ctx
> $1 = (struct req_op_context *) 0x0
>
>
>
> From:   Marc Eshel/Almaden/IBM@IBMUS
> To: "Frank Filz" 
> Cc: nfs-ganesha-devel@lists.sourceforge.net
> Date:   08/18/2016 09:21 AM
> Subject:Re: [Nfs-ganesha-devel] multi fd support
>
>
>
> Using NFSv4, I get a read lock on the same file from two different NFS
> clients. The server gets the two locks using two different owners
> (states). When I unlock the lock on one client, which results in closing
> the file, I get fsal_close() with no owner id, so I am forced to release
> all locks, which is wrong.
> Marc.
>
>
>
> From:   "Frank Filz" 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: 
> Date:   08/17/2016 10:04 PM
> Subject:RE: multi fd support
>
>
>
>> Hi Frank,
>> Don't we need fsal_close() to call close2() ?
>> We need the owner so we can release only the locks for this fd before
>> closing it.
>> Marc.
>
> With support_ex enabled, fsal_close is only called when the
> fsal_obj_handle
> is being disposed of or when the LRU thread is closing op

[Nfs-ganesha-devel] MDC up call

2016-08-18 Thread Marc Eshel
Was up-call with MDC tested?
It looks like it is trying to use op_ctx which is NULL.
Marc.


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fe867fff700 (LWP 18907)]
0x00532b76 in mdc_cur_export () at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
376 return mdc_export(op_ctx->fsal_export);
(gdb) where
#0  0x00532b76 in mdc_cur_export () at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:376
#1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
#2  0x0053584c in mdcache_find_keyed (key=0x7fe867ffe470, 
entry=0x7fe867ffe468) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:636
#3  0x005358c1 in mdcache_locate_keyed (key=0x7fe867ffe470, 
export=0x12d8f40, entry=0x7fe867ffe468, attrs_out=0x0)
at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:670
#4  0x0052feac in mdcache_create_handle (exp_hdl=0x12d8f40, 
hdl_desc=0x7fe880001088, handle=0x7fe867ffe4e8, attrs_out=0x0)
at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1629
#5  0x00433f36 in lock_avail (export=0x12d8f40, 
file=0x7fe880001088, owner=0x7fe87c302dc0, lock_param=0x7fe8800010a0) at 
/nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_top.c:172
#6  0x00438142 in queue_lock_avail (ctx=0x7fe880001100) at 
/nas/ganesha/new-ganesha/src/FSAL_UP/fsal_up_async.c:243
#7  0x0050156f in fridgethr_start_routine (arg=0x7fe880001100) at 
/nas/ganesha/new-ganesha/src/support/fridgethr.c:550
#8  0x7fea288a0df3 in start_thread (arg=0x7fe867fff700) at 
pthread_create.c:308
#9  0x7fea27f603dd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) up
#1  0x005342a1 in mdc_check_mapping (entry=0x7fe870001530) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:210
210 struct mdcache_fsal_export *export = mdc_cur_export();
(gdb) p op_ctx
$1 = (struct req_op_context *) 0x0
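The backtrace shows the classic shape of this crash: op_ctx is a thread-local, and the fridgethr-spawned up-call thread never initialized it before calling into MDCACHE. A minimal sketch of the usual fix pattern follows; all names and fields here (struct req_op_context, upcall_entry, etc.) are simplified stand-ins, not Ganesha's real API, and the point is only that the async entry point installs and restores a context for its thread.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, cut-down stand-ins for Ganesha's types. */
struct fsal_export { int id; };
struct req_op_context { struct fsal_export *fsal_export; };

/* op_ctx is thread-local; threads that never set it see NULL,
 * which is exactly what the SIGSEGV above shows. */
static _Thread_local struct req_op_context *op_ctx;

static int mdc_cur_export_id(void)
{
	assert(op_ctx != NULL);	/* would have caught the SEGV early */
	return op_ctx->fsal_export->id;
}

/* The fix pattern: the async up-call entry point establishes a
 * context for its own thread before calling into cache code. */
static int upcall_entry(struct fsal_export *exp)
{
	struct req_op_context ctx = { .fsal_export = exp };
	struct req_op_context *saved = op_ctx;
	int id;

	op_ctx = &ctx;		/* install context for this thread */
	id = mdc_cur_export_id();
	op_ctx = saved;		/* restore on the way out */
	return id;
}
```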



From:   Marc Eshel/Almaden/IBM@IBMUS
To: "Frank Filz" 
Cc: nfs-ganesha-devel@lists.sourceforge.net
Date:   08/18/2016 09:21 AM
Subject:Re: [Nfs-ganesha-devel] multi fd support



Using NFSv4 I get a read lock on the same file from two different NFS 
clients. The server gets the two locks using the two different owners 
(states). When I unlock on one client, which results in closing the file, 
I get fsal_close() with no owner ID, so I am forced to release all locks, 
which is wrong.
Marc. 



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: 
Date:   08/17/2016 10:04 PM
Subject:RE: multi fd support



> Hi Frank,
> Don't we need fsal_close() to call close2() ?
> We need the owner so we can release only the locks for this fd before
> closing it.
> Marc.

With support_ex enabled, fsal_close is only called when the fsal_obj_handle
is being disposed of or when the LRU thread is closing open file descriptors
(which will now only be those open file descriptors not associated with
state), and its purpose is only to close the global/anonymous file
descriptor. There should be no locks associated with the global file
descriptor.

A few notes for you:

1. Not having a delegation-aware FSAL to work on, I did not explore all the
implications of delegations with support_ex. A delegation probably should
inherit the file descriptor from the initial open state, but maybe it needs
its own.

2. For NFS v4 locks, the support_ex API SHOULD allow you to just have an
open file descriptor associated with the open state and not have to have 
one
per lock state (per lock owner) since your locks already have owners
associated without having to have separate file descriptors. For NFS v3
locks of course there is no way (currently) to tie to an open state (even 
if
there is an NLM_SHARE from the same process). I would like to eventually
look for ties and create them if possible. Of course if it benefits you to
have an open fd per lock owner, that's fine too. And actually, you can 
even
fall back to using the global file descriptor (and note that now the FSAL
actually gets to control when that's opened or closed).

3. I'm not sure you caught that you need to protect the global file
descriptor with the fsal_obj_handle->lock since the content_lock is no
more...

I'm on vacation the rest of the week so I may not be able to respond until
next week.

Frank


---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus







Re: [Nfs-ganesha-devel] multi fd support

2016-08-18 Thread Marc Eshel
Using NFSv4 I get a read lock on the same file from two different NFS 
clients. The server gets the two locks using the two different owners 
(states). When I unlock on one client, which results in closing the file, 
I get fsal_close() with no owner ID, so I am forced to release all locks, 
which is wrong.
Marc. 



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: 
Date:   08/17/2016 10:04 PM
Subject:RE: multi fd support



> Hi Frank,
> Don't we need fsal_close() to call close2() ?
> We need the owner so we can release only the locks for this fd before
> closing it.
> Marc.

With support_ex enabled, fsal_close is only called when the fsal_obj_handle
is being disposed of or when the LRU thread is closing open file descriptors
(which will now only be those open file descriptors not associated with
state), and its purpose is only to close the global/anonymous file
descriptor. There should be no locks associated with the global file
descriptor.

A few notes for you:

1. Not having a delegation-aware FSAL to work on, I did not explore all the
implications of delegations with support_ex. A delegation probably should
inherit the file descriptor from the initial open state, but maybe it needs
its own.

2. For NFS v4 locks, the support_ex API SHOULD allow you to just have an
open file descriptor associated with the open state and not have to have 
one
per lock state (per lock owner) since your locks already have owners
associated without having to have separate file descriptors. For NFS v3
locks of course there is no way (currently) to tie to an open state (even 
if
there is an NLM_SHARE from the same process). I would like to eventually
look for ties and create them if possible. Of course if it benefits you to
have an open fd per lock owner, that's fine too. And actually, you can 
even
fall back to using the global file descriptor (and note that now the FSAL
actually gets to control when that's opened or closed).

3. I'm not sure you caught that you need to protect the global file
descriptor with the fsal_obj_handle->lock since the content_lock is no
more...

I'm on vacation the rest of the week so I may not be able to respond until
next week.

Frank








--
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


[Nfs-ganesha-devel] multi fd support

2016-08-17 Thread Marc Eshel
Hi Frank,
Don't we need fsal_close() to call close2() ?
We need the owner so we can release only the locks for this fd before 
closing it.
Marc.




Re: [Nfs-ganesha-devel] Additional parameters that might be interesting to dynamic update

2016-08-10 Thread Marc Eshel
A common reason to update the exports is adding and removing NFS 
clients; is that covered?
Marc.



From:   Kaleb KEITHLEY 
To: Frank Filz , "'nfs-ganesha-devel'" 

Date:   08/10/2016 01:40 PM
Subject:Re: [Nfs-ganesha-devel] Additional parameters that might be 
interesting to dynamic update



How about NFS-GRACE time?

On 08/10/2016 03:53 PM, Frank Filz wrote:
> Having vanquished (for the most part) dynamic export update, and the 
ease of
> doing so, I have started to think about what other config parameters 
would
> be useful to be able to dynamically update.
> 
> Please read over this and give feedback.
> 
> Thanks
> 
> Frank
> 
> NFS_CORE_PARAM:
> 
> All the ports and such probably aren't a good idea to dynamically 
update.
> 
> Nb_Worker would certainly be useful to be able to change.
> 
> Drop_.*_Errors, should be easy to update, someone might want that.
> 
> DRC options are probably not good to tweak dynamically?
> 
> RPC options are probably not good to tweak dynamically?
> 
> NFS_IP_NAME
> 
> Changing the expiration time probably is ok to change.
> 
> NFS_KRB5
> 
> I don't think any of these are candidates for dynamic update.
> 
> NFSV4
> 
> Changing any of these options will probably wreak havoc, though playing 
with
> the numeric owners options dynamically is probably not too horrid.
> 
> EXPORT { FSAL { } }
> 
> I didn't explore any of these with dynamic export update. FSAL_VFS 
allows
> configuring the type of fsid used for the filesystem, changing that
> dynamically is a bad idea since it changes the format of file handles.
> 
> CACHE_INODE
> 
> Most of these should be updateable. NPart would not be changeable. I'm 
not
> sure if any others would be problematical. I wonder if some of them are 
no
> longer used.
> 
> 9P
> 
> These don't look like good candidates for dynamic update.
> 
> CEPH
> 
> Changing the config path for libcephfs won't accomplish anything
> 
> GPFS
> 
> I'm not sure some of these should even be config variables, not sure if 
any
> make sense for dynamic update
> 
> RGW
> 
> These need consideration, probably not candidates for dynamic update
> 
> VFS/XFS
> 
> Some of the same questionable options as GPFS
> 
> ZFS
> 
> Same options as VFS/XFS
> 
> PROXY
> 
> Not worth making dynamically updateable until we really make this thing
> work...
> 
> 
> 
> 
> 
> 
> 
--
> What NetFlow Analyzer can do for you? Monitors network bandwidth and 
traffic
> patterns at an interface-level. Reveals which users, apps, and protocols 
are 
> consuming the most bandwidth. Provides multi-vendor support for NetFlow, 

> J-Flow, sFlow and other flows. Make informed decisions using capacity 
> planning reports. http://sdm.link/zohodev2dev
> ___
> Nfs-ganesha-devel mailing list
> Nfs-ganesha-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
> 










Re: [Nfs-ganesha-devel] FSAL_MDCACHE and pNFS

2016-07-26 Thread Marc Eshel
Here are all the ops that need to be supported through MDCACHE; I am not 
sure if it is done by way of any call-backs.
Thanks, Marc.

static void dsh_ops_init(struct fsal_dsh_ops *ops)
{
/* redundant copy, but you never know about the future... */
memcpy(ops, &def_dsh_ops, sizeof(struct fsal_dsh_ops));

ops->release = ds_release;
ops->read = ds_read;
ops->read_plus = ds_read_plus;
ops->write = ds_write;
ops->write_plus = ds_write_plus;
ops->commit = ds_commit;
}

/**
 * @brief set ops layout
 *
 * @param ops reference to object
 */
void handle_ops_pnfs(struct fsal_obj_ops *ops)
{
ops->layoutget = layoutget;
ops->layoutreturn = layoutreturn;
ops->layoutcommit = layoutcommit;
}
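The stacking that Daniel's fix ("MDCACHE - stack all the pNFS export ops") provides can be sketched like this. Everything here is a hypothetical, heavily simplified stand-in (the real fsal_obj_ops and default_methods.c are much richer); the point is that MDCACHE must install thin wrappers forwarding each pNFS op to the sub-FSAL, or the call falls through to the inherited default stub.

```c
#include <assert.h>
#include <stddef.h>

typedef int nfsstat4;
#define NFS4_OK 0
#define NFS4ERR_LAYOUTUNAVAILABLE 1	/* stand-in value for the sketch */

struct fsal_obj_handle;
struct fsal_obj_ops {
	nfsstat4 (*layoutget)(struct fsal_obj_handle *hdl);
};
struct fsal_obj_handle {
	struct fsal_obj_ops ops;
};

/* default_methods.c analogue: the stub every FSAL inherits. */
static nfsstat4 default_layoutget(struct fsal_obj_handle *hdl)
{
	(void)hdl;
	return NFS4ERR_LAYOUTUNAVAILABLE;
}

/* An MDCACHE entry wraps the sub-FSAL's handle. */
struct mdcache_entry {
	struct fsal_obj_handle obj;	/* MDCACHE's own handle; first member */
	struct fsal_obj_handle *sub;	/* the real FSAL underneath */
};

/* Pass-through wrapper: forward to the sub-FSAL instead of letting the
 * call land on the inherited default stub. */
static nfsstat4 mdc_layoutget(struct fsal_obj_handle *hdl)
{
	struct mdcache_entry *entry = (struct mdcache_entry *)hdl;

	return entry->sub->ops.layoutget(entry->sub);
}

/* A pretend sub-FSAL that actually supports pNFS layouts. */
static nfsstat4 sub_layoutget(struct fsal_obj_handle *hdl)
{
	(void)hdl;
	return NFS4_OK;
}
```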



From:   Daniel Gryniewicz 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: Frank Filz , 
nfs-ganesha-devel@lists.sourceforge.net
Date:   07/26/2016 06:23 AM
Subject:Re: FSAL_MDCACHE and pNFS



I'll go through and make sure that all ops that are stackable are passed 
through.  Sorry about this.

Daniel

On 07/24/2016 01:53 AM, Marc Eshel wrote:
> Correction: the NFS4ERR_LAYOUTUNAVAILABLE is coming from
> FSAL/default_methods.c
>
> static nfsstat4 layoutget(struct fsal_obj_handle *obj_hdl,
>   struct req_op_context *req_ctx, XDR *loc_body,
>   const struct fsal_layoutget_arg *arg,
>   struct fsal_layoutget_res *res)
> {
> return NFS4ERR_LAYOUTUNAVAILABLE;
> }
>
>
>
> From:   Marc Eshel/Almaden/IBM
> To: Marc Eshel/Almaden/IBM@IBMUS, d...@redhat.com
> Cc: "Frank Filz" ,
> nfs-ganesha-devel@lists.sourceforge.net
> Date:   07/23/2016 09:25 PM
> Subject:FSAL_MDCACHE and pNFS
>
>
> It looks like FSAL_MDCACHE changes patch cafbe60c broke more than 
exports
> op redirection for pNFS.
>
> state_add() is called by acquire_layout_state() with the mutex held, so we
> might need an option on state_add() that indicates whether the mutex is held.
>
> After that problem we hit another export op below that is not directed to
> the real FSAL; is there a way to fix them all rather than one at a time?
>
> Thanks, Marc.
>
> /* max_segment_count is also an indication of if fsal supports
> pnfs */
> max_segment_count = op_ctx->fsal_export->exp_ops.
> fs_maximum_segments(op_ctx->fsal_export);
>
> if (max_segment_count == 0) {
> LogWarn(COMPONENT_PNFS,
> "The FSAL must specify a non-zero
> fs_maximum_segments.");
> nfs_status = NFS4ERR_LAYOUTUNAVAILABLE;
> goto out;
> }
>
>
>
>
> From:   Marc Eshel/Almaden/IBM@IBMUS
> To: "Frank Filz" 
> Cc: nfs-ganesha-devel@lists.sourceforge.net
> Date:   07/22/2016 04:00 PM
> Subject:Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-26
>
>
>
> We are making progress, now we return the file layout attribute
>197.618970099   9141 TRACE_GANESHA: [work-65] nfs4_FSALattr_To_Fattr
> :NFS4 :F_DBG :Encoded attr 62, name = FATTR4_FS_LAYOUT_TYPES
>
> but we fail on the first layout get
>210.125317087   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4
> :DEBUG :Request 2: opcode 50 is OP_LAYOUTGET
>210.137484089   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4
> :M_DBG :NFS4: MID DEBUG: Check export perms export = 00f0 req =
> 0040
>210.149620502   9147 TRACE_GANESHA: [work-71] state_add :RW LOCK 
:CRIT
> :Error 35, write locking 0x7f5e040012b0 (&obj->state_hdl->state_lock) at
> /nas/ganesha/new-ganesha/src/SAL/nfs4_state.c:299
>
> Marc.
>
>
>
> From:   "Frank Filz" 
> To: 
> Date:   07/22/2016 02:16 PM
> Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-26
>
>
>
> Branch next
>
> Tag:V2.4-dev-26
>
> Release Highlights
>
> * A variety of small fixes
>
> * RGW: Add 3 new config options
>
> Signed-off-by: Frank S. Filz 
>
> Contents:
>
> 5ba03b2 Frank S. Filz V2.4-dev-26
> 77c71ae Marc Eshel GPFS_FSAL: Use a shorter file handle.
> a910381 Swen Schillig [valgrind] memory leak in mdcache_exp_release()
> 1c45f6a Matt Benjamin rgw: add 3 new config options
> 93631a9 Malahal Naineni Chomp tailing slash from pseudopath
> 036703e Kaleb S KEITHLEY misc fsals: 32-bit fmt strings, gcc-6.1 
possible
> uninit'd use
> a336200 Soumya Koduri FSAL_GLUSTER/Upcall: Change poll interval to 10us
> 0923be1 Soumya Koduri FSAL_GLUSTER: Coverity fixes
> e9f4d55 ajay nair Fixes error in sample configuration file of FSAL ZFS
> f7c37f7 ajay Removed warnings in make

Re: [Nfs-ganesha-devel] multi-fd

2016-07-25 Thread Marc Eshel
Do we have an FSAL that has implemented multi-fd?



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: 
Date:   07/25/2016 04:10 PM
Subject:RE: multi-fd



> Why do we have reopen2 as part of the multi-fd support? I thought that one
> of the reasons for multi-fd is so we don't have to reopen files when we
> get different/conflicting open options.

Reopen2 is for open upgrade/downgrade.

Frank
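Frank's one-line answer can be illustrated with a tiny sketch. The names below (demo_state, demo_reopen2, the FSAL_O_* flags) are hypothetical stand-ins modeled loosely on Ganesha's openflags, not the real reopen2 signature; the point is that reopen2 changes the share mode of an existing open state in place (upgrade or downgrade) rather than creating a new fd/state for conflicting options.

```c
#include <assert.h>

/* Hypothetical flag names in the spirit of fsal_openflags_t. */
typedef unsigned fsal_openflags_t;
#define FSAL_O_READ  0x1u
#define FSAL_O_WRITE 0x2u

struct demo_state {
	fsal_openflags_t openflags;	/* current share mode of this state */
};

/* reopen2 analogue: replace the state's share mode in place; a real
 * FSAL would also reopen/adjust the underlying fd accordingly. */
static void demo_reopen2(struct demo_state *st, fsal_openflags_t newflags)
{
	st->openflags = newflags;
}
```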










[Nfs-ganesha-devel] multi-fd

2016-07-25 Thread Marc Eshel
Hi Frank,

Why do we have reopen2 as part of the multi-fd support? I thought that one 
of the reasons for multi-fd is so we don't have to reopen files when we 
get different/conflicting open options.
Do we have an FSAL that implemented multi-fd?

Marc.





Re: [Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Make a direct call to state_add_impl() with lock held.

2016-07-25 Thread Marc Eshel
Hi Frank,

Why do we have reopen2 as part of the multi-fd support? I thought that one 
of the reasons for multi-fd is so we don't have to reopen files when we 
get different/conflicting open options.

Marc.



From:   GerritHub 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: CEA-HPC , Matt Benjamin 
, Gluster Community Jenkins , 
openstack-ci-service+rdo-ci-cen...@redhat.com
Date:   07/25/2016 03:13 PM
Subject:Change in ffilz/nfs-ganesha[next]: Make a direct call to 
state_add_impl() with lock held.



>From Frank Filz :

Frank Filz has posted comments on this change.

Change subject: Make a direct call to state_add_impl() with lock held.
..


Patch Set 3:

(1 comment)

https://review.gerrithub.io/#/c/285191/3/src/Protocols/NFS/nfs4_op_layoutget.c

File src/Protocols/NFS/nfs4_op_layoutget.c:

Line 186:if (clientid_owner->so_type != 
STATE_CLIENTID_OWNER_NFSV4) {
> so why do we have it in state_add() ?
Too much CYA programming in Ganesha?

With the code path you have here, I'm pretty sure you're guaranteed that 
this is all right...


-- 
To view, visit https://review.gerrithub.io/285191
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I5fb6f65a7545c63adf6adc09084763932bae591a
Gerrit-PatchSet: 3
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: es...@us.ibm.com
Gerrit-Reviewer: CEA-HPC 
Gerrit-Reviewer: Frank Filz 
Gerrit-Reviewer: Gluster Community Jenkins 
Gerrit-Reviewer: Matt Benjamin 
Gerrit-Reviewer: es...@us.ibm.com
Gerrit-Reviewer: openstack-ci-service+rdo-ci-cen...@redhat.com
Gerrit-HasComments: Yes








Re: [Nfs-ganesha-devel] FSAL_MDCACHE and pNFS

2016-07-25 Thread Marc Eshel
Here are all the ops that need to be supported through MDCACHE; I am not 
sure if it is done by way of any call-backs.
Marc.

static void dsh_ops_init(struct fsal_dsh_ops *ops)
{
/* redundant copy, but you never know about the future... */
memcpy(ops, &def_dsh_ops, sizeof(struct fsal_dsh_ops));

ops->release = ds_release;
ops->read = ds_read;
ops->read_plus = ds_read_plus;
ops->write = ds_write;
ops->write_plus = ds_write_plus;
ops->commit = ds_commit;
}

/**
 * @brief set ops layout
 *
 * @param ops reference to object
 */
void handle_ops_pnfs(struct fsal_obj_ops *ops)
{
ops->layoutget = layoutget;
ops->layoutreturn = layoutreturn;
ops->layoutcommit = layoutcommit;
}




From:   Marc Eshel/Almaden/IBM
To: Marc Eshel/Almaden/IBM@IBMUS, d...@redhat.com
Cc: "Frank Filz" , 
nfs-ganesha-devel@lists.sourceforge.net
Date:   07/23/2016 09:25 PM
Subject:FSAL_MDCACHE and pNFS


It looks like FSAL_MDCACHE changes patch cafbe60c broke more than exports 
op redirection for pNFS.

state_add() is called by acquire_layout_state() with the mutex held, so we 
might need an option on state_add() that indicates whether the mutex is held.

After that problem we hit another export op below that is not directed to 
the real FSAL; is there a way to fix them all rather than one at a time?
 
Thanks, Marc.

/* max_segment_count is also an indication of if fsal supports 
pnfs */
max_segment_count = op_ctx->fsal_export->exp_ops.
fs_maximum_segments(op_ctx->fsal_export);

if (max_segment_count == 0) {
LogWarn(COMPONENT_PNFS,
"The FSAL must specify a non-zero 
fs_maximum_segments.");
nfs_status = NFS4ERR_LAYOUTUNAVAILABLE;
goto out;
} 




From:   Marc Eshel/Almaden/IBM@IBMUS
To: "Frank Filz" 
Cc: nfs-ganesha-devel@lists.sourceforge.net
Date:   07/22/2016 04:00 PM
Subject:Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-26



We are making progress, now we return the file layout attribute
   197.618970099   9141 TRACE_GANESHA: [work-65] nfs4_FSALattr_To_Fattr 
:NFS4 :F_DBG :Encoded attr 62, name = FATTR4_FS_LAYOUT_TYPES

but we fail on the first layout get
   210.125317087   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4 
:DEBUG :Request 2: opcode 50 is OP_LAYOUTGET
   210.137484089   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4 
:M_DBG :NFS4: MID DEBUG: Check export perms export = 00f0 req = 
0040
   210.149620502   9147 TRACE_GANESHA: [work-71] state_add :RW LOCK :CRIT 
:Error 35, write locking 0x7f5e040012b0 (&obj->state_hdl->state_lock) at 
/nas/ganesha/new-ganesha/src/SAL/nfs4_state.c:299

Marc.



From:   "Frank Filz" 
To: 
Date:   07/22/2016 02:16 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-26



Branch next

Tag:V2.4-dev-26

Release Highlights

* A variety of small fixes

* RGW: Add 3 new config options

Signed-off-by: Frank S. Filz 

Contents:

5ba03b2 Frank S. Filz V2.4-dev-26
77c71ae Marc Eshel GPFS_FSAL: Use a shorter file handle.
a910381 Swen Schillig [valgrind] memory leak in mdcache_exp_release()
1c45f6a Matt Benjamin rgw: add 3 new config options
93631a9 Malahal Naineni Chomp tailing slash from pseudopath
036703e Kaleb S KEITHLEY misc fsals: 32-bit fmt strings, gcc-6.1 possible
uninit'd use
a336200 Soumya Koduri FSAL_GLUSTER/Upcall: Change poll interval to 10us
0923be1 Soumya Koduri FSAL_GLUSTER: Coverity fixes
e9f4d55 ajay nair Fixes error in sample configuration file of FSAL ZFS
f7c37f7 ajay Removed warnings in make by solving conflicts from 
cong_yacc.y
7d67e10 Daniel Gryniewicz Clang fix - don't double initialize
9ec03b6 Daniel Gryniewicz Coverity fixes
d3eff0f Daniel Gryniewicz MDCACHE - stack all the pNFS export ops
f75b336 Frank S. Filz NFS4: Actually set fsid and fileid in the returned
attributes











Re: [Nfs-ganesha-devel] FSAL_MDCACHE and pNFS

2016-07-23 Thread Marc Eshel
Correction: the NFS4ERR_LAYOUTUNAVAILABLE is coming from 
FSAL/default_methods.c

static nfsstat4 layoutget(struct fsal_obj_handle *obj_hdl,
  struct req_op_context *req_ctx, XDR *loc_body,
  const struct fsal_layoutget_arg *arg,
  struct fsal_layoutget_res *res)
{
return NFS4ERR_LAYOUTUNAVAILABLE;
}



From:   Marc Eshel/Almaden/IBM
To: Marc Eshel/Almaden/IBM@IBMUS, d...@redhat.com
Cc: "Frank Filz" , 
nfs-ganesha-devel@lists.sourceforge.net
Date:   07/23/2016 09:25 PM
Subject:FSAL_MDCACHE and pNFS


It looks like FSAL_MDCACHE changes patch cafbe60c broke more than exports 
op redirection for pNFS.

state_add() is called by acquire_layout_state() with the mutex held, so we 
might need an option on state_add() that indicates whether the mutex is held.

After that problem we hit another export op below that is not directed to 
the real FSAL; is there a way to fix them all rather than one at a time?
 
Thanks, Marc.

/* max_segment_count is also an indication of if fsal supports 
pnfs */
max_segment_count = op_ctx->fsal_export->exp_ops.
fs_maximum_segments(op_ctx->fsal_export);

if (max_segment_count == 0) {
LogWarn(COMPONENT_PNFS,
"The FSAL must specify a non-zero 
fs_maximum_segments.");
nfs_status = NFS4ERR_LAYOUTUNAVAILABLE;
goto out;
} 




From:   Marc Eshel/Almaden/IBM@IBMUS
To: "Frank Filz" 
Cc: nfs-ganesha-devel@lists.sourceforge.net
Date:   07/22/2016 04:00 PM
Subject:Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-26



We are making progress, now we return the file layout attribute
   197.618970099   9141 TRACE_GANESHA: [work-65] nfs4_FSALattr_To_Fattr 
:NFS4 :F_DBG :Encoded attr 62, name = FATTR4_FS_LAYOUT_TYPES

but we fail on the first layout get
   210.125317087   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4 
:DEBUG :Request 2: opcode 50 is OP_LAYOUTGET
   210.137484089   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4 
:M_DBG :NFS4: MID DEBUG: Check export perms export = 00f0 req = 
0040
   210.149620502   9147 TRACE_GANESHA: [work-71] state_add :RW LOCK :CRIT 
:Error 35, write locking 0x7f5e040012b0 (&obj->state_hdl->state_lock) at 
/nas/ganesha/new-ganesha/src/SAL/nfs4_state.c:299

Marc.



From:   "Frank Filz" 
To: 
Date:   07/22/2016 02:16 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-26



Branch next

Tag:V2.4-dev-26

Release Highlights

* A variety of small fixes

* RGW: Add 3 new config options

Signed-off-by: Frank S. Filz 

Contents:

5ba03b2 Frank S. Filz V2.4-dev-26
77c71ae Marc Eshel GPFS_FSAL: Use a shorter file handle.
a910381 Swen Schillig [valgrind] memory leak in mdcache_exp_release()
1c45f6a Matt Benjamin rgw: add 3 new config options
93631a9 Malahal Naineni Chomp tailing slash from pseudopath
036703e Kaleb S KEITHLEY misc fsals: 32-bit fmt strings, gcc-6.1 possible
uninit'd use
a336200 Soumya Koduri FSAL_GLUSTER/Upcall: Change poll interval to 10us
0923be1 Soumya Koduri FSAL_GLUSTER: Coverity fixes
e9f4d55 ajay nair Fixes error in sample configuration file of FSAL ZFS
f7c37f7 ajay Removed warnings in make by solving conflicts from 
cong_yacc.y
7d67e10 Daniel Gryniewicz Clang fix - don't double initialize
9ec03b6 Daniel Gryniewicz Coverity fixes
d3eff0f Daniel Gryniewicz MDCACHE - stack all the pNFS export ops
f75b336 Frank S. Filz NFS4: Actually set fsid and fileid in the returned
attributes

















---

[Nfs-ganesha-devel] FSAL_MDCACHE and pNFS

2016-07-23 Thread Marc Eshel
It looks like FSAL_MDCACHE changes patch cafbe60c broke more than exports 
op redirection for pNFS.

state_add() is called by acquire_layout_state() with the mutex held, so we 
might need an option on state_add() that indicates whether the mutex is held.

After that problem we hit another export op below that is not directed to 
the real FSAL; is there a way to fix them all rather than one at a time?
 
Thanks, Marc.

/* max_segment_count is also an indication of if fsal supports 
pnfs */
max_segment_count = op_ctx->fsal_export->exp_ops.
fs_maximum_segments(op_ctx->fsal_export);

if (max_segment_count == 0) {
LogWarn(COMPONENT_PNFS,
"The FSAL must specify a non-zero 
fs_maximum_segments.");
nfs_status = NFS4ERR_LAYOUTUNAVAILABLE;
goto out;
    } 



From:   Marc Eshel/Almaden/IBM@IBMUS
To: "Frank Filz" 
Cc: nfs-ganesha-devel@lists.sourceforge.net
Date:   07/22/2016 04:00 PM
Subject:Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-26



We are making progress, now we return the file layout attribute
   197.618970099   9141 TRACE_GANESHA: [work-65] nfs4_FSALattr_To_Fattr 
:NFS4 :F_DBG :Encoded attr 62, name = FATTR4_FS_LAYOUT_TYPES

but we fail on the first layout get
   210.125317087   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4 
:DEBUG :Request 2: opcode 50 is OP_LAYOUTGET
   210.137484089   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4 
:M_DBG :NFS4: MID DEBUG: Check export perms export = 00f0 req = 
0040
   210.149620502   9147 TRACE_GANESHA: [work-71] state_add :RW LOCK :CRIT 
:Error 35, write locking 0x7f5e040012b0 (&obj->state_hdl->state_lock) at 
/nas/ganesha/new-ganesha/src/SAL/nfs4_state.c:299

Marc.



From:   "Frank Filz" 
To: 
Date:   07/22/2016 02:16 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-26



Branch next

Tag:V2.4-dev-26

Release Highlights

* A variety of small fixes

* RGW: Add 3 new config options

Signed-off-by: Frank S. Filz 

Contents:

5ba03b2 Frank S. Filz V2.4-dev-26
77c71ae Marc Eshel GPFS_FSAL: Use a shorter file handle.
a910381 Swen Schillig [valgrind] memory leak in mdcache_exp_release()
1c45f6a Matt Benjamin rgw: add 3 new config options
93631a9 Malahal Naineni Chomp tailing slash from pseudopath
036703e Kaleb S KEITHLEY misc fsals: 32-bit fmt strings, gcc-6.1 possible
uninit'd use
a336200 Soumya Koduri FSAL_GLUSTER/Upcall: Change poll interval to 10us
0923be1 Soumya Koduri FSAL_GLUSTER: Coverity fixes
e9f4d55 ajay nair Fixes error in sample configuration file of FSAL ZFS
f7c37f7 ajay Removed warnings in make by solving conflicts from 
cong_yacc.y
7d67e10 Daniel Gryniewicz Clang fix - don't double initialize
9ec03b6 Daniel Gryniewicz Coverity fixes
d3eff0f Daniel Gryniewicz MDCACHE - stack all the pNFS export ops
f75b336 Frank S. Filz NFS4: Actually set fsid and fileid in the returned
attributes
















--
What NetFlow Analyzer can do for you? Monitors network bandwidth and traffic
patterns at an interface-level. Reveals which users, apps, and protocols are 
consuming the most bandwidth. Provides multi-vendor support for NetFlow, 
J-Flow, sFlow and other flows. Make informed decisions using capacity planning
reports.http://sdm.link/zohodev2dev
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-26

2016-07-22 Thread Marc Eshel
We are making progress; now we return the file layout attribute:
   197.618970099   9141 TRACE_GANESHA: [work-65] nfs4_FSALattr_To_Fattr 
:NFS4 :F_DBG :Encoded attr 62, name = FATTR4_FS_LAYOUT_TYPES

but we fail on the first LAYOUTGET:
   210.125317087   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4 
:DEBUG :Request 2: opcode 50 is OP_LAYOUTGET
   210.137484089   9147 TRACE_GANESHA: [work-71] nfs4_Compound :NFS4 
:M_DBG :NFS4: MID DEBUG: Check export perms export = 00f0 req = 
0040
   210.149620502   9147 TRACE_GANESHA: [work-71] state_add :RW LOCK :CRIT 
:Error 35, write locking 0x7f5e040012b0 (&obj->state_hdl->state_lock) at 
/nas/ganesha/new-ganesha/src/SAL/nfs4_state.c:299

Marc.



From:   "Frank Filz" 
To: 
Date:   07/22/2016 02:16 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-26



Branch next

Tag:V2.4-dev-26

Release Highlights

* A variety of small fixes

* RGW: Add 3 new config options

Signed-off-by: Frank S. Filz 

Contents:

5ba03b2 Frank S. Filz V2.4-dev-26
77c71ae Marc Eshel GPFS_FSAL: Use a shorter file handle.
a910381 Swen Schillig [valgrind] memory leak in mdcache_exp_release()
1c45f6a Matt Benjamin rgw: add 3 new config options
93631a9 Malahal Naineni Chomp tailing slash from pseudopath
036703e Kaleb S KEITHLEY misc fsals: 32-bit fmt strings, gcc-6.1 possible
uninit'd use
a336200 Soumya Koduri FSAL_GLUSTER/Upcall: Change poll interval to 10us
0923be1 Soumya Koduri FSAL_GLUSTER: Coverity fixes
e9f4d55 ajay nair Fixes error in sample configuration file of FSAL ZFS
f7c37f7 ajay Removed warnings in make by solving conflicts from 
cong_yacc.y
7d67e10 Daniel Gryniewicz Clang fix - don't double initialize
9ec03b6 Daniel Gryniewicz Coverity fixes
d3eff0f Daniel Gryniewicz MDCACHE - stack all the pNFS export ops
f75b336 Frank S. Filz NFS4: Actually set fsid and fileid in the returned
attributes




--
What NetFlow Analyzer can do for you? Monitors network bandwidth and 
traffic
patterns at an interface-level. Reveals which users, apps, and protocols 
are 
consuming the most bandwidth. Provides multi-vendor support for NetFlow, 
J-Flow, sFlow and other flows. Make informed decisions using capacity 
planning
reports.http://sdm.link/zohodev2dev
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel








Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-25

2016-07-15 Thread Marc Eshel
I don't know how the new FSAL_MDCACHE is working, but it is calling the
default operation instead of the real FSAL export's when encoding
fs_layouttypes().
Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: 
Date:   07/15/2016 04:43 PM
Subject:RE: [Nfs-ganesha-devel] Announce Push of V2.4-dev-25



 
> It looks like we are not telling the NFS client that we support pNFS; I
> don't see a call to encode_fs_layout_types()
> 
> [FATTR4_FS_LAYOUT_TYPES] = {
> .name = "FATTR4_FS_LAYOUT_TYPES",
> .supported = 1,
> .size_fattr4 = sizeof(fattr4_fs_layout_types),
> .encode = encode_fs_layout_types,
> .decode = decode_fs_layout_types,
> .access = FATTR4_ATTR_READ}

Hmm, I'm not sure what drives that... I may need some help next week to dig
into that (I don't really have any pNFS capability right now...).

Frank

> From:   "Frank Filz" 
> To: 
> Date:   07/15/2016 11:54 AM
> Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-25
> 
> 
> 
> Branch next
> 
> Tag:V2.4-dev-25
> 
> NOTE: If your checkpatch.conf is not directly taken from the tree,
>   please update from the tree.
> 
> Release Highlights
> 
> * This is a huge merge...
> 
> * FSAL_CEPH is enabled for support_ex
> 
> * FSAL_GLUSTER is enabled for support_ex
> 
> * FSAL_PSEUDO is enabled for support_ex (not much of a big deal...)
> 
> * Attributes are no longer in fsal_obj_handle. FSAL_MDCACHE does 
maintain
>   attributes per the caching rules.
> 
> * ACL validity is managed separately from the rest of the attributes.
> 
> * FSAL methods that return a new fsal_obj_handle (creates, lookups,
>   create_handle) also return the attributes if requested. FSAL_MDCACHE
>   does so as well as the NFS v3 operations (for use in post op attr).
> 
> * Config parser has been fixed so "/" is an allowed path and no longer
>   needs to be quoted.
> 
> * A variety of bug fixes
> 
> Signed-off-by: Frank S. Filz 
> 
> Contents:
> 
> e545192 Frank S. Filz V2.4-dev-25
> 1a5fdb2 ajay Modified PATHNAME macro in lexical analyzer 
file(conf_lex.l)
to
> match a single slash ('/') as a valid path
> 4cf28f1 Malahal Naineni Explicitly set privilegedport to false in 
default
> PseudoFS export 8191f7c Matt Benjamin idmapping: add
> only_numeric_owners option (off by
> default)
> d7acea1 Swen Schillig [state_misc.c] Remove assert() for invalid owner
type.
> 83fdcd5 Daniel Gryniewicz Leak fix in NFS4_OP_OPEN
> b758982 Daniel Gryniewicz Clang fix - don't double increment, +2 instead
> ad1bdf9 Soumya Koduri FSAL_GLUSTER: Remove old APIs replaced by ex-
> APIs e01bc8c Soumya Koduri glusterfs_open2: Handle FSAL_NO_CREATE
> mode
> 00f9779 Soumya Koduri Handle mdcache_create_handle failures 67b896c
> Soumya Koduri FSAL_GLUSTER: support_ex notions for mknod, mkdir &
> symlink operations
> 168a173 Soumya Koduri FSAL_GLUSTER: enable support for extended fops
> 0ad1f87 Soumya Koduri FSAL_GLUSTER: close2() fop 554599e Soumya Koduri
> FSAL_GLUSTER: Implement setattr2() fop
> 071e0f5 Soumya Koduri FSAL_GLUSTER: getattr2() fop 45de78d Soumya
> Koduri FSAL_GLUSTER: Implement lock_op2() fop 0900fab Soumya Koduri
> FSAL_GLUSTER: enable 'lock_support_owner'
> c37fc47 Soumya Koduri FSAL_GLUSTER: commit2() fop 970a85a Soumya
> Koduri FSAL_GLUSTER: Implement write2() fop f90e9ab Soumya Koduri
> FSAL_GLUSTER: Implement read2() fop 3927b6c Soumya Koduri
> FSAL_GLUSTER: Implement reopen2() fop d39318b Soumya Koduri Change
> default status2 fop to return openflags
> cb89400 Soumya Koduri FSAL_GLUSTER: implement open2 fop da8746b
> Soumya Koduri FSAL_GLUSTER: Given fd, fetch requested attributes
> 3990902 Soumya Koduri FSAL_GLUSTER: Given an object handle and state
> find its fd 3cffd7d Soumya Koduri FSAL_GLUSTER: Routines to open and 
close
> global_fd
> 85319f0 Soumya Koduri FSAL_GLUSTER: Added routines to open/close my_fd
> dd5044d Soumya Koduri FSAL_GLUSTER: Add skeletons of *fops2*
> 94b9b23 Soumya Koduri FSAL_GLUSTER: Use struct glusterfs_fd to store
> common fd
> 7e295a8 Soumya Koduri FSAL_GLUSTER: Define glusterfs fd structures and
> allocate state
> 27a53a6 Frank S. Filz FSAL_GPFS: Optimize to only ask xstat for 
attributes
> actually requested
> f3ad267 Frank S. Filz Fixup checkpatch.conf to match what I (maintainer)
> actually use
> 6395d61 Frank S. Filz Use correct mounted_on_fileid for NFS v4
> 73c6789 Frank S. Filz MDCACHE: Add debug when methods return attributes
> e1085ca Frank S. Filz FSAL_CEPH: Use passed target obj handle type to
decide
> unlink or rmdir ca

Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-25

2016-07-15 Thread Marc Eshel
It looks like we are not telling the NFS client that we support pNFS; I
don't see a call to encode_fs_layout_types():

[FATTR4_FS_LAYOUT_TYPES] = {
.name = "FATTR4_FS_LAYOUT_TYPES",
.supported = 1,
.size_fattr4 = sizeof(fattr4_fs_layout_types),
.encode = encode_fs_layout_types,
.decode = decode_fs_layout_types,
.access = FATTR4_ATTR_READ}



From:   "Frank Filz" 
To: 
Date:   07/15/2016 11:54 AM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-25



Branch next

Tag:V2.4-dev-25

NOTE: If your checkpatch.conf is not directly taken from the tree,
  please update from the tree.

Release Highlights

* This is a huge merge...

* FSAL_CEPH is enabled for support_ex

* FSAL_GLUSTER is enabled for support_ex

* FSAL_PSEUDO is enabled for support_ex (not much of a big deal...)

* Attributes are no longer in fsal_obj_handle. FSAL_MDCACHE does maintain
  attributes per the caching rules.

* ACL validity is managed separately from the rest of the attributes.

* FSAL methods that return a new fsal_obj_handle (creates, lookups,
  create_handle) also return the attributes if requested. FSAL_MDCACHE
  does so as well as the NFS v3 operations (for use in post op attr).

* Config parser has been fixed so "/" is an allowed path and no longer
  needs to be quoted.

* A variety of bug fixes

Signed-off-by: Frank S. Filz 

Contents:

e545192 Frank S. Filz V2.4-dev-25
1a5fdb2 ajay Modified PATHNAME macro in lexical analyzer file(conf_lex.l) 
to
match a single slash ('/') as a valid path
4cf28f1 Malahal Naineni Explicitly set privilegedport to false in default
PseudoFS export
8191f7c Matt Benjamin idmapping: add only_numeric_owners option (off by
default)
d7acea1 Swen Schillig [state_misc.c] Remove assert() for invalid owner 
type.
83fdcd5 Daniel Gryniewicz Leak fix in NFS4_OP_OPEN
b758982 Daniel Gryniewicz Clang fix - don't double increment, +2 instead
ad1bdf9 Soumya Koduri FSAL_GLUSTER: Remove old APIs replaced by ex-APIs
e01bc8c Soumya Koduri glusterfs_open2: Handle FSAL_NO_CREATE mode
00f9779 Soumya Koduri Handle mdcache_create_handle failures
67b896c Soumya Koduri FSAL_GLUSTER: support_ex notions for mknod, mkdir &
symlink operations
168a173 Soumya Koduri FSAL_GLUSTER: enable support for extended fops
0ad1f87 Soumya Koduri FSAL_GLUSTER: close2() fop
554599e Soumya Koduri FSAL_GLUSTER: Implement setattr2() fop
071e0f5 Soumya Koduri FSAL_GLUSTER: getattr2() fop
45de78d Soumya Koduri FSAL_GLUSTER: Implement lock_op2() fop
0900fab Soumya Koduri FSAL_GLUSTER: enable 'lock_support_owner'
c37fc47 Soumya Koduri FSAL_GLUSTER: commit2() fop
970a85a Soumya Koduri FSAL_GLUSTER: Implement write2() fop
f90e9ab Soumya Koduri FSAL_GLUSTER: Implement read2() fop
3927b6c Soumya Koduri FSAL_GLUSTER: Implement reopen2() fop
d39318b Soumya Koduri Change default status2 fop to return openflags
cb89400 Soumya Koduri FSAL_GLUSTER: implement open2 fop
da8746b Soumya Koduri FSAL_GLUSTER: Given fd, fetch requested attributes
3990902 Soumya Koduri FSAL_GLUSTER: Given an object handle and state find
its fd
3cffd7d Soumya Koduri FSAL_GLUSTER: Routines to open and close global_fd
85319f0 Soumya Koduri FSAL_GLUSTER: Added routines to open/close my_fd
dd5044d Soumya Koduri FSAL_GLUSTER: Add skeletons of *fops2*
94b9b23 Soumya Koduri FSAL_GLUSTER: Use struct glusterfs_fd to store 
common
fd
7e295a8 Soumya Koduri FSAL_GLUSTER: Define glusterfs fd structures and
allocate state
27a53a6 Frank S. Filz FSAL_GPFS: Optimize to only ask xstat for attributes
actually requested
f3ad267 Frank S. Filz Fixup checkpatch.conf to match what I (maintainer)
actually use
6395d61 Frank S. Filz Use correct mounted_on_fileid for NFS v4
73c6789 Frank S. Filz MDCACHE: Add debug when methods return attributes
e1085ca Frank S. Filz FSAL_CEPH: Use passed target obj handle type to 
decide
unlink or rmdir
cab069e Frank S. Filz FSAL_CEPH: Add some debug
3df6815 Frank S. Filz 9P: Get fileid directly out of obj_handle, don't use
fsal_fileid
3e068a2 Frank S. Filz MDCACHE: Add call to FSAL merge method back in
cf6c0c4 Frank S. Filz Make sure we don't leak ACLs on getattrs
003d88f Frank S. Filz MDCACHE: Manage ACL validity separate from rest of
attributes
4a29e5c Frank S. Filz FSAL_PSEUDO: Allow setting attributes on mkdir from
caller
52a03e7 Frank S. Filz MDCACHE: make mdc_unreachable and mdcache_kill_entry
more debuggable
6af776d Frank S. Filz MDCACHE: Add some debug
de7f571 Frank S. Filz Pass attributes out on fsal_obj_handle create and
readdir
428dc9c Frank S. Filz MDCACHE: Resolve name collisions in dirent tables
9b9bba0 Frank S. Filz MDCACHE: Cleanup almost unused MDCACHE flags
9d44fdb Frank S. Filz MDCACHE: pass to mdcache_new_entry 
MDCACHE_FLAG_CREATE
for mkdir
416e94e Frank S. Filz MDCACHE: Do mdcache_invalidate stuff explicitly.
f44655b Frank S. Filz FSAL_VFS and FSAL_XFS: Make temp file for testing 
OFD
locks more secure
94d6d9b Frank S. Filz FSAL_VFS: Add E

Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-25

2016-07-15 Thread Marc Eshel
Thanks, it works now. Is there a reason not to change the source?

diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index f4fce42..4a67f06 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -187,7 +187,7 @@ option(USE_FSAL_CEPH "build CEPH FSAL shared library" ON)
 option(USE_FSAL_GPFS "build GPFS FSAL" ON)
 option(USE_FSAL_ZFS "build ZFS FSAL" ON)
 option(USE_FSAL_XFS "build XFS support in VFS FSAL" ON)
-option(USE_FSAL_PANFS "build PanFS support in VFS FSAL" OFF)
+#option(USE_FSAL_PANFS "build PanFS support in VFS FSAL" OFF)
 option(USE_FSAL_GLUSTER "build GLUSTER FSAL shared library" ON)
 option(USE_FSAL_NULL "build NULL FSAL shared library" ON)
 option(USE_FSAL_RGW "build RGW FSAL shared library" OFF) 
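On the question of changing the source: USE_FSAL_PANFS is a CMake cache option, so it can be overridden per build without editing CMakeLists.txt. A hedged sketch of the configure line (the variable name is taken from the diff above; the source path is illustrative):

```shell
# Override the cache option at configure time instead of editing the file;
# USE_FSAL_PANFS comes from the diff above, the path is illustrative.
cmake -DUSE_FSAL_PANFS=OFF /path/to/nfs-ganesha/src
```

Commenting the option() line out works too, but the -D override keeps the tree unmodified and survives a git pull.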



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: 
Date:   07/15/2016 03:49 PM
Subject:RE: [Nfs-ganesha-devel] Announce Push of V2.4-dev-25



> [ 91%] Building C object FSAL/FSAL_VFS/panfs/CMakeFiles/fsalpanfs.dir/handle.c.o
> /nas/ganesha/new-ganesha/src/FSAL/FSAL_VFS/panfs/handle.c: In function ‘panfs_getattrs’:
> /nas/ganesha/new-ganesha/src/FSAL/FSAL_VFS/panfs/handle.c:42:36: error: ‘struct vfs_fsal_obj_handle’ has no member named ‘attributes’
>   struct attrlist *attrib = &vfs_hdl->attributes;
> ^
> /nas/ganesha/new-ganesha/src/FSAL/FSAL_VFS/panfs/handle.c: In function ‘panfs_handle_ops_init’:
> /nas/ganesha/new-ganesha/src/FSAL/FSAL_VFS/panfs/handle.c:76:32: error: assignment from incompatible pointer type [-Werror]
>   panfs_hdl->panfs_ops.getattrs = panfs_getattrs;
> ^
> cc1: all warnings being treated as errors
> make[2]: *** [FSAL/FSAL_VFS/panfs/CMakeFiles/fsalpanfs.dir/handle.c.o] Error 1
> make[1]: *** [FSAL/FSAL_VFS/panfs/CMakeFiles/fsalpanfs.dir/all] Error 2

You need to disable the PanFS build.

It shouldn't be enabled by any of the included build configurations.

Frank

> From:   "Frank Filz" 
> To: 
> Date:   07/15/2016 11:54 AM
> Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-25
> 
> 
> 
> Branch next
> 
> Tag:V2.4-dev-25
> 
> NOTE: If your checkpatch.conf is not directly taken from the tree,
>   please update from the tree.
> 
> Release Highlights
> 
> * This is a huge merge...
> 
> * FSAL_CEPH is enabled for support_ex
> 
> * FSAL_GLUSTER is enabled for support_ex
> 
> * FSAL_PSEUDO is enabled for support_ex (not much of a big deal...)
> 
> * Attributes are no longer in fsal_obj_handle. FSAL_MDCACHE does 
maintain
>   attributes per the caching rules.
> 
> * ACL validity is managed separately from the rest of the attributes.
> 
> * FSAL methods that return a new fsal_obj_handle (creates, lookups,
>   create_handle) also return the attributes if requested. FSAL_MDCACHE
>   does so as well as the NFS v3 operations (for use in post op attr).
> 
> * Config parser has been fixed so "/" is an allowed path and no longer
>   needs to be quoted.
> 
> * A variety of bug fixes
> 
> Signed-off-by: Frank S. Filz 
> 
> Contents:
> 
> e545192 Frank S. Filz V2.4-dev-25
> 1a5fdb2 ajay Modified PATHNAME macro in lexical analyzer 
file(conf_lex.l) to
> match a single slash ('/') as a valid path
> 4cf28f1 Malahal Naineni Explicitly set privilegedport to false in 
default
> PseudoFS export 8191f7c Matt Benjamin idmapping: add
> only_numeric_owners option (off by
> default)
> d7acea1 Swen Schillig [state_misc.c] Remove assert() for invalid owner 
type.
> 83fdcd5 Daniel Gryniewicz Leak fix in NFS4_OP_OPEN
> b758982 Daniel Gryniewicz Clang fix - don't double increment, +2 instead
> ad1bdf9 Soumya Koduri FSAL_GLUSTER: Remove old APIs replaced by ex-
> APIs e01bc8c Soumya Koduri glusterfs_open2: Handle FSAL_NO_CREATE
> mode
> 00f9779 Soumya Koduri Handle mdcache_create_handle failures 67b896c
> Soumya Koduri FSAL_GLUSTER: support_ex notions for mknod, mkdir &
> symlink operations
> 168a173 Soumya Koduri FSAL_GLUSTER: enable support for extended fops
> 0ad1f87 Soumya Koduri FSAL_GLUSTER: close2() fop 554599e Soumya Koduri
> FSAL_GLUSTER: Implement setattr2() fop
> 071e0f5 Soumya Koduri FSAL_GLUSTER: getattr2() fop 45de78d Soumya
> Koduri FSAL_GLUSTER: Implement lock_op2() fop 0900fab Soumya Koduri
> FSAL_GLUSTER: enable 'lock_support_owner'
> c37fc47 Soumya Koduri FSAL_GLUSTER: commit2() fop 970a85a Soumya
> Koduri FSAL_GLUSTER: Implement write2() fop f90e9ab Soumya Koduri
> FSAL_GLUSTER: Implement read2() fop 3927b6c Soumya Koduri
> FSAL_GLUSTER: Implement reopen2() fop d39318b Soumya Koduri Change
> default status2 fop to return open

Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-25

2016-07-15 Thread Marc Eshel
[ 91%] Building C object 
FSAL/FSAL_VFS/panfs/CMakeFiles/fsalpanfs.dir/handle.c.o
/nas/ganesha/new-ganesha/src/FSAL/FSAL_VFS/panfs/handle.c: In function ‘panfs_getattrs’:
/nas/ganesha/new-ganesha/src/FSAL/FSAL_VFS/panfs/handle.c:42:36: error: ‘struct vfs_fsal_obj_handle’ has no member named ‘attributes’
  struct attrlist *attrib = &vfs_hdl->attributes;
^
/nas/ganesha/new-ganesha/src/FSAL/FSAL_VFS/panfs/handle.c: In function ‘panfs_handle_ops_init’:
/nas/ganesha/new-ganesha/src/FSAL/FSAL_VFS/panfs/handle.c:76:32: error: assignment from incompatible pointer type [-Werror]
  panfs_hdl->panfs_ops.getattrs = panfs_getattrs;
^
cc1: all warnings being treated as errors
make[2]: *** [FSAL/FSAL_VFS/panfs/CMakeFiles/fsalpanfs.dir/handle.c.o] Error 1
make[1]: *** [FSAL/FSAL_VFS/panfs/CMakeFiles/fsalpanfs.dir/all] Error 2
cc1: all warnings being treated as errors
make[2]: *** [FSAL/FSAL_VFS/panfs/CMakeFiles/fsalpanfs.dir/handle.c.o] 
Error 1
make[1]: *** [FSAL/FSAL_VFS/panfs/CMakeFiles/fsalpanfs.dir/all] Error 2



From:   "Frank Filz" 
To: 
Date:   07/15/2016 11:54 AM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-25



Branch next

Tag:V2.4-dev-25

NOTE: If your checkpatch.conf is not directly taken from the tree,
  please update from the tree.

Release Highlights

* This is a huge merge...

* FSAL_CEPH is enabled for support_ex

* FSAL_GLUSTER is enabled for support_ex

* FSAL_PSEUDO is enabled for support_ex (not much of a big deal...)

* Attributes are no longer in fsal_obj_handle. FSAL_MDCACHE does maintain
  attributes per the caching rules.

* ACL validity is managed separately from the rest of the attributes.

* FSAL methods that return a new fsal_obj_handle (creates, lookups,
  create_handle) also return the attributes if requested. FSAL_MDCACHE
  does so as well as the NFS v3 operations (for use in post op attr).

* Config parser has been fixed so "/" is an allowed path and no longer
  needs to be quoted.

* A variety of bug fixes

Signed-off-by: Frank S. Filz 

Contents:

e545192 Frank S. Filz V2.4-dev-25
1a5fdb2 ajay Modified PATHNAME macro in lexical analyzer file(conf_lex.l) 
to
match a single slash ('/') as a valid path
4cf28f1 Malahal Naineni Explicitly set privilegedport to false in default
PseudoFS export
8191f7c Matt Benjamin idmapping: add only_numeric_owners option (off by
default)
d7acea1 Swen Schillig [state_misc.c] Remove assert() for invalid owner 
type.
83fdcd5 Daniel Gryniewicz Leak fix in NFS4_OP_OPEN
b758982 Daniel Gryniewicz Clang fix - don't double increment, +2 instead
ad1bdf9 Soumya Koduri FSAL_GLUSTER: Remove old APIs replaced by ex-APIs
e01bc8c Soumya Koduri glusterfs_open2: Handle FSAL_NO_CREATE mode
00f9779 Soumya Koduri Handle mdcache_create_handle failures
67b896c Soumya Koduri FSAL_GLUSTER: support_ex notions for mknod, mkdir &
symlink operations
168a173 Soumya Koduri FSAL_GLUSTER: enable support for extended fops
0ad1f87 Soumya Koduri FSAL_GLUSTER: close2() fop
554599e Soumya Koduri FSAL_GLUSTER: Implement setattr2() fop
071e0f5 Soumya Koduri FSAL_GLUSTER: getattr2() fop
45de78d Soumya Koduri FSAL_GLUSTER: Implement lock_op2() fop
0900fab Soumya Koduri FSAL_GLUSTER: enable 'lock_support_owner'
c37fc47 Soumya Koduri FSAL_GLUSTER: commit2() fop
970a85a Soumya Koduri FSAL_GLUSTER: Implement write2() fop
f90e9ab Soumya Koduri FSAL_GLUSTER: Implement read2() fop
3927b6c Soumya Koduri FSAL_GLUSTER: Implement reopen2() fop
d39318b Soumya Koduri Change default status2 fop to return openflags
cb89400 Soumya Koduri FSAL_GLUSTER: implement open2 fop
da8746b Soumya Koduri FSAL_GLUSTER: Given fd, fetch requested attributes
3990902 Soumya Koduri FSAL_GLUSTER: Given an object handle and state find
its fd
3cffd7d Soumya Koduri FSAL_GLUSTER: Routines to open and close global_fd
85319f0 Soumya Koduri FSAL_GLUSTER: Added routines to open/close my_fd
dd5044d Soumya Koduri FSAL_GLUSTER: Add skeletons of *fops2*
94b9b23 Soumya Koduri FSAL_GLUSTER: Use struct glusterfs_fd to store 
common
fd
7e295a8 Soumya Koduri FSAL_GLUSTER: Define glusterfs fd structures and
allocate state
27a53a6 Frank S. Filz FSAL_GPFS: Optimize to only ask xstat for attributes
actually requested
f3ad267 Frank S. Filz Fixup checkpatch.conf to match what I (maintainer)
actually use
6395d61 Frank S. Filz Use correct mounted_on_fileid for NFS v4
73c6789 Frank S. Filz MDCACHE: Add debug when methods return attributes
e1085ca Frank S. Filz FSAL_CEPH: Use passed target obj handle type to 
decide
unlink or rmdir
cab069e Frank S. Filz FSAL_CEPH: Add some debug
3df6815 Frank S. Filz 9P: Get fileid directly out of obj_handle, don't use
fsal_fileid
3e068a2 Frank S. Filz MDCACHE: Add call to FSAL merge method back in
cf6c0c4 Frank S. Filz Make sure we don't leak ACLs on getattrs
003d88f Frank S. Filz MDCACHE: Manage ACL validity separate from rest of
attributes
4a29e5c Frank S. Filz FSAL_PSEUDO: Allow setting attributes on mkdir from
caller
52a03e7 Frank S. Filz MDCACHE: make mdc_unreachable and mdcache_kill_entry
more debuggable
6af776d Frank S. Filz MDCACHE: Add some debug
de7f571 Frank S. Filz Pass attributes out on fsal_obj_handle 

Re: [Nfs-ganesha-devel] no_subtree_check option

2016-06-21 Thread Marc Eshel
Do you have a general idea of how it would be accomplished? Would we need
something like the kNFS fh_to_parent call?
The reason I ask is that this call is the only reason we keep the parent
inode in the fh; if we don't need it, we can make the fh much smaller.
Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: , "'Niels de Vos'" 
, "'NFS Ganesha Developers'" 

Date:   06/21/2016 12:40 PM
Subject:RE: [Nfs-ganesha-devel] no_subtree_check option



> Does Ganesha have something equivalent to no_subtree_check? I am
> interested in the subtree_check part, to verify that the fh is not trying
> to access files above the export point, like in kNFS.
> Thanks, Marc.

No, Ganesha doesn't have such an option. I'm not sure how feasible it 
would
be to implement...

I have looked at, but never finished, making sure that a filesystem is only
accessed via an exportid under which that filesystem is actually exported.
Right now, if you have two VFS filesystems exported, one with more
restrictive export permissions, nothing prevents a malicious client from
replacing the exportid in a handle from the more restrictive export with
the exportid of the less restrictive export, and thus gaining unexpected
access.

It's not a hard problem to solve, just one bit of the fsal_filesystem 
stuff
I never finished...

Frank








--
Attend Shape: An AT&T Tech Expo July 15-16. Meet us at AT&T Park in San
Francisco, CA to explore cutting-edge tech and listen to tech luminaries
present their vision of the future. This family event has something for
everyone, including kids. Get more information and register today.
http://sdm.link/attshape
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


[Nfs-ganesha-devel] no_subtree_check option

2016-06-21 Thread Marc Eshel
Change misleading subject.



From:   Marc Eshel/Almaden/IBM@IBMUS
To: "Frank Filz" 
Cc: "'NFS Ganesha Developers'" 
, mala...@linux.vnet.ibm.com
Date:   06/21/2016 12:06 PM
Subject:Re: [Nfs-ganesha-devel] existing export modifications 
withoutinterruption



Does Ganesha have something equivalent to no_subtree_check? I am
interested in the subtree_check part, to verify that the fh is not trying
to access files above the export point, like in kNFS.
Thanks, Marc.










Re: [Nfs-ganesha-devel] existing export modifications withoutinterruption

2016-06-21 Thread Marc Eshel
Does Ganesha have something equivalent to no_subtree_check? I am
interested in the subtree_check part, to verify that the fh is not trying
to access files above the export point, like in kNFS.
Thanks, Marc.




Re: [Nfs-ganesha-devel] Announce Push of V2.4-dev-21

2016-06-17 Thread Marc Eshel
This version is better. I am mounting v3 and can now do ls, but copying a
small file into the mount point I get:
Marc.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f6595b0e280 (LWP 10125)]
0x00528775 in mdc_cur_export () at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:372
372 return mdc_export(op_ctx->fsal_export);
(gdb) where
#0  0x00528775 in mdc_cur_export () at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h:372
#1  0x00529bde in mdc_check_mapping (entry=0x7f63cc0014e0) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:160
#2  0x0052b2d2 in mdcache_find_keyed (key=0x7f6595b0cd50, 
entry=0x7f6595b0cd48) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:582
#3  0x0052b32e in mdcache_locate_keyed (key=0x7f6595b0cd50, 
export=0xb5d7f0, entry=0x7f6595b0cd48) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:612
#4  0x005263e4 in mdcache_create_handle (exp_hdl=0xb5d7f0, 
hdl_desc=0x7f6595b0cf50, handle=0x7f6595b0cdc0)
at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1130
#5  0x00521f76 in mdc_up_invalidate (export=0xb5d7f0, 
handle=0x7f6595b0cf50, flags=3) at 
/nas/ganesha/new-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_up.c:49
#6  0x7f6595b1ac14 in GPFSFSAL_UP_Thread (Arg=0xb6ee60) at 
/nas/ganesha/new-ganesha/src/FSAL/FSAL_GPFS/fsal_up.c:310
#7  0x7f6598491df3 in start_thread (arg=0x7f6595b0e280) at 
pthread_create.c:308
#8  0x7f6597b513dd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:113



From:   "Frank Filz" 
To: "'nfs-ganesha-devel'" 
Date:   06/17/2016 03:34 PM
Subject:[Nfs-ganesha-devel] Announce Push of V2.4-dev-21



Branch next

Tag:V2.4-dev-21

Release Highlights

* Remove FSAL_PT, FSAL_HPSS, FSAL_LUSTRE, Add FSAL_RGW to everything.cmake

* Some NFS v3 bug fixes

* [fridgethr.c] Prevent infinite loop for timed out sync.

* FSAL_GLUSTER : symlink operation fails when acl is enabled

* MDCACHE - call reopen for reopen, not open

Signed-off-by: Frank S. Filz 

Contents:

758a361 Frank S. Filz V2.4-dev-21
f8247e2 Daniel Gryniewicz MDCACHE - call reopen for reopen, not open
3c682c2 Jiffin Tony Thottan FSAL_GLUSTER : symlink operation fails when 
acl
is enabled
fd01c8c Swen Schillig [fridgethr.c] Prevent infinite loop for timed out
sync.
e0319db Malahal Naineni Stop MOUNT/NLM as additional services in NFSv4 
only
environments
96adc4c Frank S. Filz Reorganize nfs3_fsstat.c, nfs3_link.c, and
nfs3_write.c
c97be4a Frank S. Filz Change behavior - put_ref and get_ref are required.
b2a6ff2 Frank S. Filz In nfs3_Mnt.c do not release obj_handle
a88544f Frank S. Filz Remove FSAL_PT
34fccf2 Frank S. Filz Remove FSAL_HPSS
fe8476d Frank S. Filz Remove FSAL_LUSTRE
9aac4d8 Frank S. Filz Add FSAL_RGW to everything.cmake












Re: [Nfs-ganesha-devel] Posted patches that include my continuation of Dan's work on removing the attrlist from fsal_obj_handle

2016-06-09 Thread Marc Eshel
So this logic is already in V2.4-dev-20 for FSAL_GPFS?
Marc.



From:   Daniel Gryniewicz 
To: Marc Eshel/Almaden/IBM@IBMUS, Frank Filz 
Cc: "'nfs-ganesha-devel'" 
Date:   06/09/2016 12:02 PM
Subject:Re: [Nfs-ganesha-devel] Posted patches that include my 
continuation of Dan's work on removing the attrlist from fsal_obj_handle



On 06/09/2016 02:25 PM, Marc Eshel wrote:
> Hi Frank,
> I was not following the recent changes to Ganesha, and I understand
> that they are not complete, but I just want to make sure we don't break it.
> What I see in the recent version is that the fh passed on the up call
> is used as the actual fh for the delegation callback. But the up call was
> not passing a full fh; it was passing a key that the old code used to look
> up the full fh in the cache_entry_t, and it used that full fh to recall
> the delegation from the NFS client.
> Am I missing something?
> Thanks, Marc.
>
>

All the up-calls take a handle key that is passed to create_handle(). 
In the cached case, MDCACHE will look up the cached handle in 
create_handle(), so everything should be working correctly.  No actual 
fsal_handle objects are passed up the chain in any of the up-calls.

Daniel







Re: [Nfs-ganesha-devel] Posted patches that include my continuation of Dan's work on removing the attrlist from fsal_obj_handle

2016-06-09 Thread Marc Eshel
Hi Frank,
I have not been following the recent changes to Ganesha, and I understand that
they are not complete, but I just want to make sure we don't break it.
What I see in the recent version is that the fh passed on the up-call is used
as the actual fh for the delegation callback, but the up-call was not passing
a full fh; it was passing a key that the old code used to look up the full fh
in the cache_entry_t, and it used that full fh to recall the delegation from
the NFS client.
Am I missing something?
Thanks, Marc. 



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: "'Dan Gryniewicz'" , "'nfs-ganesha-devel'" 

Date:   06/06/2016 12:13 PM
Subject:RE: [Nfs-ganesha-devel] Posted patches that include my 
continuation of Dan's work on removing the attrlist from fsal_obj_handle



Hmm, are you testing just the one patch or the whole branch? I have a 
patch “In nfs3_Mnt.c do not release obj_handle” that should fix what you 
are seeing.
 
You can pull my latest branch from github (though maybe wait until later 
today when I have pushed an update I’m in the middle of):
 
https://github.com/ffilz/nfs-ganesha/commits/ceph-4
 
Frank
 
 
From: Marc Eshel [mailto:es...@us.ibm.com] 
Sent: Friday, June 3, 2016 5:02 PM
To: Frank Filz 
Cc: 'Dan Gryniewicz' ; 'nfs-ganesha-devel' 

Subject: Re: [Nfs-ganesha-devel] Posted patches that include my 
continuation of Dan's work on removing the attrlist from fsal_obj_handle
 
FSAL_GPFS is not working. I believe that get_ref is NULL; it crashes on the
2nd mount.

fsal_status_t nfs_export_get_root_entry(struct gsh_export *export,
					struct fsal_obj_handle **obj)
{
	PTHREAD_RWLOCK_rdlock(&export->lock);

	if (export->exp_root_obj)
		export->exp_root_obj->obj_ops.get_ref(export->exp_root_obj);


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f949db8e700 (LWP 18072)]
0x in ?? ()
(gdb) where
#0  0x in ?? ()
#1  0x004f6024 in nfs_export_get_root_entry (export=0x21fd7c8, 
obj=0x7f949db8c9d8) at /nas/ganesha/new-ganesha/src/support/exports.c:1554
#2  0x0042dbd6 in fsal_lookupp (obj=0x7f92cde8, 
parent=0x7f949db8ce88) at 
/nas/ganesha/new-ganesha/src/FSAL/fsal_helper.c:901
#3  0x0048ccb4 in nfs3_readdir (arg=0x7f930aa8, 
req=0x7f9308e8, res=0x7f92b8c0) at 
/nas/ganesha/new-ganesha/src/Protocols/NFS/nfs3_readdir.c:250
#4  0x004482d0 in nfs_rpc_execute (reqdata=0x7f9308c0) at 
/nas/ganesha/new-ganesha/src/MainNFSD/nfs_worker_thread.c:1306
#5  0x00448c12 in worker_run (ctx=0x22405b0) at 
/nas/ganesha/new-ganesha/src/MainNFSD/nfs_worker_thread.c:1570
#6  0x004fa3bf in fridgethr_start_routine (arg=0x22405b0) at 
/nas/ganesha/new-ganesha/src/support/fridgethr.c:550
#7  0x7f94b3774df3 in start_thread (arg=0x7f949db8e700) at 
pthread_create.c:308
#8  0x7f94b2e343dd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) p export
No symbol "export" in current context.
(gdb) p *export
No symbol "export" in current context.
(gdb) up
#1  0x004f6024 in nfs_export_get_root_entry (export=0x21fd7c8, 
obj=0x7f949db8c9d8) at /nas/ganesha/new-ganesha/src/support/exports.c:1554
1554 export->exp_root_obj->obj_ops.get_ref(export->exp_root_obj);
(gdb) p *export
$1 = {exp_list = {next = 0x7ae230 , prev = 0x21f7f98}, node_k 
= {left = 0x0, right = 0x0, parent = 35618730}, exp_state_list = {next = 
0x21fd7f0, prev = 0x21fd7f0}, exp_lock_list = {
next = 0x21fd800, prev = 0x21fd800}, exp_nlm_share_list = {next = 
0x21fd810, prev = 0x21fd810}, exp_root_list = {next = 0x21fb6b0, prev = 
0x21fb6b0}, exp_work = {next = 0x0, prev = 0x0}, 
  mounted_exports_list = {next = 0x21fd840, prev = 0x21fd840}, 
mounted_exports_node = {next = 0x21f8010, prev = 0x21f8010}, exp_root_obj 
= 0x21fb328, clients = {next = 0x21fd280, 
prev = 0x21fdaa0}, exp_junction_obj = 0x2238688, exp_parent_exp = 
0x21f7f98, fsal_export = 0x21fe4d0, fullpath = 0x21fd370 "/gpfs/gpfs3", 
pseudopath = 0x21fd390 "/gpfs3", 
  FS_tag = 0x21fd3b0 "gpfs3", exp_mounted_on_file_id = 1, MaxRead = 
2097152, MaxWrite = 2097152, PrefRead = 2097152, PrefWrite = 2097152, 
PrefReaddir = 16384, 
  MaxOffsetWrite = 18446744073709551615, MaxOffsetRead = 
18446744073709551615, filesystem_id = {major = 666, minor = 666}, refcnt = 
3, lock = {__data = {__lock = 0, __nr_readers = 1, 
  __readers_wakeup = 0, __writer_wakeup = 0, __nr_readers_queued = 0, 
__nr_writers_queued = 0, __writer = 0, __shared = 0, __pad1 = 0, __pad2 = 
0, __flags = 0}, 
__size = "\000\000\000\000\001", '\000' , __align = 
4294967296}, export_perms = {anonymous_uid = 4294967294, anonymous_gid = 
4294967294, options = 120594672, set = 7467260}, 
  last_update = 207783253234, opt

Re: [Nfs-ganesha-devel] Posted patches that include my continuation of Dan's work on removing the attrlist from fsal_obj_handle

2016-06-03 Thread Marc Eshel
FSAL_GPFS is not working. I believe that get_ref is NULL; it crashes on the
2nd mount.

fsal_status_t nfs_export_get_root_entry(struct gsh_export *export,
struct fsal_obj_handle **obj)
{
PTHREAD_RWLOCK_rdlock(&export->lock);

if (export->exp_root_obj)
 export->exp_root_obj->obj_ops.get_ref(export->exp_root_obj);


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f949db8e700 (LWP 18072)]
0x in ?? ()
(gdb) where
#0  0x in ?? ()
#1  0x004f6024 in nfs_export_get_root_entry (export=0x21fd7c8, 
obj=0x7f949db8c9d8) at /nas/ganesha/new-ganesha/src/support/exports.c:1554
#2  0x0042dbd6 in fsal_lookupp (obj=0x7f92cde8, 
parent=0x7f949db8ce88) at 
/nas/ganesha/new-ganesha/src/FSAL/fsal_helper.c:901
#3  0x0048ccb4 in nfs3_readdir (arg=0x7f930aa8, 
req=0x7f9308e8, res=0x7f92b8c0) at 
/nas/ganesha/new-ganesha/src/Protocols/NFS/nfs3_readdir.c:250
#4  0x004482d0 in nfs_rpc_execute (reqdata=0x7f9308c0) at 
/nas/ganesha/new-ganesha/src/MainNFSD/nfs_worker_thread.c:1306
#5  0x00448c12 in worker_run (ctx=0x22405b0) at 
/nas/ganesha/new-ganesha/src/MainNFSD/nfs_worker_thread.c:1570
#6  0x004fa3bf in fridgethr_start_routine (arg=0x22405b0) at 
/nas/ganesha/new-ganesha/src/support/fridgethr.c:550
#7  0x7f94b3774df3 in start_thread (arg=0x7f949db8e700) at 
pthread_create.c:308
#8  0x7f94b2e343dd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) p export
No symbol "export" in current context.
(gdb) p *export
No symbol "export" in current context.
(gdb) up
#1  0x004f6024 in nfs_export_get_root_entry (export=0x21fd7c8, 
obj=0x7f949db8c9d8) at /nas/ganesha/new-ganesha/src/support/exports.c:1554
1554 export->exp_root_obj->obj_ops.get_ref(export->exp_root_obj);
(gdb) p *export
$1 = {exp_list = {next = 0x7ae230 , prev = 0x21f7f98}, node_k 
= {left = 0x0, right = 0x0, parent = 35618730}, exp_state_list = {next = 
0x21fd7f0, prev = 0x21fd7f0}, exp_lock_list = {
next = 0x21fd800, prev = 0x21fd800}, exp_nlm_share_list = {next = 
0x21fd810, prev = 0x21fd810}, exp_root_list = {next = 0x21fb6b0, prev = 
0x21fb6b0}, exp_work = {next = 0x0, prev = 0x0}, 
  mounted_exports_list = {next = 0x21fd840, prev = 0x21fd840}, 
mounted_exports_node = {next = 0x21f8010, prev = 0x21f8010}, exp_root_obj 
= 0x21fb328, clients = {next = 0x21fd280, 
prev = 0x21fdaa0}, exp_junction_obj = 0x2238688, exp_parent_exp = 
0x21f7f98, fsal_export = 0x21fe4d0, fullpath = 0x21fd370 "/gpfs/gpfs3", 
pseudopath = 0x21fd390 "/gpfs3", 
  FS_tag = 0x21fd3b0 "gpfs3", exp_mounted_on_file_id = 1, MaxRead = 
2097152, MaxWrite = 2097152, PrefRead = 2097152, PrefWrite = 2097152, 
PrefReaddir = 16384, 
  MaxOffsetWrite = 18446744073709551615, MaxOffsetRead = 
18446744073709551615, filesystem_id = {major = 666, minor = 666}, refcnt = 
3, lock = {__data = {__lock = 0, __nr_readers = 1, 
  __readers_wakeup = 0, __writer_wakeup = 0, __nr_readers_queued = 0, 
__nr_writers_queued = 0, __writer = 0, __shared = 0, __pad1 = 0, __pad2 = 
0, __flags = 0}, 
__size = "\000\000\000\000\001", '\000' , __align = 
4294967296}, export_perms = {anonymous_uid = 4294967294, anonymous_gid = 
4294967294, options = 120594672, set = 7467260}, 
  last_update = 207783253234, options = 0, options_set = 0, 
expire_time_attr = 60, export_id = 79, export_status = 0 '\000', 
has_pnfs_ds = true}
(gdb) p *export->exp_root_obj
$2 = {handles = {next = 0x0, prev = 0x0}, fs = 0x220f800, fsal = 0x0, 
obj_ops = {get_ref = 0x0, put_ref = 0x0, release = 0x0, merge = 0x0, 
lookup = 0x0, readdir = 0x0, create = 0x0, mkdir = 0x0, 
mknode = 0x0, symlink = 0x0, readlink = 0x0, test_access = 0x0, 
getattrs = 0x0, setattrs = 0x0, link = 0x0, fs_locations = 0x0, rename = 
0x0, unlink = 0x0, open = 0x0, reopen = 0x0, 
status = 0x0, read = 0x0, read_plus = 0x0, write = 0x0, write_plus = 
0x0, seek = 0x0, io_advise = 0x0, commit = 0x0, lock_op = 0x0, share_op = 
0x0, close = 0x0, list_ext_attrs = 0x0, 
getextattr_id_by_name = 0x0, getextattr_value_by_name = 0x0, 
getextattr_value_by_id = 0x0, setextattr_value = 0x0, 
setextattr_value_by_id = 0x0, getextattr_attrs = 0x0, 
remove_extattr_by_id = 0x0, remove_extattr_by_name = 0x0, handle_is = 
0x0, handle_digest = 0x0, handle_to_key = 0x0, handle_cmp = 0x0, layoutget 
= 0x0, layoutreturn = 0x0, layoutcommit = 0x0, 
getxattrs = 0x0, setxattrs = 0x0, removexattrs = 0x0, listxattrs = 
0x0, open2 = 0x0, check_verifier = 0x0, status2 = 0x0, reopen2 = 0x0, 
read2 = 0x0, write2 = 0x0, seek2 = 0x0, 
io_advise2 = 0x0, commit2 = 0x0, lock_op2 = 0x0, setattr2 = 0x0, 
close2 = 0x0}, lock = {__data = {__lock = 0, __nr_readers = 0, 
__readers_wakeup = 0, __writer_wakeup = 0, 
  __nr_readers_queued = 0, __nr_writers_queued = 0, __writer = 0, 
__shared = 0, __pad1 = 0, __pad2 = 0, __flags = 0}, __size = '\000' 
, __align = 0}, attrs = 0x21fb1c8, 
  

Re: [Nfs-ganesha-devel] change export

2016-01-19 Thread Marc Eshel
So it sounds like you prefer config re-reading. Is it possible to do it
with a clear specification of what will be updated at this time, and over
time we can add attributes that can be changed?
Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: , 
, Sven Oehme/Almaden/IBM@IBMUS
Date:   01/19/2016 02:27 PM
Subject:RE: [Nfs-ganesha-devel] change export



Netgroup should be a single element in the client list.
 
Changing between rw and ro would be simple if the clients don’t specify 
that part.
 
If you have blocks like:
 
CLIENT
{
Clients = client1, client2, client3;
Access = rw;
}
 
Changing from rw to ro would require changing the access type for each 
client individually. The above is semantically equivalent to:
 
CLIENT
{
Clients = client1;
Access = rw;
}
 
CLIENT
{
Clients = client2;
Access = rw;
}
 
CLIENT
{
Clients = client3;
Access = rw;
}
 
So this starts to shed light on the complexity of changing an export 
without going all the way and doing it by re-reading config and 
discovering the changes and applying them…
 
Frank
 
From: Marc Eshel [mailto:es...@us.ibm.com] 
Sent: Tuesday, January 19, 2016 1:45 PM
To: Frank Filz 
Cc: mala...@linux.vnet.ibm.com; nfs-ganesha-devel@lists.sourceforge.net; 
Sven Oehme 
Subject: RE: [Nfs-ganesha-devel] change export
 
We need to add/delete individual clients for sure, but we also need the
ability to change netgroups.
Another requirement is to switch an export to or from ro/rw, which should
not be too complicated.

Marc.



From:"Frank Filz" 
To:Marc Eshel/Almaden/IBM@IBMUS, 
Cc:, Sven 
Oehme/Almaden/IBM@IBMUS
Date:01/18/2016 11:12 AM
Subject:RE: [Nfs-ganesha-devel] change export




Unfortunately I’m pretty sure updating the client list is one of the 
things that is necessary…
 
But maybe we could do a halfway thing by having a command to add and 
remove client list entries.
 
The only issue would be the complexity of specifying which element to
remove from the list, or where to add the new element (it can't just be
added at the end or front of the list).
 
It would help if there was a way to dump the entire client list for an 
export, with each element numbered.
 
Then the add is just add before/after existing element number.
 
Then an individual element could be downgraded by adding the new version 
before the old one, then removing the old one.
 
An element can be re-positioned by adding a new element at the new
position, then removing the old element.
 
Frank
 
From: Marc Eshel [mailto:es...@us.ibm.com] 
Sent: Monday, January 18, 2016 10:38 AM
To: mala...@linux.vnet.ibm.com
Cc: Frank Filz ; 
nfs-ganesha-devel@lists.sourceforge.net; Sven Oehme 
Subject: Re: [Nfs-ganesha-devel] change export
 
Yes, I think that re-reading the config file might be too complicated; let's
compile a list of the attributes that we must be able to change and then
decide on the best way to do it.
Marc.



From:mala...@linux.vnet.ibm.com
To:Frank Filz 
Cc:Marc Eshel/Almaden/IBM@IBMUS, 
nfs-ganesha-devel@lists.sourceforge.net
Date:01/15/2016 04:30 PM
Subject:Re: [Nfs-ganesha-devel] change export





Frank Filz [ffilz...@mindspring.com] wrote:
> The simple numeric attributes would be easiest to change (they could 
become
> atomic integers), and could easily be changed one at a time by a DBUS
> command.
> 
> The hardest to change by DBUS command would be the client lists. 
Changing
> global export permissions, or even permissions for a specific client
> specification would not be too bad, adding or removing a client
> specification wouldn't be too much worse. Changing the order of clients
> would be painful. Extensive permission changes may need to be atomic, 
and
> thus would not be well suited to DBUS changing, but would most easily be
> done by re-reading the config.
> 
> The problem with re-reading the config is matching up the exports, and
> dealing with someone changing Path, Pseudo, or Export-Id.
> 
> I think a re-read the export config could be done that would not be too
> awful. Re-read the config, creating a new off-line list of exports. Then
> match each of them up one by one, atomically updating the parameters
> (including atomically swapping out the client list). Changing Path, 
Pseudo,
> or Export-Id would be treated as the same as a remove export combined 
with
> an add export.

I am not sure what attributes Marc has in mind, but many people add or
delete clients or their access mode. Ideally, re-reading the export
configuration would be best but we have export object pointers in many
places making it difficult.

Regards, Malahal.


 


 

Re: [Nfs-ganesha-devel] change export

2016-01-19 Thread Marc Eshel
We need to add/delete individual clients for sure, but we also need the
ability to change netgroups.
Another requirement is to switch an export to or from ro/rw, which should
not be too complicated.

Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS, 
Cc: , Sven 
Oehme/Almaden/IBM@IBMUS
Date:   01/18/2016 11:12 AM
Subject:RE: [Nfs-ganesha-devel] change export



Unfortunately I’m pretty sure updating the client list is one of the 
things that is necessary…
 
But maybe we could do a halfway thing by having a command to add and 
remove client list entries.
 
The only issue would be the complexity of specifying which element to
remove from the list, or where to add the new element (it can't just be
added at the end or front of the list).
 
It would help if there was a way to dump the entire client list for an 
export, with each element numbered.
 
Then the add is just add before/after existing element number.
 
Then an individual element could be downgraded by adding the new version 
before the old one, then removing the old one.
 
An element can be re-positioned by adding a new element at the new
position, then removing the old element.
 
Frank
 
From: Marc Eshel [mailto:es...@us.ibm.com] 
Sent: Monday, January 18, 2016 10:38 AM
To: mala...@linux.vnet.ibm.com
Cc: Frank Filz ; 
nfs-ganesha-devel@lists.sourceforge.net; Sven Oehme 
Subject: Re: [Nfs-ganesha-devel] change export
 
Yes, I think that re-reading the config file might be too complicated; let's
compile a list of the attributes that we must be able to change and then
decide on the best way to do it.
Marc.



From:mala...@linux.vnet.ibm.com
To:Frank Filz 
Cc:    Marc Eshel/Almaden/IBM@IBMUS, 
nfs-ganesha-devel@lists.sourceforge.net
Date:01/15/2016 04:30 PM
Subject:Re: [Nfs-ganesha-devel] change export




Frank Filz [ffilz...@mindspring.com] wrote:
> The simple numeric attributes would be easiest to change (they could 
become
> atomic integers), and could easily be changed one at a time by a DBUS
> command.
> 
> The hardest to change by DBUS command would be the client lists. 
Changing
> global export permissions, or even permissions for a specific client
> specification would not be too bad, adding or removing a client
> specification wouldn't be too much worse. Changing the order of clients
> would be painful. Extensive permission changes may need to be atomic, 
and
> thus would not be well suited to DBUS changing, but would most easily be
> done by re-reading the config.
> 
> The problem with re-reading the config is matching up the exports, and
> dealing with someone changing Path, Pseudo, or Export-Id.
> 
> I think a re-read the export config could be done that would not be too
> awful. Re-read the config, creating a new off-line list of exports. Then
> match each of them up one by one, atomically updating the parameters
> (including atomically swapping out the client list). Changing Path, 
Pseudo,
> or Export-Id would be treated as the same as a remove export combined 
with
> an add export.

I am not sure what attributes Marc has in mind, but many people add or
delete clients or their access mode. Ideally, re-reading the export
configuration would be best but we have export object pointers in many
places making it difficult.

Regards, Malahal.














Re: [Nfs-ganesha-devel] change export

2016-01-18 Thread Marc Eshel
Yes, I think that re-reading the config file might be too complicated; let's
compile a list of the attributes that we must be able to change and then
decide on the best way to do it.
Marc.



From:   mala...@linux.vnet.ibm.com
To: Frank Filz 
Cc: Marc Eshel/Almaden/IBM@IBMUS, 
nfs-ganesha-devel@lists.sourceforge.net
Date:   01/15/2016 04:30 PM
Subject:Re: [Nfs-ganesha-devel] change export



Frank Filz [ffilz...@mindspring.com] wrote:
> The simple numeric attributes would be easiest to change (they could 
become
> atomic integers), and could easily be changed one at a time by a DBUS
> command.
> 
> The hardest to change by DBUS command would be the client lists. 
Changing
> global export permissions, or even permissions for a specific client
> specification would not be too bad, adding or removing a client
> specification wouldn't be too much worse. Changing the order of clients
> would be painful. Extensive permission changes may need to be atomic, 
and
> thus would not be well suited to DBUS changing, but would most easily be
> done by re-reading the config.
> 
> The problem with re-reading the config is matching up the exports, and
> dealing with someone changing Path, Pseudo, or Export-Id.
> 
> I think a re-read the export config could be done that would not be too
> awful. Re-read the config, creating a new off-line list of exports. Then
> match each of them up one by one, atomically updating the parameters
> (including atomically swapping out the client list). Changing Path, 
Pseudo,
> or Export-Id would be treated as the same as a remove export combined 
with
> an add export.

I am not sure what attributes Marc has in mind, but many people add or
delete clients or their access mode. Ideally, re-reading the export
configuration would be best but we have export object pointers in many
places making it difficult.

Regards, Malahal.








[Nfs-ganesha-devel] change export

2016-01-15 Thread Marc Eshel
Hi Frank,

We have some critical export attributes that we need to change dynamically
without restarting Ganesha. Do you or anyone else remember why we decided
it was difficult to support changing an export? Are there some specific
attributes that are difficult to change and some that can be easily
changed?

Thanks, Marc.



[Nfs-ganesha-devel] Fw: Change in ffilz/nfs-ganesha[next]: Add first step for xattr.

2015-12-02 Thread Marc Eshel
Hi Frank,

Congratulations!


I can fix the "WARNING: line over 80 characters" issues, but for the errors
about the missing (void) I am just following existing practice; should I
change it?



- Forwarded by Marc Eshel/Almaden/IBM on 12/02/2015 05:19 PM -

From:   GerritHub 
To: Marc Eshel/Almaden/IBM@IBMUS
Date:   12/02/2015 02:24 PM
Subject:Change in ffilz/nfs-ganesha[next]: Add first step for 
xattr.



>From CEA-HPC :

CEA-HPC has posted comments on this change.

Change subject: Add first step for xattr.
..


Patch Set 1:

(12 comments)

Checkpatch total: 8 errors, 4 warnings, 736 lines checked

https://review.gerrithub.io/#/c/254035/1/src/Protocols/NFS/nfs4_op_xattr.c
File src/Protocols/NFS/nfs4_op_xattr.c:

Line 185:res_LISTXATTR4->status = 
nfs4_sanity_check_FH(data, NO_FILE_TYPE, false);
WARNING: line over 80 characters
+res_LISTXATTR4->status = nfs4_sanity_check_FH(data, 
NO_FILE_TYPE, false);


Line 229:REMOVEXATTR4args * const arg_REMOVEXATTR4 = 
&op->nfs_argop4_u.opremovexattr;
WARNING: line over 80 characters
+REMOVEXATTR4args * const arg_REMOVEXATTR4 = 
&op->nfs_argop4_u.opremovexattr;


Line 230:REMOVEXATTR4res * const res_REMOVEXATTR4 = 
&resp->nfs_resop4_u.opremovexattr;
WARNING: line over 80 characters
+REMOVEXATTR4res * const res_REMOVEXATTR4 = 
&resp->nfs_resop4_u.opremovexattr;


Line 241:res_REMOVEXATTR4->status = 
nfs4_sanity_check_FH(data, NO_FILE_TYPE, false);
WARNING: line over 80 characters
+res_REMOVEXATTR4->status = nfs4_sanity_check_FH(data, 
NO_FILE_TYPE, false);


https://review.gerrithub.io/#/c/254035/1/src/include/nfsv41.h
File src/include/nfsv41.h:

Line 10342:  static inline bool xdr_GETXATTR4args();
ERROR: Bad function definition - bool xdr_GETXATTR4args() should probably 
be bool xdr_GETXATTR4args(void)
+static inline bool xdr_GETXATTR4args();


Line 10343:  static inline bool xdr_GETXATTR4res();
ERROR: Bad function definition - bool xdr_GETXATTR4res() should probably 
be bool xdr_GETXATTR4res(void)
+static inline bool xdr_GETXATTR4res();


Line 10344:  static inline bool xdr_SETXATTR4args();
ERROR: Bad function definition - bool xdr_SETXATTR4args() should probably 
be bool xdr_SETXATTR4args(void)
+static inline bool xdr_SETXATTR4args();


Line 10345:  static inline bool xdr_SETXATTR4res();
ERROR: Bad function definition - bool xdr_SETXATTR4res() should probably 
be bool xdr_SETXATTR4res(void)
+static inline bool xdr_SETXATTR4res();


Line 10346:  static inline bool xdr_LISTXATTR4args();
ERROR: Bad function definition - bool xdr_LISTXATTR4args() should probably 
be bool xdr_LISTXATTR4args(void)
+static inline bool xdr_LISTXATTR4args();


Line 10347:  static inline bool xdr_LISTXATTR4res();
ERROR: Bad function definition - bool xdr_LISTXATTR4res() should probably 
be bool xdr_LISTXATTR4res(void)
+static inline bool xdr_LISTXATTR4res();


Line 10348:  static inline bool xdr_REMOVEXATTR4args();
ERROR: Bad function definition - bool xdr_REMOVEXATTR4args() should 
probably be bool xdr_REMOVEXATTR4args(void)
+static inline bool xdr_REMOVEXATTR4args();


Line 10349:  static inline bool xdr_REMOVEXATTR4res();
ERROR: Bad function definition - bool xdr_REMOVEXATTR4res() should 
probably be bool xdr_REMOVEXATTR4res(void)
+static inline bool xdr_REMOVEXATTR4res();


-- 
To view, visit https://review.gerrithub.io/254035
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: Ied0e08319da4a94dc7dc761664d373c243e337ca
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: es...@us.ibm.com
Gerrit-Reviewer: CEA-HPC 
Gerrit-HasComments: Yes






Re: [Nfs-ganesha-devel] Topic for discussion - Out of Memory Handling

2015-11-04 Thread Marc Eshel
I am not sure I would purge the cache before failing an allocation, but I
would make sure that we don't exceed any of the limits that we have set for
the different caches. The problem is that we don't check limits on all
memory, which can just grow without bound. We must have limits for all
caches, including inodes, exports, number of clients, number of locks, ...
If we don't have limits on all memory structs and enforce them, we are
going to allow malicious clients to bring down the server.
Frank, you are doing the easy part of aborting on any failed allocation,
but we should not make this change until we have solutions for all the
other issues that were discussed.
Marc.



From:   mala...@linux.vnet.ibm.com
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: Frank Filz , 
nfs-ganesha-devel@lists.sourceforge.net
Date:   11/02/2015 01:24 PM
Subject:Re: [Nfs-ganesha-devel] Topic for discussion - Out of 
Memory  Handling



Marc Eshel [es...@us.ibm.com] wrote:
> Yes, it looks like I am outvoted; memory management is complicated. Let me
> first say that under no condition should we reboot the node; any action
> should be limited to the Ganesha process. When we fail to get heap memory,
> then yes, kill the process. It would be nice at that point to get as much
> information as possible to debug the problem; it can be a leak or memory
> corruption, so we might need some memory in reserve to collect the
> information. We should manage the Ganesha cache in a way that will not
> cause it to run out of memory, so if we are getting memory to extend a
> cache we should not abort before trying to reduce the cache size.
> Marc.

If I understand, what Marc is recommending (probably the best) is that
we try to allocate memory and if that fails, we empty our caches. After
purging our caches, we try again to allocate. If second allocation
fails, then we are SOL. If not, continue as though nothing has happened!

We could also call memory defrags (malloc_trim) in addition to purging
our caches...

This is what the Linux kernel does (as do many OSes), but I am not
sure how easy this is to implement.

Maybe: First pass, abort on the first failure. Slowly implement cache
purges and plug in the second allocation technique...

To be honest, we had first-hand experience working on a memory
allocation failure in the past. The Linux OOM killer fired before we ever
got ENOMEM, so the second-allocation idea may not be useful on Linux!

Regards, Malahal.




Re: [Nfs-ganesha-devel] Topic for discussion - Out of Memory Handling

2015-11-02 Thread Marc Eshel
Yes, it looks like I am outvoted; memory management is complicated. Let me
first say that under no condition should we reboot the node; any action
should be limited to the Ganesha process. When we fail to get heap memory,
then yes, kill the process. It would be nice at that point to get as much
information as possible to debug the problem; it can be a leak or memory
corruption, so we might need some memory in reserve to collect the
information. We should manage the Ganesha cache in a way that will not
cause it to run out of memory, so if we are getting memory to extend a
cache we should not abort before trying to reduce the cache size.
Marc.



From:   "Frank Filz" 
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: 
Date:   11/02/2015 11:24 AM
Subject:RE: [Nfs-ganesha-devel] Topic for discussion - Out of 
Memory Handling



There seems to be overwhelming support for log and abort on out of memory, 
but before I just say “you’re outvoted”, I’d like to understand which 
ENOMEM situations you feel are worth trying to recover from rather than 
abort. I’m especially interested in what you think might be going on in 
the system that will raise an ENOMEM, but that we will quickly recover to 
a point where we stop getting ENOMEM (because if we handle the error, but 
we just continue to get ENOMEM for a long period of time, nothing will be 
accomplished).
 
In the meantime, I’d rather look at where we can productively throttle 
memory usage so we never actually get ENOMEM in the first place.
 
Frank
 
From: Marc Eshel [mailto:es...@us.ibm.com] 
Sent: Wednesday, October 28, 2015 7:38 PM
To: Frank Filz 
Cc: nfs-ganesha-devel@lists.sourceforge.net
Subject: Re: [Nfs-ganesha-devel] Topic for discussion - Out of Memory 
Handling
 
I don't believe that we need to restart Ganesha on every out-of-memory
condition, for many reasons, but I will agree that we can have two types of
calls: one that can accept a no-memory rc and one that terminates Ganesha
if the call is not successful.
Marc.



From:"Frank Filz"  
To: 
Date:10/28/2015 11:55 AM 
Subject:[Nfs-ganesha-devel] Topic for discussion - Out of Memory 
Handling 




We have had various discussions over the years as to how to best handle
out of memory conditions.

In the meantime, our code is littered with attempts to handle the
situation; however, it is not clear to me these really solve anything. If
we don't have 100% recoverability, likely we just delay the crash. Even if
we manage to avoid crashing, we may wobble along not really handling
things well, causing retry storms and such (that just dig us in deeper).
Another possibility is we return an error to the client that gets
translated into EIO or some other error the application isn't prepared to
handle.

If instead we just aborted, the HA systems most of us run under would
restart Ganesha. The clients would see some delay, but there should be no
visible errors to the clients. Depending on how well grace period/state
recovery is implemented (and in particular how well it's integrated with
other file servers such as CIFS/SMB or across a cluster), there could be
some openings for lock violation (someone is able to steal a lock from one
of our clients while Ganesha is down).

Aborting would have several advantages. First, it would immediately clear
up any memory leaks. Second, if there was some transient activity that
resulted in high memory utilization, that might also be cleared up. Third,
it would avoid retry storms and such that might just aggravate the low
memory condition. In addition, it would force the sysadmin to deal with a
workload that overloaded the server, possibly by adding additional nodes
in a clustered environment, or adding memory to the server.

No matter what we decide to do, another thing we need to look at is more
memory throttling. Cache inode has a limit on the number of inodes. This
is helpful, but incomplete. Other candidates for memory throttling would
be:

Number of clients
Number of state objects (opens, locks, delegations, layouts) (per client
and/or global)
Size of ACLs and number of ACLs cached

I'm sure there's more; discuss.

Frank




--
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel











Re: [Nfs-ganesha-devel] Topic for discussion - Out of Memory Handling

2015-10-28 Thread Marc Eshel
I don't believe that we need to restart Ganesha on every out-of-memory 
call, for many reasons, but I will agree that we can have two types of 
calls: one that can accept a no-memory return code, and one that 
terminates Ganesha if the allocation is not successful. 
Marc.





Re: [Nfs-ganesha-devel] Export_id larger than 16 bits

2015-09-23 Thread Marc Eshel
If we want to continue to support VMware, we are limited to a 56-byte 
file handle for NFSv3.

Marc.



From:   "Frank Filz" 
To: mala...@linux.vnet.ibm.com
Cc: "'NFS Ganesha Developers'" 

Date:   09/23/2015 01:45 PM
Subject:Re: [Nfs-ganesha-devel] Export_id larger than 16 bits



> Frank Filz [ffilz...@mindspring.com] wrote:
> > Exports don't necessarily correspond to connections.
> >
> > The Linux client (and I'm guessing most others) will use one connection
> > to the server (or a few if it does some trunking) for all exports it
> > mounts from that server. Now true, if the exports are intended for
> > different clients there might be some issues; however, a clustered
> > system would distribute the client connections among multiple hosts
> > while having the convenience of managing all the exports as if there
> > was a single server.
> >
> > Frank
> 
> There are also people who create exports, remove them, and recreate
> exports. They will not have anything close to 64K exports at the same
> time, but they will pass that mark after a few weeks/months as every
> export create needs an unused exportid.

Well, exportid could be managed to keep within 16 bits in such a scenario.

Your GPFS handles are among the larger handles (if not the largest, though
BTRFS has a 40-byte kernel handle).

> Wondering if we can avoid using exportid in the file handle!

Only if we were strict and had only one export per FSID, but then a lot of
front end logic would have to change, and handles would not be backward
compatible.

If folks with the largest handles got creative, we COULD probably squeeze
out an extra 2 bytes.

There's also an intermediate option: a 24-bit exportid, since the header
is 5 bytes.

Frank









[Nfs-ganesha-devel] fs locations

2015-09-01 Thread Marc Eshel
Hi Frank,
The latest version of Ganesha still works fine with fs-locations, at least 
with GPFS. The only thing that can be improved is to add a check for 
whether the FSAL supports fs-locations in check_fs_location() before 
calling fsal_obj_ops.fs_locations(), which is more expensive, or to add 
another FSAL operation that just checks for fs_locations support without 
fetching it.
Marc.


[Nfs-ganesha-devel] Fw: New Version Notification for draft-ietf-nfsv4-xattrs-01.txt

2015-08-27 Thread Marc Eshel
- Forwarded by Marc Eshel/Almaden/IBM on 08/27/2015 09:24 AM -

From:   internet-dra...@ietf.org
To: Manoj Naik/Almaden/IBM@IBMUS, Marc Eshel/Almaden/IBM@IBMUS, Manoj 
Naik/Almaden/IBM@IBMUS, Marc Eshel/Almaden/IBM@IBMUS
Date:   08/18/2015 10:09 PM
Subject:New Version Notification for 
draft-ietf-nfsv4-xattrs-01.txt




A new version of I-D, draft-ietf-nfsv4-xattrs-01.txt
has been successfully submitted by Manoj Naik and posted to the
IETF repository.

Name:           draft-ietf-nfsv4-xattrs
Revision:       01
Title:          File System Extended Attributes in NFSv4
Document date:  2015-08-18
Group:          nfsv4
Pages:          26
URL:            https://www.ietf.org/internet-drafts/draft-ietf-nfsv4-xattrs-01.txt
Status:         https://datatracker.ietf.org/doc/draft-ietf-nfsv4-xattrs/
Htmlized:       https://tools.ietf.org/html/draft-ietf-nfsv4-xattrs-01
Diff:           https://www.ietf.org/rfcdiff?url2=draft-ietf-nfsv4-xattrs-01

Abstract:
   This document proposes extensions to the NFSv4 protocol which allow
   file extended attributes (hereinafter also referred to as xattrs) to
   be manipulated using NFSv4.  An xattr is a file system feature that
   allows opaque metadata, not interpreted by the file system, to be
   associated with files and directories.  Such support is present in
   many modern local file systems.  New file attributes are proposed to
   allow clients to query the server for xattr support, and new
   operations to get and set xattrs on file system objects are provided.


  


Please note that it may take a couple of minutes from the time of 
submission
until the htmlized version and diff are available at tools.ietf.org.

The IETF Secretariat



[Nfs-ganesha-devel] number of fd

2015-07-30 Thread Marc Eshel
Hi Frank, Matt,
I remember we had a discussion of the fd (file descriptor) limit but 
don't remember the details.
First, how do we control the limit on fds in Ganesha?
We see low performance when running with a high number of open files; are 
there known problems with fd management?
This is for NFSv3.
Thanks, Marc.


Re: [Nfs-ganesha-devel] inode cache

2015-07-29 Thread Marc Eshel
"Matt W. Benjamin"  wrote on 07/29/2015 01:38:14 PM:

> From: "Matt W. Benjamin" 
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: "NFS Ganesha Developers (Nfs-ganesha-
> de...@lists.sourceforge.net)" 
> Date: 07/29/2015 01:38 PM
> Subject: Re: inode cache
> 
> Hi Marc,
> 
> Probably.  I was writing to malahal in irc that we have code changes that
> will reduce lock contention for xprt->xp_lock a LOT, and more changes
> coming that will address latency in dispatch and reduce locking in SAL.
> The first of those changes will be coming in hopefully still this week.
> 
> One thing I think could be out of whack is the lru lane selector; I can
> send a hotfix if we have a skewed object-lane distribution in LRU.
> Alternatively, there is tuning for #partitions and the size of a
> per-partition hash table in both the cache_inode "hash" and HashTable
> (used in a lot of other places) which could apply, if that's the
> bottleneck.
> 
> Do you have a trivial reproducer to experiment with?

This is a customer application so I can not share it but please send me a 
patch and I can report the before and after numbers.
Thanks, Marc.

> 
> Matt
> 
> - "Marc Eshel"  wrote:
> 
> > Hi Matt,
> > I see bad performance when stat'ing millions of files; the inode cache
> > is set to 1.5 million. Are there any configuration changes that I can
> > make to the inode cache, or even code changes of some hard-coded
> > values, that will help with performance for a big number of files?
> > Thanks, Marc.
> 
> -- 
> Matt Benjamin
> CohortFS, LLC.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
> 
> http://cohortfs.com
> 
> tel.  734-761-4689 
> fax.  734-769-8938 
> cel.  734-216-5309 
> 


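
For reference, the inode-cache size and the partition tuning Matt 
mentions are exposed as config parameters. The block below is a hedged 
sketch using parameter names as found in Ganesha 2.x documentation; 
verify them against your version, since names and defaults have changed 
across releases.

```
# Hypothetical cache_inode tuning sketch -- confirm parameter names
# against your Ganesha version's config documentation before use.
CACHEINODE {
    Entries_HWMark = 1500000;   # high-water mark on cached inodes
    NParts = 7;                 # number of hash partitions
    Cache_Size = 32633;         # per-partition hash table size
}
```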


[Nfs-ganesha-devel] inode cache

2015-07-29 Thread Marc Eshel
Hi Matt, 
I see bad performance when stat'ing millions of files; the inode cache is 
set to 1.5 million. Are there any configuration changes that I can make 
to the inode cache, or even code changes of some hard-coded values, that 
will help with performance for a big number of files?
Thanks, Marc.


Re: [Nfs-ganesha-devel] 'clustered' configuration parameter + use of nodeid

2015-06-24 Thread Marc Eshel
EVENT_TAKE_IP is not used by cNFS right now.





From:   mala...@linux.vnet.ibm.com
To: Soumya Koduri 
Cc: nfs-ganesha-devel@lists.sourceforge.net
Date:   06/24/2015 06:48 AM
Subject:Re: [Nfs-ganesha-devel] 'clustered' configuration 
parameter + use of nodeid



Soumya Koduri [skod...@redhat.com] wrote:
> 
> 
> On 06/24/2015 04:47 AM, Frank Filz wrote:
> >>As we were discussing over #ganesha, currently 'clustered' mode 
mandates
> >>that each of the NFS-Ganesha servers is associated with a nodeid which 
may
> >>not be applicable for all the clustering solutions. We may choose to 
use
> >>IP_ADDR or hostnames to store persistent state information for each of 
the
> >>nodes in the cluster, which would then require this option to be 
turned off.
> >>
> >>There could be cases where FSAL may like to know if its operating in
> >>'clustered' mode for any special handling (if needed). This cannot be
> >>achieved by using existing 'clustered' option (if not using nodeids).
> >>
> >>So we would like to know if we can de-couple 'clustered' option with 
the
> >>usage of nodeids and have different option if required to use nodeids.
> >>
> >>Please share your thoughts.
> >
> >I definitely agree that the "clustered" option is misnamed.
> >
> >Further, from my investigation, which is not complete, it looks like a
> >pure IP-based clustering solution will not currently work with
> >EVENT_TAKEIP for NFS v4.
> >
> >For NFS v4, we persist information about each client-id so that we can
> >determine if a client attempting to reclaim state has the right to do
> >so; in particular, that it has not run afoul of the edge conditions
> >documented in Section 9.6.3.4 of RFC 7530.
> >
> >The code appears to look for a directory
> >NFS_V4_RECOV_ROOT/gsp->ipaddr/NFS_V4_RECOV_DIR when it receives an
> >EVENT_TAKEIP. But from what I can see, no other code creates or puts
> >anything in such a directory. Instead, it seems that clientid
> >information is only persisted in a directory
> >NFS_V4_RECOV_ROOT/NFS_V4_RECOV_DIR/nodeid, which is what is searched
> >for EVENT_TAKE_NODE.
> But we could sync information from
> 'NFS_V4_RECOV_ROOT/NFS_V4_RECOV_DIR' to
> 'NFS_V4_RECOV_ROOT/gsp->ipaddr/NFS_V4_RECOV_DIR' before sending
> EVENT_TAKEIP. We are making use of symlinks to do so at the moment.
> 
> >
> > From talking with Malahal on IRC, I think the intent of
> >EVENT_TAKE_NODE is that it is broadcast to all nodes that receive an IP
> >address from another node, rather than sending an EVENT_TAKE_IP for
> >each IP address that is moved. That may save some messages, but it's
> >unclear that there aren't some pitfalls. We would expect the only
> >clients that would attempt reclaim would be those that actually moved
> >(so a node that got an EVENT_RELEASE_IP to fail back an IP address
> >would dump the state for those clients associated with that IP address,
> >and the node that we failed back to would get the EVENT_TAKE_NODE). But
> >under what conditions do we remove entries from the directory? In the
> >case of the failback, we should only remove the entries belonging to
> >the failed-back IP address; the node will have other entries in that
> >directory for the IP addresses that it retains.
> >
> >Because of this, it would seem clearer to me if we only had
> >EVENT_TAKE_IP, and clientid persistent information was retained per IP
> >address.
> 
> We can have all the functionality we get with EVENT_TAKE_NODE using
> EVENT_TAKEIP as well. As mentioned above, as long as we make sure
> 'NFS_V4_RECOV_ROOT/gsp->ipaddr/NFS_V4_RECOV_DIR' has the relevant
> state information of the node failed (either by using rsync/symlinks
> etc.,), we can let other nfs-ganesha servers read the state
> information of only that particular node by sending D-bus signal
> with EVENT_TAKEIP. We got it worked for our cluster.

Good to know that you got it all working! I was told that we only use
EVENT_TAKE_NODE at the moment, even while moving a simple IP address.
EVENT_TAKEIP is unused at the moment. Marc, how does this work in cNFS?

Regards, Malahal.





[Nfs-ganesha-devel] Announce for Ganesha 2.2

2015-05-03 Thread Marc Eshel
Hi Frank,
The Ganesha wiki still points to the 2.1 release.
Thanks, Marc.

