Re: [Nfs-ganesha-devel] Patches not backported to V2.5-stable

2018-01-04 Thread Malahal Naineni
Hmm, you listed patches from very old tags as well. V2.5 should only have
patches that fix defects. We should NOT be backporting new features, cleanup
work, etc. from V2.6 to V2.5-stable. For example, you listed gerrit
change-id . It implemented a new feature and is buggy as well, so there is
no point in taking it.

I would like people to identify the real bugs they fixed, and that should be
the list backported to V2.5. Occasionally, folks may need a new feature that
is NOT intrusive; we could backport that kind as well.

Regards, Malahal.

On Fri, Jan 5, 2018 at 9:01 AM, Frank Filz  wrote:

> I did some work with my script for extracting patch titles and change ids
> and with some manual work on the output, produced the following list of
> patches not included in V2.5-stable (the V2.6 tags are included to help
> identify when the patches arrived):
>
> > [long list of change-ids snipped; see Frank's original message below for the full list]

[Nfs-ganesha-devel] Patches not backported to V2.5-stable

2018-01-04 Thread Frank Filz
I did some work with my script for extracting patch titles and change ids
and with some manual work on the output, produced the following list of
patches not included in V2.5-stable (the V2.6 tags are included to help
identify when the patches arrived):

> I0ccf28339b9296115520a5d54538f02df5ee0089 V2.6-dev.22
> I1425e3c3246ccd3fa13bd07e06069677b71abcfa NFS: don't trash stateid when
returning error on FREE_STATEID
> I789b76f8c5c5a158b846281d1c4491d3ccde538c Pullup NTIRPC through #98
> Icc9523912e1e3653c3b40f83e0f603349c5e2a8f Consolidate 9P queues and
workers
> I367b7e9e2e51f8980d1296dfee50b6c847cd0ad2 FSAL_CEPH: no need to set
credentials
> I123dab91583379d191933363c7e99b0946a6b913 config_samples: fix config block
examples
> I767785d9615e2d9216bec2e4a47a72caaa2cc14d FSAL_GLUSTER : add support for
rdma volumes
> Id272de01fce18a19262426891a94ca66072ba232 Fix revoke_owner_layouts
accessing uninitialized op_ctx
> I9975962ad441c33302a41301cf4ef53f92737418 glist: preserve the order when
two items have same priority
> I95aa5269ecbd4883b1c9a9ea6d1f471b58a3e41a FSAL_GLUSTER: close fd without
setting credentials at handle_release()
> I2db688224f44e0e5ad390a643b8a0732eb77a7a4 nfs4 - Add missing put_ref in
OP_FREE_STATEID
> I039a5558e1e0bd845bed74a9158f3c732097463e MDCACHE - Fix stacking over NULL
> Ia48857fddab0a334d3c3a815a677745dc6f7d51c NFS4.1 - Allow client to
specifiy slot count
> I1f012b50b7ad5f7e5d214072c7041d8a4f649b3a NFS4 STATE: Fixup export (and
obj) refcounts for layout and delegation
> Ia87a41cc6ed38659b45fe51dc38153c6ecef547f NFS4 STATE: Fixup export
refcounts for lock and open states
> Icc4f17e0a39498f8f07bf828212dea3c7c5ba19c MDCACHE and VFS: Improve debug
of export release and don't crash
> Icb6a7b682fd6fa3039c7968dafac6bf0328af98b Improve debug of export
refcounts
> I3022e83a8f30987c1429c1d61df450f161a3af6e V2.6-dev.21
> Ib93b78cf68347a9cd5e39cfa98ff4deba40ddd45 TEST and TOOLS: Cleanup new
checkpatch errors
> Ifc595d60f52ef38cbb957322f096b7ddd9fd8619 SUPPORT: Cleanup new checkpatch
errors
> Ic0e034732f4e29e9e01d0c6fad49c2ecd48c380d LOG: Cleanup new checkpatch
errors
> Ide72a52987de228535c07cf5638da2d8632181d6 HASHTABLE: Cleanup new
checkpatch errors
> I0c996e15f70a351a76ad9108a29a937fd97c8d0a DBUS: Cleanup new checkpatch
errors
> I2fab11037ae8543e81a2edd4a2294ceb609a9cb9 SAL: Cleanup new checkpatch
errors
> I8d866f47efdcc04de0fa35ded62d246eb0c6cb82 RPCAL: Cleanup new checkpatch
errors
> I9115d5b8a7949b132b956639d79e2f096dcf1808 RQUOTA: Cleanup new checkpatch
errors
> I5366af45e9b2acaf3d4d1690ce27b7ce76046b12 NLM: Cleanup new checkpatch
errors
> I75bd17c2cd20431b0c36aedbddd85cceb8c6c4ab NFS: Cleanup new checkpatch
errors
> I2dd4e39d115449bd855f77187f9d98e357368eb2 NFS4: Cleanup new checkpatch
errors
> I8d78ffc75fc254b4ba016bd35348ab4ef906badd NFS3: Cleanup new checkpatch
errors
> Idc725fe5915dabb8e301b1bd4e0a4b91b1f4fc2f MNT: Cleanup new checkpatch
errors
> I481a4cbe21b5623f20f623ab30b04b23259e4918 9P: Cleanup new checkpatch
errors
> I5e7b9fcd169387eb4929067e135738666f12adca MainNFSD: Cleanup new checkpatch
errors
> I6efa95690dbee6d1af9de67df31d76932a426e24 FSAL and FSAL_UP: Cleanup new
checkpatch errors
> Ieb84f4fef3df09fbcc9a16cdb57ea94cfaa4325b NULL: Cleanup new checkpatch
errors
> I744df098a9b4fc75d15e0044f6555a1e07d51df9 MDCACHE: Cleanup new checkpatch
errors
> If42b81b3abcfde4561a0303a9a3521f2da55885a RGW: Cleanup new checkpatch
errors
> I939f5faa5339f0d5fef89fa1e0d91d7cbefade2e PROXY: Cleanup new checkpatch
errors
> I759c57121457c8fa4186b95c4480517c0d973de3 VFS: Cleanup new checkpatch
errors
> I34a1668d8d4216e60e46a1b11e41fbf8c79bc482 GPFS: Clean up new checkpatch
errors
> Ib47c7c5e256ff3e7d6dd70fae486eb2cba704534 GLUSTER: Clean up new checkpatch
errors
> I364aff7db1a143e554a07560f1878a56a372a9c7 CEPH: Cleanup new checkpatch
errors
> I42b182912722bb7704cf20ebb93e0d5e0ab7a5df Update checkpatch.pl from kernel
v4.15-rc2
> I51b00df253f7e63edfa2d85a649632a4bae1d9fa V2.6-dev.20
> Iff945dbc5a645b0fd1bd8474f88d74bff49430bc ntirpc pullup - fix leak
> Id8c66de80e6b998653878bfefcfcd22f74789dd9 NFS4.1 - Make the slot table
size configurable
> I178733cd95bb27f3875e802578d4a7f02844daca CMake - Allow per-processor
settings
> Ib46c9d34c6b5f046c9c73112277ab18cf3c30102 NFS4 - Free client in error case
> I1623a9672b58847b16feb05085f4574a595dd859 FSAL_MEM - Destroy up fridge to
avoid leak
> I9f714d51f549cf497802e454d8a2f9792faaf525 Don't leak the FSAL config block
for exports or DS
> Id66c8c3c9056a69c5306fca6895c66782985b1a5 pNFS - Fix memory leaks for
fsal_pnfs_ds
> I70e1c0e8c05c56dd02983dca2e0fd1539081db00 PNFS - Fix comments on
fsal_pnfs_ds
> Ia396978971d4a07ff42fd9dbd5da65c1af0450ba NFS: rework grace period
handling
> Ia94be7915ed014de1055aa9cb6c4f8089108ff0c export: avoid double free if
init_export_root fail at add_export
> I39a6553d4b47eb9de8d69f59d0b59caa7d38ed0e Fix closing global file
descriptors
> Id7868135f67945dd6cdff8ecb028042fef5789cc cmake: detect openSUSE
Tumbleweed
> I82116d8bc600fc71163383234078

[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER: Add support for SEEK procedure

2018-01-04 Thread GerritHub
From Girjesh Rajoria:

Girjesh Rajoria has uploaded this change for review. ( https://review.gerrithub.io/393642 )


Change subject: FSAL_GLUSTER: Add support for SEEK procedure
..

FSAL_GLUSTER: Add support for SEEK procedure

This makes it possible to detect sparse areas in files. The SEEK
procedure is part of NFSv4.2 and a description of the functionality is
described here:
  https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11

Change-Id: I78c007b89fd0dc50badaa65bc6c76c7b5b712b37
Signed-off-by: Girjesh Rajoria 
---
M src/FSAL/FSAL_GLUSTER/gluster_internal.h
M src/FSAL/FSAL_GLUSTER/handle.c
M src/Protocols/NFS/nfs4_op_read.c
3 files changed, 76 insertions(+), 1 deletion(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/42/393642/1
-- 
To view, visit https://review.gerrithub.io/393642
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I78c007b89fd0dc50badaa65bc6c76c7b5b712b37
Gerrit-Change-Number: 393642
Gerrit-PatchSet: 1
Gerrit-Owner: Girjesh Rajoria 
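
For context, a minimal client-side sketch of what SEEK support enables (this
is not part of the patch; the mount path is a placeholder, and it assumes a
Linux NFSv4.2 client, where lseek() with SEEK_DATA/SEEK_HOLE is serviced by
the SEEK operation):

    /* Walk the data regions of a (possibly sparse) file on an NFSv4.2 mount. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>

    int main(void)
    {
        const char *path = "/mnt/nfs/sparse.img";  /* hypothetical mount */
        struct stat st;
        off_t pos = 0;
        int fd = open(path, O_RDONLY);

        if (fd < 0 || fstat(fd, &st) < 0) {
            perror(path);
            return 1;
        }
        while (pos < st.st_size) {
            /* next offset holding data, then the hole that ends that region */
            off_t data = lseek(fd, pos, SEEK_DATA);
            off_t hole;

            if (data < 0)
                break;  /* ENXIO: no more data (or SEEK_DATA unsupported) */
            hole = lseek(fd, data, SEEK_HOLE);
            if (hole < 0)
                hole = st.st_size;
            printf("data: %lld..%lld\n", (long long)data, (long long)hole);
            pos = hole;
        }
        close(fd);
        return 0;
    }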


Re: [Nfs-ganesha-devel] Implement a FSAL for S3-compatible storage

2018-01-04 Thread Matt Benjamin
That would work.

The logic that the RGW FSAL uses to do all of what Frank describes is in
the Ceph source rather than FSAL_RGW, for the same reason. The
strategy I take is to represent the RGW file handle as concatenated
hashes of the S3/Swift container name and the full object name, which
yields stable handles. For directories, cookies are also the hash of
the entry name. Frank's whence-is-name and compute-readdir-cookie APIs
were invented to support the RGW FSAL. Using them, you avoid the need
to keep an indexed representation of the S3 namespace in the FSAL (or,
in my case, librgw).
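
A rough, purely illustrative sketch of the scheme described above; the hash,
struct, and function names below are invented, and the real code lives in the
Ceph tree, not FSAL_RGW. The point is that both the handle and the directory
cookie are derived only from names, so nothing about the S3 namespace has to
be indexed or persisted:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative 64-bit FNV-1a hash; stands in for whatever hash the
     * real implementation uses. */
    static uint64_t fnv1a64(const char *s)
    {
        uint64_t h = 0xcbf29ce484222325ULL;

        for (; *s; s++) {
            h ^= (unsigned char)*s;
            h *= 0x100000001b3ULL;
        }
        return h;
    }

    /* Hypothetical wire handle: hash of the container (bucket) name
     * concatenated with hash of the full object name.  Both halves are
     * derived purely from names, so the handle is stable across restarts. */
    struct demo_s3_handle {
        uint64_t bucket_hash;
        uint64_t object_hash;
    };

    /* A directory cookie for readdir continuation can be the hash of the
     * entry name; with whence-is-name / compute-readdir-cookie the FSAL
     * resumes a listing from such a name-derived cookie. */
    int main(void)
    {
        struct demo_s3_handle h = {
            .bucket_hash = fnv1a64("mybucket"),
            .object_hash = fnv1a64("photos/2018/jan/pic1.jpg"),
        };

        printf("handle %016llx%016llx  cookie %016llx\n",
               (unsigned long long)h.bucket_hash,
               (unsigned long long)h.object_hash,
               (unsigned long long)fnv1a64("pic1.jpg"));
        return 0;
    }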

Matt

On Thu, Jan 4, 2018 at 7:18 AM, DENIEL Philippe  wrote:
> Hi Aurélien,
>
> I can provide you an alternate solution, still nfs-ganesha based. For the
> need of a project, I developed an open-source library that emulate a POSIX
> namespace using a KVS (for metadata) and an object store (for data). For
> example, you can use REDIS and RADOS. I have written a FSAL for it (it is
> not pushed in the official branch) but with no compliancy to support_ex,
> it's still using the former FSAL semantics (so it should be ported to
> support_ex). If you are interested, I can give you some pointers (the code
> is on github). You could use S3 as data storage for example. In particular,
> I had to solve the same "inode" issue that you met. This solution as very
> few impact on nfs-ganesha code (it just adds a new FSAL).
>
>  Regards
>
> Philippe
>
> On 01/03/18 19:58, Aurelien RAINONE wrote:
>
> To follow up on the development on an FSAL for S3, I have some doubts and
> questions I'd like to share.
>
> Apart from its full path, S3 doesn't have the concept of file descriptor, I
> mean, there's nothing else
>
> than the full path that I can provide to S3 in order to get attribute of
> content of a specific object.
>
> I have some doubts regarding the implementation of the S3 fsal object handle
> (s3_fsal_obj_handle).
>
>
>
> Should s3_fsal_obj_handle be very simple, for example should it only contain
> a key that maps to the full S3 filename, in an key-value store.
>
> Or on the contrary, should the handle implement a tree like structure, like
> I saw in FSAL_MEM?
>
> Or something in between, but what?
>
> Having a very simple handle has some advantages but may require some more
> frequent network calls,
>
> for example readdir won't have any kind of information about the content of
> the directory.
>
> Having a whole tree-like structure in the handle would allow to have direct
> access to directory content,
>
> but isn't that the role of ganesha cache to do that?
>
> My questions probably shows that I have problems to understand the
> responsability of my FSAL implementation
>
> regarding the cache. Who does what, what doesn't do what?
>
> Good evening,
>
> Aurélien



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309



Re: [Nfs-ganesha-devel] Implement a FSAL for S3-compatible storage

2018-01-04 Thread Aurelien RAINONE
Hello Philippe,

Did you mean that I could directly use the FSAL you developed and modify
some code (or not?) in order to use S3 as storage? Or is it to share
solutions you found to problems I will encounter during the development
of my FSAL_S3?

In either case, I would certainly be happy to have a look at your project;
thank you for that.

What do you mean by no compliance with support_ex? Does that imply a
specific range of ganesha versions, or other constraints?

Regards,

Aurélien




2018-01-04 13:18 GMT+01:00 DENIEL Philippe :

> Hi Aurélien,
>
> I can provide you an alternate solution, still nfs-ganesha based. For the
> need of a project, I developed an open-source library that emulate a POSIX
> namespace using a KVS (for metadata) and an object store (for data). For
> example, you can use REDIS and RADOS. I have written a FSAL for it (it is
> not pushed in the official branch) but with no compliancy to support_ex,
> it's still using the former FSAL semantics (so it should be ported to
> support_ex). If you are interested, I can give you some pointers (the code
> is on github). You could use S3 as data storage for example. In particular,
> I had to solve the same "inode" issue that you met. This solution as very
> few impact on nfs-ganesha code (it just adds a new FSAL).
>
>  Regards
>
> Philippe
>
> On 01/03/18 19:58, Aurelien RAINONE wrote:
>
> To follow up on the development on an FSAL for S3, I have some doubts and 
> questions I'd like to share.
>
>  Apart from its full path, S3 doesn't have the concept of file descriptor, I 
> mean, there's nothing else
>
> than the full path that I can provide to S3 in order to get attribute of 
> content of a specific object.
>
>  I have some doubts regarding the implementation of the S3 fsal object handle 
> (s3_fsal_obj_handle).
>
>Should s3_fsal_obj_handle be very simple, for example should it only 
> contain a key that maps to the full S3 filename, in an key-value store.
>
> Or on the contrary, should the handle implement a tree like structure, like I 
> saw in FSAL_MEM?
>
>  Or something in between, but what?
>
>  Having a very simple handle has some advantages but may require some more 
> frequent network calls,
>
> for example readdir won't have any kind of information about the content of 
> the directory.
>
> Having a whole tree-like structure in the handle would allow to have direct 
> access to directory content,
>
> but isn't that the role of ganesha cache to do that?
>
>  My questions probably shows that I have problems to understand the 
> responsability of my FSAL implementation
>
> regarding the cache. Who does what, what doesn't do what?
>
>  Good evening,
>
>  Aurélien


Re: [Nfs-ganesha-devel] Implement a FSAL for S3-compatible storage

2018-01-04 Thread DENIEL Philippe

Hi Aurélien,

I can provide you an alternate solution, still nfs-ganesha based. For
the needs of a project, I developed an open-source library that emulates
a POSIX namespace using a KVS (for metadata) and an object store (for
data). For example, you can use REDIS and RADOS. I have written a FSAL
for it (it is not pushed in the official branch), but it is not compliant
with support_ex; it still uses the former FSAL semantics (so it should
be ported to support_ex). If you are interested, I can give you some
pointers (the code is on github). You could use S3 as the data storage,
for example. In particular, I had to solve the same "inode" issue that
you met. This solution has very little impact on the nfs-ganesha code
(it just adds a new FSAL).
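
To make the split concrete, a rough sketch (the key formats and struct names
are invented for illustration; the actual library may lay things out
differently): all POSIX metadata lives in the KVS keyed by inode number,
directory entries are KVS keys built from (parent inode, name), and file
bodies are objects named after the inode, so an S3 bucket could serve as the
data store.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Illustrative metadata record stored in the KVS (e.g. REDIS),
     * one entry per inode. */
    struct demo_md {
        uint64_t ino;          /* inode number, allocated by the library */
        uint64_t parent_ino;   /* for building the namespace             */
        mode_t mode;           /* type + permissions                     */
        uint64_t size;
        uint64_t ctime, mtime, atime;
    };

    /* KVS key for an inode's attributes, e.g. "md.42" -> serialized demo_md. */
    static void md_key(uint64_t ino, char *buf, size_t len)
    {
        snprintf(buf, len, "md.%llu", (unsigned long long)ino);
    }

    /* KVS key for a directory entry, e.g. "dirent.42.foo" -> child inode.
     * Lookup then becomes a single KVS GET on (parent inode, name). */
    static void dirent_key(uint64_t parent_ino, const char *name,
                           char *buf, size_t len)
    {
        snprintf(buf, len, "dirent.%llu.%s",
                 (unsigned long long)parent_ino, name);
    }

    /* Object-store name for the file body, e.g. "data.42"; with S3 as the
     * data store this would simply be the object key inside one bucket. */
    static void data_key(uint64_t ino, char *buf, size_t len)
    {
        snprintf(buf, len, "data.%llu", (unsigned long long)ino);
    }

    int main(void)
    {
        char k1[64], k2[64], k3[64];

        md_key(42, k1, sizeof(k1));
        dirent_key(1, "report.txt", k2, sizeof(k2));
        data_key(42, k3, sizeof(k3));
        printf("attrs:  %s\nlookup: %s\ndata:   %s\n", k1, k2, k3);
        return 0;
    }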


 Regards

        Philippe

On 01/03/18 19:58, Aurelien RAINONE wrote:
To follow up on the development on an FSAL for S3, I have some doubts 
and questions I'd like to share.
Apart from its full path, S3 doesn't have the concept of file 
descriptor, I mean, there's nothing else
than the full path that I can provide to S3 in order to get attribute of
content of a specific object.
I have some doubts regarding the implementation of the S3 fsal object 
handle (s3_fsal_obj_handle).
Should s3_fsal_obj_handle be very simple, for example should it only 
contain a key that maps to the full S3 filename, in an key-value store.
Or on the contrary, should the handle implement a tree like structure, 
like I saw in FSAL_MEM?

Or something in between, but what?
Having a very simple handle has some advantages but may require some 
more frequent network calls,
for example readdir won't have any kind of information about the 
content of the directory.
Having a whole tree-like structure in the handle would allow to have 
direct access to directory content,

but isn't that the role of ganesha cache to do that?
My questions probably shows that I have problems to understand the 
responsability of my FSAL implementation

regarding the cache. Who does what, what doesn't do what?
Good evening,
Aurélien

