On 03/16/2011 01:51 PM, Tim Spriggs wrote:
> Thanks for the offer but I think I have networking under control. What
> is not working properly is that NFS happens from a host IP instead of
> a context IP... even though it is started from the context IP.

I'm working on that.

Here's the kernel patch needed to get the basic minimal NFSv3
functionality to work.  (Note the big long mount invocation switching
off tons of stuff.  This patch makes it work ONCE YOU'VE SWITCHED ALL
THAT OFF.  No portmap, no lockd, no DNS resolution...)

Also, note that if the host and container ever try to use the same IP,
the NFS caching code mixes their state together and it all goes pear-shaped.
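
Until that's fixed, just make sure the container's address differs from
the host's, e.g. something like this from inside the container
(illustrative only; the device and address are made up):

  # give the container its own address, distinct from the host's
  ip addr add 10.0.3.15/24 dev eth0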

As I said: working on it...

Rob
From: Rob Landley <rland...@parallels.com>

[PATCH] Make NFSv3 work in a container.

This is the minimal fix to mount an NFSv3 server from a container, via the
following (somewhat elaborate) invocation:

  mount -t nfs 10.0.2.2:/home/landley/nfs/unfs3-0.9.22/doc nfsdir \
    -o ro,port=9999,mountport=9999,nolock,nosharecache,nfsvers=3,udp

For a test server, I used unfs3.sourceforge.net:

  unfsd -d -s -p -e <(echo '/home/landley/nfs (no_root_squash,insecure)') \
    -l 10.0.2.2 -m 9999 -n 9999

As you can see, that mount options list is working around all sorts of things
(DNS resolution, portmapper, lock daemon, superblock merging, nontrivial
authentication mechanisms, the can of worms that is NFSv4...) which I need to
fix in future patches.  But this is enough to mount an NFS share in a
container that has a different network context than the host, which is new.
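
For completeness, here's roughly how I exercise it from a container
(sketch only: the container name is made up, and it assumes the
container already has its own network context and that /mnt/nfsdir
exists in it):

  lxc-execute -n nfstest -- \
    mount -t nfs 10.0.2.2:/home/landley/nfs/unfs3-0.9.22/doc /mnt/nfsdir \
      -o ro,port=9999,mountport=9999,nolock,nosharecache,nfsvers=3,udp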

Notes: get_sb() is never called from anything other than mount's process
context, so we can safely dereference current in the functions only it
calls.
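
To make that concrete, the pattern is to grab the namespace once, at
mount time, while current still means the mounting process; anything
that runs later (workqueues, RPC callbacks) must use the stored pointer
instead.  A sketch of that pattern (not part of the patch; the helper
name is made up):

  #include <linux/sched.h>
  #include <linux/nsproxy.h>
  #include <net/net_namespace.h>

  /* Only safe from mount(2)'s own process context.  Takes a
   * reference the caller must balance with put_net(). */
  static struct net *grab_mount_net(void)
  {
          return get_net(current->nsproxy->net_ns);
  }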

The RPC code is already doing the get_net() and put_net() reference
counting for lifetimes.  (Except for the bits where I still have to fix
the caching, but the mount options above mostly disable that.)
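
For reference, that lifetime pattern looks something like this (sketch
only, not the actual sunrpc code; struct and function names are made
up):

  #include <net/net_namespace.h>

  struct some_client {
          struct net *net;        /* pinned network namespace */
  };

  static void some_client_init(struct some_client *clnt, struct net *net)
  {
          clnt->net = get_net(net);       /* pin for our lifetime */
  }

  static void some_client_exit(struct some_client *clnt)
  {
          put_net(clnt->net);             /* unpin at teardown */
  }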

Signed-off-by: Rob Landley <rland...@parallels.com>
---

 fs/nfs/client.c     |    3 ++-
 fs/nfs/mount_clnt.c |    7 +++++--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 192f2f8..4fb94e9 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -39,6 +39,7 @@
 #include <net/ipv6.h>
 #include <linux/nfs_xdr.h>
 #include <linux/sunrpc/bc_xprt.h>
+#include <linux/user_namespace.h>
 
 #include <asm/system.h>
 
@@ -619,7 +620,7 @@ static int nfs_create_rpc_client(struct nfs_client *clp,
 {
 	struct rpc_clnt		*clnt = NULL;
 	struct rpc_create_args args = {
-		.net		= &init_net,
+		.net		= current->nsproxy->net_ns,
 		.protocol	= clp->cl_proto,
 		.address	= (struct sockaddr *)&clp->cl_addr,
 		.addrsize	= clp->cl_addrlen,
diff --git a/fs/nfs/mount_clnt.c b/fs/nfs/mount_clnt.c
index d4c2d6b..5564f64 100644
--- a/fs/nfs/mount_clnt.c
+++ b/fs/nfs/mount_clnt.c
@@ -14,6 +14,7 @@
 #include <linux/sunrpc/clnt.h>
 #include <linux/sunrpc/sched.h>
 #include <linux/nfs_fs.h>
+#include <linux/user_namespace.h>
 #include "internal.h"
 
 #ifdef RPC_DEBUG
@@ -140,6 +141,8 @@ struct mnt_fhstatus {
  * @info: pointer to mount request arguments
  *
  * Uses default timeout parameters specified by underlying transport.
+ *
+ * This is always called from process context.
  */
 int nfs_mount(struct nfs_mount_request *info)
 {
@@ -153,7 +156,7 @@ int nfs_mount(struct nfs_mount_request *info)
 		.rpc_resp	= &result,
 	};
 	struct rpc_create_args args = {
-		.net		= &init_net,
+		.net		= current->nsproxy->net_ns,
 		.protocol	= info->protocol,
 		.address	= info->sap,
 		.addrsize	= info->salen,
@@ -225,7 +228,7 @@ void nfs_umount(const struct nfs_mount_request *info)
 		.to_retries = 2,
 	};
 	struct rpc_create_args args = {
-		.net		= &init_net,
+		.net		= current->nsproxy->net_ns,
 		.protocol	= IPPROTO_UDP,
 		.address	= info->sap,
 		.addrsize	= info->salen,
