In devcloud-kvm I just have this, no fsid:

/nfs/primary *(rw,insecure,async,no_root_squash)
/nfs/secondary *(rw,insecure,async,no_root_squash)

and in /proc/mounts it ends up like this:

localhost:/nfs/primary /tmp/mnt1 nfs4 rw,sync,relatime,vers=4,rsize=524288,wsize=524288,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,proto=tcp6,port=0,timeo=600,retrans=2,sec=sys,clientaddr=::1,minorversion=0,local_lock=none,addr=::1 0 0

localhost:/nfs/secondary /tmp/mnt2 nfs4 rw,sync,relatime,vers=4,rsize=524288,wsize=524288,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,proto=tcp6,port=0,timeo=600,retrans=2,sec=sys,clientaddr=::1,minorversion=0,local_lock=none,addr=::1 0 0
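
If you want a second way to confirm the negotiated version besides
/proc/mounts, nfsstat from nfs-utils summarizes each mount and its
effective options (just a quick sanity check, not specific to this setup):

nfsstat -m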


According to the exports man page, fsid=0 marks the export that serves as the root of all exported filesystems for NFSv4 clients.
For your setup, does this work?

/export 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=0)
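
The exports man page also describes the layout I'd expect to work when
more than one filesystem is exported: fsid=0 on a parent directory to mark
the NFSv4 pseudo-root, and unique non-zero fsids on the children. A sketch,
assuming both DRBD devices are mounted under /export (paths and fsid values
are illustrative, not tested on your setup):

# /etc/exports: pseudo-root plus one entry per DRBD-backed directory
/export        10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,crossmnt,fsid=0)
/export/drbd0  10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,fsid=1)
/export/drbd1  10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,fsid=2)

After editing, running exportfs -ra reloads the export table.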

On my host if I do:

/nfs *(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=0)

then I mount:

mount localhost:/nfs/secondary /tmp/mnt1
mount localhost:/nfs/primary /tmp/mnt2

It works, but defaults to NFSv3:

localhost:/nfs/secondary /tmp/mnt1 nfs rw,sync,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,proto=tcp6,timeo=600,retrans=2,sec=sys,mountaddr=0000:0000:0000:0000:0000:0000:0000:0001,mountvers=3,mountport=892,mountproto=udp6,local_lock=none,addr=::1 0 0
localhost:/nfs/primary /tmp/mnt2 nfs rw,sync,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,proto=tcp6,timeo=600,retrans=2,sec=sys,mountaddr=0000:0000:0000:0000:0000:0000:0000:0001,mountvers=3,mountport=892,mountproto=udp6,local_lock=none,addr=::1 0 0
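
To force v4 in that case, one thing to try (untested here) is mount -t
nfs4. Note that with fsid=0 on /nfs, the NFSv4 pseudo-root starts at /nfs,
so the v4 paths would be relative to it:

mount -t nfs4 localhost:/secondary /tmp/mnt1
mount -t nfs4 localhost:/primary /tmp/mnt2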


On Thu, Jan 31, 2013 at 8:24 AM, Alex Mathiasen <a...@mira.dk> wrote:
> Dear Chiradeep,
>
> Yes, the hypervisor I am using is KVM.
>
> When I manually mount the NFS share exported on my host server, I can
> access it without issues.
>
> The agent logfile can be viewed at: http://pastebin.com/cUHmawuA
>
> After a while, the host server reboots.
>
> Med venlig hilsen / Best regards
>
> Alex Mathiasen
> Systemadministrator
> a...@mira.dk
> Tel. (+45) 96101515
> --------------------------------------------------
> Mira InternetSolutions ApS
> http://www.mira.dk/
> Tel. (+45) 9610 1510
> Fax. (+45) 9610 1511
> --------------------------------------------------
>
>
> -----Original Message-----
> From: Chiradeep Vittal [mailto:chiradeep.vit...@citrix.com]
> Sent: 29 January 2013 19:59
> To: cloudstack-dev@incubator.apache.org
> Subject: Re: NFS Exported drives (FSID) issues.
>
> Which hypervisor is this? KVM? If so, please post the agent logs.
> I don't have much experience with DRBD, but CloudStack doesn't really care 
> about the underlying storage implementation. It (rather the hypervisor) cares 
> about the protocol (NFS/ISCSI/FC). If the hypervisor is unable to mount the 
> primary storage, then there's not much CloudStack can do.
>
> On 1/29/13 5:34 AM, "Alex Mathiasen" <a...@mira.dk> wrote:
>
>>Hello,
>>
>>I am trying to set up CloudStack using DRBD as primary storage.
>>
>>I have a setup with CloudStack working beautifully with a single DRBD
>>storage, a management server, and some host servers.
>>
>>However, when I try to add more primary storage from the same DRBD
>>setup, all hell breaks loose.
>>
>>I have the following DRBD storage exported from my setup now:
>>
>>/export/drbd0 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=0)
>>
>>However, if I add drbd1 to my /etc/exports file, the issues begin.
>>Even before I have added the new DRBD storage as primary storage in
>>CloudStack, system VMs, routers, and VMs won't start.
>>
>>As I understand it, the fsid must be unique, and therefore no two
>>exported drives may share the same value, so "0" on both won't do.
>>However, I have found that CloudStack won't work with exported drives
>>that don't have an fsid of "0".
>>
>>/export/drbd0 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=0)
>>/export/drbd1 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=0)
>>
>>This won't work.
>>
>>/export/drbd0 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=1)
>>/export/drbd1 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=2)
>>
>>This won't work either.
>>
>>It seems that as soon as I try to export any drive without "fsid=0",
>>it won't work.
>>
>>I have pasted some output from management-server.log at
>>http://pastebin.com/SYBxXfux
>>
>>Does anyone have suggestions for solving this issue? I wish to expand my
>>primary storage, but at the moment I simply can't get it to work. :-(
>
>
