Can you write to the mount? This is the script that is timing out: /usr/lib/cloud/common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 10.212.212.249 -p /export/drbd10 -m /mnt/99898908-d913-39c8-b61d-c7e6b6dbd8f0 -h 212.70.6.4
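For example, on the KVM host, something along these lines (a rough sketch; the paths and addresses are copied from the command above) would show whether the mount accepts writes and where the script stalls:

    # does a write to the primary storage mount return promptly, or hang?
    touch /mnt/99898908-d913-39c8-b61d-c7e6b6dbd8f0/write-test && \
        rm /mnt/99898908-d913-39c8-b61d-c7e6b6dbd8f0/write-test

    # run the heartbeat script by hand with shell tracing enabled
    bash -x /usr/lib/cloud/common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh \
        -i 10.212.212.249 -p /export/drbd10 \
        -m /mnt/99898908-d913-39c8-b61d-c7e6b6dbd8f0 -h 212.70.6.4

If the touch itself hangs, the problem is most likely the NFS mount rather than the script.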
You can uncomment the set -x in the script to see where it is stuck, or run it by hand, as sketched above, and see where it gets stuck.

On 1/31/13 7:24 AM, "Alex Mathiasen" <a...@mira.dk> wrote:

>Dear Chiradeep,
>
>Yes, the hypervisor I am using is KVM.
>
>When I manually mount the NFS share exported on my host server, I can
>access it without issues.
>
>The agent logfile can be viewed at: http://pastebin.com/cUHmawuA
>
>After a while, the host server reboots.
>
>Best regards
>
>Alex Mathiasen
>Systemadministrator
>a...@mira.dk
>Mira InternetSolutions ApS
>http://www.mira.dk/
>
>-----Original message-----
>From: Chiradeep Vittal [mailto:chiradeep.vit...@citrix.com]
>Sent: 29 January 2013 19:59
>To: cloudstack-dev@incubator.apache.org
>Subject: Re: NFS Exported drives (FSID) issues.
>
>Which hypervisor is this? KVM? If so, please post the agent logs.
>I don't have much experience with DRBD, but CloudStack doesn't really
>care about the underlying storage implementation. It (rather, the
>hypervisor) cares about the protocol (NFS/iSCSI/FC). If the hypervisor is
>unable to mount the primary storage, then there's not much CloudStack can
>do.
>
>On 1/29/13 5:34 AM, "Alex Mathiasen" <a...@mira.dk> wrote:
>
>>Hello,
>>
>>I am trying to set up CloudStack using DRBD as primary storage.
>>
>>I have a setup with CloudStack working beautifully with a single DRBD
>>storage, a management server and some host servers.
>>
>>However, I wish to add more primary storage from the same DRBD setup,
>>and then all hell breaks loose.
>>
>>I currently have the following DRBD storage exported from my setup:
>>
>>/export/drbd0 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=0)
>>
>>However, if I add DRBD1 to my /etc/exports file, the issues begin.
>>Even before I have added the new DRBD storage as primary storage in
>>my CloudStack, system VMs, routers and VMs won't start.
>>
>>As I understand it, the fsid must be unique, so none of the exported
>>drives may share the same value; "0" on both won't do. However, I have
>>found that CloudStack won't work with exported drives that don't have a
>>value of "0".
>>
>>/export/drbd0 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=0)
>>/export/drbd1 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=0)
>>
>>This won't work.
>>
>>/export/drbd0 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=1)
>>/export/drbd1 10.212.212.0/24(rw,async,no_root_squash,no_subtree_check,nohide,crossmnt,fsid=2)
>>
>>This won't work either.
>>
>>It seems that as soon as I try to export any drive without "fsid=0", it
>>won't work.
>>
>>I have pasted an excerpt from management-server.log at
>>http://pastebin.com/SYBxXfux
>>
>>Do any of you have any suggestions to solve this issue? I wish to
>>expand my primary storage, but at the moment I simply can't get it to
>>work. :-(
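For reference, a rough sketch of checking the exports discussed above outside of CloudStack, assuming 10.212.212.249 is the NFS server from the heartbeat command and using a hypothetical /mnt/nfs-test mount point on the KVM host:

    # on the NFS server: re-export and list what is actually exported
    exportfs -ra
    exportfs -v

    # on the KVM host: confirm the exports are visible, then mount and write to each
    showmount -e 10.212.212.249
    mkdir -p /mnt/nfs-test
    mount -t nfs 10.212.212.249:/export/drbd1 /mnt/nfs-test
    touch /mnt/nfs-test/write-test && rm /mnt/nfs-test/write-test
    umount /mnt/nfs-test

If each export mounts and accepts writes by hand but the agent still times out, the agent log is the next place to look.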