I doubt NFS is the issue. What are the specs of the VMs? What is the network like? What are the disks? Etc. There are too many variables to say "NFS is a CloudStack bottleneck." I just implemented a 35TB storage cluster that is capable of tens of thousands of IOPS and over 1 GB/s (yes, gigabytes, not gigabits) of network throughput via NFS.
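If you want concrete numbers to compare, something like the rough probe below can help narrow it down. It's a minimal sketch in Python 3 (fio or dd would do the same job); /mnt/test is a placeholder, so point TEST_DIR at whatever you want to measure:

    #!/usr/bin/env python3
    # Rough disk I/O probe: sequential write throughput and random-read IOPS.
    import os
    import random
    import time

    TEST_DIR = "/mnt/test"         # placeholder: path on the storage under test
    FILE_SIZE = 256 * 1024 * 1024  # 256 MiB test file
    BLOCK = 4096                   # 4 KiB blocks, a common guest I/O size

    path = os.path.join(TEST_DIR, "io_probe.bin")

    # Sequential write; fsync at the end so the numbers reflect the disk,
    # not just the page cache.
    buf = os.urandom(BLOCK)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - start
    print(f"seq write: {FILE_SIZE / elapsed / 1024 / 1024:.1f} MiB/s")

    # Random 4 KiB reads for a crude IOPS figure. This doesn't bypass the
    # page cache (O_DIRECT needs aligned buffers), so treat it as a ceiling.
    reads = 5000
    fd = os.open(path, os.O_RDONLY)
    start = time.monotonic()
    for _ in range(reads):
        os.pread(fd, BLOCK, random.randrange(FILE_SIZE // BLOCK) * BLOCK)
    elapsed = time.monotonic() - start
    os.close(fd)
    os.unlink(path)
    print(f"random read: {reads / elapsed:.0f} IOPS (cache may inflate this)")

Run it once on the storage server against the local disks, once on the host against the NFS mount, and once inside a guest. Whichever step the numbers collapse at (disk, network/NFS, or hypervisor/guest) is where I'd look next.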
On Tue, Aug 6, 2013 at 9:48 AM, WXR <1485739...@qq.com> wrote:
> I use KVM as the hypervisor and NFS (NFS server on CentOS 6.4) as primary
> and secondary storage.
>
> I use server A as the host node and server B, with 1 HDD, as primary
> storage. When I create 20 VMs, I find the disk I/O performance is very low.
> At first I thought the bottleneck was the hard disk, because there are 20
> VMs on a single HDD. So I attached another 4 HDDs to server B and increased
> the number of primary storages from 1 to 5. Now the 20 VMs are spread
> evenly across the 5 primary storages (4 VMs per storage), but the VM disk
> I/O performance is the same as before.
>
> I think NFS may be the bottleneck, but I don't know if that is true. Does
> anyone have a good idea to help me find the real reason?

--
Regards,

Kirk Jantzer
c: (678) 561-5475
http://about.met/kirkjantzer