Thank you very much for your explanation! Frankly speaking, I was pretty sure that mentioning CentOS as the hosts' operating system would allude to KVM, but I'm very sorry if it wasn't obvious! So, yes, I use KVM as the hypervisor. :)
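In case it helps to narrow things down, here is a rough sketch of how the guest disk attachment can be inspected on the KVM side; the domain name "i-2-34-VM" is only a placeholder, and the driver/cache values mentioned in the comments are just examples of what to look for, not my actual settings:

  # List the running domains on the host and dump the disk section of one of them
  # ("i-2-34-VM" is a placeholder name).
  virsh list
  virsh dumpxml i-2-34-VM | grep -A 4 '<disk'

  # The <target .../> and <driver .../> lines show which bus the disk uses
  # (e.g. virtio vs. ide) and which cache= / io= modes are set.

  # The same simple benchmark, run inside the guest, for a like-for-like comparison:
  dd if=/dev/zero of=/root/test.1G bs=1G count=1 conv=fdatasync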
On Sun, Jun 05, 2016 at 08:13:44PM +0300, Mindaugas Milinavičius wrote:
> They are asking: XEN, KVM, HyperV, VMWARE or what....
>
> Regards,
> Mindaugas Milinavičius
> UAB STARNITA
> Director
> http://www.clustspace.com
> LT: +37068882880
> RU: +79199993933
>
> Tomorrow's possibilities today
> <http://www.clustspace.com/>
>
>    - 1 core CPU, 512MB RAM, 20GB (€ 5.00)
>    - 1 core CPU, 1GB RAM, 30GB (€ 10.00)
>    - 2 core CPU, 2GB RAM, 40GB (€ 20.00)
>    - 2 core CPU, 4GB RAM, 60GB (€ 40.00)
>    - 4 core CPU, 8GB RAM, 80GB (€ 80.00)
>    - 8 core CPU, 16GB RAM, 160GB (€ 160.00)
>
> On Sun, Jun 5, 2016 at 7:50 PM, Vladimir Melnik <[email protected]> wrote:
>
> > I use CentOS-6.8 as the operating system of a host.
> >
> > Thanks!
> >
> > On Sun, Jun 05, 2016 at 07:05:30PM +0200, Timothy Lothering wrote:
> > > Hi Vladimir,
> > >
> > > What hypervisor are you using?
> > >
> > > -----Original Message-----
> > > From: Vladimir Melnik [mailto:[email protected]]
> > > Sent: Sunday, 05 June 2016 6:06 PM
> > > To: [email protected]
> > > Subject: Storage Performance
> > >
> > > Hello,
> > >
> > > I have an ACS-driven environment with a storage subsystem which is built
> > > on Gluster over InfiniBand. The storage shows pretty good performance
> > > when I mount a volume on a host and run a simple test ("dd if=/dev/zero
> > > of=/mnt/tmp/test.1G bs=1G count=1 conv=fdatasync"): it shows about
> > > 400 MB/s, and that's okay. But when I deploy a virtual machine (I tried
> > > it with CentOS-6.8-x64 as the guest OS), I can't get such good results
> > > from inside the guest (it shows about 40 MB/s with the same simple test).
> > >
> > > What do you think, have I forgotten to do something important when I was
> > > setting this environment up?
> > >
> > > Thank you very much for sharing your ideas and clues!
> > >
> > > --
> > > V.Melnik
> >
> > --
> > V.Melnik

--
V.Melnik
