Hi,
I have been doing some testing with KVM and Virtuozzo (container-based
virtualisation) and various storage devices, and have some results I would like
some help analyzing. I have a nice big ZFS box from Oracle (yes, evil, but
Solaris NFS is amazing). I have 10G and IB connecting these to
Hello,
I have an iSCSI storage array connected to 4 physical hosts. On these 4 hosts I
have configured 40-odd logical volumes with clvm. Each logical volume is the
root volume for a VM.
How should I set up I/O scheduling with this configuration? Performance is not
so great and I have a feeling
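Not a full answer, but with clvm over iSCSI the usual suggestion is to let the array do the reordering: use the noop (or deadline) elevator for the iSCSI-backed devices on each host, and cache=none on the VM disks so writes bypass the host page cache. A sketch, with sdb standing in for an iSCSI-backed device (device name is a placeholder):

```shell
# On each host: stop the host elevator from second-guessing the array.
echo noop > /sys/block/sdb/queue/scheduler
# Check which elevator is active (the one in brackets):
cat /sys/block/sdb/queue/scheduler
```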
Hello,
I have been using NFS with kvm for a little while and have been wondering about
appropriate NFS mount options.
Could someone please explain to me what mount options should be used and why?
Also, what are the differences between NFSv3 and NFSv4 with regard to KVM?
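Not an authoritative answer, but a commonly cited starting point looks something like the fstab line below (server name, export path, and the rsize/wsize values are placeholders; NFSv4 would use vers=4 instead):

```
# /etc/fstab -- hypothetical NFS image store for a KVM host
nfsserver:/export/images  /var/lib/libvirt/images  nfs  rw,hard,tcp,vers=3,rsize=65536,wsize=65536,noatime  0 0
```

hard is generally preferred over soft for image storage, since a soft-mount timeout can return I/O errors to the guest that corrupt its filesystem.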
Thanks,
Andrew
Hi,
No, the NFS is not hung, and yes, I can access the image on the host.
It just seems to happen occasionally with some VMs.
Thanks,
Andrew
On Dec 11, 2012, at 11:59 AM, Stefan Hajnoczi wrote:
On Fri, Dec 7, 2012 at 1:09 PM, Andrew Holway a.hol...@syseleven.de wrote:
-drive file=/rhev/data
Hello,
I have been using RHEV 3.1 and created a few VMs. I have a provisioning system
that boots machines via PXE with CentOS 6.3 images.
It creates the following:
/dev/vda1 on / type ext3 (rw,noatime,nodiratime)
/dev/vda6 on /local type ext3 (rw,noatime,nodiratime)
/dev/vda3 on /tmp type ext3
Sorry, I forgot to mention that my images are being created on a Solaris-based
NFS/ZFS server.
Thanks,
Andrew
On Dec 7, 2012, at 1:09 PM, Andrew Holway wrote:
Hello,
I have been using RHEV 3.1 and created a few VMs. I have a provisioning system
that boots machines via PXE with CentOS
Hello,
I am testing KVM on an Oracle NFS box that I have.
Does the list have any advice on best practice? I remember reading that there
are things you can do with I/O schedulers to make it more efficient.
My VMs will primarily be running MySQL databases. I am currently using O_DIRECT.
O_DIRECT is good. I/O schedulers don't affect NFS so no need to tune
anything on the host. You might experiment with switching to the
deadline scheduler in the guest.
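For the guest side, that switch is just a sysfs write (vda assumed as the virtio disk; a sketch for pre-blk-mq kernels such as the CentOS 6 era):

```shell
# Inside the guest: see the current elevator (the one in brackets)
# and switch the virtio disk to deadline.
cat /sys/block/vda/queue/scheduler
echo deadline > /sys/block/vda/queue/scheduler
# Persist it by adding elevator=deadline to the kernel command line.
```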
I'll give it a go. Any ideas on how I should be tuning my NFS?
Hi Steve,
Do you think these patches will make their way into the Red Hat kernel sometime
soon?
What is the state of support for NFS over RDMA at Red Hat?
Thanks,
Andrew
On Sep 11, 2012, at 7:03 PM, Steve Dickson wrote:
On 09/04/2012 05:31 AM, Andrew Holway wrote:
Hello
On Sep 5, 2012, at 4:02 PM, Avi Kivity wrote:
On 09/04/2012 03:04 PM, Myklebust, Trond wrote:
On Tue, 2012-09-04 at 11:31 +0200, Andrew Holway wrote:
Hello.
# Avi Kivity avi(a)redhat recommended I copy kvm in on this. It would also
seem relevant to libvirt. #
I have a CentOS 6.2
Hello.
# Avi Kivity avi(a)redhat recommended I copy kvm in on this. It would also seem
relevant to libvirt. #
I have a CentOS 6.2 server and a CentOS 6.2 client.
[root@store ~]# cat /etc/exports
/dev/shm 10.149.0.0/16(rw,fsid=1,no_root_squash,insecure)
(I have
That is expected behaviour. DIRECT_IO over RDMA needs to be page aligned
so that it can use the more efficient RDMA READ and RDMA WRITE memory
semantics (instead of the SEND/RECEIVE channel semantics).
Yes, I think I am understanding that now.
I need to find a way of getting around the
and report which (if any) of the output files (x1, x2, y1, y2) are
corrupted, by comparing them against the original. This will tell us
whether O_DIRECT is broken, or 512 byte block size, or neither.
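That check can be sketched like this (a buffered-I/O sketch, not the exact test from the thread; file names and the 4 MiB size are placeholders, and oflag=direct is omitted because direct I/O support depends on the filesystem -- add it to the inner dd on the NFS mount to reproduce the O_DIRECT case):

```shell
#!/bin/sh
# Write a reference file, rewrite it through dd at several block sizes,
# and compare each copy against the original to spot corruption.
set -e
dd if=/dev/urandom of=original.img bs=1M count=4 2>/dev/null

for bs in 512 1024 2048 4096; do
    # Add oflag=direct here to exercise O_DIRECT on the mount under test.
    dd if=original.img of=copy-$bs.img bs=$bs 2>/dev/null
    if cmp -s original.img copy-$bs.img; then
        echo "bs=$bs: OK"
    else
        echo "bs=$bs: CORRUPTED"
    fi
done
```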
Looks like you were directly on the money there. 512, 1K and 2K O_DIRECT look
broken.
Have you tried doing a FULL install?
Best regards,
Martijn
On Fri, Aug 31, 2012 at 7:06 PM, Andrew Holway
supp...@brightcomputing.comwrote:
Fri Aug 31 19:06:00 2012: Request 2699 was acted upon.
Transaction: Ticket created by a.hol...@syseleven.de
Queue: Bright
Subject: NFSoRDMA
Hi,
I am creating a VM with the following command:
virt-install --connect qemu:///system -n vm001 -r 2048 --vcpus=2 --disk
path=/local/vm001.img,device=disk,bus=virtio,size=45 --vnc --noautoconsole
--os-type linux --accelerate --network=bridge:br0,mac=00:00:00:00:00:0E
pxelinux.cfg' to show your
pxelinux configuration, thanks.
--
Regards,
Alex
- Original Message -
From: Andrew Holway a.hol...@syseleven.de
To: kvm@vger.kernel.org
Sent: Thursday, August 16, 2012 8:25:35 PM
Subject: [libvirt-users] vm pxe fail
Hello
I have a kvm vm that I am
Stupid boy. I didn't have the virtio modules loaded.
On Aug 31, 2012, at 3:19 PM, Andrew Holway wrote:
Hi,
I am creating a VM with the following command:
virt-install --connect qemu:///system -n vm001 -r 2048 --vcpus=2 --disk
path=/local/vm001.img,device=disk,bus=virtio,size=45 --vnc
Hi,
I am trying to host KVM machines on an NFSoRDMA mount.
This works:
-drive file=/mnt/vm001.img,if=none,id=drive-virtio-disk0,format=raw -device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0
This doesn't:
-drive
82 Linux swap / Solaris
/dev/vda6 8000716894371839 7182336 83 Linux
On Aug 31, 2012, at 7:05 PM, Andrew Holway wrote:
Hi,
I am trying to host KVM machines on an NFSoRDMA mount.
This works:
-drive file=/mnt/vm001.img,if=none,id=drive-virtio-disk0,format=raw -device
Hi,
I am trying out a couple of methods to get VLANs to the VM. In both cases the
VM can ping Google et al. without problem and DNS works fine, but it does not
want to do any TCP. I thought this might be a frame-size problem, but even
telnet (which I understand sends tiny packets) fails to
Message -
From: Andrew Holway a.hol...@syseleven.de
To: kvm@vger.kernel.org
Sent: Thursday, August 16, 2012 8:25:35 PM
Subject: [libvirt-users] vm pxe fail
Hello
I have a kvm vm that I am attempting to boot from PXE. The DHCP works
perfectly and I can see the VM in the PXE server ARP
Hostname Option 12, length 17: vm001.internalnet
Domain-Name Option 15, length 65: eth.cluster brightcomputing.com
ib.cluster ilo.cluster cm.cluster
--
Regards,
Alex
- Original Message -
From: Andrew Holway a.hol...@syseleven.de
To: kvm@vger.kernel.org
Hello
I have a kvm vm that I am attempting to boot from PXE. The DHCP works perfectly
and I can see the VM in the PXE server ARP, but the TFTP just times out. I
don't see any TFTP traffic on either the physical host or on the PXE server. I
am using a bridged interface. I have tried using
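One way to see where the TFTP requests die is to capture udp/69 at each hop (br0 and vnet0 are assumed names for the bridge and the VM's tap device):

```shell
# On the physical host, in two terminals:
tcpdump -ni vnet0 port 69    # are requests leaving the guest's tap device?
tcpdump -ni br0 port 69      # are they reaching the bridge?
# Requests on vnet0 but not br0 point at bridge/netfilter filtering;
# none at all points at the guest's PXE stack or the DHCP
# next-server/filename options.
```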
On Aug 16, 2012, at 3:54 PM, Stefan Hajnoczi wrote:
On Thu, Aug 16, 2012 at 1:25 PM, Andrew Holway a.hol...@syseleven.de wrote:
I have a kvm vm that I am attempting to boot from PXE. The DHCP works
perfectly and I can see the VM in the PXE server ARP, but the TFTP just
times out. I don't