Here is the command line of a qemu-kvm instance running on top of a gluster shared mountpoint:

/usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 2048 \
  -smp 2,sockets=2,cores=1,threads=1 \
  -name i-3-15-VM -uuid 7fa57327-d573-336b-a88f-f89dfb05b728 \
  -nodefconfig -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/i-3-15-VM.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc -no-shutdown \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=/gluster/qcow2/images/c73665b7-25f4-4664-bb52-f3d3aedd6855,if=none,id=drive-virtio-disk0,format=qcow2,cache=none \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 \
  -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none \
  -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 \
  -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=29 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=06:b3:c8:00:00:d6,bus=pci.0,addr=0x3 \
  -chardev pty,id=charserial0 \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -device usb-tablet,id=input0 \
  -vnc 0.0.0.0:4 -vga cirrus \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
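
Note the cache=none on the virtio drive. If I switched that disk to
writethrough as John suggests below, only the cache option would change;
the drive line would presumably become (untested here, just for
illustration):

  -drive file=/gluster/qcow2/images/c73665b7-25f4-4664-bb52-f3d3aedd6855,if=none,id=drive-virtio-disk0,format=qcow2,cache=writethrough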



On Sat, Feb 16, 2013 at 9:36 AM, John Kinsella <j...@stratosec.co> wrote:

> Bryan - are you running the KVM disks with writethrough or writeback
> caching? In my experience I had to switch to writethrough for gluster to
> work, and that resulted in a 4x performance hit for a VM writing to the
> gluster store vs. writing to gluster directly on the host...
>
> John
>
> On Feb 7, 2013, at 1:28 PM, Bryan Whitehead <dri...@megahappy.net> wrote:
>
> > I'm currently using GlusterFS as a SharedMountPoint with great success.
> > I've had server failures, and HA smoothly powered up the affected VMs
> > without a hitch. (Each VM host contributes about 4TB to a volume with
> > replica=2.) I've also been able to migrate VMs around for maintenance
> > on specific hosts.
> >
> > NOTE: I have an InfiniBand/IPoIB interconnect, so GlusterFS has all the
> > IO bandwidth it needs. I can easily push 130MB/sec write speeds inside
> > a VM with a kvm/qcow2-backed setup.
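> >
> > For anyone curious, a replica-2 volume like that is created along these
> > lines (host and brick names made up):
> >
> >   gluster volume create vmstore replica 2 host1:/export/brick1 host2:/export/brick1
> >   gluster volume start vmstore
> >   mount -t glusterfs host1:/vmstore /gluster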
> >
> >
> > On Wed, Feb 6, 2013 at 12:55 PM, Nux! <n...@li.nux.ro> wrote:
> >
> >> On 06.02.2013 19:55, Chris Sears wrote:
> >>
> >>>
> >>> I'm not sure anyone could give you a "recommended" option for primary
> >>> storage without knowing more about your requirements and environment,
> >>> but NFS seems to be fairly common for production usage. For KVM, your
> >>> storage options are NFS, RBD, CLVM, or SharedMountPoint (which could
> >>> be any shared file system, e.g. GFS).
> >>>
> >>
> >> Thanks, CLVM looks really neat, and I imagine the snapshotting is also
> >> superior to what we get today with kvm+qcow2. I guess I could also use
> >> GlusterFS as a SharedMountPoint.
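> >>
> >> For reference, the qcow2 snapshots we have today are plain qemu-img
> >> internal snapshots (snapshot name made up):
> >>
> >>   qemu-img snapshot -c before-upgrade /path/to/disk.qcow2
> >>
> >> whereas CLVM would presumably use LVM snapshots:
> >>
> >>   lvcreate -s -L 10G -n disk-snap /dev/vg0/disk
> >>
> >> though as far as I know the LV has to be activated exclusively on one
> >> host for snapshots to work in a cluster.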
> >>
> >>
> >>
> >>> Yes, CS can resize volumes, but it doesn't do anything inside the
> >>> guest to resize the local filesystem/partitions.
> >>>
> >>
> >> That's fair enough.
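> >>
> >> Growing things from inside the guest is easy enough anyway; assuming
> >> the disk shows up as /dev/vda with an ext4 filesystem on the first
> >> partition, something like:
> >>
> >>   growpart /dev/vda 1
> >>   resize2fs /dev/vda1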
> >>
> >>
> >>
> >>>> If the requested resize needs more resources than the current
> >>>> physical host can provide, can CS (live) migrate the VM to another
> >>>> one?
> >>>>
> >>>
> >>> I'm not aware of any such automatic migration feature. Most of the
> >>> primary storage options would expose the same shares/LUNs to all the
> >>> hosts in a cluster, so I'm not sure how often this would come up.
> >>>
> >>
> >> In my case the local storage is significantly faster than anything
> >> "shared" I could come up with, so this feature is quite appealing. A
> >> competing stack can do this, and I wondered whether CloudStack can as
> >> well. But CLVM might be a decent compromise; remains to be seen.
> >>
> >> Thanks a lot!
> >>
> >>
> >> Lucian
> >>
> >>
> >> --
> >> Sent from the Delta quadrant using Borg technology!
> >>
> >> Nux!
> >> www.nux.ro
> >>
>
> Stratosec - Secure Infrastructure as a Service
> o: 415.315.9385
> @johnlkinsella
>
>
