[ovirt-users] Local storage with self-hosted mode
Is there any way to use local storage with self-hosted mode for VMs other than the engine? The interface does not seem to allow it. I can hack in local storage on vdsm, but it's not discovered/used by the engine (I assume because the engine keeps its own metadata). I tried using a POSIX domain, but there seems to be an expectation that the POSIX domain is accessible to all other hosts.

My use case is 2 physical servers with no shared storage options, and we need fast I/O since the VMs are used for CI, so local storage is the ideal setup.

-Jason
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Local storage with self-hosted mode
Thanks for the suggestions.

Following Maor’s suggestion I was able to add a local domain, but that required maintenance mode, so I had to fail the engine over to another host to make the change to the current host.

I like the appliance solution a little better, although I think it’s best if I were to run it under its own private KVM process unmanaged by oVirt, so that it’s possible to edit and cycle the host. Unfortunately it’s still a bit cumbersome, as you need to have an engine appliance per system or shuffle around the image with some sort of disaster recovery plan.

I also looked into using gluster or cephfs as a way to share state, but noticed the BZs about the lack of complete atomicity leading to duplicate engines.

This is probably not the right place for dev musings, but IMO it would be great if a future release offered a solution that doesn’t require shared storage, which for smaller use-cases is often too pricey a requirement. Ideally, under such a “horizontal” setup, each host could govern its own management data, and the engine could act more as an authoritative aggregator, thereby reducing the need for HA (if it fails, just reinstall a clean one and let it reimport everything). It seems like most of the pieces are already there, with the per-host vdsm instance already containing much of the data. I’m guessing the missing element is having the engine support pulling that information as opposed to just pushing it. This is sort of like a capability that an unnamed proprietary competitor has, so it might have some appeal. Of course such setups do have limitations; you still need shared storage for live migrations and so on. So I certainly understand the rationale behind the existing design. Anyway, it’s just some food for thought.
Thanks,
-Jason

> On Dec 7, 2014, at 6:26 AM, Doron Fediuck wrote:
>
> Hi Jason,
> Hosted Engine was designed to work with shared storage, since all hosts
> need to share information on their status, and by that support
> high-availability for this VM.
>
> If you do not need high-availability you can use the RHEV appliance to get
> a VM running with the engine inside. Remember that failure of this host
> will kill the engine VM as well.
>
> Doron
>
> - Original Message -
>> From: "Maor Lipchuk"
>> To: "Jason Greene"
>> Cc: users@ovirt.org
>> Sent: Sunday, December 7, 2014 1:22:44 PM
>> Subject: Re: [ovirt-users] Local storage with self-hosted mode
>>
>> Hi Jason,
>>
>> Did you try to create a new local Data Center, and add a local storage
>> domain there? Or does it have to be on the same Data Center containing
>> the hosted engine?
>>
>> Regards,
>> Maor
>>
>> - Original Message -
>>> From: "Jason Greene"
>>> To: users@ovirt.org
>>> Sent: Friday, December 5, 2014 11:20:31 PM
>>> Subject: [ovirt-users] Local storage with self-hosted mode
>>>
>>> Is there any way to use local storage with self-hosted mode for VMs
>>> other than the engine? The interface does not seem to allow it. I can
>>> hack in local storage on vdsm, but it's not discovered/used by the
>>> engine (I assume because the engine keeps its own metadata). I tried
>>> using a POSIX domain, but there seems to be an expectation that the
>>> POSIX domain is accessible to all other hosts.
>>>
>>> My use case is 2 physical servers with no shared storage options, and
>>> we need fast I/O since the VMs are used for CI, so local storage is
>>> the ideal setup.
>>>
>>> -Jason

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
Re: [ovirt-users] [RFI] oVirt 3.6 Planning
> On 12.09.2014 14:22, Itamar Heim wrote:
>
> With oVirt 3.5 nearing GA, time to ask for "what do you want to see in
> oVirt 3.6"?

+ Windows HV Support: https://bugzilla.redhat.com/show_bug.cgi?id=1125297

Without these flags, my testing shows a completely idle 4-vcpu Windows slave uses ~15% of a host core, which limits overcommit ability. With them it goes down to 3.6% in my testing. hv_relaxed on its own shows no improvement in idle usage. Unfortunately there is a KVM kernel bug that leads to Windows hangs with these flags, so until RHEL gets kernel 3.16, which looks like 7.1, only Fedora works: https://bugzilla.redhat.com/show_bug.cgi?id=1091818

+ Ability to add local storage without putting the host in maintenance mode

+ Some out-of-the-box option for self-hosted engine without shared storage (e.g. gluster, ceph, DRBD, application-directed replication, etc.)

Thanks!

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
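For context, the Hyper-V enlightenments behind those numbers map roughly to a libvirt domain XML fragment like the one below. This is a sketch from memory, not taken from either BZ, and exact flag availability depends on the libvirt/QEMU versions in play:

```xml
<!-- Sketch: Hyper-V enlightenments for a Windows guest; verify against your libvirt version -->
<features>
  <hyperv>
    <relaxed state='on'/>                   <!-- hv_relaxed: relaxed watchdog timing -->
    <vapic state='on'/>                     <!-- hv_vapic: paravirtualized APIC -->
    <spinlocks state='on' retries='8191'/>  <!-- hv_spinlocks -->
  </hyperv>
</features>
<clock offset='localtime'>
  <timer name='hypervclock' present='yes'/> <!-- hv_time reference counter -->
</clock>
```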
Re: [ovirt-users] Local storage with self-hosted mode
> On Dec 12, 2014, at 7:19 AM, Itamar Heim wrote:
>
> On 12/10/2014 05:47 PM, Jason Greene wrote:
>> Thanks for the suggestions.
>>
>> Following Maor’s suggestion I was able to add a local domain, but that
>> required maintenance mode, so I had to fail the engine over to another
>> host to make the change to the current host.
>>
>> I like the appliance solution a little better, although I think it’s best
>> if I were to run it under its own private KVM process unmanaged by oVirt,
>> so that it’s possible to edit and cycle the host. Unfortunately it’s still
>> a bit cumbersome, as you need to have an engine appliance per system or
>> shuffle around the image with some sort of disaster recovery plan.
>>
>> I also looked into using gluster or cephfs as a way to share state, but
>> noticed the BZs about the lack of complete atomicity leading to duplicate
>> engines.
>>
>> This is probably not the right place for dev musings, but IMO it would be
>> great if a future release offered a solution that doesn’t require shared
>> storage, which for smaller use-cases is often too pricey a requirement.
>> Ideally, under such a “horizontal” setup, each host could govern its own
>> management data, and the engine could act more as an authoritative
>> aggregator, thereby reducing the need for HA (if it fails, just reinstall
>> a clean one and let it reimport everything). It seems like most of the
>> pieces are already there, with the per-host vdsm instance already
>> containing much of the data. I’m guessing the missing element is having
>> the engine support pulling that information as opposed to just pushing
>> it. This is sort of like a capability that an unnamed proprietary
>> competitor has, so it might have some appeal. Of course such setups do
>> have limitations; you still need shared storage for live migrations and
>> so on. So I certainly understand the rationale behind the existing
>> design. Anyway, it’s just some food for thought.
>
> before we go so far out... gluster should work with 3 hosts; we are
> working on improving the flow for this for 3.6. today it requires quite
> a few manual steps to do so.

I had looked into that, but I got scared away by:
https://bugzilla.redhat.com/show_bug.cgi?id=1097639

The option I was thinking of trying was DRBD to mirror the oVirt appliance copied to a block store, and then using something like pacemaker to control failover. This would ensure that the engine always follows its data.

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
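For what it’s worth, the DRBD + pacemaker idea could look something like the following pcs sketch. All resource names and paths here are hypothetical, and I have not tried this end to end:

```shell
# Hypothetical sketch: DRBD-backed engine appliance with pacemaker failover.
# Assumes a DRBD resource named "engine" and a libvirt domain XML for the
# appliance at /etc/libvirt/qemu/engine.xml (both names made up).

# Master/slave DRBD resource mirroring the block device holding the image
pcs resource create engine-drbd ocf:linbit:drbd drbd_resource=engine \
    master master-max=1 master-node-max=1 clone-max=2 notify=true

# The engine VM itself, run as a plain libvirt domain outside oVirt
pcs resource create engine-vm ocf:heartbeat:VirtualDomain \
    config=/etc/libvirt/qemu/engine.xml hypervisor=qemu:///system

# Keep the VM on the DRBD primary, and only start it after promotion
pcs constraint colocation add engine-vm with master engine-drbd-master INFINITY
pcs constraint order promote engine-drbd-master then start engine-vm
```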
Re: [ovirt-users] firewalld rule for ovirt host?
> On Jan 21, 2015, at 9:45 AM, Jorick Astrego wrote:
>
> Hi,
>
> In the quickstart guide we have the iptables rules for a Fedora 19 host,
> but currently we run firewalld on the host (CentOS 7).
>
> I've converted the rules to a service xml for the zone, but I can't
> figure out the firewalld translation for "-A FORWARD -m physdev !
> --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited".
>
> Anyone know how to do this in firewalld?

DISCLAIMER: I am just a lowly user of oVirt/RHEL/Fedora.

You can do almost anything you can do with iptables by using the passthrough option, although you have to make sure the rules fit the underlying iptables policy firewalld generates (by inspecting it afterwards). The following should work; note it inverts the rule, inserting an ACCEPT for bridged traffic ahead of firewalld’s reject rules rather than rejecting non-bridged traffic, which should achieve the same effect:

firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
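Alternatively, if you want a literal translation of the original REJECT rule rather than the inverted ACCEPT, firewalld’s direct interface can install it as-is. A sketch (untested on my end; check the result with the inspection commands at the bottom):

```shell
# Direct-rule form: <ipv> <table> <chain> <priority> <iptables args...>
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 \
    -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
firewall-cmd --reload

# Inspect what firewalld actually installed:
firewall-cmd --direct --get-all-rules
iptables -L FORWARD -v -n
```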