[ovirt-users] setting iSCSI iface.net_ifacename (netIfaceName)

2017-04-03 Thread Devin A. Bougie
Where do I set the iSCSI iface to use when connecting to both the hosted_storage and VM Data Domain? I believe this is related to the difficulty I've had configuring iSCSI bonds within the oVirt engine as opposed to directly in the underlying OS. I've set "iscsi_default_ifaces = ovirtsan" in
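
For reference, a dedicated open-iscsi iface such as the `ovirtsan` one mentioned above is described by a small record file. A sketch of what it might contain — the NIC name `ens1f0` is a placeholder, and the location under `/var/lib/iscsi/ifaces/` is the open-iscsi default:

```ini
# /var/lib/iscsi/ifaces/ovirtsan -- hypothetical iface record
iface.iscsi_ifacename = ovirtsan
iface.net_ifacename = ens1f0
iface.transport_name = tcp
```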

Re: [ovirt-users] iSCSI Multipathing

2017-04-02 Thread Devin A. Bougie
a controller and a switch without impacting availability. Thanks again! Devin > On Apr 2, 2017, at 7:47 AM, Gianluca Cecchi <gianluca.cec...@gmail.com> wrote: > > > > Il 02 Apr 2017 05:20, "Devin A. Bougie" <devin.bou...@cornell.edu> ha scritto: > We have a n

[ovirt-users] iSCSI Multipathing

2017-04-01 Thread Devin A. Bougie
We have a new 4.1.1 cluster up and running with OVS switches and an iSCSI hosted_storage and VM data domain (same target, different LUNs). Everything works fine, and I can configure iscsid and multipathd outside of the oVirt engine to ensure redundancy with our iSCSI device. However, if I
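
As a point of reference, the host-side multipathd settings being referred to live in /etc/multipath.conf. A minimal illustrative fragment — note that vdsm generates and manages this file on oVirt hosts, so local edits can be overwritten unless the file is marked private:

```ini
# /etc/multipath.conf -- illustrative fragment; vdsm manages this file on oVirt hosts
defaults {
    polling_interval    5
    no_path_retry       fail
    user_friendly_names no
}
```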

Re: [ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-29 Thread Devin A. Bougie
Just in case anyone else runs into this, you need to set "migration_ovs_hook_enabled=True" in vdsm.conf. It seems the vdsm.conf created by "hosted-engine --deploy" did not list all of the options, so I overlooked this one. Thanks for all the help, Devin On Mar 27, 2017
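
For anyone searching later, the setting looks like this in vdsm.conf — the [vars] section name is my assumption, so verify it against the commented template in your generated file:

```ini
# /etc/vdsm/vdsm.conf -- section name assumed; check your install
[vars]
migration_ovs_hook_enabled = true
```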

[ovirt-users] migration failures - libvirtError - listen attribute must match address attribute of first listen element

2017-03-29 Thread Devin A. Bougie
We have a new 4.1.1 cluster setup. Migration of VMs that have a console / graphics setup is failing. Migration of VMs that run headless succeeds. The red flag in vdsm.log on the source is: libvirtError: unsupported configuration: graphics 'listen' attribute '192.168.55.82' must match
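
The error refers to the domain XML, where the graphics element's legacy listen attribute must agree with the address of its first &lt;listen&gt; child. A consistent, hypothetical example (the address and port values are placeholders):

```xml
<!-- the listen attribute and the first <listen> child's address must be identical -->
<graphics type='spice' port='5900' autoport='yes' listen='192.168.55.82'>
  <listen type='address' address='192.168.55.82'/>
</graphics>
```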

[ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-27 Thread Devin A. Bougie
Hi, All. We have a new oVirt 4.1.1 cluster up with the OVS switch type. Everything seems to be working great, except for live migration. I believe the red flag in vdsm.log on the source is: Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287) Which results from vdsm

Re: [ovirt-users] NFS ISO domain from hosted-engine VM

2017-03-23 Thread Devin A. Bougie
On Mar 23, 2017, at 10:51 AM, Yedidyah Bar David <d...@redhat.com> wrote: > On Thu, Mar 23, 2017 at 4:12 PM, Devin A. Bougie > <devin.bou...@cornell.edu> wrote: >> Hi, All. Are there any recommendations or best practices WRT whether or not >> to host an NFS ISO do

[ovirt-users] NFS ISO domain from hosted-engine VM

2017-03-23 Thread Devin A. Bougie
Hi, All. Are there any recommendations or best practices WRT whether or not to host an NFS ISO domain from the hosted-engine VM (running the oVirt Engine Appliance)? We have a hosted-engine 4.1.1 cluster up and running, and now just have to decide where to serve the NFS ISO domain from. Many

Re: [ovirt-users] OVS switch type for hosted-engine

2017-03-23 Thread Devin A. Bougie
engine VM and stop vdsmd on the host. - Manually change the switch type to ovs in /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt - Restart the host After that, everything seems to be working and new hosts are correctly setup with the OVS switch type. Thanks, Devin > On Mar 16, 2017, at 4
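
The persisted network config being edited is a small JSON file. A sketch of what the changed ovirtmgmt entry might contain — the surrounding keys and values are illustrative, and only the "switch" key is the one being changed:

```json
{
  "nic": "em1",
  "bootproto": "dhcp",
  "defaultRoute": true,
  "switch": "ovs"
}
```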

Re: [ovirt-users] hosted-engine with iscsi storage domain

2017-03-23 Thread Devin A. Bougie
Hi Simone, On Mar 21, 2017, at 4:06 PM, Simone Tiraboschi wrote: > Did you already add your first storage domain for regular VMs? > If also that one is on iSCSI, it should be connected through a different iSCSI > portal. Sure enough, once we added the data storage the

Re: [ovirt-users] hosted-engine with iscsi storage domain

2017-03-21 Thread Devin A. Bougie
On Mar 20, 2017, at 12:54 PM, Simone Tiraboschi wrote: > The engine should import it by itself once you add your first storage domain > for regular VMs. > No manual import actions are required. It didn't seem to for us. I don't see it in the Storage tab (maybe I

[ovirt-users] hosted-engine with iscsi storage domain

2017-03-20 Thread Devin A. Bougie
We have a hosted-engine running on 4.1 with an iSCSI hosted_storage domain, and are able to import the domain. However, we cannot attach the domain to the data center. Just to make sure I'm not missing something basic, does the engine VM need to be able to connect to the iSCSI target itself?

[ovirt-users] OVS switch type for hosted-engine

2017-03-16 Thread Devin A. Bougie
Is it possible to setup a hosted engine using the OVS switch type instead of Legacy? If it's not possible to start out as OVS, instructions for switching from Legacy to OVS after the fact would be greatly appreciated. Many thanks, Devin

[ovirt-users] hosted engine without the appliance?

2017-03-14 Thread Devin A. Bougie
Hi, All. Is it still possible or supported to run a hosted engine without using the oVirt Engine Appliance? In other words, to install our own OS on a VM and have it act as a hosted engine? "hosted-engine --deploy" now seems to insist on using the oVirt Engine Appliance, but if it's possible

Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Devin A. Bougie
On Mar 11, 2017, at 10:59 AM, Chris Adams wrote: > Hosted engine runs fine on iSCSI since oVirt 3.5. It needs a separate > target from VM storage, but then that access is managed by the hosted > engine HA system. Thanks so much, Chris. It sounds like that is exactly what I

Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Devin A. Bougie
outside of ovirt / vdsm, but wasn't sure if that would cause problems or if that was all that's needed to allow the hosted engine to boot automatically on an iSCSI data domain. Thanks again, Devin > 2017-03-10 15:22 GMT-03:00 Devin A. Bougie <devin.bou...@cornell.edu>: > We have an ovirt 4.1

[ovirt-users] iscsi data domain when engine is down

2017-03-10 Thread Devin A. Bougie
We have an ovirt 4.1 cluster with an iSCSI data domain. If I shut down the entire cluster and just boot the hosts, none of the hosts log in to their iSCSI sessions until the engine comes up. Without logging into the sessions, sanlock doesn't obtain any leases and obviously none of the VMs
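
One way to make hosts log in at boot, independently of the engine, is the per-node startup setting in the open-iscsi node records — oVirt/vdsm normally leaves this at manual so it can manage sessions itself, so treat this as a workaround sketch rather than a supported configuration:

```ini
# /var/lib/iscsi/nodes/<target-iqn>/<portal>/default -- relevant line only
node.startup = automatic
```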

[ovirt-users] migrate to hosted engine

2017-03-10 Thread Devin A. Bougie
Hi, All. We have an ovirt 4.1 cluster setup using multiple paths to a single iSCSI LUN for the data storage domain. I would now like to migrate to a hosted engine. I set up the new engine VM, shut down and backed up the old VM, and restored to the new VM using engine-backup. After updating

Re: [ovirt-users] vdsm without sanlock

2015-11-07 Thread Devin A. Bougie
On Nov 7, 2015, at 2:10 AM, Nir Soffer wrote: >> Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.). > > There is no such dependency. > Sanlock is using either an lv on block device (iscsi, fcp) Thanks, Nir! I was thinking sanlock required a

Re: [ovirt-users] vdsm without sanlock

2015-11-06 Thread Devin A. Bougie
Hi Nir, On Nov 6, 2015, at 5:02 AM, Nir Soffer <nsof...@redhat.com> wrote: > On Thu, Nov 5, 2015 at 11:33 PM, Devin A. Bougie <devin.bou...@cornell.edu> > wrote: >> Hi, All. Is it possible to run vdsm without sanlock? We'd prefer to run >> libvirtd with vi

[ovirt-users] vdsm without sanlock

2015-11-05 Thread Devin A. Bougie
Hi, All. Is it possible to run vdsm without sanlock? We'd prefer to run libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock overhead, but it looks like vdsmd / ovirt requires sanlock. Thanks, Devin
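
For context, the virtlockd setup being referred to lives in libvirt's qemu driver configuration:

```ini
# /etc/libvirt/qemu.conf
lock_manager = "lockd"
```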

Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-26 Thread Devin A. Bougie
Hi Maor, On Oct 26, 2015, at 1:50 AM, Maor Lipchuk wrote: > Looks like zeroing out the metadata volume with a dd operation was working. > Can you try to remove the Storage Domain and add it back again now? The Storage Domain disappears from the GUI and isn't seen by

Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-26 Thread Devin A. Bougie
get the same "Cannot zero out volume" error. If I try to import, I can log into the target but it doesn't show any "Storage Name / Storage ID (VG Name)" to import. Thanks again, Devin

Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-25 Thread Devin A. Bougie
Hi Maor, On Oct 25, 2015, at 12:03 PM, Maor Lipchuk wrote: > few questions: > Which RHEL version is installed on your Host? 7.1 > Can you please share the output of "ls -l > /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/" [root@lnx84 ~]# ls -l

Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-25 Thread Devin A. Bougie
Hi Maor, On Oct 25, 2015, at 6:36 AM, Maor Lipchuk wrote: > Is your host working with SELinux enabled? No, selinux is disabled. Sorry, I should have mentioned that initially. Any other suggestions would be greatly appreciated. Many thanks! Devin

[ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-23 Thread Devin A. Bougie
Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error while executing action New SAN Storage Domain: Cannot zero out volume" error. iscsid does log in to the node, and the volumes appear to have been created. However, I cannot use it to create or import a Data / iSCSI