Where do I set the iSCSI iface to use when connecting to both the
hosted_storage and VM Data Domain? I believe this is related to the difficulty
I've had configuring iSCSI bonds within the oVirt engine as opposed to directly
in the underlying OS.
I've set "iscsi_default_ifaces = ovirtsan" in
controller and a switch without impacting availability.
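Outside the engine, a bound iface can be created with iscsiadm directly; a minimal sketch of what that looks like (the iface name "ovirtsan", the NIC name, and the portal address are placeholders, and doing this by hand bypasses the engine's own iSCSI-bond management):

```shell
# Create a named iface and bind it to a dedicated NIC (names are placeholders)
iscsiadm -m iface -o new -I ovirtsan
iscsiadm -m iface -o update -I ovirtsan -n iface.net_ifacename -v ens2f0

# Discover and log in to the target through that iface only
iscsiadm -m discovery -t sendtargets -p 192.0.2.10 -I ovirtsan
iscsiadm -m node -I ovirtsan -l
```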
> On Apr 2, 2017, at 7:47 AM, Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
> Il 02 Apr 2017 05:20, "Devin A. Bougie" <devin.bou...@cornell.edu> ha scritto:
We have a new 4.1.1 cluster up and running with OVS switches and an iSCSI
hosted_storage and VM data domain (same target, different LUNs). Everything
works fine, and I can configure iscsid and multipathd outside of the oVirt
engine to ensure redundancy with our iSCSI device. However, if I
Just in case anyone else runs into this, you need to set
"migration_ovs_hook_enabled=True" in vdsm.conf. It seems the vdsm.conf created
by "hosted-engine --deploy" did not list all of the options, so I overlooked
Thanks for all the help,
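For reference, the fix described above is a one-line change in vdsm's config; a sketch of the fragment (the section name is an assumption, check against the defaults shipped with your vdsm version):

```ini
# /etc/vdsm/vdsm.conf — enable the OVS migration hook
[vars]
migration_ovs_hook_enabled = true
```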
On Mar 27, 2017
We have a new 4.1.1 cluster setup. Migration of VMs that have a console /
graphics setup is failing. Migration of VMs that run headless succeeds.
The red flag in vdsm.log on the source is:
libvirtError: unsupported configuration: graphics 'listen' attribute
'192.168.55.82' must match
Hi, All. We have a new oVirt 4.1.1 cluster up with the OVS switch type.
Everything seems to be working great, except for live migration.
I believe the red flag in vdsm.log on the source is:
Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
Which results from vdsm
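The libvirtError above complains that the graphics listen address is pinned to a source-host IP, which cannot match on the destination. One host-independent form of the domain XML, as a hedged sketch (the network name is an assumption; vdsm typically exposes its networks to libvirt as vdsm-&lt;name&gt;):

```xml
<!-- Hypothetical domain XML fragment: resolve the listen address from a
     libvirt network on each host instead of a fixed IP -->
<graphics type='vnc' autoport='yes'>
  <listen type='network' network='vdsm-ovirtmgmt'/>
</graphics>
```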
On Mar 23, 2017, at 10:51 AM, Yedidyah Bar David <d...@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 4:12 PM, Devin A. Bougie
> <devin.bou...@cornell.edu> wrote:
>> Hi, All. Are there any recommendations or best practices WRT whether or not
>> to host an NFS ISO do
Hi, All. Are there any recommendations or best practices WRT whether or not to
host an NFS ISO domain from the hosted-engine VM (running the oVirt Engine
Appliance)? We have a hosted-engine 4.1.1 cluster up and running, and now just
have to decide where to serve the NFS ISO domain from.
engine VM and stop vdsmd on the host.
- Manually change the switch type to ovs in
- Restart the host
After that, everything seems to be working and new hosts are correctly set up
with the OVS switch type.
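A rough sketch of the sequence described above, under the assumption that vdsm keeps its persisted network definitions as JSON under /var/lib/vdsm/persistence/netconf (both the path layout and the "switch" key are assumptions; verify against your vdsm version before trying this):

```shell
# Stop vdsmd first so it does not rewrite the persisted network config
systemctl stop vdsmd

# Hypothetical: flip the persisted switch type from legacy to ovs
for f in /var/lib/vdsm/persistence/netconf/nets/*; do
    sed -i 's/"switch": "legacy"/"switch": "ovs"/' "$f"
done

# Restart the host so the networks are recreated with the new switch type
systemctl reboot
```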
> On Mar 16, 2017, at 4
On Mar 21, 2017, at 4:06 PM, Simone Tiraboschi wrote:
> Did you already add your first storage domain for regular VMs?
> If also that one is on iSCSI, it should be connected through a different iSCSI
Sure enough, once we added the data storage the
On Mar 20, 2017, at 12:54 PM, Simone Tiraboschi wrote:
> The engine should import it by itself once you add your first storage domain
> for regular VMs.
> No manual import actions are required.
It didn't seem to for us. I don't see it in the Storage tab (maybe I
We have a hosted-engine running on 4.1 with an iSCSI hosted_storage domain, and
are able to import the domain. However, we cannot attach the domain to the
Just to make sure I'm not missing something basic, does the engine VM need to
be able to connect to the iSCSI target itself?
Is it possible to set up a hosted engine using the OVS switch type instead of
Legacy? If it's not possible to start out as OVS, instructions for switching
from Legacy to OVS after the fact would be greatly appreciated.
Hi, All. Is it still possible or supported to run a hosted engine without
using the oVirt Engine Appliance? In other words, to install our own OS on a
VM and have it act as a hosted engine? "hosted-engine --deploy" now seems to
insist on using the oVirt Engine Appliance, but if it's possible
On Mar 11, 2017, at 10:59 AM, Chris Adams wrote:
> Hosted engine runs fine on iSCSI since oVirt 3.5. It needs a separate
> target from VM storage, but then that access is managed by the hosted
> engine HA system.
Thanks so much, Chris. It sounds like that is exactly what I
outside of ovirt / vdsm, but wasn't
sure if that would cause problems or if that was all that's needed to allow the
hosted engine to boot automatically on an iSCSI data domain.
> 2017-03-10 15:22 GMT-03:00 Devin A. Bougie <devin.bou...@cornell.edu>:
> We have an ovirt 4.1
We have an ovirt 4.1 cluster with an iSCSI data domain. If I shut down the
entire cluster and just boot the hosts, none of the hosts log in to their iSCSI
sessions until the engine comes up. Without logging into the sessions, sanlock
doesn't obtain any leases and obviously none of the VMs
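One OS-level workaround, outside oVirt's management, is to mark the nodes iscsid has already recorded for automatic login, so the sessions come back at boot before the engine does. A sketch (this changes behavior for all recorded nodes; scope it to specific targets if needed):

```shell
# Mark every recorded node for automatic login at boot
iscsiadm -m node -o update -n node.startup -v automatic

# Make sure the iSCSI services themselves start at boot
systemctl enable --now iscsid
```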
Hi, All. We have an ovirt 4.1 cluster set up using multiple paths to a single
iSCSI LUN for the data storage domain. I would now like to migrate to a hosted
I set up the new engine VM, shut down and backed up the old VM, and restored to
the new VM using engine-backup. After updating
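For reference, the engine-backup round trip described above looks roughly like this (file names are placeholders; check `engine-backup --help` on your version for the exact restore options):

```shell
# On the old engine VM: take a full backup
engine-backup --mode=backup --file=engine.backup --log=backup.log

# On the new engine VM: restore, provisioning a fresh database
engine-backup --mode=restore --file=engine.backup --log=restore.log \
              --provision-db --restore-permissions
```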
On Nov 7, 2015, at 2:10 AM, Nir Soffer wrote:
>> Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.).
> There is no such dependency.
> Sanlock is using either an lv on block device (iscsi, fcp)
Thanks, Nir! I was thinking sanlock required a
On Nov 6, 2015, at 5:02 AM, Nir Soffer <nsof...@redhat.com> wrote:
> On Thu, Nov 5, 2015 at 11:33 PM, Devin A. Bougie <devin.bou...@cornell.edu>
>> Hi, All. Is it possible to run vdsm without sanlock? We'd prefer to run
>> libvirtd with vi
Hi, All. Is it possible to run vdsm without sanlock? We'd prefer to run
libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock overhead,
but it looks like vdsmd / ovirt requires sanlock.
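For reference, the virtlockd alternative mentioned above is enabled in libvirt's qemu.conf; note this only affects libvirt-managed disk locking and, as observed, does not remove vdsm's own sanlock dependency:

```ini
# /etc/libvirt/qemu.conf — use virtlockd instead of sanlock for disk locking
lock_manager = "lockd"
```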
On Oct 26, 2015, at 1:50 AM, Maor Lipchuk wrote:
> Looks like zeroing out the metadata volume with a dd operation was working.
> Can you try to remove the Storage Domain and add it back again now
The Storage Domain disappears from the GUI and isn't seen by
get the same "Cannot zero out volume"
If I try to import, I can log into the target but it doesn't show any "Storage
Name / Storage ID (VG Name)" to import.
On Oct 25, 2015, at 12:03 PM, Maor Lipchuk wrote:
> few questions:
> Which RHEL version is installed on your Host?
> Can you please share the output of "ls -l
[root@lnx84 ~]# ls -l
On Oct 25, 2015, at 6:36 AM, Maor Lipchuk wrote:
> Is your host working with SELinux enabled?
No, selinux is disabled. Sorry, I should have mentioned that initially.
Any other suggestions would be greatly appreciated.
Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error
while executing action New SAN Storage Domain: Cannot zero out volume" error.
iscsid does log in to the node, and the volumes appear to have been created.
However, I cannot use it to create or import a Data / iSCSI
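The dd workaround mentioned earlier in the thread (zeroing out the metadata volume) can be sketched as follows; the real target would be the storage domain's metadata LV, whose path depends on your VG, so the demo below writes to a throwaway file instead:

```shell
# On a real setup the target would be the storage domain's metadata LV
# (placeholder path such as /dev/<vg-uuid>/metadata); here we use a
# throwaway file to demonstrate the dd invocation safely.
target=/tmp/fake_metadata
dd if=/dev/zero of="$target" bs=1M count=1 conv=fsync
```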