[ovirt-users] setting iSCSI iface.net_ifacename (netIfaceName)

2017-04-03 Thread Devin A. Bougie
Where do I set the iSCSI iface to use when connecting to both the 
hosted_storage and VM Data Domain?  I believe this is related to the difficulty 
I've had configuring iSCSI bonds within the oVirt engine as opposed to directly 
in the underlying OS.

I've set "iscsi_default_ifaces = ovirtsan" in vdsm.conf, but vdsmd still 
insists on using the default iface and vdsm.log shows:
2017-04-03 11:17:21,109-0400 INFO  (jsonrpc/5) [storage.ISCSI] iSCSI 
iface.net_ifacename not provided. Skipping. (iscsi:590)
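
In case it helps anyone searching the archives later, the iface itself can be 
created and bound to a NIC with iscsiadm outside of vdsm, roughly as below 
(ens1f0 here is just a stand-in for whichever interface carries the SAN traffic):

# iscsiadm -m iface -I ovirtsan --op=new
# iscsiadm -m iface -I ovirtsan --op=update -n iface.net_ifacename -v ens1f0
# iscsiadm -m iface -I ovirtsan

What I can't find is where to tell vdsm/engine to actually pass that iface when 
it logs the sessions in.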

Many thanks,
Devin


Re: [ovirt-users] iSCSI Multipathing

2017-04-02 Thread Devin A. Bougie
Thanks for following up, Gianluca.  At this point, my main question is why 
should I configure iSCSI Bonds within the oVirt engine instead of or in 
addition to configuring iSCSI initiators and multipathd directly in the host's 
OS.

The multipath.conf created by VDSM works fine with our devices, as do the stock 
EL6/7 kernels and drivers.  We've had great success using these devices for 
over a decade in various EL6/7 High-Availability server clusters, and when we 
configure everything manually they seem to work great with oVirt.  We're just 
wondering exactly what the advantage is to taking the next step of configuring 
iSCSI Bonds within the oVirt engine.
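
To be clear, by "configure everything manually" I just mean the usual open-iscsi 
and multipath steps on each host, roughly along these lines (the portal addresses 
below are placeholders for our two SAN subnets):

# iscsiadm -m discovery -t sendtargets -p 192.168.56.10
# iscsiadm -m discovery -t sendtargets -p 192.168.57.10
# iscsiadm -m node -L all
# multipath -ll

after which the multipath.conf that VDSM generates picks the paths up fine.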

For what it's worth, these are Infortrend ESDS devices with redundant 
controllers and two 10GbE ports per controller.  We connect each host and each 
controller to two separate switches, so we can simultaneously lose both a 
controller and a switch without impacting availability.

Thanks again!
Devin

> On Apr 2, 2017, at 7:47 AM, Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
> 
> 
> 
> Il 02 Apr 2017 05:20, "Devin A. Bougie" <devin.bou...@cornell.edu> ha scritto:
> We have a new 4.1.1 cluster up and running with OVS switches and an iSCSI 
> hosted_storage and VM data domain (same target, different LUN's).  Everything 
> works fine, and I can configure iscsid and multipathd outside of the oVirt 
> engine to ensure redundancy with our iSCSI device.  However, if I try to 
> configure iSCSI Multipathing within the engine, all of the hosts get stuck in 
> the "Connecting" status and the Data Center and Storage Domains go down.  The 
> hosted engine, however, continues to work just fine.
> 
> Before I provide excerpts from our logs and more details on what we're 
> seeing, it would be helpful to understand better what the advantages are of 
> configuring iSCSI Bonds within the oVirt engine.  Is this mainly a feature 
> for oVirt users that don't have experience configuring and managing iscsid 
> and multipathd directly?  Or, is it important to actually setup iSCSI Bonds 
> within the engine instead of directly in the underlying OS?
> 
> Any advice or links to documentation I've overlooked would be greatly 
> appreciated.
> 
> Many thanks,
> Devin
> 
> What kind of iSCSI storage array are you using?



[ovirt-users] iSCSI Multipathing

2017-04-01 Thread Devin A. Bougie
We have a new 4.1.1 cluster up and running with OVS switches and an iSCSI 
hosted_storage and VM data domain (same target, different LUN's).  Everything 
works fine, and I can configure iscsid and multipathd outside of the oVirt 
engine to ensure redundancy with our iSCSI device.  However, if I try to 
configure iSCSI Multipathing within the engine, all of the hosts get stuck in 
the "Connecting" status and the Data Center and Storage Domains go down.  The 
hosted engine, however, continues to work just fine.

Before I provide excerpts from our logs and more details on what we're seeing, 
it would be helpful to understand better what the advantages are of configuring 
iSCSI Bonds within the oVirt engine.  Is this mainly a feature for oVirt users 
that don't have experience configuring and managing iscsid and multipathd 
directly?  Or, is it important to actually setup iSCSI Bonds within the engine 
instead of directly in the underlying OS?  

Any advice or links to documentation I've overlooked would be greatly 
appreciated.

Many thanks,
Devin


Re: [ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-29 Thread Devin A. Bougie
Just in case anyone else runs into this, you need to set 
"migration_ovs_hook_enabled=True" in vdsm.conf.  It seems the vdsm.conf created 
by "hosted-engine --deploy" did not list all of the options, so I overlooked 
this one.
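
For reference, the change on each host amounted to this in /etc/vdsm/vdsm.conf 
(I believe the option belongs in the [vars] section; adjust if your vdsm.conf is 
organized differently), followed by a restart of vdsmd:

[vars]
migration_ovs_hook_enabled = True

# systemctl restart vdsmd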

Thanks for all the help,
Devin

On Mar 27, 2017, at 11:10 AM, Devin A. Bougie <devin.bou...@cornell.edu> wrote:
> Hi, All.  We have a new oVirt 4.1.1 cluster up with the OVS switch type.  
> Everything seems to be working great, except for live migration.
> 
> I believe the red flag in vdsm.log on the source is:
> Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
> 
> Which results from vdsm assigning an arbitrary bridge name to each ovs bridge.
> 
> Please see below for more details on the bridges and excerpts from the logs.  
> Any help would be greatly appreciated.
> 
> Many thanks,
> Devin
> 
> SOURCE OVS BRIDGES:
> # ovs-vsctl show
> 6d96d9a5-e30d-455b-90c7-9e9632574695
>     Bridge "vdsmbr_QwORbsw2"
>         Port "vdsmbr_QwORbsw2"
>             Interface "vdsmbr_QwORbsw2"
>                 type: internal
>         Port "vnet0"
>             Interface "vnet0"
>         Port classepublic
>             Interface classepublic
>                 type: internal
>         Port "ens1f0"
>             Interface "ens1f0"
>     Bridge "vdsmbr_9P7ZYKWn"
>         Port ovirtmgmt
>             Interface ovirtmgmt
>                 type: internal
>         Port "ens1f1"
>             Interface "ens1f1"
>         Port "vdsmbr_9P7ZYKWn"
>             Interface "vdsmbr_9P7ZYKWn"
>                 type: internal
>     ovs_version: "2.7.0"
> 
> DESTINATION OVS BRIDGES:
> # ovs-vsctl show
> f66d765d-712a-4c81-b18e-da1acc9cfdde
>     Bridge "vdsmbr_vdpp0dOd"
>         Port "vdsmbr_vdpp0dOd"
>             Interface "vdsmbr_vdpp0dOd"
>                 type: internal
>         Port "ens1f0"
>             Interface "ens1f0"
>         Port classepublic
>             Interface classepublic
>                 type: internal
>     Bridge "vdsmbr_3sEwEKd1"
>         Port "vdsmbr_3sEwEKd1"
>             Interface "vdsmbr_3sEwEKd1"
>                 type: internal
>         Port "ens1f1"
>             Interface "ens1f1"
>         Port ovirtmgmt
>             Interface ovirtmgmt
>                 type: internal
>     ovs_version: "2.7.0"
> 
> 
> SOURCE VDSM LOG:
> ...
> 2017-03-27 10:57:02,567-0400 INFO  (jsonrpc/1) [vdsm.api] START migrate 
> args=(, {u'incomingLimit': 2, u'src': 
> u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', 
> u'tunneled': u'false', u'enableGuestEvents': False, u'dst': 
> u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': 
> u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', 
> u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, 
> u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37)
> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [vdsm.api] FINISH migrate 
> return={'status': {'message': 'Migration in progress', 'code': 0}, 
> 'progress': 0} (api:43)
> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC 
> call VM.migrate succeeded in 0.01 seconds (__init__:515)
> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM 
> took: 0 seconds (migration:455)
> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
> qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri 
> tcp://192.168.55.81 (migration:480)
> 2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 
> 'vdsmbr_QwORbsw2': No such device (migration:287)
> 2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate 
> (migration:429)
> Traceback (most recent call last):
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in 
> run
>self._startUnderlyingMigration(time.time())
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in 
> _startUnderlyingMigration
>self._perform_with_downtime_thread(duri, muri)
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in 
> _perform_with_downtime_thread
>self._perform_migration(duri, muri)
>  File "/usr/lib/

[ovirt-users] migration failures - libvirtError - listen attribute must match address attribute of first listen element

2017-03-29 Thread Devin A. Bougie
We have a new 4.1.1 cluster setup.  Migration of VM's that have a console / 
graphics setup is failing.  Migration of VM's that run headless succeeds.

The red flag in vdsm.log on the source is:
libvirtError: unsupported configuration: graphics 'listen' attribute 
'192.168.55.82' must match 'address' attribute of first listen element (found 
'192.168.55.84')

This happens when the console is set to either VNC or SPICE.  Please see below 
for larger excerpts from vdsm.log on the source and destination.  Any help 
would be greatly appreciated.
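
In case it's useful alongside the logs below, the listen address libvirt has 
recorded for the VM can be checked on the source host with something like the 
following (the VM name is a placeholder):

# virsh -r dumpxml <vm-name> | grep -A2 '<graphics'

which, judging by the error above, shows the source's own 192.168.55.82 address 
baked into the graphics element.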

Many thanks,
Devin

SOURCE:
--
2017-03-29 09:53:30,314-0400 INFO  (jsonrpc/5) [vdsm.api] START migrate 
args=(, {u'incomingLimit': 2, u'src': 
u'192.168.55.82', u'dstqemu': u'192.168.55.84', u'autoConverge': u'false', 
u'tunneled': u'false', u'enableGuestEvents': False, u'dst': u'192.168.55.84:54321', 
u'vmId': u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', 
u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, u'method': 
u'online', 'mode': 'remote'}) kwargs={} (api:37)
2017-03-29 09:53:30,315-0400 INFO  (jsonrpc/5) [vdsm.api] FINISH migrate 
return={'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 
0} (api:43)
2017-03-29 09:53:30,315-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call 
VM.migrate succeeded in 0.00 seconds (__init__:515)
2017-03-29 09:53:30,444-0400 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:52494 
(protocoldetector:72)
2017-03-29 09:53:30,450-0400 INFO  (Reactor thread) [ProtocolDetector.Detector] 
Detected protocol stomp from ::1:52494 (protocoldetector:127)
2017-03-29 09:53:30,450-0400 INFO  (Reactor thread) [Broker.StompAdapter] 
Processing CONNECT request (stompreactor:102)
2017-03-29 09:53:30,451-0400 INFO  (JsonRpc (StompReactor)) 
[Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-29 09:53:30,628-0400 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
Host.getHardwareInfo succeeded in 0.00 seconds (__init__:515)
2017-03-29 09:53:30,630-0400 INFO  (jsonrpc/7) [dispatcher] Run and protect: 
repoStats(options=None) (logUtils:51)
2017-03-29 09:53:30,631-0400 INFO  (jsonrpc/7) [dispatcher] Run and protect: 
repoStats, Return response: {u'016ceee8-9117-4e8a-b611-f58f6763a098': {'code': 
0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000226545', 
'lastCheck': '3.2', 'valid': True}, u'2438f819-e7f5-4bb1-ad0d-5349fa371e6e': 
{'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000232943', 
'lastCheck': '3.1', 'valid': True}, u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a': 
{'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000612878', 
'lastCheck': '8.3', 'valid': True}} (logUtils:54)
2017-03-29 09:53:30,631-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:515)
2017-03-29 09:53:30,701-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM took: 
0 seconds (migration:455)
2017-03-29 09:53:30,701-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
qemu+tls://192.168.55.84/system with miguri tcp://192.168.55.84 (migration:480)
2017-03-29 09:53:31,120-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') unsupported configuration: 
graphics 'listen' attribute '192.168.55.82' must match 'address' attribute of 
first listen element (found '192.168.55.84') (migration:287)
2017-03-29 09:53:31,206-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in 
run
self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in 
_startUnderlyingMigration
self._perform_with_downtime_thread(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in 
_perform_with_downtime_thread
self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in 
_perform_migration
self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, 
in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in 
migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
dom=self)
libvirtError: unsupported configuration: graphics 'listen' attribute 
'192.168.55.82' must match 'address' attribute of first 

[ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-27 Thread Devin A. Bougie
Hi, All.  We have a new oVirt 4.1.1 cluster up with the OVS switch type.  
Everything seems to be working great, except for live migration.

I believe the red flag in vdsm.log on the source is:
Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)

Which results from vdsm assigning an arbitrary bridge name to each ovs bridge.
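
The quickest way to see the mismatch is to list just the bridge names on each 
host (the full "ovs-vsctl show" output is below):

# ovs-vsctl list-br

On the source that returns vdsmbr_QwORbsw2 and vdsmbr_9P7ZYKWn, while the 
destination has vdsmbr_vdpp0dOd and vdsmbr_3sEwEKd1, so presumably the bridge 
named in the migrating domain's XML simply doesn't exist on the target.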

Please see below for more details on the bridges and excerpts from the logs.  
Any help would be greatly appreciated.

Many thanks,
Devin

SOURCE OVS BRIDGES:
# ovs-vsctl show
6d96d9a5-e30d-455b-90c7-9e9632574695
    Bridge "vdsmbr_QwORbsw2"
        Port "vdsmbr_QwORbsw2"
            Interface "vdsmbr_QwORbsw2"
                type: internal
        Port "vnet0"
            Interface "vnet0"
        Port classepublic
            Interface classepublic
                type: internal
        Port "ens1f0"
            Interface "ens1f0"
    Bridge "vdsmbr_9P7ZYKWn"
        Port ovirtmgmt
            Interface ovirtmgmt
                type: internal
        Port "ens1f1"
            Interface "ens1f1"
        Port "vdsmbr_9P7ZYKWn"
            Interface "vdsmbr_9P7ZYKWn"
                type: internal
    ovs_version: "2.7.0"

DESTINATION OVS BRIDGES:
# ovs-vsctl show
f66d765d-712a-4c81-b18e-da1acc9cfdde
    Bridge "vdsmbr_vdpp0dOd"
        Port "vdsmbr_vdpp0dOd"
            Interface "vdsmbr_vdpp0dOd"
                type: internal
        Port "ens1f0"
            Interface "ens1f0"
        Port classepublic
            Interface classepublic
                type: internal
    Bridge "vdsmbr_3sEwEKd1"
        Port "vdsmbr_3sEwEKd1"
            Interface "vdsmbr_3sEwEKd1"
                type: internal
        Port "ens1f1"
            Interface "ens1f1"
        Port ovirtmgmt
            Interface ovirtmgmt
                type: internal
    ovs_version: "2.7.0"


SOURCE VDSM LOG:
...
2017-03-27 10:57:02,567-0400 INFO  (jsonrpc/1) [vdsm.api] START migrate 
args=(, {u'incomingLimit': 2, u'src': 
u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', 
u'tunneled': u'false', u'enableGuestEvents': False, u'dst': 
u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': 
u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', 
u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, u'method': 
u'online', 'mode': 'remote'}) kwargs={} (api:37)
2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [vdsm.api] FINISH migrate 
return={'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 
0} (api:43)
2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
VM.migrate succeeded in 0.01 seconds (__init__:515)
2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM took: 
0 seconds (migration:455)
2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri 
tcp://192.168.55.81 (migration:480)
2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 
'vdsmbr_QwORbsw2': No such device (migration:287)
2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in 
run
self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in 
_startUnderlyingMigration
self._perform_with_downtime_thread(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in 
_perform_with_downtime_thread
self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in 
_perform_migration
self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, 
in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in 
migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
dom=self)
libvirtError: Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device
2017-03-27 10:57:03,435-0400 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:33716 
(protocoldetector:72)
2017-03-27 10:57:03,452-0400 INFO  (Reactor thread) [ProtocolDetector.Detector] 
Detected protocol stomp from ::1:33716 (protocoldetector:127)
2017-03-27 10:57:03,452-0400 INFO  (Reactor thread) [Broker.StompAdapter] 
Processing CONNECT 

Re: [ovirt-users] NFS ISO domain from hosted-engine VM

2017-03-23 Thread Devin A. Bougie
On Mar 23, 2017, at 10:51 AM, Yedidyah Bar David <d...@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 4:12 PM, Devin A. Bougie
> <devin.bou...@cornell.edu> wrote:
>> Hi, All.  Are there any recommendations or best practices WRT whether or not 
>> to host an NFS ISO domain from the hosted-engine VM (running the oVirt 
>> Engine Appliance)?  We have a hosted-engine 4.1.1 cluster up and running, 
>> and now just have to decide where to serve the NFS ISO domain from.
> 
> NFS ISO domain on the engine machine is generally deprecated, and
> specifically problematic for hosted-engine, see also:

Thanks, Didi!  I'll go ahead and setup the NFS ISO domain in a separate cluster.

Sincerely,
Devin

> https://bugzilla.redhat.com/show_bug.cgi?id=1332813
> I recently pushed a patch to remove the question about it altogether:
> https://gerrit.ovirt.org/74409
> 
> Best,
> -- 
> Didi



[ovirt-users] NFS ISO domain from hosted-engine VM

2017-03-23 Thread Devin A. Bougie
Hi, All.  Are there any recommendations or best practices WRT whether or not to 
host an NFS ISO domain from the hosted-engine VM (running the oVirt Engine 
Appliance)?  We have a hosted-engine 4.1.1 cluster up and running, and now just 
have to decide where to serve the NFS ISO domain from.

Many thanks,
Devin


Re: [ovirt-users] OVS switch type for hosted-engine

2017-03-23 Thread Devin A. Bougie
Just to close this thread, we were able to manually convert our hosted-engine 
4.1.1 cluster from the Legacy to the OVS switch type.  Very roughly:
- Install first host and VM using "hosted-engine --deploy"
- In the Engine UI, change Cluster switch type from Legacy to OVS
- Shutdown the engine VM and stop vdsmd on the host.
- Manually change the switch type to ovs in 
/var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
- Restart the host

After that, everything seems to be working and new hosts are correctly setup 
with the OVS switch type.
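
For anyone else attempting this, the persistence file is plain JSON and the 
manual edit boiled down to one key.  Something like the following should do it, 
though I'd back the file up first and double-check the exact key name and value 
formatting in your vdsm version:

# sed -i 's/"switch": "legacy"/"switch": "ovs"/' \
    /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt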

Thanks,
Devin

> On Mar 16, 2017, at 4:06 PM, Devin A. Bougie <devin.bou...@cornell.edu> wrote:
> 
> Is it possible to setup a hosted engine using the OVS switch type instead of 
> Legacy?  If it's not possible to start out as OVS, instructions for switching 
> from Legacy to OVS after the fact would be greatly appreciated.
> 
> Many thanks,
> Devin



Re: [ovirt-users] hosted-engine with iscsi storage domain

2017-03-23 Thread Devin A. Bougie
Hi Simone,

On Mar 21, 2017, at 4:06 PM, Simone Tiraboschi  wrote:
> Did you already add your first storage domain for regular VMs?
> If also that one is on iSCSI, it should be connected through a different iSCSI 
> portal.

Sure enough, once we added the data storage the hosted-storage imported and 
attached successfully.  Both our hosted-storage and our VM data storage are 
from the same iSCSI target(s), but separate LUNs.

Many thanks!
Devin


Re: [ovirt-users] hosted-engine with iscsi storage domain

2017-03-21 Thread Devin A. Bougie
On Mar 20, 2017, at 12:54 PM, Simone Tiraboschi  wrote:
> The engine should import it by itself once you add your first storage domain 
> for regular VMs.
> No manual import actions are required.

It didn't seem to for us.  I don't see it in the Storage tab (maybe I 
shouldn't?).  I can install a new host from the engine web ui, but I don't see 
any hosted-engine options.  If I put the new host in maintenance and reinstall, 
I can select DEPLOY under "Choose hosted engine deployment action."  However, 
the web UI then complains that:
Cannot edit Host.  You are using an unmanaged hosted engine VM.  Please upgrade 
the cluster level to 3.6 and wait for the hosted engine storage domain to be 
properly imported.

This is on a new 4.1 cluster with the hosted-engine created using hosted-engine 
--deploy on the first host.

> No, a separate network for the storage is even recommended.

Glad to hear, thanks!

Devin



[ovirt-users] hosted-engine with iscsi storage domain

2017-03-20 Thread Devin A. Bougie
We have a hosted-engine running on 4.1 with an iSCSI hosted_storage domain, and 
are able to import the domain.  However, we cannot attach the domain to the 
data center.

Just to make sure I'm not missing something basic, does the engine VM need to 
be able to connect to the iSCSI target itself?  In other words, does the iSCSI 
traffic need to go over the ovirtmgmt bridged network?  Currently we have the 
iSCSI SAN on a separate subnet from ovirtmgmt, so the hosted-engine VM can't 
directly see the iSCSI storage.

Thanks,
Devin


[ovirt-users] OVS switch type for hosted-engine

2017-03-16 Thread Devin A. Bougie
Is it possible to setup a hosted engine using the OVS switch type instead of 
Legacy?  If it's not possible to start out as OVS, instructions for switching 
from Legacy to OVS after the fact would be greatly appreciated.

Many thanks,
Devin


[ovirt-users] hosted engine without the appliance?

2017-03-14 Thread Devin A. Bougie
Hi, All.  Is it still possible or supported to run a hosted engine without 
using the oVirt Engine Appliance?  In other words, to install our own OS on a 
VM and have it act as a hosted engine?  "hosted-engine --deploy" now seems to 
insist on using the oVirt Engine Appliance, but if it's possible and not a 
mistake we'd prefer to run and manage our own OS.

Thanks!
Devin


Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Devin A. Bougie
On Mar 11, 2017, at 10:59 AM, Chris Adams  wrote:
> Hosted engine runs fine on iSCSI since oVirt 3.5.  It needs a separate
> target from VM storage, but then that access is managed by the hosted
> engine HA system.

Thanks so much, Chris.  It sounds like that is exactly what I was missing.  

It would be great to know how to add multiple paths to the hosted engine's 
iSCSI target, but hopefully I can figure that out once I have things up and 
running.
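
My working assumption is that logging the additional portal in by hand and 
letting multipathd pick up the extra path would be enough, something along these 
lines (the portal address and target IQN are placeholders):

# iscsiadm -m discovery -t sendtargets -p 192.168.56.11
# iscsiadm -m node -T <hosted-engine-target-iqn> -p 192.168.56.11 --login
# multipath -ll

If there's a supported way to teach hosted-engine about the second path directly, 
I'd love to hear it.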

Thanks again,
Devin

> 
> If all the engine hosts are shut down together, it will take a bit after
> boot for the HA system to converge and try to bring the engine back
> online (including logging in to the engine iSCSI LUN).  You can force
> this on one host by running "hosted-engine --vm-start".
> 
> -- 
> Chris Adams 



Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Devin A. Bougie
On Mar 10, 2017, at 1:28 PM, Juan Pablo <pablo.localh...@gmail.com> wrote:
> Hi, what kind of setup you have? hosted engine just runs on nfs or gluster 
> afaik.

Thanks for replying, Juan.  I was under the impression that the hosted engine 
would run on an iSCSI data domain, based on 
http://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine-iscsi-support/
 and the fact that "hosted-engine --deploy" does give you the option to choose 
iscsi storage (but only one path, as far as I can tell).

I certainly could manage the iSCSI sessions outside of ovirt / vdsm, but wasn't 
sure if that would cause problems or if that was all that's needed to allow the 
hosted engine to boot automatically on an iSCSI data domain.

Thanks again,
Devin

> 2017-03-10 15:22 GMT-03:00 Devin A. Bougie <devin.bou...@cornell.edu>:
> We have an ovirt 4.1 cluster with an iSCSI data domain.  If I shut down the 
> entire cluster and just boot the hosts, none of the hosts login to their 
> iSCSI sessions until the engine comes up.  Without logging into the sessions, 
> sanlock doesn't obtain any leases and obviously none of the VMs start.
> 
> I'm sure there's something I'm missing, as it looks like it should be 
> possible to run a hosted engine on a cluster using an iSCSI data domain.
> 
> Any pointers or suggestions would be greatly appreciated.
> 
> Many thanks,
> Devin



[ovirt-users] iscsi data domain when engine is down

2017-03-10 Thread Devin A. Bougie
We have an ovirt 4.1 cluster with an iSCSI data domain.  If I shut down the 
entire cluster and just boot the hosts, none of the hosts login to their iSCSI 
sessions until the engine comes up.  Without logging into the sessions, sanlock 
doesn't obtain any leases and obviously none of the VMs start.

I'm sure there's something I'm missing, as it looks like it should be possible 
to run a hosted engine on a cluster using an iSCSI data domain.

Any pointers or suggestions would be greatly appreciated.

Many thanks,
Devin


[ovirt-users] migrate to hosted engine

2017-03-10 Thread Devin A. Bougie
Hi, All.  We have an ovirt 4.1 cluster setup using multiple paths to a single 
iSCSI LUN for the data storage domain.  I would now like to migrate to a hosted 
engine.

I setup the new engine VM, shutdown and backed-up the old VM, and restored to 
the new VM using engine-backup.  After updating DNS to change our engine's FQDN 
to point to the hosted engine, everything seems to work properly.  However, 
when rebooting the entire cluster, the engine VM doesn't come up automatically.
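
For the record, the backup/restore itself was the standard engine-backup round 
trip, roughly as below (file names are placeholders; check "engine-backup --help" 
for the restore flags that match your version):

# engine-backup --mode=backup --file=engine.backup --log=backup.log
# engine-backup --mode=restore --file=engine.backup --log=restore.log \
    --provision-db --restore-permissions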

Is there anything that now needs to be done to tell the cluster that it's now 
using a hosted engine?  

I started with a  standard engine setup, as I didn't see a way to specify 
multiple paths to a single iSCSI LUN when using "hosted-engine --deploy."

Any tips would be greatly appreciated.

Many thanks,
Devin


Re: [ovirt-users] vdsm without sanlock

2015-11-07 Thread Devin A. Bougie
On Nov 7, 2015, at 2:10 AM, Nir Soffer  wrote:
>> Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.).
> 
> There is no such dependency.
> Sanlock is using either an lv on block device (iscsi, fcp)

Thanks, Nir!  I was thinking sanlock required a disk_lease_dir, which all the 
documentation says to put on NFS or GFS2.  However, as you say I now see that 
ovirt can use sanlock with block devices without requiring a disk_lease_dir.

Thanks again,
Devin



Re: [ovirt-users] vdsm without sanlock

2015-11-06 Thread Devin A. Bougie
Hi Nir,

On Nov 6, 2015, at 5:02 AM, Nir Soffer <nsof...@redhat.com> wrote:
> On Thu, Nov 5, 2015 at 11:33 PM, Devin A. Bougie <devin.bou...@cornell.edu> 
> wrote:
>> Hi, All.  Is it possible to run vdsm without sanlock?  We'd prefer to run 
>> libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock 
>> overhead, but it looks like vdsmd / ovirt requires sanlock.
> 
> True, we require sanlock.
> What is "sanlock overhead"?

Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.).  I 
have no problem setting up the filesystem or configuring sanlock to use it, but 
then the vm's fail if the shared filesystem blocks or fails.  We'd like to have 
our vm images use block devices and avoid any dependency on a remote or shared 
file system.  My understanding is that virtlockd can lock a block device 
directly, while sanlock requires something like gfs2 or nfs.

Perhaps it's my misunderstanding or misreading, but it seemed like things were 
moving in the direction of virtlockd.  For example:
http://lists.ovirt.org/pipermail/devel/2015-March/010127.html
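
For comparison, enabling virtlockd on a plain libvirt host is minimal, which is 
part of the appeal (a sketch only, and obviously outside of anything vdsm 
manages):

# /etc/libvirt/qemu.conf
lock_manager = "lockd"

# systemctl enable virtlockd
# systemctl restart libvirtd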

Thanks for following up!

Devin


[ovirt-users] vdsm without sanlock

2015-11-05 Thread Devin A. Bougie
Hi, All.  Is it possible to run vdsm without sanlock?  We'd prefer to run 
libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock overhead, 
but it looks like vdsmd / ovirt requires sanlock.

Thanks,
Devin


Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-26 Thread Devin A. Bougie
Hi Maor,

On Oct 26, 2015, at 1:50 AM, Maor Lipchuk  wrote:
> Looks like zeroing out the metadata volume with a dd operation was working.
> Can u try to remove the Storage Domain and add it back again now

The Storage Domain disappears from the GUI and isn't seen by ovirt-shell, so 
I'm not sure how to delete it.  Is there a more low-level command I should run?

Nothing changed after upgrading to 3.5.5, so I've now created a bug report.

https://bugzilla.redhat.com/show_bug.cgi?id=1275381

Thanks again,
Devin



Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-26 Thread Devin A. Bougie
On Oct 26, 2015, at 2:27 PM, Maor Lipchuk <mlipc...@redhat.com> wrote:
> If you try to import (or add) the Storage Domain again, what do you get?

If I try to add it again, it first says "The following LUNs are already in use: 
...," but if I "Approve operation" I get the same "Cannot zero out volume" 
error.  

If I try to import, I can log into the target but it doesn't show any "Storage 
Name / Storage ID (VG Name)" to import.

Thanks again,
Devin

> 
> Regards,
> Maor
> 
> 
> 
> - Original Message -
>> From: "Devin A. Bougie" <devin.bou...@cornell.edu>
>> To: "Maor Lipchuk" <mlipc...@redhat.com>
>> Cc: Users@ovirt.org
>> Sent: Monday, October 26, 2015 7:47:31 PM
>> Subject: Re: [ovirt-users] Error while executing action New SAN Storage 
>> Domain: Cannot zero out volume
>> 
>> Hi Maor,
>> 
>> On Oct 26, 2015, at 1:50 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:
>>> Looks like zeroing out the metadata volume with a dd operation was working.
>>> Can u try to remove the Storage Domain and add it back again now
>> 
>> The Storage Domain disappears from the GUI and isn't seen by ovirt-shell, so
>> I'm not sure how to delete it.  Is there a more low-level command I should
>> run?
>> 
>> Nothing changed after upgrading to 3.5.5, so I've now created a bug report.
>> 
>> https://bugzilla.redhat.com/show_bug.cgi?id=1275381
>> 
>> Thanks again,
>> Devin
>> 
>> 



Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-25 Thread Devin A. Bougie
Hi Maor,

On Oct 25, 2015, at 12:03 PM, Maor Lipchuk  wrote:
> few questions:
> Which RHEL version is installed on your Host?

7.1

> Can you please share the output of "ls -l 
> /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/"

[root@lnx84 ~]# ls -l /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/
total 0
lrwxrwxrwx 1 root root 8 Oct 23 16:05 ids -> ../dm-23
lrwxrwxrwx 1 root root 8 Oct 23 16:05 inbox -> ../dm-24
lrwxrwxrwx 1 root root 8 Oct 23 16:05 leases -> ../dm-22
lrwxrwxrwx 1 root root 8 Oct 23 16:05 metadata -> ../dm-20
lrwxrwxrwx 1 root root 8 Oct 23 16:05 outbox -> ../dm-21

> What happen when you run this command from your Host:
> /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero 
> of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 seek=0 
> skip=0 conv=notrunc count=40 oflag=direct

[root@lnx84 ~]# /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd 
if=/dev/zero of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 
seek=0 skip=0 conv=notrunc count=40 oflag=direct
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.435552 s, 96.3 MB/s

> Also, please consider to open a bug at 
> https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt, with all the logs 
> and output so it can be resolved ASAP.

I'll open a bug report in the morning unless you have any other suggestions.  
Many thanks for following up!

Devin


Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-25 Thread Devin A. Bougie
Hi Maor,

On Oct 25, 2015, at 6:36 AM, Maor Lipchuk  wrote:
> Does your host is working with enabled selinux?

No, selinux is disabled.  Sorry, I should have mentioned that initially.

Any other suggestions would be greatly appreciated.

Many thanks!
Devin

> - Original Message -
>> 
>> Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error
>> while executing action New SAN Storage Domain: Cannot zero out volume"
>> error.
>> 
>> iscsid does login to the node, and the volumes appear to have been created.
>> However, I cannot use it to create or import a Data / iSCSI storage domain.
>> 
>> [root@lnx84 ~]# iscsiadm -m node
>> #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1
>> 
>> [root@lnx84 ~]# iscsiadm -m session
>> tcp: [1] #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 (non-flash)
>> 
>> [root@lnx84 ~]# pvscan
>>  PV /dev/mapper/1IET_00010001   VG f73c8720-77c3-42a6-8a29-9677db54bac6
>>  lvm2 [547.62 GiB / 543.75 GiB free]
>> ...
>> [root@lnx84 ~]# lvscan
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata'
>>  [512.00 MiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/outbox'
>>  [128.00 MiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/leases' [2.00
>>  GiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/ids' [128.00
>>  MiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/inbox' [128.00
>>  MiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/master' [1.00
>>  GiB] inherit
>> ...
>> 
>> Any help would be greatly appreciated.
>> 
>> Many thanks,
>> Devin
>> 
>> Here are the relevant lines from engine.log:
>> --
>> 2015-10-23 16:04:56,925 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
>> (ajp--127.0.0.1-8702-8) START, GetDeviceListVDSCommand(HostName = lnx84,
>> HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageType=ISCSI), log id:
>> 44a64578
>> 2015-10-23 16:04:57,681 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
>> (ajp--127.0.0.1-8702-8) FINISH, GetDeviceListVDSCommand, return: [LUNs
>> [id=1IET_00010001, physicalVolumeId=wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn,
>> volumeGroupId=8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu,
>> serial=SIET_VIRTUAL-DISK, lunMapping=1, vendorId=IET,
>> productId=VIRTUAL-DISK, _lunConnections=[{ id: null, connection: #.#.#.#,
>> iqn: iqn.2015-10.N.N.N.lnx88:lnx88.target1, vfsType: null, mountOptions:
>> null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };],
>> deviceSize=547, vendorName=IET, pathsDictionary={sdi=true}, lunType=ISCSI,
>> status=Used, diskId=null, diskAlias=null, storageDomainId=null,
>> storageDomainName=null]], log id: 44a64578
>> 2015-10-23 16:05:06,474 INFO
>> [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] Running command:
>> AddSANStorageDomainCommand internal: false. Entities affected :  ID:
>> aaa0----123456789aaa Type: SystemAction group
>> CREATE_STORAGE_DOMAIN with role type ADMIN
>> 2015-10-23 16:05:06,488 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateVGVDSCommand(HostName =
>> lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26,
>> storageDomainId=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19,
>> deviceList=[1IET_00010001], force=true), log id: 12acc23b
>> 2015-10-23 16:05:07,379 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateVGVDSCommand, return:
>> dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P, log id: 12acc23b
>> 2015-10-23 16:05:07,384 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] START,
>> CreateStorageDomainVDSCommand(HostName = lnx84, HostId =
>> a650e161-75f6-4916-bc18-96044bf3fc26,
>> storageDomain=StorageDomainStatic[lnx88,
>> cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19],
>> args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P), log id: cc93ec6
>> 2015-10-23 16:05:10,356 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] Failed in CreateStorageDomainVDS method
>> 2015-10-23 16:05:10,360 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] Command
>> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand
>> return value
>> StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=374,
>> mMessage=Cannot zero out volume:
>> ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)]]
>> 2015-10-23 16:05:10,367 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] HostName = lnx84
>> 2015-10-23 16:05:10,370 ERROR
>> 

[ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-23 Thread Devin A. Bougie
Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error 
while executing action New SAN Storage Domain: Cannot zero out volume" error.

iscsid does login to the node, and the volumes appear to have been created.  
However, I cannot use it to create or import a Data / iSCSI storage domain.

[root@lnx84 ~]# iscsiadm -m node
#.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1

[root@lnx84 ~]# iscsiadm -m session
tcp: [1] #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 (non-flash)

[root@lnx84 ~]# pvscan
  PV /dev/mapper/1IET_00010001   VG f73c8720-77c3-42a6-8a29-9677db54bac6   lvm2 
[547.62 GiB / 543.75 GiB free]
...
[root@lnx84 ~]# lvscan
  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata' 
[512.00 MiB] inherit
  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/outbox' [128.00 
MiB] inherit
  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/leases' [2.00 
GiB] inherit
  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/ids' [128.00 
MiB] inherit
  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/inbox' [128.00 
MiB] inherit
  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/master' [1.00 
GiB] inherit
...

Any help would be greatly appreciated.

Many thanks,
Devin

Here are the relevant lines from engine.log:
--
2015-10-23 16:04:56,925 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] 
(ajp--127.0.0.1-8702-8) START, GetDeviceListVDSCommand(HostName = lnx84, HostId 
= a650e161-75f6-4916-bc18-96044bf3fc26, storageType=ISCSI), log id: 44a64578
2015-10-23 16:04:57,681 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] 
(ajp--127.0.0.1-8702-8) FINISH, GetDeviceListVDSCommand, return: [LUNs 
[id=1IET_00010001, physicalVolumeId=wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn, 
volumeGroupId=8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu, serial=SIET_VIRTUAL-DISK, 
lunMapping=1, vendorId=IET, productId=VIRTUAL-DISK, _lunConnections=[{ id: 
null, connection: #.#.#.#, iqn: iqn.2015-10.N.N.N.lnx88:lnx88.target1, vfsType: 
null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
};], deviceSize=547, vendorName=IET, pathsDictionary={sdi=true}, lunType=ISCSI, 
status=Used, diskId=null, diskAlias=null, storageDomainId=null, 
storageDomainName=null]], log id: 44a64578
2015-10-23 16:05:06,474 INFO  
[org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] 
(ajp--127.0.0.1-8702-8) [53dd8c98] Running command: AddSANStorageDomainCommand 
internal: false. Entities affected :  ID: aaa0----123456789aaa 
Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-10-23 16:05:06,488 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] 
(ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateVGVDSCommand(HostName = lnx84, 
HostId = a650e161-75f6-4916-bc18-96044bf3fc26, 
storageDomainId=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19, 
deviceList=[1IET_00010001], force=true), log id: 12acc23b
2015-10-23 16:05:07,379 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] 
(ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateVGVDSCommand, return: 
dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P, log id: 12acc23b
2015-10-23 16:05:07,384 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp--127.0.0.1-8702-8) [53dd8c98] START, 
CreateStorageDomainVDSCommand(HostName = lnx84, HostId = 
a650e161-75f6-4916-bc18-96044bf3fc26, storageDomain=StorageDomainStatic[lnx88, 
cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19], 
args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P), log id: cc93ec6
2015-10-23 16:05:10,356 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp--127.0.0.1-8702-8) [53dd8c98] Failed in CreateStorageDomainVDS method
2015-10-23 16:05:10,360 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp--127.0.0.1-8702-8) [53dd8c98] Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return 
value 
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=374, mMessage=Cannot 
zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)]]
2015-10-23 16:05:10,367 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp--127.0.0.1-8702-8) [53dd8c98] HostName = lnx84
2015-10-23 16:05:10,370 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp--127.0.0.1-8702-8) [53dd8c98] Command 
CreateStorageDomainVDSCommand(HostName = lnx84, HostId = 
a650e161-75f6-4916-bc18-96044bf3fc26, storageDomain=StorageDomainStatic[lnx88, 
cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19], 
args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
CreateStorageDomainVDS, error = Cannot zero out volume: 
('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',), code = 374
2015-10-23 16:05:10,381 INFO