Re: [ovirt-users] Removing iSCSI domain: host side part

2017-07-13 Thread Yaniv Kaul
On Thu, Jul 13, 2017 at 5:59 PM, Gianluca Cecchi 
wrote:

> On Thu, Jul 13, 2017 at 6:23 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> I have cleanly removed an iSCSI domain from oVirt. There is another one
>> (connecting to another storage array) that is the master domain.
>> But I see that oVirt hosts still maintain the iscsi session to the LUN.
>> So I want to clean from os point of view before removing the LUN itself
>> from storage.
>>
>> At the moment I still see the multipath lun on both hosts
>>
>> [root@ov301 network-scripts]# multipath -l
>> . . .
>> 364817197b5dfd0e5538d959702249b1c dm-2 EQLOGIC ,100E-00
>> size=4.0T features='0' hwhandler='0' wp=rw
>> `-+- policy='round-robin 0' prio=0 status=active
>>   |- 9:0:0:0  sde 8:64  active undef  running
>>   `- 10:0:0:0 sdf 8:80  active undef  running
>>
>> and
>> [root@ov301 network-scripts]# iscsiadm -m session
>> tcp: [1] 10.10.100.9:3260,1
>> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
>> tcp: [2] 10.10.100.9:3260,1
>> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910 (non-flash)
>> . . .
>>
>> Do I have to clean the multipath paths and multipath device and then
>> iSCSI logout, or is it sufficient to iSCSI logout and the multipath device
>> and its path will be cleanly removed from OS point of view?
>>
>> I would like not to have multipath device in stale condition.
>>
>> Thanks
>> Gianluca
>>
>
>
> I have not understood why oVirt still maintains the LVM structures of a
> storage domain after I destroy it
>

Destroy is an Engine command - it does not touch the storage at all (the
assumption is that you've somehow lost/deleted your storage domain and now
you want to get rid of it from the Engine side).


>
> Anyway, these were the steps done on the host side before removal of the LUN
> at the storage array level
>

I assume removing the LUN + reboot would have been quicker.
Y.


>
> Identify the VG for which the LUN is still a PV:
>
> vgchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b
> --> actually all lvs were already inactive
>
> vgremove 5ed04196-87f1-480e-9fee-9dd450a3b53b
> Do you really want to remove volume group 
> "5ed04196-87f1-480e-9fee-9dd450a3b53b"
> containing 22 logical volumes? [y/n]: y
>   Logical volume "metadata" successfully removed
>   Logical volume "outbox" successfully removed
>   Logical volume "xleases" successfully removed
>   Logical volume "leases" successfully removed
>   Logical volume "ids" successfully removed
>   Logical volume "inbox" successfully removed
>   Logical volume "master" successfully removed
>   Logical volume "bc141d0d-b648-409b-a862-9b6d950517a5" successfully
> removed
>   Logical volume "31255d83-ca67-4f47-a001-c734c498d176" successfully
> removed
>   Logical volume "607dbf59-7d4d-4fc3-ae5f-e8824bf82648" successfully
> removed
>   Logical volume "dfbf5787-36a4-4685-bf3a-43a55e9cd4a6" successfully
> removed
>   Logical volume "400ea884-3876-4a21-9ec6-b0b8ac706cee" successfully
> removed
>   Logical volume "1919f6e6-86cd-4a13-9a21-ce52b9f62e35" successfully
> removed
>   Logical volume "a3ea679b-95c0-475d-80c5-8dc4d86bd87f" successfully
> removed
>   Logical volume "32f433c8-a991-4cfc-9a0b-7f44422815b7" successfully
> removed
>   Logical volume "7f867f59-c977-47cf-b280-a2a0fef8b95b" successfully
> removed
>   Logical volume "6e2005f2-3ff5-42fa-867e-e7812c6726e4" successfully
> removed
>   Logical volume "42344cf4-8f9c-464d-ab0f-d62beb15d359" successfully
> removed
>   Logical volume "293e169e-53ed-4d60-b22a-65835f5b0d29" successfully
> removed
>   Logical volume "e86752c4-de73-4733-b561-2afb31bcc2d3" successfully
> removed
>   Logical volume "79350ec5-eea5-458b-a3ee-ba394d2cda27" successfully
> removed
>   Logical volume "77824fce-4f95-49e3-b732-f791151dd15c" successfully
> removed
>   Volume group "5ed04196-87f1-480e-9fee-9dd450a3b53b" successfully removed
>
> pvremove /dev/mapper/364817197b5dfd0e5538d959702249b1c
>
> multipath -f 364817197b5dfd0e5538d959702249b1c
>
> iscsiadm -m session -r 1 -u
> Logging out of session [sid: 1, target:
> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
> portal: 10.10.100.9,3260]
> Logout of [sid: 1, target: 
> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
> portal: 10.10.100.9,3260] successful.
>
> iscsiadm -m session -r 2 -u
> Logging out of session [sid: 2, target:
> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
> portal: 10.10.100.9,3260]
> Logout of [sid: 2, target: 
> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
> portal: 10.10.100.9,3260] successful.
>
> done.
>
> NOTE: on one node I missed the LVM cleanup before logging out of the iSCSI
> session.
> This left the host in an unclean state: the multipath device was left
> without paths but still in use (by LVM), so the command
> multipath -f
> failed.
> Also the vgs and lvs commands threw many errors, and many errors appeared
> in the messages log too.

Re: [ovirt-users] Removing iSCSI domain: host side part

2017-07-13 Thread Gianluca Cecchi
On Thu, Jul 13, 2017 at 6:23 PM, Gianluca Cecchi 
wrote:

> Hello,
> I have cleanly removed an iSCSI domain from oVirt. There is another one
> (connecting to another storage array) that is the master domain.
> But I see that oVirt hosts still maintain the iscsi session to the LUN.
> So I want to clean from os point of view before removing the LUN itself
> from storage.
>
> At the moment I still see the multipath lun on both hosts
>
> [root@ov301 network-scripts]# multipath -l
> . . .
> 364817197b5dfd0e5538d959702249b1c dm-2 EQLOGIC ,100E-00
> size=4.0T features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 9:0:0:0  sde 8:64  active undef  running
>   `- 10:0:0:0 sdf 8:80  active undef  running
>
> and
> [root@ov301 network-scripts]# iscsiadm -m session
> tcp: [1] 10.10.100.9:3260,1 
> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
> (non-flash)
> tcp: [2] 10.10.100.9:3260,1 
> iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
> (non-flash)
> . . .
>
> Do I have to clean the multipath paths and multipath device and then iSCSI
> logout, or is it sufficient to iSCSI logout and the multipath device and
> its path will be cleanly removed from OS point of view?
>
> I would like not to have multipath device in stale condition.
>
> Thanks
> Gianluca
>


I have not understood why oVirt still maintains the LVM structures of a
storage domain after I destroy it

Anyway, these were the steps done on the host side before removal of the LUN
at the storage array level

Identify the VG for which the LUN is still a PV:
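(For reference, one way to do the mapping, assuming the same device name as
above, is something like:

pvs --config 'global {use_lvmetad=0}' -o pv_name,vg_name | grep 364817197b5dfd0e5538d959702249b1c

The --config override mirrors the one used for the lvchange commands in this
thread; a plain "pvs -o pv_name,vg_name" should work too.)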

vgchange -an 5ed04196-87f1-480e-9fee-9dd450a3b53b
--> actually all lvs were already inactive

vgremove 5ed04196-87f1-480e-9fee-9dd450a3b53b
Do you really want to remove volume group
"5ed04196-87f1-480e-9fee-9dd450a3b53b" containing 22 logical volumes?
[y/n]: y
  Logical volume "metadata" successfully removed
  Logical volume "outbox" successfully removed
  Logical volume "xleases" successfully removed
  Logical volume "leases" successfully removed
  Logical volume "ids" successfully removed
  Logical volume "inbox" successfully removed
  Logical volume "master" successfully removed
  Logical volume "bc141d0d-b648-409b-a862-9b6d950517a5" successfully removed
  Logical volume "31255d83-ca67-4f47-a001-c734c498d176" successfully removed
  Logical volume "607dbf59-7d4d-4fc3-ae5f-e8824bf82648" successfully removed
  Logical volume "dfbf5787-36a4-4685-bf3a-43a55e9cd4a6" successfully removed
  Logical volume "400ea884-3876-4a21-9ec6-b0b8ac706cee" successfully removed
  Logical volume "1919f6e6-86cd-4a13-9a21-ce52b9f62e35" successfully removed
  Logical volume "a3ea679b-95c0-475d-80c5-8dc4d86bd87f" successfully removed
  Logical volume "32f433c8-a991-4cfc-9a0b-7f44422815b7" successfully removed
  Logical volume "7f867f59-c977-47cf-b280-a2a0fef8b95b" successfully removed
  Logical volume "6e2005f2-3ff5-42fa-867e-e7812c6726e4" successfully removed
  Logical volume "42344cf4-8f9c-464d-ab0f-d62beb15d359" successfully removed
  Logical volume "293e169e-53ed-4d60-b22a-65835f5b0d29" successfully removed
  Logical volume "e86752c4-de73-4733-b561-2afb31bcc2d3" successfully removed
  Logical volume "79350ec5-eea5-458b-a3ee-ba394d2cda27" successfully removed
  Logical volume "77824fce-4f95-49e3-b732-f791151dd15c" successfully removed
  Volume group "5ed04196-87f1-480e-9fee-9dd450a3b53b" successfully removed

pvremove /dev/mapper/364817197b5dfd0e5538d959702249b1c

multipath -f 364817197b5dfd0e5538d959702249b1c

iscsiadm -m session -r 1 -u
Logging out of session [sid: 1, target:
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
portal: 10.10.100.9,3260]
Logout of [sid: 1, target:
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
portal: 10.10.100.9,3260] successful.

iscsiadm -m session -r 2 -u
Logging out of session [sid: 2, target:
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
portal: 10.10.100.9,3260]
Logout of [sid: 2, target:
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910,
portal: 10.10.100.9,3260] successful.

done.
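
(To double-check that a host is clean afterwards, a quick verification could
be something like:

multipath -ll | grep 364817197b5dfd0e5538d959702249b1c
iscsiadm -m session

Neither should show the removed device or its target anymore; the sessions to
the other storage array will of course still be listed.)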

NOTE: on one node I missed the LVM cleanup before logging out of the iSCSI
session.
This left the host in an unclean state: the multipath device was left without
paths but still in use (by LVM), so the command
multipath -f
failed.
Also the vgs and lvs commands threw many errors, and many errors appeared in
the messages log too.

These were the commands to clean the situation also on that node.

dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/master
dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/inbox
dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/xleases
dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/leases
dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/outbox
dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/ids
dmsetup remove 5ed04196-87f1-480e-9fee-9dd450a3b53b/metadata

multipath -f 364817197b5dfd0e5538d959702249b1c
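
(When multipath -f refuses to flush a map like this, it can help to check
what is still holding it before retrying, for example:

dmsetup info 364817197b5dfd0e5538d959702249b1c
ls /sys/block/dm-2/holders/

A non-zero "Open count", or entries under holders/, point at the device-mapper
devices - here the stale LVM LVs - that have to be removed first. dm-2 is the
name shown in the multipath -l output above; it may differ on another host.)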

Re: [ovirt-users] Failed to create template

2017-07-13 Thread Mike Farnam
Ok thanks. I'll try that today and report back if it made a difference or not. 

> On Jul 13, 2017, at 9:33 AM, Fred Rolland  wrote:
> 
> Yes. But the image should be uncompressed before you upload it.
> 
>> On Jul 13, 2017 6:34 PM, "aduckers"  wrote:
>> Ok.  I should be able to select QCOW2 for a SAN storage target?  If true, 
>> then I’ll need to figure out why that doesn’t work.
>> 
>>> On Jul 13, 2017, at 8:32 AM, Fred Rolland  wrote:
>>> 
>>> When you select RAW, the Vdsm will allocate the whole size of the image 
>>> (the virtual size), which is why you will not encounter this issue in Block 
>>> Storage.
>>> 
 On Thu, Jul 13, 2017 at 6:17 PM, aduckers  wrote:
 Thanks Fred.  I haven’t run into the upload issue again, but if we do I’ll 
 try that.
 Regarding the template creation issue - could that just be user error on 
 my part?  I’ve found that if I select RAW format for the disk, when target 
 is SAN, it works fine.  QCOW2 format works for a target of NFS.  
 Is that the way it’s supposed to behave?
 
 
> On Jul 13, 2017, at 7:59 AM, Fred Rolland  wrote:
> 
> It seems you hit [1]
> If the image is compressed, the Vdsm will not compute the size as needed.
> In file storage, it will work OK as the file system is sparse.
> 
> As a workaround you can decompress before uploading:
> qemu-img convert -f qcow2 rhel-guest-image-7.3-35.x86_64.qcow2 -O qcow2 
> -o compat=1.1 uncompressed.qcow2
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1470435
> 
>> On Wed, Jul 5, 2017 at 10:44 AM, Fred Rolland  
>> wrote:
>> Can you please open bugs for the two issues for future tracking ?
>> These need further investigation.
>> 
>>> On Mon, Jul 3, 2017 at 2:17 AM, aduckers  wrote:
>>> Thanks for the assistance.  Versions are:
>>> 
>>> vdsm.x86_64 4.19.15-1.el7.centos
>>> ovirt-engine.noarch4.1.2.2-1.el7.centos
>>> 
>>> Logs are attached.  The GUI shows a creation date of 2017-06-23 
>>> 11:30:13 for the disk image that is stuck finalizing, so that might be 
>>> a good place to start in the logs.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> > On Jul 2, 2017, at 3:52 AM, Fred Rolland  wrote:
>>> >
>>> > Hi,
>>> >
>>> > Thanks for the logs.
>>> >
>>> > What exact version are you using ? (VDSM,engine)
>>> >
>>> > Regarding the upload issue, can you please provide imageio-proxy and 
>>> > imageio-daemon logs ?
>>> > Issue in [1] looks with the same symptoms, but we need more info.
>>> >
>>> > Regarding the template issue, it looks like [2].
>>> > There were some issues when calculating the estimated size target 
>>> > volume, that should be already fixed.
>>> > Please provide the exact versions, so I can check if it includes the 
>>> > fixes.
>>> >
>>> > Thanks,
>>> >
>>> > Fred
>>> >
>>> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1357269
>>> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1448606
>>> >
>>> >
>>> > On Fri, Jun 30, 2017 at 5:11 AM, aduckers  
>>> > wrote:
>>> >
>>> >
>>> > Attached.  I’ve also got an image upload to the ISO domain stuck in 
>>> > “Finalizing”, and can’t cancel or clear it.  Not sure if related or 
>>> > not, but it might show in the logs and if that can be cleared that’d 
>>> > be great too.
>>> >
>>> > Thanks
>>> >
>>> >
>>> >> On Jun 29, 2017, at 9:20 AM, Fred Rolland  
>>> >> wrote:
>>> >>
>>> >> Can you please attach engine and Vdsm logs ?
>>> >>
>>> >> On Thu, Jun 29, 2017 at 6:21 PM, aduckers  
>>> >> wrote:
>>> >> I’m running 4.1 with a hosted engine, using FC SAN storage.  I’ve 
>>> >> uploaded a qcow2 image, then created a VM and attached that image.
>>> >> When trying to create a template from that VM, we get failures with:
>>> >>
>>> >> failed: low level image copy failed
>>> >> VDSM command DeleteImageGroupVDS failed: Image does not exist in 
>>> >> domain
>>> >> failed to create template
>>> >>
>>> >> What should I be looking at to resolve this?  Anyone recognize this 
>>> >> issue?
>>> >>
>>> >> Thanks
>>> >>
>>> >>
>>> >> ___
>>> >> Users mailing list
>>> >> Users@ovirt.org
>>> >> http://lists.ovirt.org/mailman/listinfo/users
>>> >>
>>> >
>>> >
>>> >
>>> 
>>> 
>> 
> 
 
>>> 
>> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create template

2017-07-13 Thread Fred Rolland
Yes. But the image should be uncompressed before you upload it.

On Jul 13, 2017 6:34 PM, "aduckers"  wrote:

> Ok.  I should be able to select QCOW2 for a SAN storage target?  If true,
> then I’ll need to figure out why that doesn’t work.
>
> On Jul 13, 2017, at 8:32 AM, Fred Rolland  wrote:
>
> When you select RAW, the Vdsm will allocate the whole size of the image
> (the virtual size), which is why you will not encounter this issue in Block
> Storage.
>
> On Thu, Jul 13, 2017 at 6:17 PM, aduckers  wrote:
>
>> Thanks Fred.  I haven’t run into the upload issue again, but if we do
>> I’ll try that.
>> Regarding the template creation issue - could that just be user error on
>> my part?  I’ve found that if I select RAW format for the disk, when target
>> is SAN, it works fine.  QCOW2 format works for a target of NFS.
>> Is that the way it’s supposed to behave?
>>
>>
>> On Jul 13, 2017, at 7:59 AM, Fred Rolland  wrote:
>>
>> It seems you hit [1]
>> If the image is compressed, the Vdsm will not compute the size as needed.
>> In file storage, it will work OK as the file system is sparse.
>>
>> As a workaround you can decompress before uploading:
>>
>> qemu-img convert -f qcow2 rhel-guest-image-7.3-35.x86_64.qcow2 -O qcow2 -o 
>> compat=1.1 uncompressed.qcow2
>>
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1470435
>>
>> On Wed, Jul 5, 2017 at 10:44 AM, Fred Rolland 
>> wrote:
>>
>>> Can you please open bugs for the two issues for future tracking ?
>>> These need further investigation.
>>>
>>> On Mon, Jul 3, 2017 at 2:17 AM, aduckers  wrote:
>>>
 Thanks for the assistance.  Versions are:

 vdsm.x86_64 4.19.15-1.el7.centos
 ovirt-engine.noarch4.1.2.2-1.el7.centos

 Logs are attached.  The GUI shows a creation date of 2017-06-23
 11:30:13 for the disk image that is stuck finalizing, so that might be a
 good place to start in the logs.





 > On Jul 2, 2017, at 3:52 AM, Fred Rolland  wrote:
 >
 > Hi,
 >
 > Thanks for the logs.
 >
 > What exact version are you using ? (VDSM,engine)
 >
 > Regarding the upload issue, can you please provide imageio-proxy and
 imageio-daemon logs ?
 > Issue in [1] looks with the same symptoms, but we need more info.
 >
 > Regarding the template issue, it looks like [2].
 > There were some issues when calculating the estimated size target
 volume, that should be already fixed.
 > Please provide the exact versions, so I can check if it includes the
 fixes.
 >
 > Thanks,
 >
 > Fred
 >
 > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1357269
 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1448606
 >
 >
 > On Fri, Jun 30, 2017 at 5:11 AM, aduckers 
 wrote:
 >
 >
 > Attached.  I’ve also got an image upload to the ISO domain stuck in
 “Finalizing”, and can’t cancel or clear it.  Not sure if related or not,
 but it might show in the logs and if that can be cleared that’d be great
 too.
 >
 > Thanks
 >
 >
 >> On Jun 29, 2017, at 9:20 AM, Fred Rolland 
 wrote:
 >>
 >> Can you please attach engine and Vdsm logs ?
 >>
 >> On Thu, Jun 29, 2017 at 6:21 PM, aduckers 
 wrote:
 >> I’m running 4.1 with a hosted engine, using FC SAN storage.  I’ve
 uploaded a qcow2 image, then created a VM and attached that image.
 >> When trying to create a template from that VM, we get failures with:
 >>
 >> failed: low level image copy failed
 >> VDSM command DeleteImageGroupVDS failed: Image does not exist in
 domain
 >> failed to create template
 >>
 >> What should I be looking at to resolve this?  Anyone recognize this
 issue?
 >>
 >> Thanks
 >>
 >>
 >> ___
 >> Users mailing list
 >> Users@ovirt.org
 >> http://lists.ovirt.org/mailman/listinfo/users
 >>
 >
 >
 >



>>>
>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Removing iSCSI domain: host side part

2017-07-13 Thread Gianluca Cecchi
Hello,
I have cleanly removed an iSCSI domain from oVirt. There is another one
(connecting to another storage array) that is the master domain.
But I see that oVirt hosts still maintain the iscsi session to the LUN.
So I want to clean from os point of view before removing the LUN itself
from storage.

At the moment I still see the multipath lun on both hosts

[root@ov301 network-scripts]# multipath -l
. . .
364817197b5dfd0e5538d959702249b1c dm-2 EQLOGIC ,100E-00
size=4.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 9:0:0:0  sde 8:64  active undef  running
  `- 10:0:0:0 sdf 8:80  active undef  running

and
[root@ov301 network-scripts]# iscsiadm -m session
tcp: [1] 10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
(non-flash)
tcp: [2] 10.10.100.9:3260,1
iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910
(non-flash)
. . .

Do I have to clean the multipath paths and multipath device and then iSCSI
logout, or is it sufficient to iSCSI logout and the multipath device and
its path will be cleanly removed from OS point of view?

I would like not to have multipath device in stale condition.

Thanks
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVIRT 4.1.3 / iSCSI / VM Multiple Disks / Snapshot deletion issue.

2017-07-13 Thread richard anthony falzini
Hi,
I have the same problem with Gluster.
This is a bug that I opened:
https://bugzilla.redhat.com/show_bug.cgi?id=1461029
In the bug I used a single-disk VM, but I have started to notice the problem
with multiple-disk VMs.


2017-07-13 0:07 GMT+02:00 Devin Acosta :

> We are running a fresh install of oVIRT 4.1.3, using ISCSI, the VM in
> question has multiple Disks (4 to be exact). It snapshotted OK while on
> iSCSI however when I went to delete the single snapshot that existed it
> went into Locked state and never came back. The deletion has been going for
> well over an hour, and I am not convinced since the snapshot is less than
> 12 hours old that it’s really doing anything.
>
> I have seen that doing some Googling indicates there might be some known
> issues with iSCSI/Block Storage/Multiple Disk Snapshot issues.
>
> In the logs on the engine it shows:
>
> 2017-07-12 21:59:42,473Z INFO  [org.ovirt.engine.core.bll.
> SerialChildCommandsExecutionCallback] (DefaultQuartzScheduler2)
> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 21:59:52,480Z INFO  [org.ovirt.engine.core.bll.
> ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler2)
> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command 'RemoveSnapshot' (id:
> '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on child command id:
> '75c535fd-4558-459a-9992-875c48578a97' type:'ColdMergeSnapshotSingleDisk'
> to complete
> 2017-07-12 21:59:52,483Z INFO  [org.ovirt.engine.core.bll.
> SerialChildCommandsExecutionCallback] (DefaultQuartzScheduler2)
> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 22:00:02,490Z INFO  [org.ovirt.engine.core.bll.
> ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler6)
> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command 'RemoveSnapshot' (id:
> '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on child command id:
> '75c535fd-4558-459a-9992-875c48578a97' type:'ColdMergeSnapshotSingleDisk'
> to complete
> 2017-07-12 22:00:02,493Z INFO  [org.ovirt.engine.core.bll.
> SerialChildCommandsExecutionCallback] (DefaultQuartzScheduler6)
> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 22:00:12,498Z INFO  [org.ovirt.engine.core.bll.
> ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler3)
> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command 'RemoveSnapshot' (id:
> '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on child command id:
> '75c535fd-4558-459a-9992-875c48578a97' type:'ColdMergeSnapshotSingleDisk'
> to complete
> 2017-07-12 22:00:12,501Z INFO  [org.ovirt.engine.core.bll.
> SerialChildCommandsExecutionCallback] (DefaultQuartzScheduler3)
> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 22:00:22,508Z INFO  [org.ovirt.engine.core.bll.
> ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler5)
> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command 'RemoveSnapshot' (id:
> '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on child command id:
> '75c535fd-4558-459a-9992-875c48578a97' type:'ColdMergeSnapshotSingleDisk'
> to complete
> 2017-07-12 22:00:22,511Z INFO  [org.ovirt.engine.core.bll.
> SerialChildCommandsExecutionCallback] (DefaultQuartzScheduler5)
> [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
>
> This is what I saw on the SPM when I grep’d the Snapshot ID.
>
> 2017-07-12 14:22:18,773-0700 INFO  (jsonrpc/6) [vdsm.api] START
> createVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
> spUUID=u'0001-0001-0001-0001-0311',
> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d', size=u'107374182400',
> volFormat=4, preallocate=2, diskType=2, 
> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845',
> desc=u'', srcImgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
> srcVolUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', initialSize=None)
> from=:::10.4.64.7,60016, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
> (api:46)
> 2017-07-12 14:22:19,095-0700 WARN  (tasks/6) [root] File:
> /rhev/data-center/0001-0001-0001-0001-0311/
> 0c02a758-4295-4199-97de-b041744b3b15/images/6a887015-
> 
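
(Side note: flows like this are usually easiest to follow by grepping for the
ids that appear in the logs, e.g.

grep e94eebf8-75dc-407a-8916-f4ff632f843e /var/log/vdsm/vdsm.log
grep a5f6eaf2-7996-4d51-ba62-050272d1f097 /var/log/ovirt-engine/engine.log

run on the SPM host and on the engine machine respectively, using the flow_id
and correlation id from the snippets above; the paths are the defaults.)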

Re: [ovirt-users] Failed to create template

2017-07-13 Thread aduckers
Ok.  I should be able to select QCOW2 for a SAN storage target?  If true, then 
I’ll need to figure out why that doesn’t work.

> On Jul 13, 2017, at 8:32 AM, Fred Rolland  wrote:
> 
> When you select RAW, the Vdsm will allocate the whole size of the image 
> (the virtual size), which is why you will not encounter this issue in Block 
> Storage.
> 
> On Thu, Jul 13, 2017 at 6:17 PM, aduckers  > wrote:
> Thanks Fred.  I haven’t run into the upload issue again, but if we do I’ll 
> try that.
> Regarding the template creation issue - could that just be user error on my 
> part?  I’ve found that if I select RAW format for the disk, when target is 
> SAN, it works fine.  QCOW2 format works for a target of NFS.  
> Is that the way it’s supposed to behave?
> 
> 
>> On Jul 13, 2017, at 7:59 AM, Fred Rolland > > wrote:
>> 
>> It seems you hit [1]
>> If the image is compressed, the Vdsm will not compute the size as needed.
>> In file storage, it will work OK as the file system is sparse.
>> 
>> As a workaround you can decompress before uploading:
>> qemu-img convert -f qcow2 rhel-guest-image-7.3-35.x86_64.qcow2 -O qcow2 -o 
>> compat=1.1 uncompressed.qcow2
>> 
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1470435 
>> 
>> 
>> On Wed, Jul 5, 2017 at 10:44 AM, Fred Rolland > > wrote:
>> Can you please open bugs for the two issues for future tracking ?
>> These need further investigation.
>> 
>> On Mon, Jul 3, 2017 at 2:17 AM, aduckers > > wrote:
>> Thanks for the assistance.  Versions are:
>> 
>> vdsm.x86_64 4.19.15-1.el7.centos
>> ovirt-engine.noarch4.1.2.2-1.el7.centos
>> 
>> Logs are attached.  The GUI shows a creation date of 2017-06-23 11:30:13 for 
>> the disk image that is stuck finalizing, so that might be a good place to 
>> start in the logs.
>> 
>> 
>> 
>> 
>> 
>> > On Jul 2, 2017, at 3:52 AM, Fred Rolland > > > wrote:
>> >
>> > Hi,
>> >
>> > Thanks for the logs.
>> >
>> > What exact version are you using ? (VDSM,engine)
>> >
>> > Regarding the upload issue, can you please provide imageio-proxy and 
>> > imageio-daemon logs ?
>> > Issue in [1] looks with the same symptoms, but we need more info.
>> >
>> > Regarding the template issue, it looks like [2].
>> > There were some issues when calculating the estimated size target volume, 
>> > that should be already fixed.
>> > Please provide the exact versions, so I can check if it includes the fixes.
>> >
>> > Thanks,
>> >
>> > Fred
>> >
>> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1357269 
>> > 
>> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1448606 
>> > 
>> >
>> >
>> > On Fri, Jun 30, 2017 at 5:11 AM, aduckers > > > wrote:
>> >
>> >
>> > Attached.  I’ve also got an image upload to the ISO domain stuck in 
>> > “Finalizing”, and can’t cancel or clear it.  Not sure if related or not, 
>> > but it might show in the logs and if that can be cleared that’d be great 
>> > too.
>> >
>> > Thanks
>> >
>> >
>> >> On Jun 29, 2017, at 9:20 AM, Fred Rolland > >> > wrote:
>> >>
>> >> Can you please attach engine and Vdsm logs ?
>> >>
>> >> On Thu, Jun 29, 2017 at 6:21 PM, aduckers > >> > wrote:
>> >> I’m running 4.1 with a hosted engine, using FC SAN storage.  I’ve 
>> >> uploaded a qcow2 image, then created a VM and attached that image.
>> >> When trying to create a template from that VM, we get failures with:
>> >>
>> >> failed: low level image copy failed
>> >> VDSM command DeleteImageGroupVDS failed: Image does not exist in domain
>> >> failed to create template
>> >>
>> >> What should I be looking at to resolve this?  Anyone recognize this issue?
>> >>
>> >> Thanks
>> >>
>> >>
>> >> ___
>> >> Users mailing list
>> >> Users@ovirt.org 
>> >> http://lists.ovirt.org/mailman/listinfo/users 
>> >> 
>> >>
>> >
>> >
>> >
>> 
>> 
>> 
>> 
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create template

2017-07-13 Thread Fred Rolland
When you select RAW, the Vdsm will allocate the whole size of the image
(the virtual size), which is why you will not encounter this issue in Block
Storage.
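
As an illustration, the two sizes can be compared on the image itself with
something like:

qemu-img info rhel-guest-image-7.3-35.x86_64.qcow2

The "virtual size" reported there is what gets fully allocated for a RAW disk
on block storage, while "disk size" is roughly what a sparse file on NFS would
actually consume.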

On Thu, Jul 13, 2017 at 6:17 PM, aduckers  wrote:

> Thanks Fred.  I haven’t run into the upload issue again, but if we do I’ll
> try that.
> Regarding the template creation issue - could that just be user error on
> my part?  I’ve found that if I select RAW format for the disk, when target
> is SAN, it works fine.  QCOW2 format works for a target of NFS.
> Is that the way it’s supposed to behave?
>
>
> On Jul 13, 2017, at 7:59 AM, Fred Rolland  wrote:
>
> It seems you hit [1]
> If the image is compressed, the Vdsm will not compute the size as needed.
> In file storage, it will work OK as the file system is sparse.
>
> As a workaround you can decompress before uploading:
>
> qemu-img convert -f qcow2 rhel-guest-image-7.3-35.x86_64.qcow2 -O qcow2 -o 
> compat=1.1 uncompressed.qcow2
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1470435
>
> On Wed, Jul 5, 2017 at 10:44 AM, Fred Rolland  wrote:
>
>> Can you please open bugs for the two issues for future tracking ?
>> These need further investigation.
>>
>> On Mon, Jul 3, 2017 at 2:17 AM, aduckers  wrote:
>>
>>> Thanks for the assistance.  Versions are:
>>>
>>> vdsm.x86_64 4.19.15-1.el7.centos
>>> ovirt-engine.noarch4.1.2.2-1.el7.centos
>>>
>>> Logs are attached.  The GUI shows a creation date of 2017-06-23 11:30:13
>>> for the disk image that is stuck finalizing, so that might be a good place
>>> to start in the logs.
>>>
>>>
>>>
>>>
>>>
>>> > On Jul 2, 2017, at 3:52 AM, Fred Rolland  wrote:
>>> >
>>> > Hi,
>>> >
>>> > Thanks for the logs.
>>> >
>>> > What exact version are you using ? (VDSM,engine)
>>> >
>>> > Regarding the upload issue, can you please provide imageio-proxy and
>>> imageio-daemon logs ?
>>> > Issue in [1] looks with the same symptoms, but we need more info.
>>> >
>>> > Regarding the template issue, it looks like [2].
>>> > There were some issues when calculating the estimated size target
>>> volume, that should be already fixed.
>>> > Please provide the exact versions, so I can check if it includes the
>>> fixes.
>>> >
>>> > Thanks,
>>> >
>>> > Fred
>>> >
>>> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1357269
>>> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1448606
>>> >
>>> >
>>> > On Fri, Jun 30, 2017 at 5:11 AM, aduckers 
>>> wrote:
>>> >
>>> >
>>> > Attached.  I’ve also got an image upload to the ISO domain stuck in
>>> “Finalizing”, and can’t cancel or clear it.  Not sure if related or not,
>>> but it might show in the logs and if that can be cleared that’d be great
>>> too.
>>> >
>>> > Thanks
>>> >
>>> >
>>> >> On Jun 29, 2017, at 9:20 AM, Fred Rolland 
>>> wrote:
>>> >>
>>> >> Can you please attach engine and Vdsm logs ?
>>> >>
>>> >> On Thu, Jun 29, 2017 at 6:21 PM, aduckers 
>>> wrote:
>>> >> I’m running 4.1 with a hosted engine, using FC SAN storage.  I’ve
>>> uploaded a qcow2 image, then created a VM and attached that image.
>>> >> When trying to create a template from that VM, we get failures with:
>>> >>
>>> >> failed: low level image copy failed
>>> >> VDSM command DeleteImageGroupVDS failed: Image does not exist in
>>> domain
>>> >> failed to create template
>>> >>
>>> >> What should I be looking at to resolve this?  Anyone recognize this
>>> issue?
>>> >>
>>> >> Thanks
>>> >>
>>> >>
>>> >> ___
>>> >> Users mailing list
>>> >> Users@ovirt.org
>>> >> http://lists.ovirt.org/mailman/listinfo/users
>>> >>
>>> >
>>> >
>>> >
>>>
>>>
>>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create template

2017-07-13 Thread aduckers
Thanks Fred.  I haven’t run into the upload issue again, but if we do I’ll try 
that.
Regarding the template creation issue - could that just be user error on my 
part?  I’ve found that if I select RAW format for the disk, when target is SAN, 
it works fine.  QCOW2 format works for a target of NFS.  
Is that the way it’s supposed to behave?


> On Jul 13, 2017, at 7:59 AM, Fred Rolland  wrote:
> 
> It seems you hit [1]
> If the image is compressed, the Vdsm will not compute the size as needed.
> In file storage, it will work OK as the file system is sparse.
> 
> As a workaround you can decompress before uploading:
> qemu-img convert -f qcow2 rhel-guest-image-7.3-35.x86_64.qcow2 -O qcow2 -o 
> compat=1.1 uncompressed.qcow2
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1470435 
> 
> 
> On Wed, Jul 5, 2017 at 10:44 AM, Fred Rolland  > wrote:
> Can you please open bugs for the two issues for future tracking ?
> These need further investigation.
> 
> On Mon, Jul 3, 2017 at 2:17 AM, aduckers  > wrote:
> Thanks for the assistance.  Versions are:
> 
> vdsm.x86_64 4.19.15-1.el7.centos
> ovirt-engine.noarch4.1.2.2-1.el7.centos
> 
> Logs are attached.  The GUI shows a creation date of 2017-06-23 11:30:13 for 
> the disk image that is stuck finalizing, so that might be a good place to 
> start in the logs.
> 
> 
> 
> 
> 
> > On Jul 2, 2017, at 3:52 AM, Fred Rolland  > > wrote:
> >
> > Hi,
> >
> > Thanks for the logs.
> >
> > What exact version are you using ? (VDSM,engine)
> >
> > Regarding the upload issue, can you please provide imageio-proxy and 
> > imageio-daemon logs ?
> > Issue in [1] looks with the same symptoms, but we need more info.
> >
> > Regarding the template issue, it looks like [2].
> > There were some issues when calculating the estimated size target volume, 
> > that should be already fixed.
> > Please provide the exact versions, so I can check if it includes the fixes.
> >
> > Thanks,
> >
> > Fred
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1357269 
> > 
> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1448606 
> > 
> >
> >
> > On Fri, Jun 30, 2017 at 5:11 AM, aduckers  > > wrote:
> >
> >
> > Attached.  I’ve also got an image upload to the ISO domain stuck in 
> > “Finalizing”, and can’t cancel or clear it.  Not sure if related or not, 
> > but it might show in the logs and if that can be cleared that’d be great 
> > too.
> >
> > Thanks
> >
> >
> >> On Jun 29, 2017, at 9:20 AM, Fred Rolland  >> > wrote:
> >>
> >> Can you please attach engine and Vdsm logs ?
> >>
> >> On Thu, Jun 29, 2017 at 6:21 PM, aduckers  >> > wrote:
> >> I’m running 4.1 with a hosted engine, using FC SAN storage.  I’ve uploaded 
> >> a qcow2 image, then created a VM and attached that image.
> >> When trying to create a template from that VM, we get failures with:
> >>
> >> failed: low level image copy failed
> >> VDSM command DeleteImageGroupVDS failed: Image does not exist in domain
> >> failed to create template
> >>
> >> What should I be looking at to resolve this?  Anyone recognize this issue?
> >>
> >> Thanks
> >>
> >>
> >> ___
> >> Users mailing list
> >> Users@ovirt.org 
> >> http://lists.ovirt.org/mailman/listinfo/users 
> >> 
> >>
> >
> >
> >
> 
> 
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create template

2017-07-13 Thread Fred Rolland
It seems you hit [1]
If the image is compressed, the Vdsm will not compute the size as needed.
In file storage, it will work OK as the file system is sparse.

As a workaround you can decompress before uploading:

qemu-img convert -f qcow2 rhel-guest-image-7.3-35.x86_64.qcow2 -O
qcow2 -o compat=1.1 uncompressed.qcow2


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1470435
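
(After the conversion, a quick sanity check is to compare the two files with
qemu-img info, e.g.

qemu-img info rhel-guest-image-7.3-35.x86_64.qcow2
qemu-img info uncompressed.qcow2

The virtual size should be identical, while the uncompressed copy will
typically report a larger "disk size" - that is expected.)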

On Wed, Jul 5, 2017 at 10:44 AM, Fred Rolland  wrote:

> Can you please open bugs for the two issues for future tracking ?
> These need further investigation.
>
> On Mon, Jul 3, 2017 at 2:17 AM, aduckers  wrote:
>
>> Thanks for the assistance.  Versions are:
>>
>> vdsm.x86_64 4.19.15-1.el7.centos
>> ovirt-engine.noarch4.1.2.2-1.el7.centos
>>
>> Logs are attached.  The GUI shows a creation date of 2017-06-23 11:30:13
>> for the disk image that is stuck finalizing, so that might be a good place
>> to start in the logs.
>>
>>
>>
>>
>>
>> > On Jul 2, 2017, at 3:52 AM, Fred Rolland  wrote:
>> >
>> > Hi,
>> >
>> > Thanks for the logs.
>> >
>> > What exact version are you using ? (VDSM,engine)
>> >
>> > Regarding the upload issue, can you please provide imageio-proxy and
>> imageio-daemon logs ?
>> > Issue in [1] looks with the same symptoms, but we need more info.
>> >
>> > Regarding the template issue, it looks like [2].
>> > There were some issues when calculating the estimated size target
>> volume, that should be already fixed.
>> > Please provide the exact versions, so I can check if it includes the
>> fixes.
>> >
>> > Thanks,
>> >
>> > Fred
>> >
>> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1357269
>> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1448606
>> >
>> >
>> > On Fri, Jun 30, 2017 at 5:11 AM, aduckers 
>> wrote:
>> >
>> >
>> > Attached.  I’ve also got an image upload to the ISO domain stuck in
>> “Finalizing”, and can’t cancel or clear it.  Not sure if related or not,
>> but it might show in the logs and if that can be cleared that’d be great
>> too.
>> >
>> > Thanks
>> >
>> >
>> >> On Jun 29, 2017, at 9:20 AM, Fred Rolland  wrote:
>> >>
>> >> Can you please attach engine and Vdsm logs ?
>> >>
>> >> On Thu, Jun 29, 2017 at 6:21 PM, aduckers 
>> wrote:
>> >> I’m running 4.1 with a hosted engine, using FC SAN storage.  I’ve
>> uploaded a qcow2 image, then created a VM and attached that image.
>> >> When trying to create a template from that VM, we get failures with:
>> >>
>> >> failed: low level image copy failed
>> >> VDSM command DeleteImageGroupVDS failed: Image does not exist in domain
>> >> failed to create template
>> >>
>> >> What should I be looking at to resolve this?  Anyone recognize this
>> issue?
>> >>
>> >> Thanks
>> >>
>> >>
>> >> ___
>> >> Users mailing list
>> >> Users@ovirt.org
>> >> http://lists.ovirt.org/mailman/listinfo/users
>> >>
>> >
>> >
>> >
>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Detach disk from one VM, attach to another VM

2017-07-13 Thread Gianluca Cecchi
On Thu, Jul 13, 2017 at 2:26 PM, Davide Ferrari  wrote:

> Hello Victor
>
>
> how do you remove the disk from the VM's configuration? From the disk tab
> under the VM?
>
> On 11/07/17 20:51, Victor José Acosta Domínguez wrote:
>
> Hello
>
> Yes, it is possible: you must remove it from the first VM's configuration (be
> careful, do not delete your virtual disk).
>
> After that you can attach that disk to another VM.
>
> Process should be:
> - Detach disk from VM1
> - Delete disk from VM1's configuration
> - Attach disk to VM2
>
> Victor Acosta
>
>
Yes,
you select the VM, then the Disks sub-tab, and there you select the disk and
then "Remove".
Pay attention in the confirmation window to leave the "Remove permanently"
checkbox unselected.
Then in System -> Disks (or in Datacenter_name -> Disks) you will see the
removed disk with nothing in the "Attached To" column.
Select the target VM, go to its Disks sub-tab, choose the "Attach" option (not
"New"), and you will see a list of existing disks that can be attached to the
VM. Select the desired disk and click OK.
HIH,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 2 hosts starting the engine at the same time?

2017-07-13 Thread Michal Skrivanek

> On 13 Jul 2017, at 11:51, Gianluca Cecchi  wrote:
> 
> On Thu, Jul 13, 2017 at 11:08 AM, Michal Skrivanek  > wrote:
> > On 12 Jul 2017, at 16:30, Gianluca Cecchi  > > wrote:
> >
> > In the mean time I have verified that the problem is with
> >
> > emulatedMachine=pc-i440fx-rhel7.3.0
> 
> Might be. The fact that it works with 7.2 in esxi nested environment
> is nice, but definitely not supported.
> Use lower-than-broadwell CPU - that might help. Instead of qemu64
> which is emulated...but if if works fast enough and reliably for you
> then it's fine
> 
> 
> The qemu64 is not a problem actually, because if I set cpu Broadwell and 
> machine type pc-i440fx-rhel7.2.0 things go well.
> Also, on an older physical hw with Westmere CPUs where I have oVirt 4.1 too, 
> the VMs start with emulatedMachine=pc-i440fx-rhel7.3.0, so this parameter 
> doesn't depend on cpu itself.
> 
> I think emulatedMachine is comparable to vSphere Virtual HW instead, correct?

yes

> And that this functionality is provided actually by qemu-kvm-ev (and perhaps 
> in junction with seabios?).

yes. By using the -7.2.0 type you’re basically just using the backward 
compatibility code. Likely there was some change in how the hardware looks 
in the guest which affected ESXi nesting for some CPUs

> If I run 
> 
> rpm -q --changelog qemu-kvm-ev I see in fact
> 
> ...
> * Mon Jun 06 2016 Miroslav Rezanina  > - rhev-2.6.0-5.el7
> ...
> - kvm-pc-New-default-pc-i440fx-rhel7.3.0-machine-type.patch [bz#1305121]
> ...
> 
> So it means that at a certain point, the default machine type used by 
> qemu-kvm-ev has become 7.3 and this generates problems in my specific lab 
> environment now (not searching "official" support for it.. ;-).
> For the other ordinary L2 VMs defined inside this oVirt nested environment, I 
> can set in System --> Advanced Parameters --> Custom Emulated Machine the 
> value pc-i440fx-rhel7.2.0 and I'm ok and they are able to start.
> The problem still remains for the engine vm itself, where I cannot manually 
> set it.
> Possibly is there a qemu-kvm-ev overall system configuration where I can tell 
> to force emulated machine type to pc-i440fx-rhel7.2.0 (without downgrading 
> qemu-kvm-ev)?

I suppose you can define it in HE OVF? Didi? That would be cleaner.
You can also use a vdsm hook just for that...
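
As a rough sketch of the hook approach (untested here, and the file name is
just an example): if I remember the hook mechanism correctly, vdsm exposes the
domain XML to hook scripts through the _hook_domxml environment variable, so a
small before_vm_start script could rewrite the machine type, e.g.

#!/bin/bash
# example hook: /usr/libexec/vdsm/hooks/before_vm_start/50_force_machine_type
# $_hook_domxml holds the path of a temp file with the domain XML of the VM
# being started; rewrite the 7.3 machine type to the 7.2 one wherever it appears.
sed -i "s/pc-i440fx-rhel7.3.0/pc-i440fx-rhel7.2.0/" "$_hook_domxml"

You would probably want to restrict it to the HE VM only, but that's the
general idea.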

> 
> Otherwise I know that when I have to poweroff/restart the engine vm I have to 
> manually start it in 7.2 mode, as I'm testing right now.
> 
> Hope I have clarified better...
> 
> Gianluca

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Detach disk from one VM, attach to another VM

2017-07-13 Thread Davide Ferrari

Hello Victor


how do you remove the disk from the VM's configuration? From the disk 
tab under the VM?



On 11/07/17 20:51, Victor José Acosta Domínguez wrote:

Hello

Yes, it is possible: you must remove it from the first VM's configuration (be 
careful, do not delete your virtual disk).


After that you can attach that disk to another VM.

Process should be:
- Detach disk from VM1
- Delete disk from VM1's configuration
- Attach disk to VM2

Victor Acosta




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-13 Thread knarra

On 07/13/2017 04:30 PM, Simone Marchioni wrote:

On 12/07/2017 10:59, knarra wrote:

On 07/12/2017 01:43 PM, Simone Marchioni wrote:

On 11/07/2017 11:23, knarra wrote:

Hi,

reply here to both Gianluca and Kasturi.

Gianluca: I had ovirt-4.1-dependencies.repo enabled, and gluster 3.8 
packages, but glusterfs-server was missing in my "yum install" 
command, so added glusterfs-server to my installation.


Kasturi: packages ovirt-hosted-engine-setup, gdeploy and 
cockpit-ovirt-dashboard already installed and updated. vdsm-gluster 
was missing, so added to my installation.

okay, cool.


:-)



Rerun deployment and IT WORKED! I can read the message "Succesfully 
deployed Gluster" with the blue button "Continue to Hosted Engine 
Deployment". There's a minor glitch in the window: the green "V" in 
the circle is missing, like there's a missing image (or a wrong 
path, as I had to remove "ansible" from the grafton-sanity-check.sh 
path...)
There is a bug for this and it will be fixed soon. Here is the bug id 
for your reference. https://bugzilla.redhat.com/show_bug.cgi?id=1462082


Ok, thank you!



Although the deployment worked, and the firewalld and gluterfs 
errors are gone, a couple of errors remains:



AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:

PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
May be you missed to change the path of the script 
"/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh" . That 
is why this failure.


You're right: changed the path and now it's ok.



PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 
*


TASK [Run a command in the shell] 
**
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}

to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1

This error can be safely ignored.


Ok




These are a problem for my installation or can I ignore them?
You can just manually run the script to disable hooks on all the 
nodes. Other error you can ignore.


Done it



By the way, I'm writing and documenting this process and can prepare 
a tutorial if someone is interested.


Thank you again for your support: now I'll proceed with the Hosted 
Engine Deployment.

Good to know that you can now start with Hosted Engine Deployment.



Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-13 Thread Simone Marchioni

On 12/07/2017 10:59, knarra wrote:

On 07/12/2017 01:43 PM, Simone Marchioni wrote:

On 11/07/2017 11:23, knarra wrote:

Hi,

reply here to both Gianluca and Kasturi.

Gianluca: I had ovirt-4.1-dependencies.repo enabled, and gluster 3.8 
packages, but glusterfs-server was missing in my "yum install" 
command, so added glusterfs-server to my installation.


Kasturi: packages ovirt-hosted-engine-setup, gdeploy and 
cockpit-ovirt-dashboard already installed and updated. vdsm-gluster 
was missing, so added to my installation.

okay, cool.


:-)



Rerun deployment and IT WORKED! I can read the message "Succesfully 
deployed Gluster" with the blue button "Continue to Hosted Engine 
Deployment". There's a minor glitch in the window: the green "V" in 
the circle is missing, like there's a missing image (or a wrong path, 
as I had to remove "ansible" from the grafton-sanity-check.sh path...)
There is a bug for this and it will be fixed soon. Here is the bug id 
for your reference. https://bugzilla.redhat.com/show_bug.cgi?id=1462082


Ok, thank you!



Although the deployment worked, and the firewalld and gluterfs errors 
are gone, a couple of errors remains:



AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:

PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
May be you missed to change the path of the script 
"/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh" . That 
is why this failure.


You're right: changed the path and now it's ok.



PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 
*


TASK [Run a command in the shell] 
**
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}

to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1

This error can be safely ignored.


Ok




These are a problem for my installation or can I ignore them?
You can just manually run the script to disable hooks on all the 
nodes. Other error you can ignore.


Done it



By the way, I'm writing and documenting this process and can prepare 
a tutorial if someone is interested.


Thank you again for your support: now I'll proceed with the Hosted 
Engine Deployment.

Good to know that you can now start with Hosted Engine Deployment.


Started the Hosted Engine Deployment, but I have a 

Re: [ovirt-users] 2 hosts starting the engine at the same time?

2017-07-13 Thread Gianluca Cecchi
On Thu, Jul 13, 2017 at 11:08 AM, Michal Skrivanek 
wrote:

> > On 12 Jul 2017, at 16:30, Gianluca Cecchi 
> wrote:
> >
> > In the mean time I have verified that the problem is with
> >
> > emulatedMachine=pc-i440fx-rhel7.3.0
>
> Might be. The fact that it works with 7.2 in esxi nested environment
> is nice, but definitely not supported.
> Use lower-than-broadwell CPU - that might help. Instead of qemu64
> which is emulated...but if if works fast enough and reliably for you
> then it's fine
>
>
The qemu64 is not a problem actually, because if I set cpu Broadwell and
machine type pc-i440fx-rhel7.2.0 things go well.
Also, on an older physical hw with Westmere CPUs where I have oVirt 4.1
too, the VMs start with emulatedMachine=pc-i440fx-rhel7.3.0, so this
parameter doesn't depend on cpu itself.

I think emulatedMachine is comparable to vSphere Virtual HW instead,
correct?
And that this functionality is provided actually by qemu-kvm-ev (and
perhaps in junction with seabios?).
If I run

rpm -q --changelog qemu-kvm-ev I see in fact

...
* Mon Jun 06 2016 Miroslav Rezanina  - rhev-2.6.0-5.el7
...
- kvm-pc-New-default-pc-i440fx-rhel7.3.0-machine-type.patch [bz#1305121]
...

So it means that at a certain point, the default machine type used by
qemu-kvm-ev has become 7.3 and this generates problems in my specific lab
environment now (not searching "official" support for it.. ;-).
For the other ordinary L2 VMs defined inside this oVirt nested environment,
I can set in System --> Advanced Parameters --> Custom Emulated Machine the
value pc-i440fx-rhel7.2.0 and I'm ok and they are able to start.
The problem still remains for the engine vm itself, where I cannot manually
set it.
Possibly is there a qemu-kvm-ev overall system configuration where I can
tell to force emulated machine type to pc-i440fx-rhel7.2.0 (without
downgrading qemu-kvm-ev)?

Otherwise I know that when I have to poweroff/restart the engine vm I have
to manually start it in 7.2 mode, as I'm testing right now.

Hope I have clarified better...

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problems with vdsm-tool and ovn-config option

2017-07-13 Thread Gianluca Cecchi
On Thu, Jul 13, 2017 at 10:23 AM, Gianluca Cecchi  wrote:

> Hello,
> on February I installed OVN controller on some hypervisors (CentOS 7.3
> hosts).
> At that time the vdsm-tool command was part of vdsm-python-4.19.4-1.el7.
> centos.noarch
> I was able to configure my hosts with command
>
> vdsm-tool ovn-config OVN_central_server_IP local_OVN_tunneling_IP
>
> as described here:
> https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
> and now here
> https://access.redhat.com/documentation/en-us/red_hat_
> virtualization/4.1/html/administration_guide/sect-
> adding_external_providers
>
> I have 2 hosts that I added later and on which I want to configure OVN
> external network provider too.
>
> But now, with vdsm-python at version vdsm-python-4.19.10.1-1.el7.centos.noarch
> I get a usage error trying to execute the command
>
> [root@ov301 ~]# vdsm-tool ovn-config 10.4.192.43 10.4.167.41
> Usage: /bin/vdsm-tool [options]  [arguments]
> Valid options:
>   -h, --help
> Show this help menu.
>   -l, --logfile 
> ...
>
> Also, the changelog of the package seems quite broken:
>
> [root@ov301 ~]# rpm -q --changelog vdsm-python
> * Wed Aug 03 2016 Yaniv Bronhaim  - 4.18.999
> - Re-review of vdsm.spec to return it to fedora Bug #1361659
>
> * Sun Oct 13 2013 Yaniv Bronhaim  - 4.13.0
> - Removing vdsm-python-cpopen from the spec
> - Adding dependency on formal cpopen package
>
> * Sun Apr 07 2013 Yaniv Bronhaim  - 4.9.0-1
> - Adding cpopen package
>
> * Wed Oct 12 2011 Federico Simoncelli  - 4.9.0-0
> - Initial upstream release
>
> * Thu Nov 02 2006 Simon Grinberg  -  0.0-1
> - Initial build
>
> [root@ov301 ~]#
>
> How can I configure OVN?
>
> Thanks in advance,
> Gianluca
>


Sorry, I thought I had already installed the ovirt-provider-ovn-driver package
on the hosts, as documented above, but it was not so...
This package in fact
provides /usr/lib/python2.7/site-packages/vdsm/tool/ovn_config.py, which is
probably loaded dynamically by the vdsm-tool command at runtime if found...
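
(So on the two new hosts the fix should boil down to something like:

yum install ovirt-provider-ovn-driver
vdsm-tool ovn-config 10.4.192.43 10.4.167.41

i.e. the OVN central server IP and the local tunneling IP already shown above;
the second host would of course use its own tunneling IP.)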

Nevertheless the rpm changelog for the package could be corrected

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 2 hosts starting the engine at the same time?

2017-07-13 Thread Michal Skrivanek
> On 12 Jul 2017, at 16:30, Gianluca Cecchi  wrote:
>
> In the mean time I have verified that the problem is with
>
> emulatedMachine=pc-i440fx-rhel7.3.0

Might be. The fact that it works with 7.2 in an ESXi nested environment
is nice, but definitely not supported.
Using a lower-than-Broadwell CPU might help, instead of qemu64,
which is emulated... but if it works fast enough and reliably for you
then it's fine

>
> Summarizing:
>
> default boot of the engine VM after update to 4.1.3 is with cpuType=Broadwell and 
> emulatedMachine=pc-i440fx-rhel7.3.0, and the engine VM hangs at the "Booting from 
> Hard Disk" screen
>
> When starting with cpuType=qemu64 and emulatedMachine=pc-i440fx-rhel7.2.0 the 
> engine comes up normally
> When starting with cpuType=Broadwell and emulatedMachine=pc-i440fx-rhel7.2.0 
> the engine comes up normally
> When starting with cpuType=qemu64 and emulatedMachine=pc-i440fx-rhel7.3.0 the 
> engine vm hangs at "Booting from Hard Disk " screen
>
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Manually moving disks from FC to iSCSI

2017-07-13 Thread Fred Rolland
The move command will first create the disk structure according to the
snapshots on the destination, then a 'qemu-img convert' will be performed
for each snapshot.
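
(For a raw volume on block storage, the per-volume step boils down to
something like the sketch below; the paths reuse the src_vg/src_lv and
dest_vg/dest_lv names from later in this thread, and the real command is built
by vdsm with the proper image formats and paths.)

# copy one activated source volume into the pre-created destination volume
qemu-img convert -p -t none -T none -f raw /dev/src_vg/src_lv -O raw /dev/dest_vg/dest_lv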

On Wed, Jul 12, 2017 at 1:31 AM, Gianluca Cecchi 
wrote:

> On Tue, Jul 11, 2017 at 3:14 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>>
>>
>> On Tue, Jul 11, 2017 at 2:59 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> Hello,
>>> I have a source oVirt environment with storage domain on FC
>>> I have a destination oVirt environment with storage domain on iSCSI
>>> The two environments can communicate only via the network of their
>>> respective hypervisors.
>>> The source environment, in particular, is almost isolated and I cannot
>>> attach an export domain to it or something similar.
>>> So I'm going to plan a direct move through dd of the disks of some VMs
>>>
>>> The workflow would be
>>> On the destination, create a new VM with the same config and the same number of disks,
>>> each of the same size as the corresponding source one.
>>> Also, I think, the same allocation policy (thin provision vs preallocated).
>>> Using lvs -o+lv_tags I can detect the names of my source and destination
>>> LVs corresponding to the disks.
>>> When a VM is powered down, the LV that maps the disk will not be open,
>>> so I have to force its activation (both on source and on destination)
>>>
>>> lvchange --config 'global {use_lvmetad=0}' -ay vgname/lvname
>>>
>>> copy source disk with dd through network (I use gzip to limit network
>>> usage basically...)
>>> on src_host:
>>> dd if=/dev/src_vg/src_lv bs=1024k | gzip | ssh dest_host "gunzip | dd
>>> bs=1024k of=/dev/dest_vg/dest_lv"
>>>
>>> deactivate LVs on source and dest
>>>
>>> lvchange --config 'global {use_lvmetad=0}' -an vgname/lvname
>>>
>>> Try to power on the VM on destination
>>>
>>> Some questions:
>>> - about overall workflow
>>> - about dd flags, in particular if source disks are thin vs preallocated
>>>
>>> Thanks,
>>> Gianluca
>>>
>>>
>>
>> Some further comments:
>>
>> - it is probably better/safer to use the SPM hosts for the lvchange commands, both on
>> source and target, as this implies metadata manipulation, correct?
>> - when disks are preallocated there are no problems, but when they are thin I can
>> be in this situation:
>>
>> the source disk is defined as a 90GB disk and over time it has expanded up to
>> 50GB;
>> the dest disk, just after creation, will normally be only a few GB
>> (e.g. 4GB), so the dd command will fail when it gets full...
>> Does this mean that it would be better to create the dest disk as preallocated
>> anyway, or is it safe to run
>> lvextend -L+50G dest_vg/dest_lv
>> from the command line?
>> Will oVirt recognize its actual size, or what?
>>
>>
>>
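
(A minimal sketch of the size check this implies, using the LV names found
earlier with lvs -o+lv_tags; the exact byte count is a placeholder to take
from the source output. Grow the destination to at least the source size
before running the dd:)

# compare the exact sizes in bytes, then extend the destination LV if it is smaller
lvs --config 'global {use_lvmetad=0}' --units b -o lv_name,lv_size src_vg/src_lv
lvs --config 'global {use_lvmetad=0}' --units b -o lv_name,lv_size dest_vg/dest_lv
lvextend --config 'global {use_lvmetad=0}' -L <source_size_in_bytes>b dest_vg/dest_lv
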
> So that's what I did, both for thin provisioned disks (without snapshots on
> them, see below) and preallocated disks, and it seems to work; at least an
> OS boot disk copied over this way boots.
>
> I have one further doubt.
>
> For a VM I have a disk defined as thin provisioned and 90GB in size.
> Some weeks ago I created a snapshot of it.
> Now, before copying over the LV, I deleted this snapshot.
> But I see that at the end of the process the size of the LV backing the
> VM disk is actually 92GB, so I presume that my dd over the network will
> fail.
> What could I do to cover this scenario?
>
> What would be the command at the OS level, in case I choose "move disk" in the
> web admin GUI, to move a disk from one SD to another?
>
> Thanks,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVIRT 4.1.3 / iSCSI / VM Multiple Disks / Snapshot deletion issue.

2017-07-13 Thread Benny Zlotnik
Hi,

Can you please attach full engine and vdsm logs?

On Thu, Jul 13, 2017 at 1:07 AM, Devin Acosta  wrote:
> We are running a fresh install of oVirt 4.1.3, using iSCSI; the VM in
> question has multiple disks (4 to be exact). It snapshotted OK while on
> iSCSI, however when I went to delete the single snapshot that existed it went
> into Locked state and never came back. The deletion has been going for well
> over an hour, and since the snapshot is less than 12 hours old I am not
> convinced that it's really doing anything.
>
> Some Googling indicates there might be some known issues with
> iSCSI/block storage/multiple-disk snapshots.
>
> In the logs on the engine it shows:
>
> 2017-07-12 21:59:42,473Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 21:59:52,480Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
> child command id: '75c535fd-4558-459a-9992-875c48578a97'
> type:'ColdMergeSnapshotSingleDisk' to complete
> 2017-07-12 21:59:52,483Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler2) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 22:00:02,490Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
> child command id: '75c535fd-4558-459a-9992-875c48578a97'
> type:'ColdMergeSnapshotSingleDisk' to complete
> 2017-07-12 22:00:02,493Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler6) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 22:00:12,498Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
> child command id: '75c535fd-4558-459a-9992-875c48578a97'
> type:'ColdMergeSnapshotSingleDisk' to complete
> 2017-07-12 22:00:12,501Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler3) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
> 2017-07-12 22:00:22,508Z INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'RemoveSnapshot' (id: '40482d09-8a7c-4dbd-8324-3e789296887a') waiting on
> child command id: '75c535fd-4558-459a-9992-875c48578a97'
> type:'ColdMergeSnapshotSingleDisk' to complete
> 2017-07-12 22:00:22,511Z INFO
> [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
> (DefaultQuartzScheduler5) [a5f6eaf2-7996-4d51-ba62-050272d1f097] Command
> 'ColdMergeSnapshotSingleDisk' (id: '75c535fd-4558-459a-9992-875c48578a97')
> waiting on child command id: 'd92e9a22-5f0f-4b61-aac6-5601f8ac2cda'
> type:'PrepareMerge' to complete
>
> This is what I saw on the SPM when I grepped the snapshot ID.
>
> 2017-07-12 14:22:18,773-0700 INFO  (jsonrpc/6) [vdsm.api] START
> createVolume(sdUUID=u'0c02a758-4295-4199-97de-b041744b3b15',
> spUUID=u'0001-0001-0001-0001-0311',
> imgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d', size=u'107374182400',
> volFormat=4, preallocate=2, diskType=2,
> volUUID=u'5921ba71-0f00-46cd-b0be-3c2ac1396845', desc=u'',
> srcImgUUID=u'6a887015-67cd-4f7b-b709-eef97142258d',
> srcVolUUID=u'0c3de1a8-ac18-4d7b-b348-3b097bf0a0ae', initialSize=None)
> from=:::10.4.64.7,60016, flow_id=e94eebf8-75dc-407a-8916-f4ff632f843e
> (api:46)
> 2017-07-12 14:22:19,095-0700 WARN  (tasks/6) [root] File:
> /rhev/data-center/0001-0001-0001-0001-0311/0c02a758-4295-4199-97de-b041744b3b15/images/6a887015-67cd-4f7b-b709-eef97142258d/5921ba71-0f00-46cd-b0be-3c2ac1396845
> already removed (utils:120)
> 2017-07-12 14:22:19,096-0700 INFO  (tasks/6) [storage.Volume] Request to
> create snapshot
> 

[ovirt-users] Problems with vdsm-tool and ovn-config option

2017-07-13 Thread Gianluca Cecchi
Hello,
In February I installed the OVN controller on some hypervisors (CentOS 7.3
hosts).
At that time the vdsm-tool command was part
of vdsm-python-4.19.4-1.el7.centos.noarch
I was able to configure my hosts with command

vdsm-tool ovn-config OVN_central_server_IP local_OVN_tunneling_IP

as described here:
https://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
and now here
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-adding_external_providers

I have 2 hosts that I added later and on which I want to configure OVN
external network provider too.

But now, with vdsm-python at
version vdsm-python-4.19.10.1-1.el7.centos.noarch I get a usage error
trying to execute the command

[root@ov301 ~]# vdsm-tool ovn-config 10.4.192.43 10.4.167.41
Usage: /bin/vdsm-tool [options]  [arguments]
Valid options:
  -h, --help
Show this help menu.
  -l, --logfile 
...

Also, the changelog of the package seems quite broken:

[root@ov301 ~]# rpm -q --changelog vdsm-python
* Wed Aug 03 2016 Yaniv Bronhaim  - 4.18.999
- Re-review of vdsm.spec to return it to fedora Bug #1361659

* Sun Oct 13 2013 Yaniv Bronhaim  - 4.13.0
- Removing vdsm-python-cpopen from the spec
- Adding dependency on formal cpopen package

* Sun Apr 07 2013 Yaniv Bronhaim  - 4.9.0-1
- Adding cpopen package

* Wed Oct 12 2011 Federico Simoncelli  - 4.9.0-0
- Initial upstream release

* Thu Nov 02 2006 Simon Grinberg  -  0.0-1
- Initial build

[root@ov301 ~]#

How can I configure OVN?

Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bizzare oVirt network problem

2017-07-13 Thread Pavel Gashev
Fernando,

The issue can be triggered by a packet with the same source MAC address as the 
VM has. If it's received by the network equipment from a different port, your VM 
could stop receiving network traffic.

There are two options: either there is a network topology issue, or there is 
another VM with the same MAC address. The latter is possible if you have another 
oVirt installation (testing? staging?) in the same network segment.
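
(A quick way to check the second case, as a sketch: run this on a host that is
NOT running the VM, where bond0 stands for that host's uplink interface and
the MAC is a placeholder for the VM's address, here using the default oVirt
00:1a:4a prefix.)

# print link-level headers for any frame whose source MAC claims to be the VM's
tcpdump -e -n -i bond0 ether src 00:1a:4a:16:01:51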


From: Fernando Frediani  on behalf of FERNANDO 
FREDIANI 
Date: Wednesday, 12 July 2017 at 23:10
To: Pavel Gashev , "users@ovirt.org" 
Subject: Re: [ovirt-users] Bizzare oVirt network problem


Hello Pavel

What do you mean by another oVirt instance? In one datacenter it has 2 different 
clusters (or Datacenters, in the oVirt way of organizing things), but in the other 
datacenter the oVirt Node is standalone.

Let me know.

Fernando

On 12/07/2017 16:49, Pavel Gashev wrote:
Fernando,

It looks like you have another oVirt instance in the same network segment(s). 
Don’t you?


From:  on behalf of 
FERNANDO FREDIANI 
Date: Wednesday, 12 July 2017 at 16:21
To: "users@ovirt.org" 

Subject: [ovirt-users] Bizzare oVirt network problem

Hello.

I am facing a pretty bizarre problem on two of my Nodes running oVirt. A given 
VM pushing a few hundred Mbps of traffic simply stops passing traffic and only 
recovers after a reboot. Checking the bridge with 'brctl showmacs BRIDGE' I see 
the VM's MAC address missing during this event.

It seems the bridge simply unlearns the VM's MAC address, which only returns when 
the VM is rebooted.
This problem happened on two different Nodes running on different hardware, in 
different datacenters, with different network architectures, different switch 
vendors and different bonding modes.
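
(To catch when it happens, a simple polling loop like the sketch below can
timestamp the moment the entry disappears from the bridge forwarding table;
ovirtmgmt and the MAC are placeholders for the bridge and the VM's address.)

# poll the bridge MAC table and log whenever the VM's MAC is no longer learned
while sleep 5; do
  brctl showmacs ovirtmgmt | grep -qi 00:1a:4a:16:01:51 || echo "$(date) MAC not learned"
done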

The main differences these Nodes have compared to others I have and which don't 
show this problem are:
- The CentOS 7 installed is a Minimal installation instead of oVirt-NG
- The Kernel used is 4.12 (elrepo) instead of the default 3.10
- The ovirtmgmt network is used also for the Virtual Machine showing this 
problem.

Does anyone have any idea whether it may have anything to do with oVirt (any 
filters) or any of the components that differ from an oVirt-NG installation?

Thanks
Fernando




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users