[ovirt-users] Re: VM hanging at sustained high throughput

2021-05-27 Thread dhanaraj.ramesh--- via Users
I tried recreating this issue with a CentOS Stream 8 VM and a Pure NAS server,
and yes, the issue persists.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XO2DSDNQKWUWBFWCKDMAYD5SVFQOFQZC/


[ovirt-users] Re: Where to get the vm config file ?

2021-05-27 Thread dhanaraj.ramesh--- via Users
https://access.redhat.com/solutions/795203

In the case of RHEV, these files are not stored under /etc/libvirt/qemu.
The vdsm daemon dynamically fetches the VM's information from the
RHEV-Manager's database to generate the XML files.
These files cannot be edited to make persistent changes, as they only exist
throughout the lifecycle of the VM; however, they can be viewed read-only by
using the following command, or dumped to a location for viewing later (when
the VM is powered off):

virsh -r dumpxml vm_name > /tmp/vm_name
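
A small sketch along the same lines, in case it is useful; it assumes you
want a copy of every VM defined on the host and uses only read-only virsh
calls:

# dump the read-only XML of every VM libvirt knows about on this host
for vm in $(virsh -r list --all --name); do
    virsh -r dumpxml "$vm" > "/tmp/${vm}.xml"
done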
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X7A4MYOLZRTKXLGYTOBMUUF4H7SWRVTZ/


[ovirt-users] Re: unable to login cockpit using root after upgrading to 4.4.6

2021-05-27 Thread dhanaraj.ramesh--- via Users
Huge thanks to all of you and the team.

Yes, after executing the given command, I am able to access Cockpit. I will
wait for the fixes.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UWL6HTXFM5M2QBEFOC2N4N5BS3RACYH3/


[ovirt-users] Where to get the vm config file ?

2021-05-27 Thread tommy
Thanks.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GKQDGEWLIIDJHGJN2YKAG2B2WFYFNFVF/


[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread Jayme
# rpm -qa | grep ovirt-node
ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
python3-ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch

I removed ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch but yum update
and check for updates in GUI still show no updates available.

I can attempt re-installing the package tomorrow, but I'm not confident it
will work since it was already installed.


On Thu, May 27, 2021 at 9:32 PM wodel youchi  wrote:

> Hi,
>
> On the "bad hosts", check whether any 4.4.6 RPMs are installed; if so,
> remove them and then try the update again.
>
> You can try to install the ovirt-node rpm manually, here is the link
> https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>
>> # dnf install ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>>
>
> PS: remember to use tmux if executing via ssh.
>
> Regards.
>
> Le jeu. 27 mai 2021 à 22:21, Jayme  a écrit :
>
>> The good host:
>>
>> bootloader:
>>   default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>>   entries:
>> ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
>>   index: 0
>>   kernel:
>> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
>> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1 
>> rd.lvm.lv=onn_orchard1/swap
>> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
>> img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>   initrd:
>> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
>>   title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>>   blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
>> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>>   index: 1
>>   kernel:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
>> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
>> rd.lvm.lv=onn_orchard1/swap
>> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
>> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   initrd:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
>> (4.18.0-240.15.1.el8_3.x86_64)
>>   blsid:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
>> layers:
>>   ovirt-node-ng-4.4.5.1-0.20210323.0:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   ovirt-node-ng-4.4.6.3-0.20210518.0:
>> ovirt-node-ng-4.4.6.3-0.20210518.0+1
>> current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>
>>
>> The other two show:
>>
>> bootloader:
>>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
>> (4.18.0-240.15.1.el8_3.x86_64)
>>   entries:
>> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>>   index: 0
>>   kernel:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap
>> rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
>> rd.lvm.lv=onn_orchard2/swap
>> rhgb quiet boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard
>> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   initrd:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
>> (4.18.0-240.15.1.el8_3.x86_64)
>>   blsid:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
>> layers:
>>   ovirt-node-ng-4.4.5.1-0.20210323.0:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1
>> current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>
>> On Thu, May 27, 2021 at 6:18 PM Jayme  wrote:
>>
>>> It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows
>>> nothing available, nor does Check for Upgrade in the admin GUI.
>>>
>>> I believe these two hosts failed on the first install and succeeded on the
>>> second attempt, which may have something to do with it. How can I force
>>> them to update to the 4.4.6 image? Would reinstalling the host do it?
>>>
>>> On Thu, May 27, 2021 at 6:03 PM wodel youchi 
>>> wrote:
>>>
 Hi,

 What does "nodectl info" report on all hosts?
 Did you execute "refresh capabilities" after the update?

 Regards.


 

[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread wodel youchi
Hi,

On the "bad hosts" try to find if there is/are any 4.4.6 rpm installed, if
yes, try to remove them, then try the update again.

You can try to install the ovirt-node rpm manually, here is the link
https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm

> # dnf install ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>

PS: remember to use tmux if executing via ssh.
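
A minimal sketch of that sequence (the package name below is the one reported
earlier in this thread; check what "rpm -qa | grep ovirt-node" shows on the
bad hosts first):

# run inside tmux on a "bad" host
rpm -qa | grep ovirt-node
dnf remove ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
dnf install https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
nodectl info   # the 4.4.6 layer should now appear under bootloader/layers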

Regards.

Le jeu. 27 mai 2021 à 22:21, Jayme  a écrit :

> The good host:
>
> bootloader:
>   default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>   entries:
> ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
>   index: 0
>   kernel:
> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1 
> rd.lvm.lv=onn_orchard1/swap
> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
> img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
>   initrd:
> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
>   title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>   blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>   index: 1
>   kernel:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
> rd.lvm.lv=onn_orchard1/swap
> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>   initrd:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
> (4.18.0-240.15.1.el8_3.x86_64)
>   blsid:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
> layers:
>   ovirt-node-ng-4.4.5.1-0.20210323.0:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1
>   ovirt-node-ng-4.4.6.3-0.20210518.0:
> ovirt-node-ng-4.4.6.3-0.20210518.0+1
> current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1
>
>
> The other two show:
>
> bootloader:
>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
> (4.18.0-240.15.1.el8_3.x86_64)
>   entries:
> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>   index: 0
>   kernel:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>   args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap
> rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
> rd.lvm.lv=onn_orchard2/swap
> rhgb quiet boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard
> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>   root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>   initrd:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
> (4.18.0-240.15.1.el8_3.x86_64)
>   blsid:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
> layers:
>   ovirt-node-ng-4.4.5.1-0.20210323.0:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1
> current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>
> On Thu, May 27, 2021 at 6:18 PM Jayme  wrote:
>
>> It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows
>> nothing available, nor does Check for Upgrade in the admin GUI.
>>
>> I believe these two hosts failed on the first install and succeeded on the
>> second attempt, which may have something to do with it. How can I force them
>> to update to the 4.4.6 image? Would reinstalling the host do it?
>>
>> On Thu, May 27, 2021 at 6:03 PM wodel youchi 
>> wrote:
>>
>>> Hi,
>>>
>>> What does "nodectl info" report on all hosts?
>>> Did you execute "refresh capabilities" after the update?
>>>
>>> Regards.
>>>
>>>
>>>
>>> Le jeu. 27 mai 2021 à 20:37, Jayme  a écrit :
>>>
 I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
 updated successfully and rebooted and are active. I notice that only one
 host out of the three is actually running oVirt node 4.4.6 and the other
 two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
 available.

 Why are two hosts still running 4.4.5 after being successfully
 upgraded/reb

[ovirt-users] VM hanging at sustained high throughput

2021-05-27 Thread David Johnson
Hi ovirt gurus,

This is an interesting issue, one I never expected to have.

When I push high volumes of writes to my NAS, I cause VMs to go into
a paused state. I'm looking at this from a number of angles, including
upgrades on the NAS appliance.

I can reproduce this problem at will running a CentOS 7.9 VM on oVirt 4.5.

*Questions:*

1. Is my analysis of the failure (below) reasonable/correct?

2. What am I looking for to validate this?

3. Is there a configuration that I can set to make it a little more robust
while I acquire the hardware to improve the NAS?


*Reproduction:*

Standard test of file write speed:

[root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=4096
oflag=direct
4096+0 records in
4096+0 records out
2147483648 bytes (2.1 GB) copied, 1.68431 s, 1.3 GB/s


Give it more data

[root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=12228
oflag=direct
12228+0 records in
12228+0 records out
6410993664 bytes (6.4 GB) copied, 7.22078 s, 888 MB/s


The odds are about 50/50 that 6 GB will kill the VM, but 100% when I hit 8
GB.

*Analysis:*

What appears to be happening is that the intent cache on the NAS is
on an SSD, and my VMs are pushing data about three times as fast as the
SSD can handle. When the SSD gets queued up beyond a certain point, the NAS
(which places reliability over speed) says "Whoah Nellie!", and the VM
chokes.
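
If it helps while digging, a quick way to confirm from the host that the
pauses are storage related (a sketch; the VM name is assumed to match the
guest hostname from the test above, and the vdsm log pattern is an
assumption worth adjusting):

virsh -r domstate cen-79-pgsql-01 --reason
# a storage-induced pause typically reports: paused (I/O error)
grep -i "abnormal vm stop" /var/log/vdsm/vdsm.log | tail -n 20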


*David Johnson*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KBNC7KI5W7T2QHRESQTVIPMRSN37G6S6/


[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread Jayme
The good host:

bootloader:
  default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
  entries:
ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
  index: 0
  kernel:
/boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
  args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
rd.lvm.lv=onn_orchard1/swap rhgb quiet
boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
  root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
  initrd:
/boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
  title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
  blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
  index: 1
  kernel:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
  args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
rd.lvm.lv=onn_orchard1/swap rhgb quiet
boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
  root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
  initrd:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
  title: ovirt-node-ng-4.4.5.1-0.20210323.0
(4.18.0-240.15.1.el8_3.x86_64)
  blsid:
ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
ovirt-node-ng-4.4.5.1-0.20210323.0+1
  ovirt-node-ng-4.4.6.3-0.20210518.0:
ovirt-node-ng-4.4.6.3-0.20210518.0+1
current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1


The other two show:

bootloader:
  default: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
  entries:
ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
  index: 0
  kernel:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
  args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap
rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
rd.lvm.lv=onn_orchard2/swap rhgb quiet
boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard
img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
  root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
  initrd:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
  title: ovirt-node-ng-4.4.5.1-0.20210323.0
(4.18.0-240.15.1.el8_3.x86_64)
  blsid:
ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
ovirt-node-ng-4.4.5.1-0.20210323.0+1
current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1

On Thu, May 27, 2021 at 6:18 PM Jayme  wrote:

> It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows
> nothing available, nor does Check for Upgrade in the admin GUI.
>
> I believe these two hosts failed on the first install and succeeded on the
> second attempt, which may have something to do with it. How can I force them
> to update to the 4.4.6 image? Would reinstalling the host do it?
>
> On Thu, May 27, 2021 at 6:03 PM wodel youchi 
> wrote:
>
>> Hi,
>>
>> What does "nodectl info" report on all hosts?
>> Did you execute "refresh capabilities" after the update?
>>
>> Regards.
>>
>>
>>
>> Le jeu. 27 mai 2021 à 20:37, Jayme  a écrit :
>>
>>> I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
>>> updated successfully and rebooted and are active. I notice that only one
>>> host out of the three is actually running oVirt node 4.4.6 and the other
>>> two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
>>> available.
>>>
>>> Why are two hosts still running 4.4.5 after being successfully
>>> upgraded/rebooted and how can I get them on 4.4.6 if no upgrades are being
>>> found?
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN57DRLYE3OIOP7O3SPKH7P5SHB4XJRJ/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.

[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread Jayme
It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows
nothing available, nor does Check for Upgrade in the admin GUI.

I believe these two hosts failed on the first install and succeeded on the
second attempt, which may have something to do with it. How can I force them
to update to the 4.4.6 image? Would reinstalling the host do it?

On Thu, May 27, 2021 at 6:03 PM wodel youchi  wrote:

> Hi,
>
> What does "nodectl info" report on all hosts?
> Did you execute "refresh capabilities" after the update?
>
> Regards.
>
>
>
> Le jeu. 27 mai 2021 à 20:37, Jayme  a écrit :
>
>> I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
>> updated successfully and rebooted and are active. I notice that only one
>> host out of the three is actually running oVirt node 4.4.6 and the other
>> two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
>> available.
>>
>> Why are two hosts still running 4.4.5 after being successfully
>> upgraded/rebooted and how can I get them on 4.4.6 if no upgrades are being
>> found?
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN57DRLYE3OIOP7O3SPKH7P5SHB4XJRJ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GRRCKNB2DLKBHNHAUXXDO3DZ7YXVK2UJ/


[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread wodel youchi
Hi,

What does "nodectl info" report on all hosts?
Did you execute "refresh capabilities" after the update?

Regards.



Le jeu. 27 mai 2021 à 20:37, Jayme  a écrit :

> I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
> updated successfully and rebooted and are active. I notice that only one
> host out of the three is actually running oVirt node 4.4.6 and the other
> two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
> available.
>
> Why are two hosts still running 4.4.5 after being successfully
> upgraded/rebooted and how can I get them on 4.4.6 if no upgrades are being
> found?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN57DRLYE3OIOP7O3SPKH7P5SHB4XJRJ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X7RRYQP6EAFZDY4CNWX32WALW6HFXW2Q/


[ovirt-users] Re: Data Centers status Not Operational

2021-05-27 Thread nexpron
vdsmd has been started

service vdsmd start
vdsm: already running  [  OK  ]
vdsm start [  OK  ]

I checked /var/log/vdsm/vdsm.log (7 MB). The most repeated errors:

VM Channels Listener::DEBUG::2021-05-26 
15:01:01,741::vmChannels::128::vds::(_handle_unconnected) Trying to connect 
fileno 95.
GuestMonitor-SFTP-R::DEBUG::2021-05-26 
15:01:01,938::vm::645::vm.Vm::(_getDiskStats) 
vmId=`cc75abe4-94d3-44da-8a9c-f05a5c55f55f`::Disk hdc stats not available
GuestMonitor-SFTP-R::DEBUG::2021-05-26 
15:01:01,939::vm::645::vm.Vm::(_getDiskStats) 
vmId=`cc75abe4-94d3-44da-8a9c-f05a5c55f55f`::Disk vda stats not available
GuestMonitor-SFTP-R::DEBUG::2021-05-26 
15:01:01,939::vm::684::vm.Vm::(_getDiskLatency) 
vmId=`cc75abe4-94d3-44da-8a9c-f05a5c55f55f`::Disk hdc latency not available
GuestMonitor-SFTP-R::DEBUG::2021-05-26 
15:01:01,939::vm::684::vm.Vm::(_getDiskLatency) 
vmId=`cc75abe4-94d3-44da-8a9c-f05a5c55f55f`::Disk vda latency not available

VM Channels Listener::DEBUG::2021-05-26 
15:01:01,987::vmChannels::128::vds::(_handle_unconnected) Trying to connect 
fileno 95.
VM Channels Listener::DEBUG::2021-05-26 
15:01:02,988::vmChannels::128::vds::(_handle_unconnected) Trying to connect 
fileno 95.
BindingXMLRPC::ERROR::2021-05-26 
15:01:03,732::BindingXMLRPC::84::vds::(threaded_start) xml-rpc handler exception
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 80, in threaded_start
self.server.handle_request()
  File "/usr/lib64/python2.6/SocketServer.py", line 278, in handle_request
self._handle_request_noblock()
  File "/usr/lib64/python2.6/SocketServer.py", line 288, in 
_handle_request_noblock
request, client_address = self.get_request()
  File "/usr/lib64/python2.6/SocketServer.py", line 456, in get_request
return self.socket.accept()
  File "/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py", line 
136, in accept
raise SSL.SSLError("%s, client %s" % (e, address[0]))
SSLError: sslv3 alert certificate unknown, client 127.0.0.1
VM Channels Listener::DEBUG::2021-05-26 
15:01:03,742::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 91.
VM Channels Listener::DEBUG::2021-05-26 
15:01:03,743::vmChannels::128::vds::(_handle_unconnected) Trying to connect 
fileno 95.
VM Channels Listener::DEBUG::2021-05-26 
15:01:04,744::vmChannels::128::vds::(_handle_unconnected) Trying to connect 
fileno 95.

So I checked disks
df -h
df: `/rhev/data-center/mnt/10.10.0.56:_vm': Input/output error
FilesystemSize  Used Avail Use% Mounted on
/dev/mapper/vg_hv3-lv_root
   50G   21G   27G  44% /
tmpfs 7.8G   20K  7.8G   1% /dev/shm
/dev/sda1 485M   68M  393M  15% /boot
/dev/mapper/vg_hv3-lv_home
   49G   12G   35G  25% /home
/dev/mapper/vg_hv3-LogVol03
  445G  355G   68G  85% /vm
hv3.RCV-wmc.local:/ISO
   50G   21G   27G  44% 
/rhev/data-center/mnt/hv3.RCV-wmc.local:_ISO
hv3.RCV-wmc.local:/vm
  445G  355G   68G  85% 
/rhev/data-center/mnt/hv3.RCV-wmc.local:_vm
10.10.0.60:/vm489G   51G  438G  11% /rhev/data-center/mnt/10.10.0.60:_vm
10.10.0.28:/bacula457G  379G   79G  83% 
/rhev/data-center/mnt/10.10.0.28:_bacula
10.10.0.28:/BACKUP_oVirt
  457G  379G   79G  83% 
/rhev/data-center/mnt/10.10.0.28:_BACKUP__oVirt

I guess the line below is the real problem:

/rhev/data-center/mnt/10.10.0.56:_vm': Input/output error
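
A few checks against that mount might narrow it down (a sketch; the lazy
unmount step is an assumption, the safer route is to fix the export and let
oVirt re-activate the domain):

ping -c 3 10.10.0.56
showmount -e 10.10.0.56
# if the export is unreachable, the mount is a stale NFS handle; a lazy
# unmount clears the dead mount point:
umount -l '/rhev/data-center/mnt/10.10.0.56:_vm'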

I'm still digging :)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H7Q6B7D2BWXMDXQNEMQG4ASNA6HER22P/


[ovirt-users] After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-27 Thread Jayme
I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
updated successfully and rebooted and are active. I notice that only one
host out of the three is actually running oVirt node 4.4.6 and the other
two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
available.

Why are two hosts still running 4.4.5 after being successfully
upgraded/rebooted and how can I get them on 4.4.6 if no upgrades are being
found?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UN57DRLYE3OIOP7O3SPKH7P5SHB4XJRJ/


[ovirt-users] Re: Unable to migrate VMs

2021-05-27 Thread Jayme
The problem appears to be MTU related; I may have a network configuration
problem. Setting back to 1500 MTU seems to have solved it for now.
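
In case it helps anyone else hitting this, a quick way to check whether
jumbo frames really pass end-to-end on the migration network (a sketch; the
address below is just an example taken from the log paths, use the other
host's IP on that network):

ping -M do -s 8972 -c 3 10.11.0.9
# 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers; if this fails
# while the interfaces are set to 9000, something in the path (switch port,
# bond, bridge) is dropping jumbo frames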

On Thu, May 27, 2021 at 2:26 PM Jayme  wrote:

> I've gotten a bit further. I have a separate 10Gbe network for GlusterFS
> traffic which was also set as the migration network. I disabled migration
> on GlusterFS network and enabled on default management network and now
> migration seems to be working. I'm not sure why at this point, it used to
> work fine on GlusterFS migration network in the past.
>
> On Thu, May 27, 2021 at 2:11 PM Jayme  wrote:
>
>> I have a three node oVirt 4.4.5 cluster running oVirt node hosts. Storage
>> is mix of GlusterFS and NFS. Everything has been running smoothly, but the
>> other day I noticed many VMs had invalid snapshots. I run a script to
>> export OVA for VMs for backup purposes, exports seemed to have been fine
>> but snapshots failed to delete at the end. I was able to manually delete
>> the snapshots through oVirt admin GUI without any errors/warnings and the
>> VMs have been running fine and can restart them without problems.
>>
>> I thought this problem may be due to snapshot bug which is supposedly
>> fixed in oVirt 4.4.6. I decided to start upgrading cluster to 4.4.6 and am
>> now having a problem with VMs not being able to migrate.
>>
>> When I migrate any VM (it doesn't seem to matter which host to and from),
>> the process starts but stops at 0-1%. Eventually, after 15-30 minutes or
>> more, the tasks are all completed but the VM is not migrated.
>>
>> I am unable to migrate any VMs and as such I cannot place any host in
>> maintenance mode.
>>
>> I'm attaching some VDSM logs from the source and destination hosts; these
>> were captured after initiating a migration of a single VM.
>>
>> I'm seeing some errors in the logs regarding the migration stalling, but
>> I'm not able to determine why it's stalling.
>>
>> 2021-05-27 17:10:22,167+ INFO  (jsonrpc/4) [api.host] FINISH
>> getAllVmIoTunePolicies return={'status': {'code': 0, 'message': 'Done'},
>> 'io_tune_policies_dict': {'f8f4e4a1-b565-4663-8962-c8804dbb86fb':
>> {'policy': [], 'current_values': [{'name': 'vda', 'path':
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme1n1/bce04425-1d25-4489-bdab-2834a1a57db8/images/38b27cce-c744-4a12-85a3-3af07d386da2/93c1e793-f8cb-42c9-86a6-0e9ce4a6023a',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
>> '2b87204f-f695-474a-9f08-47b85fcac366': {'policy': [], 'current_values':
>> [{'name': 'sda', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/f2e0c9f3-ab0d-441a-85a6-07a42e78b5a8/848f353e-6787-4e20-ab7b-0541ebd852c6',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
>> '26332421-54a3-4afc-90e7-551a7e314c80': {'policy': [], 'current_values':
>> [{'name': 'vda', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/b7a785f9-307b-42af-9bbe-23cac884fe97/ed1d027e-a36a-4e6b-9207-119915044e06',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
>> '60edbd80-dad7-4bf8-8fd1-e138413cf9f6': {'policy': [], 'current_values':
>> [{'name': 'sda', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/535fcb2e-ece9-4d50-86fe-bf6264d11ae1/6c01a036-8a14-46ba-a4b4-fe4f66a586a3',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
>> 'sdb', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/1f467fb5-5ea7-42ba-bace-f175c86791b2/cbe8327f-9b7f-442f-a650-6888bb11a674',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
>> 'sdd', 'path': 
>> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/c93956d5-c88d-41f9-8c38-9f5f62cc90dd/3920b46c-5fab-4b63-b47f-2fa5c6714c36',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
>> 'beeefe06-78a0-4e14-a932-cc8d734d542d': {'policy': [], 'current_values':
>> [{'name': 'sda', 'path':
>> '/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/310d8b3e-d578-418d-9802-dc0ebcea06d6/aa758c51-8478-4273-aeef-d4b374b8d6b4',
>> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
>> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
>> 'sdb', 'path':
>> '/rhev/data-center/mnt/glusterSD/gluster

[ovirt-users] Re: Unable to migrate VMs

2021-05-27 Thread Jayme
I've gotten a bit further. I have a separate 10Gbe network for GlusterFS
traffic which was also set as the migration network. I disabled migration
on GlusterFS network and enabled on default management network and now
migration seems to be working. I'm not sure why at this point, it used to
work fine on GlusterFS migration network in the past.

On Thu, May 27, 2021 at 2:11 PM Jayme  wrote:

> I have a three node oVirt 4.4.5 cluster running oVirt node hosts. Storage
> is mix of GlusterFS and NFS. Everything has been running smoothly, but the
> other day I noticed many VMs had invalid snapshots. I run a script to
> export OVA for VMs for backup purposes, exports seemed to have been fine
> but snapshots failed to delete at the end. I was able to manually delete
> the snapshots through oVirt admin GUI without any errors/warnings and the
> VMs have been running fine and can restart them without problems.
>
> I thought this problem may be due to snapshot bug which is supposedly
> fixed in oVirt 4.4.6. I decided to start upgrading cluster to 4.4.6 and am
> now having a problem with VMs not being able to migrate.
>
> When I migrate any VM (it doesn't seem to matter which host to and from),
> the process starts but stops at 0-1%. Eventually, after 15-30 minutes or
> more, the tasks are all completed but the VM is not migrated.
>
> I am unable to migrate any VMs and as such I cannot place any host in
> maintenance mode.
>
> I'm attaching some VDSM logs from the source and destination hosts; these
> were captured after initiating a migration of a single VM.
>
> I'm seeing some errors in the logs regarding the migration stalling, but
> I'm not able to determine why it's stalling.
>
> 2021-05-27 17:10:22,167+ INFO  (jsonrpc/4) [api.host] FINISH
> getAllVmIoTunePolicies return={'status': {'code': 0, 'message': 'Done'},
> 'io_tune_policies_dict': {'f8f4e4a1-b565-4663-8962-c8804dbb86fb':
> {'policy': [], 'current_values': [{'name': 'vda', 'path':
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme1n1/bce04425-1d25-4489-bdab-2834a1a57db8/images/38b27cce-c744-4a12-85a3-3af07d386da2/93c1e793-f8cb-42c9-86a6-0e9ce4a6023a',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
> '2b87204f-f695-474a-9f08-47b85fcac366': {'policy': [], 'current_values':
> [{'name': 'sda', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/f2e0c9f3-ab0d-441a-85a6-07a42e78b5a8/848f353e-6787-4e20-ab7b-0541ebd852c6',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
> '26332421-54a3-4afc-90e7-551a7e314c80': {'policy': [], 'current_values':
> [{'name': 'vda', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/b7a785f9-307b-42af-9bbe-23cac884fe97/ed1d027e-a36a-4e6b-9207-119915044e06',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
> '60edbd80-dad7-4bf8-8fd1-e138413cf9f6': {'policy': [], 'current_values':
> [{'name': 'sda', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/535fcb2e-ece9-4d50-86fe-bf6264d11ae1/6c01a036-8a14-46ba-a4b4-fe4f66a586a3',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
> 'sdb', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/1f467fb5-5ea7-42ba-bace-f175c86791b2/cbe8327f-9b7f-442f-a650-6888bb11a674',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
> 'sdd', 'path': 
> '/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/c93956d5-c88d-41f9-8c38-9f5f62cc90dd/3920b46c-5fab-4b63-b47f-2fa5c6714c36',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
> 'beeefe06-78a0-4e14-a932-cc8d734d542d': {'policy': [], 'current_values':
> [{'name': 'sda', 'path':
> '/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/310d8b3e-d578-418d-9802-dc0ebcea06d6/aa758c51-8478-4273-aeef-d4b374b8d6b4',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
> 'sdb', 'path':
> '/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/4072fda1-ec82-45c9-b353-91fceb13bf08/891f5982-dead-48b4-8907-caa1e309fa82',
> 'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
> 'total_iops_sec': 0, 'write_

[ovirt-users] Unable to migrate VMs

2021-05-27 Thread Jayme
I have a three-node oVirt 4.4.5 cluster running oVirt Node hosts. Storage
is a mix of GlusterFS and NFS. Everything had been running smoothly, but the
other day I noticed many VMs had invalid snapshots. I run a script to
export OVAs of the VMs for backup purposes; the exports seemed to have been
fine, but the snapshots failed to delete at the end. I was able to manually
delete the snapshots through the oVirt admin GUI without any errors/warnings,
and the VMs have been running fine and can be restarted without problems.

I thought this problem might be due to the snapshot bug that is supposedly
fixed in oVirt 4.4.6. I decided to start upgrading the cluster to 4.4.6 and
am now having a problem with VMs not being able to migrate.

When I migrate any VM (it doesn't seem to matter which host to and from),
the process starts but stops at 0-1%. Eventually, after 15-30 minutes or
more, the tasks are all completed but the VM is not migrated.

I am unable to migrate any VMs and as such I cannot place any host in
maintenance mode.
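
For reference, the transfer can be watched from the source host (a sketch;
the VM name below is a placeholder, and read-only virsh access may or may
not allow this query on a node):

virsh -r domjobinfo myvm
# if "Data remaining" stays flat across repeated runs, the transfer is
# stalled rather than just slow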

I'm attaching some VDSM logs from the source and destination hosts; these
were captured after initiating a migration of a single VM.

I'm seeing some errors in the logs regarding the migration stalling, but I'm
not able to determine why it's stalling.

2021-05-27 17:10:22,167+ INFO  (jsonrpc/4) [api.host] FINISH
getAllVmIoTunePolicies return={'status': {'code': 0, 'message': 'Done'},
'io_tune_policies_dict': {'f8f4e4a1-b565-4663-8962-c8804dbb86fb':
{'policy': [], 'current_values': [{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme1n1/bce04425-1d25-4489-bdab-2834a1a57db8/images/38b27cce-c744-4a12-85a3-3af07d386da2/93c1e793-f8cb-42c9-86a6-0e9ce4a6023a',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'2b87204f-f695-474a-9f08-47b85fcac366': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/f2e0c9f3-ab0d-441a-85a6-07a42e78b5a8/848f353e-6787-4e20-ab7b-0541ebd852c6',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'26332421-54a3-4afc-90e7-551a7e314c80': {'policy': [], 'current_values':
[{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/b7a785f9-307b-42af-9bbe-23cac884fe97/ed1d027e-a36a-4e6b-9207-119915044e06',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'60edbd80-dad7-4bf8-8fd1-e138413cf9f6': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/535fcb2e-ece9-4d50-86fe-bf6264d11ae1/6c01a036-8a14-46ba-a4b4-fe4f66a586a3',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdb', 'path': 
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/1f467fb5-5ea7-42ba-bace-f175c86791b2/cbe8327f-9b7f-442f-a650-6888bb11a674',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdd', 'path': 
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme2n1/a7efa448-201b-4453-9bc9-900559b891ca/images/c93956d5-c88d-41f9-8c38-9f5f62cc90dd/3920b46c-5fab-4b63-b47f-2fa5c6714c36',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'beeefe06-78a0-4e14-a932-cc8d734d542d': {'policy': [], 'current_values':
[{'name': 'sda', 'path':
'/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/310d8b3e-d578-418d-9802-dc0ebcea06d6/aa758c51-8478-4273-aeef-d4b374b8d6b4',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}, {'name':
'sdb', 'path':
'/rhev/data-center/mnt/glusterSD/gluster0.grove.silverorange.com:_data__sdb/30fd0a2f-ab42-4a8a-8f0b-67242dc2d15d/images/4072fda1-ec82-45c9-b353-91fceb13bf08/891f5982-dead-48b4-8907-caa1e309fa82',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]},
'7e5156de-649d-4904-9092-21a699242a37': {'policy': [], 'current_values':
[{'name': 'vda', 'path':
'/rhev/data-center/mnt/10.11.0.9:_vmstorage_nvme0n1/a99cd663-f6d5-42d8-bd7a-ee0b5d068608/images/ca0c1208-a7aa-4ef6-a450-4a40bd4455f3/a2335199-ddd4-429b-b55d-f4d527081fd3',
'ioTune': {'total_bytes_sec': 0, 'read_bytes_sec': 0, 'write_bytes_sec': 0,
'total_iops_sec': 0, 'write_iops_sec': 0, 'read_iops_sec': 0}}]}}}
from=::1,35012 (api:54)
2021-05-27 17:10:31,118+ WARN  (migmon/7

[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-27 Thread Strahil Nikolov via Users
Verify that the engine's FQDN is resolvable and that the A and PTR records are OK.
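
A quick check along those lines (a sketch; the FQDN and address below are
placeholders for the real engine values):

dig +short engine.example.com A
dig +short -x 192.0.2.10
# forward and reverse answers should exist and match what was entered in
# the deployment wizard
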
Best Regards,
Strahil Nikolov
 
 
On Thu, May 27, 2021 at 17:57, Harry O wrote:
Nope, it didn't fix it, just typed in the wrong IP-address
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GTTUZS5YNBWSZF7KMB2X6XGX5AZYB5QX/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MWSDX7WWCW6II5GCZBKZIFFTGV26RTQK/


[ovirt-users] Re: Hosted-engine fail and host reboot

2021-05-27 Thread Dominique D
It seems to be this problem.

I tried to install it again with version 4.4.6-2021051809 and I get this
message:

[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "ERROR: 
Exception caught: org.fedoraproject.FirewallD1.Exception: ALREADY_ENABLED: 
'6900:tcp' already in 'public' Non-permanent operation"}
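
A possible way past that particular failure (a sketch, not verified on this
setup; the idea is simply to drop the duplicate runtime rule before
re-running the deployment):

firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --remove-port=6900/tcp
# then retry the hosted-engine deployment, which should re-add the port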
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DASEBWHNO2RT2QNH23KYD7ENNSLFWYLN/


[ovirt-users] Re: Ovirt 4.4.6.8-1 upload ova as template to multiple hosts

2021-05-27 Thread Arik Hadas
On Thu, May 27, 2021 at 6:19 PM Don Dupuis  wrote:

> It is the same script, I just renamed it. Now what I did works but it
> isn't the proper way to do it. I want to improve it before contributing it
> as I write the ovf data back out to a file and use pythons sed like
> functionality to modify the file and then read in the updated one to give
> to the engine. I just wanted to get something to work to get my project
> done.
>

Ack, yeah, it makes sense to change that.
I don't know if you've added an argument to the script that determines
whether or not the entity is imported as a clone, but it can also be handy
to import as a clone automatically when a VM/template with the same ID
already exists. That's something we used to do when importing from export
domains and we miss it for uploading from OVA.


>
> Don
>
> On Thu, May 27, 2021 at 9:35 AM Arik Hadas  wrote:
>
>>
>>
>> On Thu, May 27, 2021 at 4:24 PM Don Dupuis  wrote:
>>
>>> Arik
>>> Just to say thank you again for the pointers on what was needed to be
>>> done. I was able to modify that script to do what was needed and now it
>>> works like a champ.
>>>
>>
>> Awesome, glad to hear that.
>> Looking again at what you wrote below, you've mentioned the script is
>> named upload_ova_as_template.py. We've made some changes to that script and
>> renamed it to upload_ova_as_vm_or_template.py [1]. It would be great if you
>> could contribute your changes to it
>>
>> [1]
>> https://gerrit.ovirt.org/gitweb?p=ovirt-engine-sdk.git;a=blob;f=sdk/examples/upload_ova_as_vm_or_template.py;h=d6f40548b912577dc18a24d564f37d117a084d28;hb=HEAD
>>
>>
>>>
>>> Thanks
>>> Don
>>>
>>> On Mon, May 24, 2021 at 3:56 PM Don Dupuis  wrote:
>>>
 Arik
 Thanks for the info. My simple setup is just a base for bigger clusters
 that I have to do and there will be multiple templates that I need to
 install. I have python programming skills but just needed some simple
 pointing in the right direction on where to make the addition changes to
 the code. It takes a little bit of time to get the services and types
 correct for what you want to accomplish and how it is implemented.

 Don

 On Mon, May 24, 2021 at 2:51 PM Arik Hadas  wrote:

>
>
> On Mon, May 24, 2021 at 6:49 PM Don Dupuis  wrote:
>
>> Nudging to see if anyone has experience with this?
>>
>> Don
>>
>> On Wed, May 19, 2021 at 11:18 PM Don Dupuis 
>> wrote:
>>
>>> I have a single ovirt manager and 2 ovirt hosts, each has a local
>>> storage domain. I am using the upload_ova_as_template.py and my template
>>> upload works on a single host but not both. If I use the gui method, 
>>> there
>>> is the option of "clone" and putting in a new name for the template. 
>>> This
>>> seems to work most of the time, but it has failed a couple of times 
>>> also. I
>>> would like to add the same "clone" and "name" option to the
>>> upload_ova_as_temple.py. What is the best way to do this since I need to
>>> have unique UUIDs for the template disks? This is a unique setup in the
>>> fact that I can't use shared storage and this should be doable as I was
>>> able to do it in the ovirt gui.
>>>
>>
> If it's just a one-time operation, I'd rather try to create a VM out
> of the template that was imported successfully (with disk-provisioning =
> clone), create a second template out of it, remove the original template
> and then upload the template from the OVA to the other storage domain.
>
> Changing the script to obtain import as clone is also possible but it
> requires some programming skills - you'd need to either generate or 
> provide
> the script with different name for the template and UUIDs for the disks 
> and
> then (1) use the new UUIDs when uploading the disks and (2) change the OVF
> that is loaded from the OVA to have the new name and UUIDs before 
> providing
> the OVF to the engine.
>
>
>>
>>> Thanks
>>> Don
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U6Q5P7YKDBJARPPNFJOXAMP2AMKEJDNK/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UK6ASJ4NHZY7IUEYOCQPVSSIUTVFIVE2/


[ovirt-users] Re: Creating Snapshots failed

2021-05-27 Thread Liran Rotenberg
On Thu, May 27, 2021 at 6:05 PM jb  wrote:
>
> Hello Community,
>
> since I upgraded our cluster to oVirt 4.4.6.8-1.el8 I'm no longer able
> to create snapshots on certain VMs. For example, I have two Debian 10
> VMs; from one I can make a snapshot, and from the other one not.
>
> Both are up to date and use the same qemu-guest-agent versions.
>
> I tried to create snapshots via the API and in the web GUI; both give the
> same result.
>
> In the attachment you'll find a snippet from the engine.log.
Hi,
The error happened in VDSM (or even in the platform), but we need the VDSM
log to see what is wrong.
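
A sketch of how to pull the matching snippet on the host that ran the
snapshot (the correlation id is taken from the attached engine.log; the
assumption is that the same id shows up as the flow id in vdsm.log):

grep 660a5a41-c2b8-4369-a846-0e5c902a5219 /var/log/vdsm/vdsm.log
# or, more broadly, everything around the failure time:
journalctl -u vdsmd --since "2021-05-27 16:45" --until "2021-05-27 16:50"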

Regards,
Liran.
>
> Any help would be wonderful!
>
>
> Regards,
>
> Jonathan
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKZYLZTC5ZSENMFPOR45Y65F6THYUTZI/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K6MHBMFK7MIAK762FSFTRUBLQJSQY7ZT/


[ovirt-users] Re: Ovirt 4.4.6.8-1 upload ova as template to multiple hosts

2021-05-27 Thread Don Dupuis
It is the same script, I just renamed it. What I did now works, but it isn't
the proper way to do it. I want to improve it before contributing it, as I
currently write the OVF data back out to a file, use Python's sed-like
functionality to modify the file, and then read the updated one back in to
give to the engine. I just wanted to get something working to get my project
done.

Don

On Thu, May 27, 2021 at 9:35 AM Arik Hadas  wrote:

>
>
> On Thu, May 27, 2021 at 4:24 PM Don Dupuis  wrote:
>
>> Arik
>> Just to say thank you again for the pointers on what was needed to be
>> done. I was able to modify that script to do what was needed and now it
>> works like a champ.
>>
>
> Awesome, glad to hear that.
> Looking again at what you wrote below, you've mentioned the script is
> named upload_ova_as_template.py. We've made some changes to that script and
> renamed it to upload_ova_as_vm_or_template.py [1]. It would be great if you
> could contribute your changes to it
>
> [1]
> https://gerrit.ovirt.org/gitweb?p=ovirt-engine-sdk.git;a=blob;f=sdk/examples/upload_ova_as_vm_or_template.py;h=d6f40548b912577dc18a24d564f37d117a084d28;hb=HEAD
>
>
>>
>> Thanks
>> Don
>>
>> On Mon, May 24, 2021 at 3:56 PM Don Dupuis  wrote:
>>
>>> Arik
>>> Thanks for the info. My simple setup is just a base for bigger clusters
>>> that I have to do and there will be multiple templates that I need to
>>> install. I have python programming skills but just needed some simple
>>> pointing in the right direction on where to make the addition changes to
>>> the code. It takes a little bit of time to get the services and types
>>> correct for what you want to accomplish and how it is implemented.
>>>
>>> Don
>>>
>>> On Mon, May 24, 2021 at 2:51 PM Arik Hadas  wrote:
>>>


 On Mon, May 24, 2021 at 6:49 PM Don Dupuis  wrote:

> Nudging to see if anyone has experience with this?
>
> Don
>
> On Wed, May 19, 2021 at 11:18 PM Don Dupuis 
> wrote:
>
>> I have a single ovirt manager and 2 ovirt hosts, each has a local
>> storage domain. I am using the upload_ova_as_template.py and my template
>> upload works on a single host but not both. If I use the gui method, 
>> there
>> is the option of "clone" and putting in a new name for the template. This
>> seems to work most of the time, but it has failed a couple of times 
>> also. I
>> would like to add the same "clone" and "name" option to the
>> upload_ova_as_temple.py. What is the best way to do this since I need to
>> have unique UUIDs for the template disks? This is a unique setup in the
>> fact that I can't use shared storage and this should be doable as I was
>> able to do it in the ovirt gui.
>>
>
 If it's just a one-time operation, I'd rather try to create a VM out of
 the template that was imported successfully (with disk-provisioning =
 clone), create a second template out of it, remove the original template
 and then upload the template from the OVA to the other storage domain.

 Changing the script to obtain import as clone is also possible but it
 requires some programming skills - you'd need to either generate or provide
 the script with different name for the template and UUIDs for the disks and
 then (1) use the new UUIDs when uploading the disks and (2) change the OVF
 that is loaded from the OVA to have the new name and UUIDs before providing
 the OVF to the engine.


>
>> Thanks
>> Don
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U6Q5P7YKDBJARPPNFJOXAMP2AMKEJDNK/
>

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BZMJRKEQOH4UECDGUFPMDHGV45POGJEO/


[ovirt-users] Creating Snapshots failed

2021-05-27 Thread jb

Hello Community,

since I upgraded our cluster to oVirt 4.4.6.8-1.el8 I'm no longer able
to create snapshots on certain VMs. For example, I have two Debian 10
VMs; from one I can make a snapshot, and from the other one not.

Both are up to date and use the same qemu-guest-agent versions.

I tried to create snapshots via the API and in the web GUI; both give the
same result.

In the attachment you'll find a snippet from the engine.log.

Any help would be wonderful!


Regards,

Jonathan



2021-05-27 16:46:17,887+02 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (default task-1262) [660a5a41-c2b8-4369-a846-0e5c902a5219] Lock Acquired to object 'EngineLock:{exclusiveLocks='[c241d4ce-d2b8-4cca-8a81-0aa0e1017f35=VM]', sharedLocks=''}'
2021-05-27 16:46:17,901+02 INFO  [org.ovirt.engine.core.bll.memory.MemoryStorageHandler] (default task-1262) [660a5a41-c2b8-4369-a846-0e5c902a5219] The memory volumes of VM (name 'mariaDB', id 'c241d4ce-d2b8-4cca-8a81-0aa0e1017f35') will be stored in storage domain (name 'vmstore', id '3cf83851-1cc8-4f97-8960-08a60b9e25db')
2021-05-27 16:46:17,947+02 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] Running command: CreateSnapshotForVmCommand internal: false. Entities affected :  ID: c241d4ce-d2b8-4cca-8a81-0aa0e1017f35 Type: VMAction group MANIPULATE_VM_SNAPSHOTS with role type USER
2021-05-27 16:46:17,966+02 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] Running command: CreateSnapshotDiskCommand internal: true. Entities affected :  ID: c241d4ce-d2b8-4cca-8a81-0aa0e1017f35 Type: VMAction group MANIPULATE_VM_SNAPSHOTS with role type USER
2021-05-27 16:46:17,992+02 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] Running command: CreateSnapshotCommand internal: true. Entities affected :  ID: ---- Type: Storage
2021-05-27 16:46:18,016+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='c9baa5d4-3543-11eb-9c0c-00163e33f845', ignoreFailoverLimit='false', storageDomainId='3cf83851-1cc8-4f97-8960-08a60b9e25db', imageGroupId='ad23c0db-1838-4f1f-811b-2b213d3a11cd', imageSizeInBytes='21474836480', volumeFormat='COW', newImageId='e6e831f5-f033-4b29-b45c-de56909fcc3f', imageType='Sparse', newImageDescription='', imageInitialSizeInBytes='0', imageId='15259a3b-1065-4fb7-bc3c-04c5f4e14479', sourceImageGroupId='ad23c0db-1838-4f1f-811b-2b213d3a11cd', shouldAddBitmaps='false'}), log id: 75d3f7c8
2021-05-27 16:46:18,098+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] FINISH, CreateVolumeVDSCommand, return: e6e831f5-f033-4b29-b45c-de56909fcc3f, log id: 75d3f7c8
2021-05-27 16:46:18,104+02 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'a4f9fb29-aee4-4622-974c-19e31fe8a2e7'
2021-05-27 16:46:18,104+02 INFO  [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] CommandMultiAsyncTasks::attachTask: Attaching task 'fd7051de-83ff-4717-88ef-626926b334fe' to command 'a4f9fb29-aee4-4622-974c-19e31fe8a2e7'.
2021-05-27 16:46:18,120+02 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] Adding task 'fd7051de-83ff-4717-88ef-626926b334fe' (Parent Command 'CreateSnapshot', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2021-05-27 16:46:18,158+02 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] BaseAsyncTask::startPollingTask: Starting to poll task 'fd7051de-83ff-4717-88ef-626926b334fe'.
2021-05-27 16:46:18,199+02 INFO  [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] Running command: AddDiskCommand internal: true. Entities affected :  ID: 3cf83851-1cc8-4f97-8960-08a60b9e25db Type: StorageAction group CREATE_DISK with role type USER
2021-05-27 16:46:18,216+02 INFO  [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-213156) [660a5a41-c2b8-4369-a846-0e5c902a5219] Running command: AddImageFromScratchCommand internal: true. Ent
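
For comparison with the API path mentioned above, here is a minimal sketch of requesting a snapshot through the Python SDK (ovirtsdk4). The engine URL, credentials and CA file are placeholders, and the VM name is taken from the log above; this only reproduces the request and will not by itself explain why one VM fails while the other succeeds.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (URL, credentials and CA file are placeholders).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Look up the VM by name and request a snapshot that includes memory,
# matching the memory volumes shown in the engine.log above.
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=mariaDB')[0]
snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
snapshots_service.add(
    types.Snapshot(
        description='test snapshot',
        persist_memorystate=True,
    ),
)

connection.close()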

[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-27 Thread Harry O
Nope, it didn't fix it; I just typed in the wrong IP address.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GTTUZS5YNBWSZF7KMB2X6XGX5AZYB5QX/


[ovirt-users] Re: CPU Compatibility Problem after Upgrading Centos 8 Stream Host

2021-05-27 Thread Gunasekhar Kothapalli via Users
The issue was with the edk2-ovmf package update; after rolling that package
back, the host started recognizing the CPU and came up. Tested on one host and
it worked fine. 

 

 

 

Thanks & Regards, 
Gunasekhar Kothapalli



 

From: Nur Imam Febrianto  
Sent: Wednesday, May 26, 2021 9:54 PM
To: k.gunasek...@non.keysight.com; users@ovirt.org
Subject: RE: [ovirt-users] Re: CPU Compatibility Problem after Upgrading
Centos 8 Stream Host

 

CAUTION: This message originates from an external sender.

I have already tried several things; the kernel update doesn't seem to be what
causes this problem. I already tried yum update excluding the kernel, and the
issue still happened.

 

Thanks.

 

Regards,

Nur Imam Febrianto

 

From: k.gunasekhar--- via Users  
Sent: 26 May 2021 12:27
To: users@ovirt.org  
Subject: [ovirt-users] Re: CPU Compatibility Problem after Upgrading Centos
8 Stream Host

 

I also ran into the same problem today. How did you roll back with yum? I see
many yum updates in the yum history.

Here is what the error says. 

The host CPU does not match the Cluster CPU Type and is running in a
degraded mode. It is missing the following CPU flags: model_IvyBridge,
spec_ctrl. Please update the host CPU microcode or change the Cluster CPU
Type.
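
As a quick sanity check (just a sketch, not an oVirt tool): spec_ctrl is a raw CPU flag that should appear in /proc/cpuinfo once the microcode/kernel expose it, while model_IvyBridge is, as far as I understand, a model-level flag reported by VDSM rather than a cpuinfo entry. The following checks only the former on the host:

# Minimal sketch: does 'spec_ctrl' show up among the host CPU flags?
with open('/proc/cpuinfo') as f:
    flags = set()
    for line in f:
        if line.startswith('flags'):
            flags.update(line.split(':', 1)[1].split())
            break

print('spec_ctrl present:', 'spec_ctrl' in flags)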
___
Users mailing list -- users@ovirt.org  
To unsubscribe send an email to users-le...@ovirt.org
 
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7VQ7CABK4FIP4SLPNNPEVZSCM6DTIUAD/

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZTOZO4DO6F6LKHEJLY4HI7GSSZ2T54ZL/


[ovirt-users] Re: Ovirt 4.4.6.8-1 upload ova as template to multiple hosts

2021-05-27 Thread Arik Hadas
On Thu, May 27, 2021 at 4:24 PM Don Dupuis  wrote:

> Arik
> Just to say thank you again for the pointers on what was needed to be
> done. I was able to modify that script to do what was needed and now it
> works like a champ.
>

Awesome, glad to hear that.
Looking again at what you wrote below, you've mentioned the script is named
upload_ova_as_template.py. We've made some changes to that script and
renamed it to upload_ova_as_vm_or_template.py [1]. It would be great if you
could contribute your changes to it.

[1]
https://gerrit.ovirt.org/gitweb?p=ovirt-engine-sdk.git;a=blob;f=sdk/examples/upload_ova_as_vm_or_template.py;h=d6f40548b912577dc18a24d564f37d117a084d28;hb=HEAD
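
For anyone attempting the same modification: below is a minimal sketch of the rename/re-UUID step described further down in this thread, under the assumption that the OVF descriptor has already been read out of the OVA as a string and that the original template name and disk UUIDs have been parsed from it (the function and variable names here are made up for illustration). The returned mapping must then also be used when uploading the disk payloads.

import uuid

def clone_ovf(ovf_text, old_name, new_name, old_disk_ids):
    # Map every original disk UUID to a freshly generated one.
    id_map = {old: str(uuid.uuid4()) for old in old_disk_ids}
    # Give the template a new name and swap the disk UUIDs inside the OVF text.
    new_ovf = ovf_text.replace(old_name, new_name)
    for old, new in id_map.items():
        new_ovf = new_ovf.replace(old, new)
    # id_map is needed later so the disk uploads use the same new UUIDs.
    return new_ovf, id_map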


>
> Thanks
> Don
>
> On Mon, May 24, 2021 at 3:56 PM Don Dupuis  wrote:
>
>> Arik
>> Thanks for the info. My simple setup is just a base for bigger clusters
>> that I have to do and there will be multiple templates that I need to
>> install. I have python programming skills but just needed some simple
>> pointing in the right direction on where to make the addition changes to
>> the code. It takes a little bit of time to get the services and types
>> correct for what you want to accomplish and how it is implemented.
>>
>> Don
>>
>> On Mon, May 24, 2021 at 2:51 PM Arik Hadas  wrote:
>>
>>>
>>>
>>> On Mon, May 24, 2021 at 6:49 PM Don Dupuis  wrote:
>>>
 Nudging to see if anyone has experience with this?

 Don

 On Wed, May 19, 2021 at 11:18 PM Don Dupuis  wrote:

> I have a single ovirt manager and 2 ovirt hosts, each has a local
> storage domain. I am using the upload_ova_as_template.py and my template
> upload works on a single host but not both. If I use the gui method, there
> is the option of "clone" and putting in a new name for the template. This
> seems to work most of the time, but it has failed a couple of times also. 
> I
> would like to add the same "clone" and "name" option to the
> upload_ova_as_template.py. What is the best way to do this since I need to
> have unique UUIDs for the template disks? This is a unique setup in the
> fact that I can't use shared storage and this should be doable as I was
> able to do it in the ovirt gui.
>

>>> If it's just a one-time operation, I'd rather try to create a VM out of
>>> the template that was imported successfully (with disk-provisioning =
>>> clone), create a second template out of it, remove the original template
>>> and then upload the template from the OVA to the other storage domain.
>>>
>>> Changing the script to obtain import as clone is also possible but it
>>> requires some programming skills - you'd need to either generate or provide
>>> the script with different name for the template and UUIDs for the disks and
>>> then (1) use the new UUIDs when uploading the disks and (2) change the OVF
>>> that is loaded from the OVA to have the new name and UUIDs before providing
>>> the OVF to the engine.
>>>
>>>

> Thanks
> Don
>
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/U6Q5P7YKDBJARPPNFJOXAMP2AMKEJDNK/

>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M2ORTP6NNIU5ZB25WP3Q6KF2ITHTLGXM/


[ovirt-users] Re: Ovirt 4.4.6.8-1 upload ova as template to multiple hosts

2021-05-27 Thread Don Dupuis
Arik
Just to say thank you again for the pointers on what was needed to be done.
I was able to modify that script to do what was needed and now it works
like a champ.

Thanks
Don

On Mon, May 24, 2021 at 3:56 PM Don Dupuis  wrote:

> Arik
> Thanks for the info. My simple setup is just a base for bigger clusters
> that I have to do and there will be multiple templates that I need to
> install. I have python programming skills but just needed some simple
> pointing in the right direction on where to make the addition changes to
> the code. It takes a little bit of time to get the services and types
> correct for what you want to accomplish and how it is implemented.
>
> Don
>
> On Mon, May 24, 2021 at 2:51 PM Arik Hadas  wrote:
>
>>
>>
>> On Mon, May 24, 2021 at 6:49 PM Don Dupuis  wrote:
>>
>>> Nudging to see if anyone has experience with this?
>>>
>>> Don
>>>
>>> On Wed, May 19, 2021 at 11:18 PM Don Dupuis  wrote:
>>>
 I have a single ovirt manager and 2 ovirt hosts, each has a local
 storage domain. I am using the upload_ova_as_template.py and my template
 upload works on a single host but not both. If I use the gui method, there
 is the option of "clone" and putting in a new name for the template. This
 seems to work most of the time, but it has failed a couple of times also. I
 would like to add the same "clone" and "name" option to the
 upload_ova_as_template.py. What is the best way to do this since I need to
 have unique UUIDs for the template disks? This is a unique setup in the
 fact that I can't use shared storage and this should be doable as I was
 able to do it in the ovirt gui.

>>>
>> If it's just a one-time operation, I'd rather try to create a VM out of
>> the template that was imported successfully (with disk-provisioning =
>> clone), create a second template out of it, remove the original template
>> and then upload the template from the OVA to the other storage domain.
>>
>> Changing the script to obtain import as clone is also possible but it
>> requires some programming skills - you'd need to either generate or provide
>> the script with different name for the template and UUIDs for the disks and
>> then (1) use the new UUIDs when uploading the disks and (2) change the OVF
>> that is loaded from the OVA to have the new name and UUIDs before providing
>> the OVF to the engine.
>>
>>
>>>
 Thanks
 Don

>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U6Q5P7YKDBJARPPNFJOXAMP2AMKEJDNK/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FQGGOO5VLIJW5SEM4ETR2Q3XCDKP2OMP/


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-27 Thread Harry O
Hi,

This did the job for the first issue, thanks.
vi  /usr/lib/systemd/system/cockpit.service
ExecStart=/usr/libexec/cockpit-tls --idle-timeout 180

Now I get this:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if Engine IP is different 
from engine's he_fqdn resolved IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM 
IP address is while the engine's he_fqdn hej.5ervers.lan resolves to 
192.168.4.144. If you are using DHCP, check your DHCP reservation 
configuration"}

But the information is correct.
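
In case it helps narrow this down, a small sketch (plain Python, values taken from the error above) of checking what the he_fqdn resolves to on the deployment host. Note that the engine VM's own IP appears to be empty in the error message, which suggests the deployment could not determine the address the VM obtained (via DHCP, per the hint in the error):

import socket

# hej.5ervers.lan is the he_fqdn from the error message above.
fqdn = 'hej.5ervers.lan'
addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(fqdn, None)})
print(fqdn, 'resolves to:', addrs)
# The deployment expects this to match the IP the engine VM actually
# obtained; if it cannot detect that IP at all, the comparison fails.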
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2JFMPDSNVFQNHBMYGDSMCSKLXU7HUFZK/


[ovirt-users] what does dwh db contain?

2021-05-27 Thread Nathanaël Blanchet

Hello,

I need to find out which user removed a VM 2 months ago, but I can't 
recover that information from engine.log (the default rotation is 20).


I took a quick look at the DWH database: I can find the removal 
date, but not the associated user.


Is there a way to get this information?

--
Nathanaël Blanchet

Network supervision
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B4WRS5FRMNWFXRIMGKJUYA7PLNNTGC7Z/


[ovirt-users] ovirt4.4 vm with windows 2008r2 bsod19

2021-05-27 Thread grig . 4n
After migrating from oVirt 4.1 to oVirt 4.4, virtual machines with Windows 
2008 R2 crash with BSOD 0x0019.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZLSMUE4I2BBZFLIVVRWOUJS4ZYP6AVMH/