[ovirt-users] Re: oVirt Node 4.5.5 web login

2024-02-23 Thread Ismet Sonmez
It's the root user.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5S3YQ75HVOTJZSUVR56CHT2OHJJJMAYN/


[ovirt-users] Re: oVirt Node 4.5.5 web login

2024-02-20 Thread antonio . riggio
I am having the same problem too. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KXLIQIUOIXE5NE27T3GHAL6NSK5EYPP3/


[ovirt-users] Re: ovirt node 4.5 is not working on esxi8 on my lab

2023-07-20 Thread Jorge Visentini
Hi there.

I believe the devs have removed this option.
Now deployment is command-line only, using *hosted-engine --deploy --4*.
So, open a session using tmux and deploy.
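A minimal sketch of that flow on the node (the session name is an example):

```
# Start a persistent session so the deployment survives an SSH disconnect
tmux new -s he-deploy
# Deploy the hosted engine, forcing IPv4 (use --6 instead for IPv6-only setups)
hosted-engine --deploy --4
```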

Be happy.

On Thu, Jul 20, 2023 at 13:27,  wrote:

> Hello there,
>
> May I ask why, after installing the latest ovirt node 4.5 from ISO on my ESXi 8
> and logging in to the web interface to manage this host, I cannot find the
> Virtualization menu? When I tested on 4.4.6, everything worked. Do you
> have any idea?
>
> Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WLICKYZLJON2BYBPYHFPGSDDZF7VWEIP/
>


-- 
Att,
Jorge Visentini
+55 55 98432-9868
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/47CZK5RW2OP66JVXPT277VH6VBITTFEH/


[ovirt-users] Re: ovirt node

2023-05-22 Thread Volenbovskyi, Konstantin
Hi,
It is something like 'the IPv4/IPv6 addresses that result from DNS resolution
of the hostname should be identical to the interface configuration (ip a)'.
My guess would be that the DNS server that resolves ovirt.bee.moitel.local
doesn't have AAAA records.
I think that oVirt 4.5 addresses it by forcing you to use --4 or --6 in
hosted-engine deploy.
In oVirt 4.4 this is optional, but I think that using --4 in your deployment
command might resolve it for you.

Another way forward is to disable IPv6 on your interface.
Check out:
https://github.com/oVirt/ovirt-ansible-collection/blob/master/roles/hosted_engine_setup/tasks/pre_checks/002_validate_hostname_tasks.yml
(this is from master; you can find the file locally on your deployment host in
case of any doubts about differences between the ovirt 4.4 version and master)
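A hedged sketch of how to compare the two on the host (hostname and interface
are taken from the error below; the nmcli connection name is assumed to match
the interface):

```
# What DNS resolves for the hostname...
getent ahosts ovirt.bee.moitel.local
# ...must exactly match the addresses on the management interface
ip -br addr show ens192
# Alternative mentioned above: disable IPv6 on the interface
# (assumes the NetworkManager connection is named after the interface)
nmcli con mod ens192 ipv6.method disabled
```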

BR,
Konstantin

On 22.05.23 at 07:46, "skhurtsil...@cellfie.ge" wrote:


Hello guys,
I installed oVirt Node 4.4 and I want to deploy the Hosted Engine, but I get
this error:


[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure the resolved address 
resolves only on the selected interface]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "hostname 
'ovirt.bee.moitel.local' doesn't uniquely match the interface 'ens192' selected 
for the management bridge; it matches also interface with IP 
['fe80::9a5b:2039:fe49:5252', '192.168.222.1', 'fd00:1234:5678:900::1']. Please 
make sure that the hostname got from the interface for the management network 
resolves only there.\n"}


How can I fix this error?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MFW6ESTNTQPRL4Y6WWZ4EZHSEITQ334B/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MPGVIMEJRGG2PC5QMOAW6MDQB3PR4D5U/


[ovirt-users] Re: Ovirt node disk broken

2023-05-09 Thread marcel
Hi all,

Has no one had a single-node install where the system disk is broken?

I have reinstalled the node, attached the old gluster drives with
"force", and deployed the hosted engine.
Later I added the gluster volumes to the engine, and I could see the VMs that
I can import.

This I have done, but the disks of the VMs were only 64 MB, which is just the
first shard file; the link from the first file to the remaining shard files
is broken, I assume.
Where does GlusterFS store the shard file information? I think it is not
possible to back it up, because a VM disk can have a lot of shards, and also
new shard files created after the backup.
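For the last question: with sharding enabled, GlusterFS keeps the shards in a
hidden .shard directory at the root of each brick, named by the base file's
GFID plus a shard index. A hedged sketch (the brick path is an example):

```
# Shards are stored as <base-file-gfid>.<N> under the brick's hidden .shard dir
ls /gluster_bricks/data/data/.shard | head
# The base file's GFID can be read from its trusted.gfid extended attribute
getfattr -n trusted.gfid -e hex /gluster_bricks/data/data/path/to/disk/image
```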

Br
Marcel
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SFE4DBUIOXMMPB3MQ6OYHMMMND3GERX4/


[ovirt-users] Re: Ovirt-node don't communicate with qemu-agent

2022-12-27 Thread Christoph Timm

Hi Arik,

I have checked the installed version of VDSM:
vdsm.x86_64 4.50.3.4-1.el8 @centos-ovirt45

I also refreshed the repos and don't see any update for VDSM. It looks
like this version has not been released yet.


Best regards
Christoph

On 27.12.22 at 12:25, Christoph Timm wrote:

No, this is on none of my 6 hosts.

/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py

    def _qga_call_get_vcpus(self, vm):
        try:
            self.log.debug('Requesting guest CPU info for vm=%s', vm.id)
            with vm.qga_context(_COMMAND_TIMEOUT):
                vcpus = QemuGuestAgentDomain(vm).guestVcpus()
        except (exception.NonResponsiveGuestAgent,
                libvirt.libvirtError) as e:
            self.log.info('Failed to get guest CPU info for vm=%s, error: %s',
                          vm.id, e)
            self.set_failure(vm.id)
            return {}
        except virdomain.NotConnectedError:
            self.log.debug(
                'Not querying QEMU-GA for guest CPU info because domain '
                'is not running for vm-id=%s', vm.id)
            return {}
        # Note: guestVcpus() can return None, in which case the 'in' check
        # below raises the TypeError seen in the traceback in this thread.
        if 'online' in vcpus:
            count = len(taskset.cpulist_parse(vcpus['online']))
        else:
            count = -1
        return {'guestCPUCount': count}


On 27.12.22 at 12:17, Arik Hadas wrote:



On Tue, Dec 27, 2022 at 12:52 PM Christoph Timm  wrote:

Not for me with 4.5.4-1.el8


Do you see the same error in the vdsm log?
The stacktrace below suggests that the fix is not included there, see:
https://github.com/oVirt/vdsm/blob/v4.50.4.1/lib/vdsm/virt/qemuguestagent.py#L797 




On 27.12.22 at 11:22, Arik Hadas wrote:



On Tue, Dec 27, 2022 at 11:50 AM Christoph Timm
 wrote:

Hi Fernando,

I have also from time to time this issue.

I can see the following in the vdsm.log if the issue occurs:

2022-12-27 10:38:22,473+0100 ERROR (qgapoller/3) [virt.periodic.Operation]
<...> operation failed (periodic:187)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 185, in __call__
    self._func()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 476, in _poller
    vm_id, self._qga_call_get_vcpus(vm_obj))
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 797, in _qga_call_get_vcpus
    if 'online' in vcpus:
TypeError: argument of type 'NoneType' is not iterable


This should have been resolved by
https://github.com/oVirt/vdsm/pull/350



I had this in older versions too, so it is nothing new for me.
Sometimes I can solve it by putting the host into maintenance,
but it comes back after a while.


Best regards
Christoph

On 19.12.22 at 19:22, Fernando Hallberg wrote:

Hi all,

I reinstalled one of the ovirt-nodes, with ovirt-4.5.4, and
after the reinstallation the agents of the vms connected to
this node cannot communicate with the ovirt-engine.

ovirt-engine 4.5.4

any idea?

VMs work perfectly, but the agent doesn't communicate.

Best regards,

Fernando Hallberg



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCCPQWTYU4D2EZFNRO77FNAKVFKWKFDA/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/LHQSH7AIOPVGG5IDANH4LBQALJRPW7KR/






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QHR2YGNY4HNZU227ID2CKRF5ZUXLLJB5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Ovirt-node don't communicate with qemu-agent

2022-12-27 Thread Christoph Timm

No, this is on none of my 6 hosts.

/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py

    def _qga_call_get_vcpus(self, vm):
        try:
            self.log.debug('Requesting guest CPU info for vm=%s', vm.id)
            with vm.qga_context(_COMMAND_TIMEOUT):
                vcpus = QemuGuestAgentDomain(vm).guestVcpus()
        except (exception.NonResponsiveGuestAgent,
                libvirt.libvirtError) as e:
            self.log.info('Failed to get guest CPU info for vm=%s, error: %s',
                          vm.id, e)
            self.set_failure(vm.id)
            return {}
        except virdomain.NotConnectedError:
            self.log.debug(
                'Not querying QEMU-GA for guest CPU info because domain '
                'is not running for vm-id=%s', vm.id)
            return {}
        # Note: guestVcpus() can return None, in which case the 'in' check
        # below raises the TypeError seen in the traceback in this thread.
        if 'online' in vcpus:
            count = len(taskset.cpulist_parse(vcpus['online']))
        else:
            count = -1
        return {'guestCPUCount': count}


On 27.12.22 at 12:17, Arik Hadas wrote:



On Tue, Dec 27, 2022 at 12:52 PM Christoph Timm  wrote:

Not for me with 4.5.4-1.el8


Do you see the same error in the vdsm log?
The stacktrace below suggests that the fix is not included there, see:
https://github.com/oVirt/vdsm/blob/v4.50.4.1/lib/vdsm/virt/qemuguestagent.py#L797 




On 27.12.22 at 11:22, Arik Hadas wrote:



On Tue, Dec 27, 2022 at 11:50 AM Christoph Timm  wrote:

Hi Fernando,

I have also from time to time this issue.

I can see the following in the vdsm.log if the issue occurs:

2022-12-27 10:38:22,473+0100 ERROR (qgapoller/3) [virt.periodic.Operation]
<...> operation failed (periodic:187)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 185, in __call__
    self._func()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 476, in _poller
    vm_id, self._qga_call_get_vcpus(vm_obj))
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 797, in _qga_call_get_vcpus
    if 'online' in vcpus:
TypeError: argument of type 'NoneType' is not iterable


This should have been resolved by
https://github.com/oVirt/vdsm/pull/350



I had this in older versions too, so it is nothing new for me.
Sometimes I can solve it by putting the host into maintenance,
but it comes back after a while.


Best regards
Christoph

On 19.12.22 at 19:22, Fernando Hallberg wrote:

Hi all,

I reinstalled one of the ovirt-nodes, with ovirt-4.5.4, and
after the reinstallation the agents of the vms connected to
this node cannot communicate with the ovirt-engine.

ovirt-engine 4.5.4

any idea?

VMs work perfectly, but the agent doesn't communicate.

Best regards,

Fernando Hallberg



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCCPQWTYU4D2EZFNRO77FNAKVFKWKFDA/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/LHQSH7AIOPVGG5IDANH4LBQALJRPW7KR/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QHR2YGNY4HNZU227ID2CKRF5ZUXLLJB5/


[ovirt-users] Re: Ovirt-node don't communicate with qemu-agent

2022-12-27 Thread Arik Hadas
On Tue, Dec 27, 2022 at 12:52 PM Christoph Timm  wrote:

> Not for me with 4.5.4-1.el8
>

Do you see the same error in the vdsm log?
The stacktrace below suggests that the fix is not included there, see:
https://github.com/oVirt/vdsm/blob/v4.50.4.1/lib/vdsm/virt/qemuguestagent.py#L797



>
> On 27.12.22 at 11:22, Arik Hadas wrote:
>
>
>
> On Tue, Dec 27, 2022 at 11:50 AM Christoph Timm  wrote:
>
>> Hi Fernando,
>>
>> I have also from time to time this issue.
>>
>> I can see the following in the vdsm.log if the issue occurs:
>>
>> 2022-12-27 10:38:22,473+0100 ERROR (qgapoller/3) [virt.periodic.Operation]
>> <...> operation failed (periodic:187)
>> Traceback (most recent call last):
>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 185, in __call__
>>     self._func()
>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 476, in _poller
>>     vm_id, self._qga_call_get_vcpus(vm_obj))
>>   File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 797, in _qga_call_get_vcpus
>>     if 'online' in vcpus:
>> TypeError: argument of type 'NoneType' is not iterable
>>
>
> This should have been resolved by https://github.com/oVirt/vdsm/pull/350
>
>
>>
>>
>> I had this in older versions too, so it is nothing new for me.
>> Sometimes I can solve it by putting the host into maintenance, but it
>> comes back after a while.
>>
>
>> Best regards
>> Christoph
>>
>> On 19.12.22 at 19:22, Fernando Hallberg wrote:
>>
>> Hi all,
>>
>> I reinstalled one of the ovirt-nodes, with ovirt-4.5.4, and after the
>> reinstallation the agents of the vms connected to this node cannot
>> communicate with the ovirt-engine.
>>
>> ovirt-engine 4.5.4
>>
>> any idea?
>>
>> VMs work perfectly, but the agent doesn't communicate.
>>
>> Best regards,
>>
>> Fernando Hallberg
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCCPQWTYU4D2EZFNRO77FNAKVFKWKFDA/
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LHQSH7AIOPVGG5IDANH4LBQALJRPW7KR/
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGSEZKLS3IRUEU3XSTQU6MBNVEIYQOK7/


[ovirt-users] Re: Ovirt-node don't communicate with qemu-agent

2022-12-27 Thread Christoph Timm

Not for me with 4.5.4-1.el8

On 27.12.22 at 11:22, Arik Hadas wrote:



On Tue, Dec 27, 2022 at 11:50 AM Christoph Timm  wrote:

Hi Fernando,

I have also from time to time this issue.

I can see the following in the vdsm.log if the issue occurs:

2022-12-27 10:38:22,473+0100 ERROR (qgapoller/3) [virt.periodic.Operation]
<...> operation failed (periodic:187)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 185, in __call__
    self._func()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 476, in _poller
    vm_id, self._qga_call_get_vcpus(vm_obj))
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 797, in _qga_call_get_vcpus
    if 'online' in vcpus:
TypeError: argument of type 'NoneType' is not iterable


This should have been resolved by https://github.com/oVirt/vdsm/pull/350



I had this in older versions too, so it is nothing new for me.
Sometimes I can solve it by putting the host into maintenance, but
it comes back after a while.


Best regards
Christoph

On 19.12.22 at 19:22, Fernando Hallberg wrote:

Hi all,

I reinstalled one of the ovirt-nodes, with ovirt-4.5.4, and after
the reinstallation the agents of the vms connected to this node
cannot communicate with the ovirt-engine.

ovirt-engine 4.5.4

any idea?

VMs work perfectly, but the agent doesn't communicate.

Best regards,

Fernando Hallberg



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCCPQWTYU4D2EZFNRO77FNAKVFKWKFDA/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/LHQSH7AIOPVGG5IDANH4LBQALJRPW7KR/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJB5PKYSNFWKG47P4GKHYPJVM5DVP37I/


[ovirt-users] Re: Ovirt-node don't communicate with qemu-agent

2022-12-27 Thread Arik Hadas
On Tue, Dec 27, 2022 at 11:50 AM Christoph Timm  wrote:

> Hi Fernando,
>
> I have also from time to time this issue.
>
> I can see the following in the vdsm.log if the issue occurs:
>
> 2022-12-27 10:38:22,473+0100 ERROR (qgapoller/3) [virt.periodic.Operation]
> <...> operation failed (periodic:187)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 185, in __call__
>     self._func()
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 476, in _poller
>     vm_id, self._qga_call_get_vcpus(vm_obj))
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 797, in _qga_call_get_vcpus
>     if 'online' in vcpus:
> TypeError: argument of type 'NoneType' is not iterable
>

This should have been resolved by https://github.com/oVirt/vdsm/pull/350


>
>
> I had this in older versions too, so it is nothing new for me.
> Sometimes I can solve it by putting the host into maintenance, but it
> comes back after a while.
>

> Best regards
> Christoph
>
> On 19.12.22 at 19:22, Fernando Hallberg wrote:
>
> Hi all,
>
> I reinstalled one of the ovirt-nodes, with ovirt-4.5.4, and after the
> reinstallation the agents of the vms connected to this node cannot
> communicate with the ovirt-engine.
>
> ovirt-engine 4.5.4
>
> any idea?
>
> VMs work perfectly, but the agent doesn't communicate.
>
> Best regards,
>
> Fernando Hallberg
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCCPQWTYU4D2EZFNRO77FNAKVFKWKFDA/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LHQSH7AIOPVGG5IDANH4LBQALJRPW7KR/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z4SBQ6QIQPYSJJPMU6EZXCIWEVJOQW53/


[ovirt-users] Re: Ovirt-node don't communicate with qemu-agent

2022-12-27 Thread Christoph Timm

Hi Fernando,

I have also from time to time this issue.

I can see the following in the vdsm.log if the issue occurs:

2022-12-27 10:38:22,473+0100 ERROR (qgapoller/3) [virt.periodic.Operation]
<... at 0x7fbdf00ced68>> operation failed (periodic:187)

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 185, in __call__
    self._func()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 476, in _poller
    vm_id, self._qga_call_get_vcpus(vm_obj))
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 797, in _qga_call_get_vcpus
    if 'online' in vcpus:
TypeError: argument of type 'NoneType' is not iterable

I had this in older versions too, so it is nothing new for me.
Sometimes I can solve it by putting the host into maintenance, but it
comes back after a while.


Best regards
Christoph

On 19.12.22 at 19:22, Fernando Hallberg wrote:

Hi all,

I reinstalled one of the ovirt-nodes, with ovirt-4.5.4, and after the 
reinstallation the agents of the vms connected to this node cannot 
communicate with the ovirt-engine.


ovirt-engine 4.5.4

any idea?

VMs work perfectly, but the agent doesn't communicate.

Best regards,

Fernando Hallberg



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCCPQWTYU4D2EZFNRO77FNAKVFKWKFDA/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LHQSH7AIOPVGG5IDANH4LBQALJRPW7KR/


[ovirt-users] Re: oVirt node 4.5 now load QLogic 10gb interface.

2022-12-26 Thread dhanaraj.ramesh--- via Users
My suggestion: check that the driver is supported by the current OS; otherwise
raise a case with the vendor to get the right driver.
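One way to do that check from the node itself (driver names are assumptions;
QLogic 10Gb NICs commonly use the qede or bnx2x modules):

```
# Show each NIC together with the kernel driver currently bound to it
lspci -nnk | grep -iA3 ethernet
# Confirm the driver module exists for the running kernel
modinfo qede | head -n 5
```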
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OEF2YVRVISK7LGMVQCP2PCU4N3FEKWKM/


[ovirt-users] Re: ovirt node Emulex network

2022-12-19 Thread matthew.st...@fujitsu.com
Elrepo also has packages for the drivers that RedHat decided to remove.

Of course, you could just purchase some supported Network Interface Cards.

-Original Message-
From: Darrell Budic  
Sent: Monday, December 19, 2022 5:41 PM
To: parallax 
Cc: users 
Subject: [ovirt-users] Re: ovirt node Emulex network

RedHat has dropped support for this, so I don't know if there's any way to fix
it for the node images short of respinning your own that includes the driver. I
use AlmaLinux 8 hosts with the elrepo kernel-lt installed, which includes those
drivers, myself.

   -Darrell

> On Dec 19, 2022, at 5:49 AM, parallax  wrote:
> 
> ovirt-node-ng-installer-4.4.10-2022030308.el8
> 
> ovirt-node-ng-installer-4.4.10-2022030308
> 
> can't recognize network interfaces:
> 
> 06:00.0 Ethernet controller [0200]: Emulex Corporation OneConnect 10Gb 
> NIC (be3) [19a2:0710] (rev 01)
> 06:00.1 Ethernet controller [0200]: Emulex Corporation OneConnect 10Gb 
> NIC (be3) [19a2:0710] (rev 01)
> 06:00.4 Ethernet controller [0200]: Emulex Corporation OneConnect 10Gb 
> NIC (be3) [19a2:0710] (rev 01)
> 06:00.5 Ethernet controller [0200]: Emulex Corporation OneConnect 10Gb 
> NIC (be3) [19a2:0710] (rev 01)
> 
> anyone know how to fix it?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2QWLS5ZXGDUYYWXMZREQG7PLP4MGTZP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IAVVMYGRO5UPKXWEHCBU5UAVOKAXXJYA/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZMLPCKNO25F4WQ666DWNXTA6JMHLYSUR/


[ovirt-users] Re: ovirt node Emulex network

2022-12-19 Thread Darrell Budic
RedHat has dropped support for this, so I don't know if there's any way to fix
it for the node images short of respinning your own that includes the driver. I
use AlmaLinux 8 hosts with the elrepo kernel-lt installed, which includes those
drivers, myself.
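A rough sketch of that host setup (the elrepo release package URL is the
standard one for el8; verify it before use):

```
# On an AlmaLinux 8 host: enable elrepo and install the long-term kernel,
# which still carries the Emulex be2net driver
dnf -y install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
dnf -y --enablerepo=elrepo-kernel install kernel-lt
reboot
```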

   -Darrell

> On Dec 19, 2022, at 5:49 AM, parallax  wrote:
> 
> ovirt-node-ng-installer-4.4.10-2022030308.el8
> 
> ovirt-node-ng-installer-4.4.10-2022030308 
> 
> can't recognize network interfaces:
> 
> 06:00.0 Ethernet controller [0200]: Emulex Corporation OneConnect 10Gb NIC 
> (be3) [19a2:0710] (rev 01)
> 06:00.1 Ethernet controller [0200]: Emulex Corporation OneConnect 10Gb NIC 
> (be3) [19a2:0710] (rev 01)
> 06:00.4 Ethernet controller [0200]: Emulex Corporation OneConnect 10Gb NIC 
> (be3) [19a2:0710] (rev 01)
> 06:00.5 Ethernet controller [0200]: Emulex Corporation OneConnect 10Gb NIC 
> (be3) [19a2:0710] (rev 01)
> 
> anyone know how to fix it?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2QWLS5ZXGDUYYWXMZREQG7PLP4MGTZP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IAVVMYGRO5UPKXWEHCBU5UAVOKAXXJYA/


[ovirt-users] Re: ovirt-node 4.5 and engine 4.4

2022-11-10 Thread Michal Skrivanek


> On 8. 11. 2022, at 12:58, Nathanaël Blanchet via Users  
> wrote:
> 
> Hello,
> 
> I'm planning to upgrade the ovirt engine to 4.5, but I now need to install
> additional hosts. Is it safe to directly install my new hosts with
> ovirt-node 4.5 and attach them to the 4.4 engine?

Generally yes.
We keep compatibility with 4.2, though we do not really test older engines.
But if it manages to deploy, then it should be all good... and hopefully with
fewer bugs than the old 4.4 node.

> 
> Thank you
> 
> -- 
> Nathanaël Blanchet
> 
> Supervision réseau
> SIRE
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C2XMBGPBAFI5YNTEQ4AWDWQLCKXOJMPD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4JZSDCFXJREOUXFJBD4EVKRCPZXPZFJF/


[ovirt-users] Re: Ovirt node Virus scan

2022-04-29 Thread Strahil Nikolov via Users
You can install software via rpm-ostree. Yet, if this was RHV, you would
lose support if you install additional software. Keep in mind that oVirt nodes
are more an appliance than a regular Linux system, and such security rules
should not affect appliances.
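If ClamAV is what ends up installed that way, a minimal sketch of the
daemon-less, report-generating scan described below (paths and schedule are
examples):

```
# One-shot recursive scan, log infected files only; no clamd daemon required
clamscan -r -i --exclude-dir='^/(sys|proc|dev)' \
  --log=/var/log/clamscan-$(date +%Y-%m).log /
# Note: signature updates would need an offline freshclam mirror,
# since the nodes have no internet connection
```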

Best Regards,
Strahil Nikolov
 
 
On Fri, Apr 29, 2022 at 17:52, marcel d'heureuse wrote:
Hi,

I have to scan our 18 nodes with a virus scanner and provide the
report to our ITO.

These nodes have no internet connection, and they will not get one.

Which software should I use? ClamAV?

I can shut down each single node and scan it, but I would prefer to have it
running from the command line, generating monthly or weekly reports. I don't
want to have the daemon running.

How did you do this?

Br
Marcel
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BS3LF3JXNA5I4GV6YW3L22V4JVRW426A/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ECQLYC7KL4CQT75FGIIJWV6P2BCJCPEX/


[ovirt-users] Re: oVirt Node 4.5 - Installing gluster single node stops immediately

2022-04-23 Thread Patrick Lomakin
> Hi guys! I've tried to deploy an oVirt single node with gluster via the GUI,
> but hit a problem immediately. The log file shown after installation was
> empty, but in journalctl I found this:
> 
> Apr 23 20:06:45 host1 cockpit-ws[22948]: ERROR! couldn't resolve module/action
> 'vdo'. This often indicates a misspelling, missing collection, or incorrect 
> module
> path.
> Apr 23 20:06:45 host1 cockpit-ws[22948]: The error appears to be in
> '/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml': 
> line
> 53, column 3, but may
> Apr 23 20:06:45 host1 cockpit-ws[22948]: be elsewhere in the file depending 
> on the exact
> syntax problem.
> Apr 23 20:06:45 host1 cockpit-ws[22948]: The offending line appears to be:
> Apr 23 20:06:45 host1 cockpit-ws[22948]: - name: Create VDO with specified 
> size
> Apr 23 20:06:45 host1 cockpit-ws[22948]:   ^ here

I've resolved the problem by installing the missing Ansible collections. To
resolve this, try installing on the oVirt node:
"ansible-galaxy collection install community.general" and "ansible-galaxy
collection install gluster.gluster".
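As runnable commands on the node (assuming ansible-galaxy is available there):

```
ansible-galaxy collection install community.general
ansible-galaxy collection install gluster.gluster
```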
Regards!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D7XVVP7VEXLQBPCSU3LTROHDO5IZQMVM/


[ovirt-users] Re: ovirt-node-ng state "Bond status: NONE"

2022-03-17 Thread Ales Musil
On Thu, Mar 17, 2022 at 11:43 AM Renaud RAKOTOMALALA <
renaud.rakotomal...@alterway.fr> wrote:

> Hi Ales,
>
> Le mer. 16 mars 2022 à 07:11, Ales Musil  a écrit :
>
>>
>> [../..]
>>
>
>> I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster managed
>> by an ovirt-engine version 4.4.10.
>>
>> My cluster is composed of other ovirt-node-ng which have been
>> successively updated from version 4.4.4 to version 4.4.10 without any
>> problem.
>>
>> This new node is integrated normally in the cluster, however when I look
>> at the status of the network part in the tab "Network interface" I see that
>> all interfaces are "down".
>>
>
> Did you try to call "Refresh Capabilities"? It might be the case that the
> engine presents a different state than what is on the host after the upgrade.
>
> I tried, and I can see the poll in the vdsm.log on my faulty node, but the
> bond/interface states are still "down". I tried a fresh install of the node
> several times with "ovirt-node-ng-installer-4.4.10-2022030308.el8.iso", but
> the issue is still there.
>
>
>
>>
>>> I have a paperclip at the "bond0" interface that says: "Bond state: NONE"
>>>
>>> I compared the content of "/etc/sysconfig/network-script" between an
>>> hypervisor which works and the one which has the problem and I notice that
>>> a whole bunch of files are missing and in particular the "ifup/ifdown"
>>> files. The folder contains only the cluster specific files + the
>>> "ovirtmgmt" interface.
>>>
>>
>> Since 4.4 we don't use initscripts anymore in general, so those files are
>> really not a good indicator of anything. We are using nmstate +
>> NetworkManager; if the connections are correctly presented there,
>> everything should be fine.
>>
>>
>
> NetworkManager shows the interfaces and the bond up and running from the
> node's perspective:
>
> nmcli con show --active
> NAME   UUID  TYPE  DEVICE
> ovirtmgmt  6b08c819-6091-44de-9546-X  bridgeovirtmgmt
> virbr0 91cb9d5c-b64d-4655-ac2a-X  bridgevirbr0
> bond0  ad33d8b0-1f7b-cab9-9447-X  bond  bond0
> eno1   abf4c85b-57cc-4484-4fa9-X  ethernet  eno1
> eno2   b186f945-cc80-911d-668c-X  ethernet  eno2
>
>
>
> nmstatectl show returns the correct states:
>
> - name: bond0
>   type: bond
>   state: up
>   accept-all-mac-addresses: false
>   ethtool:
> feature:
> [../..]
>   ipv4:
> enabled: false
> address: []
> dhcp: false
>   ipv6:
> enabled: false
> address: []
> autoconf: false
> dhcp: false
>   link-aggregation:
> mode: active-backup
> options:
>   all_slaves_active: dropped
>   arp_all_targets: any
>   arp_interval: 0
>   arp_validate: none
>   downdelay: 0
>   fail_over_mac: none
>   miimon: 100
>   num_grat_arp: 1
>   num_unsol_na: 1
>   primary: eno1
>   primary_reselect: always
>   resend_igmp: 1
>   updelay: 0
>   use_carrier: true
> port:
> - eno1
> - eno2
>   lldp:
> enabled: false
>   mac-address: X
>   mtu: 1500
>
>
> The state for eno1 and eno2 is "up".
>
>
>>> The hypervisor which has the problem seems to be perfectly functional,
>>> ovirt-engine does not raise any problem.
>>>
>>
>> This really sounds like something that a simple call to "Refresh
>> Capabilities" could fix.
>>
>
> I did it several times. Everything is fetched (I checked in the logs), but
> the states are still down for all interfaces... If I do a fresh install of
> 4.4.4, the states shown by rhevm are OK; if I reinstall with 4.4.10, the
> WebUI Hosts//Network Interfaces view is KO.
>
>

That's really strange; I would suggest removing the host completely from
the engine if possible and then adding it again. That should also remove
the host from the DB and clear the references.

Is it only one host that's affected or multiple?

Best regards,
Ales



-- 

Ales Musil

Senior Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LYYVNPGG5NDJGXHGT6COBXLTQJQJ6PUK/


[ovirt-users] Re: ovirt-node-ng state "Bond status: NONE"

2022-03-17 Thread Renaud RAKOTOMALALA
Hi Ales,

Le mer. 16 mars 2022 à 07:11, Ales Musil  a écrit :

>
> [../..]
>

> I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster managed
> by an ovirt-engine version 4.4.10.
>
> My cluster is composed of other ovirt-node-ng which have been successively
> updated from version 4.4.4 to version 4.4.10 without any problem.
>
> This new node is integrated normally in the cluster, however when I look
> at the status of the network part in the tab "Network interface" I see that
> all interfaces are "down".
>

Did you try to call "Refresh Capabilities"? It might be the case that the
engine presents a different state than what is on the host after the upgrade.

I tried, and I can see the poll in the vdsm.log on my faulty node, but the
bond/interface states are still "down". I tried a fresh install of the node
several times with "ovirt-node-ng-installer-4.4.10-2022030308.el8.iso", but the
issue is still there.



>
>> I have a paperclip at the "bond0" interface that says: "Bond state: NONE"
>>
>> I compared the content of "/etc/sysconfig/network-script" between an
>> hypervisor which works and the one which has the problem and I notice that
>> a whole bunch of files are missing and in particular the "ifup/ifdown"
>> files. The folder contains only the cluster specific files + the
>> "ovirtmgmt" interface.
>>
>
> Since 4.4 we don't use initscripts anymore in general, so those files are
> really not a good indicator of anything. We are using nmstate +
> NetworkManager; if the connections are correctly presented there,
> everything should be fine.
>
>

NetworkManager shows the interfaces and the bond up and running from the
node's perspective:

nmcli con show --active
NAME   UUID  TYPE  DEVICE
ovirtmgmt  6b08c819-6091-44de-9546-X  bridgeovirtmgmt
virbr0 91cb9d5c-b64d-4655-ac2a-X  bridgevirbr0
bond0  ad33d8b0-1f7b-cab9-9447-X  bond  bond0
eno1   abf4c85b-57cc-4484-4fa9-X  ethernet  eno1
eno2   b186f945-cc80-911d-668c-X  ethernet  eno2



nmstatectl show returns the correct states:

- name: bond0
  type: bond
  state: up
  accept-all-mac-addresses: false
  ethtool:
feature:
[../..]
  ipv4:
enabled: false
address: []
dhcp: false
  ipv6:
enabled: false
address: []
autoconf: false
dhcp: false
  link-aggregation:
mode: active-backup
options:
  all_slaves_active: dropped
  arp_all_targets: any
  arp_interval: 0
  arp_validate: none
  downdelay: 0
  fail_over_mac: none
  miimon: 100
  num_grat_arp: 1
  num_unsol_na: 1
  primary: eno1
  primary_reselect: always
  resend_igmp: 1
  updelay: 0
  use_carrier: true
port:
- eno1
- eno2
  lldp:
enabled: false
  mac-address: X
  mtu: 1500


The state for eno1 and eno2 is "up".


>> The hypervisor which has the problem seems to be perfectly functional,
>> ovirt-engine does not raise any problem.
>>
>
> This really sounds like something that a simple call to "Refresh
> Capabilities" could fix.
>

I did it several times. Everything is fetched (I checked in the logs), but
the states are still down for all interfaces... If I do a fresh install of
4.4.4, the states shown by rhevm are OK; if I reinstall with 4.4.10, the WebUI
Hosts//Network Interfaces view is KO.


>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WYTRZR5Z57RJAEUXCU256WCADEB6KPOS/


[ovirt-users] Re: ovirt-node-ng state "Bond status: NONE"

2022-03-16 Thread Ales Musil
On Tue, Mar 15, 2022 at 5:19 PM Renaud RAKOTOMALALA <
renaud.rakotomal...@smile.fr> wrote:

> Hello,
>

Hi,


>
> I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster managed
> by an ovirt-engine version 4.4.10.
>
> My cluster is composed of other ovirt-node-ng which have been successively
> updated from version 4.4.4 to version 4.4.10 without any problem.
>
> This new node is integrated normally in the cluster, however when I look
> at the status of the network part in the tab "Network interface" I see that
> all interfaces are "down".
>

Did you try to call "Refresh Capabilities"? It might be the case that the
engine presents a different state than what is on the host after the upgrade.


> I have a paperclip at the "bond0" interface that says: "Bond state: NONE"
>
> I compared the content of "/etc/sysconfig/network-script" between an
> hypervisor which works and the one which has the problem and I notice that
> a whole bunch of files are missing and in particular the "ifup/ifdown"
> files. The folder contains only the cluster specific files + the
> "ovirtmgmt" interface.
>

Since 4.4 we don't use initscripts anymore in general, so those files are
really not a good indicator of anything. We are using nmstate +
NetworkManager; if the connections are correctly presented there,
everything should be fine.
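A quick way to verify that on the host itself (the same commands Renaud uses
elsewhere in this thread):

```
# Active NetworkManager connections, including the bond and its ports
nmcli con show --active
# Full nmstate view of the interfaces the engine should be reporting
nmstatectl show
```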


>
> The hypervisor which has the problem seems to be perfectly functional,
> ovirt-engine does not raise any problem.
>

This really sounds like something that a simple call to "Refresh
Capabilities" could fix.


>
> Have you already encountered this type of problem?
>
> Cheers,
> Renaud
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XGOKO22HWF6OMLDCJW6XAWLE2DNPTQCB/
>


Best regards,
Ales.

-- 

Ales Musil

Senior Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com    IM: amusil

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCSTH2IH6E6I4GQ2QXAR2AWUZO5AL6BK/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-10 Thread perm-mf
Update!
1) Fix the repos in /etc/yum.repos.d/ and /usr/share/ovirt-release44/:

[ovirt-4.4-centos-gluster8]
name=CentOS-$releasever - Gluster 8
#mirrorlist=http://mirrorlist.centos.org?arch=$basearch&release=$releasever&repo=storage-gluster-8
#baseurl=http://mirror.centos.org/$contentdir/$releasever/storage/$basearch/gluster-8/
baseurl=http://vault.centos.org/8.5.2111/storage/x86_64/gluster-8/

...and so on for all the repos.
2) dnf update -y
ovirt-engine-appliance 4.4-20220203103124.1.el8 fixes the problem.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O5CMXQ4YHPBLBKYNFASSNEF2DCO7SOR4/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-03 Thread milan . mithbaokar
Hi,
we ran into the same issues with the engine deployment, and it's getting stuck
at the mirror site. Let us know if you have any luck.
Thank you.
Our issue is that we have a 4-node cluster (4.4); the engine crashed, and we
were trying to restore from an engine backup, and during the restoration it's
pointing to the mirror site. We are stuck badly.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D25PSIURRA6FYZRE5VWNNHLUSXN3CU2O/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-03 Thread Strahil Nikolov via Users
To work around it, you can edit
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/engine_setup/tasks/install_packages.yml'
and add a task to fix the repo file.
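A hedged sketch of applying that workaround (the file path is from the message
above; what the added task should contain is an assumption, e.g. rewriting the
dead mirrorlist= lines to a vault.centos.org baseurl as described elsewhere in
this thread):

```
# Back up the role file, then add a repo-fix task before the
# 'Install oVirt Engine package' task
f=/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/engine_setup/tasks/install_packages.yml
cp "$f" "$f.bak"
vi "$f"
```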
Best Regards,
Strahil Nikolov

 
 
On Thu, Feb 3, 2022 at 13:18, Abe E wrote:

Hey Strahil

2022-02-02 20:20:54,043-0700 INFO ansible ok {'status': 'OK', 'ansible_type': 
'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_host': 'localhost', 'ansible_task': 'Gather facts on installed 
packages', 'task_duration': 3}
2022-02-02 20:20:54,043-0700 DEBUG ansible on_any args 
  kwargs 
2022-02-02 20:20:55,020-0700 INFO ansible task start {'status': 'OK', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_task': 'ovirt.ovirt.engine_setup : Fail when firewall manager is not 
installed'}
2022-02-02 20:20:55,021-0700 DEBUG ansible on_any args TASK: 
ovirt.ovirt.engine_setup : Fail when firewall manager is not installed  kwargs 
is_conditional:False 
2022-02-02 20:20:55,021-0700 DEBUG ansible on_any args localhost TASK: 
ovirt.ovirt.engine_setup : Fail when firewall manager is not installed  kwargs 
2022-02-02 20:20:55,976-0700 INFO ansible skipped {'status': 'SKIPPED', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_task': 'Fail when firewall manager is not installed', 'ansible_host': 
'localhost'}
2022-02-02 20:20:55,977-0700 DEBUG ansible on_any args 
  kwargs 
2022-02-02 20:20:56,953-0700 INFO ansible task start {'status': 'OK', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_task': 'ovirt.ovirt.engine_setup : Install required packages for oVirt 
Engine deployment'}
2022-02-02 20:20:56,953-0700 DEBUG ansible on_any args TASK: 
ovirt.ovirt.engine_setup : Install required packages for oVirt Engine 
deployment  kwargs is_conditional:False 
2022-02-02 20:20:56,954-0700 DEBUG ansible on_any args localhost TASK: 
ovirt.ovirt.engine_setup : Install required packages for oVirt Engine 
deployment  kwargs 
2022-02-02 20:20:57,916-0700 INFO ansible ok {'status': 'OK', 'ansible_type': 
'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_host': 'localhost', 'ansible_task': 'Install required packages for 
oVirt Engine deployment', 'task_duration': 1}
2022-02-02 20:20:57,917-0700 DEBUG ansible on_any args 
  kwargs 
2022-02-02 20:20:57,985-0700 DEBUG ansible on_any args 
/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/engine_setup/tasks/install_packages.yml
 (args={} vars={}): [localhost]  kwargs 
2022-02-02 20:20:58,969-0700 INFO ansible task start {'status': 'OK', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_task': 'ovirt.ovirt.engine_setup : Install oVirt Engine package'}
2022-02-02 20:20:58,969-0700 DEBUG ansible on_any args TASK: 
ovirt.ovirt.engine_setup : Install oVirt Engine package  kwargs 
is_conditional:False 
2022-02-02 20:20:58,970-0700 DEBUG ansible on_any args localhost TASK: 
ovirt.ovirt.engine_setup : Install oVirt Engine package  kwargs 
2022-02-02 20:21:46,955-0700 DEBUG var changed: host "localhost" var 
"ansible_failed_task" type "" value: "{
    "action": "package",
    "any_errors_fatal": false,
    "args": {
        "name": "ovirt-engine",
        "state": "present"
    },
    "async": 0,
    "async_val": 0,
    "become": false,
    "become_exe": null,
    "become_flags": null,
    "become_method": "sudo",
    "become_user": null,
    "changed_when": [],
    "check_mode": false,
    "collections": [
        "ovirt.ovirt",
        "ansible.builtin"
    ],
    "connection": "local",
    "debugger": null,
    "delay": 5,
    "delegate_facts": null,
    "delegate_to": "hyper.hiddendomain.com",
    "diff": false,
    "environment": [
        {}
    ],
    "failed_when": [],
    "finalized": true,
    "ignore_errors": null,
    "ignore_unreachable": null,
    "loop": null,
    "loop_control": null,
    "loop_with": null,
    "module_defaults": [],
    "name": "Install oVirt Engine package",
    "no_log": null,
    "notify": null,
    "poll": 15,
    "port": null,
    "register": null,
    "remote_user": null,
    "retries": 3,
    "run_once": null,
    "squashed": true,
    "tags": [
        "bootstrap_local_vm",
        "never",
        "bootstrap_local_vm",
        "never"
    ],
    "throttle": 0,
    "until": [],
    "uuid": "20040ff4-641c-0f32-a8c8-17e2",
    "vars": {},
    "when": [
        "ovirt_engine_setup_product_type | lower == 'ovirt'"
    ]
}"
2022-02-02 20:21:46,955-0700 DEBUG var changed: host "localhost" var 
"ansible_failed_result" type "" value: "{
    "_ansible_delegated_vars": {
        "ansible_admin_users": null,
        "ansible_async_dir": null,
        "ansible_connection": "smart",
   

[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-03 Thread Abe E
Hey Strahil

2022-02-02 20:20:54,043-0700 INFO ansible ok {'status': 'OK', 'ansible_type': 
'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_host': 'localhost', 'ansible_task': 'Gather facts on installed 
packages', 'task_duration': 3}
2022-02-02 20:20:54,043-0700 DEBUG ansible on_any args 
  kwargs 
2022-02-02 20:20:55,020-0700 INFO ansible task start {'status': 'OK', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_task': 'ovirt.ovirt.engine_setup : Fail when firewall manager is not 
installed'}
2022-02-02 20:20:55,021-0700 DEBUG ansible on_any args TASK: 
ovirt.ovirt.engine_setup : Fail when firewall manager is not installed  kwargs 
is_conditional:False 
2022-02-02 20:20:55,021-0700 DEBUG ansible on_any args localhost TASK: 
ovirt.ovirt.engine_setup : Fail when firewall manager is not installed  kwargs 
2022-02-02 20:20:55,976-0700 INFO ansible skipped {'status': 'SKIPPED', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_task': 'Fail when firewall manager is not installed', 'ansible_host': 
'localhost'}
2022-02-02 20:20:55,977-0700 DEBUG ansible on_any args 
  kwargs 
2022-02-02 20:20:56,953-0700 INFO ansible task start {'status': 'OK', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_task': 'ovirt.ovirt.engine_setup : Install required packages for oVirt 
Engine deployment'}
2022-02-02 20:20:56,953-0700 DEBUG ansible on_any args TASK: 
ovirt.ovirt.engine_setup : Install required packages for oVirt Engine 
deployment  kwargs is_conditional:False 
2022-02-02 20:20:56,954-0700 DEBUG ansible on_any args localhost TASK: 
ovirt.ovirt.engine_setup : Install required packages for oVirt Engine 
deployment  kwargs 
2022-02-02 20:20:57,916-0700 INFO ansible ok {'status': 'OK', 'ansible_type': 
'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_host': 'localhost', 'ansible_task': 'Install required packages for 
oVirt Engine deployment', 'task_duration': 1}
2022-02-02 20:20:57,917-0700 DEBUG ansible on_any args 
  kwargs 
2022-02-02 20:20:57,985-0700 DEBUG ansible on_any args 
/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/engine_setup/tasks/install_packages.yml
 (args={} vars={}): [localhost]  kwargs 
2022-02-02 20:20:58,969-0700 INFO ansible task start {'status': 'OK', 
'ansible_type': 'task', 'ansible_playbook': 
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 
'ansible_task': 'ovirt.ovirt.engine_setup : Install oVirt Engine package'}
2022-02-02 20:20:58,969-0700 DEBUG ansible on_any args TASK: 
ovirt.ovirt.engine_setup : Install oVirt Engine package  kwargs 
is_conditional:False 
2022-02-02 20:20:58,970-0700 DEBUG ansible on_any args localhost TASK: 
ovirt.ovirt.engine_setup : Install oVirt Engine package  kwargs 
2022-02-02 20:21:46,955-0700 DEBUG var changed: host "localhost" var 
"ansible_failed_task" type "" value: "{
"action": "package",
"any_errors_fatal": false,
"args": {
"name": "ovirt-engine",
"state": "present"
},
"async": 0,
"async_val": 0,
"become": false,
"become_exe": null,
"become_flags": null,
"become_method": "sudo",
"become_user": null,
"changed_when": [],
"check_mode": false,
"collections": [
"ovirt.ovirt",
"ansible.builtin"
],
"connection": "local",
"debugger": null,
"delay": 5,
"delegate_facts": null,
"delegate_to": "hyper.hiddendomain.com",
"diff": false,
"environment": [
{}
],
"failed_when": [],
"finalized": true,
"ignore_errors": null,
"ignore_unreachable": null,
"loop": null,
"loop_control": null,
"loop_with": null,
"module_defaults": [],
"name": "Install oVirt Engine package",
"no_log": null,
"notify": null,
"poll": 15,
"port": null,
"register": null,
"remote_user": null,
"retries": 3,
"run_once": null,
"squashed": true,
"tags": [
"bootstrap_local_vm",
"never",
"bootstrap_local_vm",
"never"
],
"throttle": 0,
"until": [],
"uuid": "20040ff4-641c-0f32-a8c8-17e2",
"vars": {},
"when": [
"ovirt_engine_setup_product_type | lower == 'ovirt'"
]
}"
2022-02-02 20:21:46,955-0700 DEBUG var changed: host "localhost" var 
"ansible_failed_result" type "" value: "{
"_ansible_delegated_vars": {
"ansible_admin_users": null,
"ansible_async_dir": null,
"ansible_connection": "smart",
"ansible_control_path": null,
"ansible_control_path_dir": null,
"ansible_delegated_host": "hyper.hiddendomain.com",
"ansible_host": "192.168.1.193",
"ansible_host_key_checking": null,
"ansible_password": null,
   

[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-02 Thread denismalkomail
I already tried with this ISO image
https://resources.ovirt.org/pub/ovirt-4.4/iso/ovirt-node-ng-installer/4.4.10-2022020214/el8/
and still get:
```
 [ ERROR ] fatal: [localhost -> 192.168.222.253]: FAILED! => {"changed": false, 
"msg": "Failed to download metadata for repo 'ovirt-4.4-centos-gluster8': 
Cannot prepare internal mirrorlist: No URLs in mirrorlist", "rc": 1, "results": 
[]}
```
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GU2HLVHPKDG4X2W7WSGSHEEFWR3HAQ3H/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-02 Thread Strahil Nikolov via Users
Please share the logs from the hypervisor.
Best Regards,
Strahil Nikolov
 
 
On Wed, Feb 2, 2022 at 23:05, Abe E wrote:

I just tested with it, and I am still getting the error about the gluster8
mirrors despite using updated mirrors. It's as if the engine deployment has
different mirror instructions to use.

Running yum update -y ovirt-release44 returns:
[root@hyper-1 yum.repos.d]# dnf update -y ovirt-release44
Last metadata expiration check: 0:00:39 ago on Wed 02 Feb 2022 01:57:52 PM MST.
Dependencies resolved.
Nothing to do.
Complete!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AZ6XBS2QB6HOIX3TOUCTZKPIEWJOWJS4/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JNT6FPE4PL2ZFE2CS3JD6Y6NXTEL4NX7/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-02 Thread Abe E
I just tested with it, still getting the error about gluster8 mirrors despite
using updated mirrors. It's as if the engine deployment has different mirror
instructions to use.

Running dnf update -y ovirt-release44 returns
[root@hyper-1 yum.repos.d]# dnf update -y ovirt-release44
Last metadata expiration check: 0:00:39 ago on Wed 02 Feb 2022 01:57:52 PM MST.
Dependencies resolved.
Nothing to do.
Complete!
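
When dnf reports "Nothing to do" like this, a quick sanity check is to compare the installed release package with what the enabled repos actually offer; a minimal sketch:

```
# Show the installed release package build
rpm -q ovirt-release44

# List every build the enabled repos carry
dnf list --showduplicates ovirt-release44

# Refresh metadata in case a stale cache is hiding a newer build
dnf clean all && dnf makecache
```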
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AZ6XBS2QB6HOIX3TOUCTZKPIEWJOWJS4/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-02 Thread aellahib
Well, the devs are unfortunately not responsive to my posts, but I noticed they
released a new oVirt Node build, so I just re-imaged the USB with the new ISO
and I'm trying to deploy; we will see.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUIMCMCSHLZBDXMHH55ERWWWMSZJQIBT/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-02 Thread less foobar via Users
> My tmp has more than enough space, i hit that issue because i attempted 
> redeployment after
> a failure so it was filled well past with the same files in double i guess. 
> My real issue
> is the engine deployment is still using the old mirrors but all my nodes are 
> running new
> mirrors..

> ncuxo: hi, a new oVirt Node iso has just been published; the
> appliance is still in the oven. The new iso is here:
> https://resources.ovirt.org/pub/ovirt-4.4/iso/ovirt-node-ng-installer/ovirt-node-ng-installer-latest.iso
> waiting to get the appliance done to send out the announcement, but perhaps
> you can give it a run

So I guess we just have to wait a bit longer...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XY4BRR3EFESZQ4NW44QITRIB72MLO5QK/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-02 Thread aellahib
My tmp has more than enough space; I hit that issue because I attempted
redeployment after a failure, so it was filled well past capacity with the same
files duplicated, I guess. My real issue is that the engine deployment is still
using the old mirrors, but all my nodes are running new mirrors.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QEVWNUIADCPII3PI55JWKW2GIOZEQYAJ/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-02 Thread lessfoobar lessfoobar via Users
I have had the same error for two days now; please let me know if you've found
a solution.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OW53G3COUR6DPEWJWQM5HTL3I42DP2HT/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-01 Thread aellahib
Ignore the last part; I forgot PrepareVM was what I was in, so of course the tmp
would be full. It seems my issue right now is just the mirror issue: if I cancel
deployment and clean up, /var/tmp becomes a good size for the install, I believe.

I did try removing the repos and downloading the new
https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm from 2/1/22, but it
doesn't seem to make a difference.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRPWLEUAGFFIFVHWGVRZDCWLZTNMJYUE/


[ovirt-users] Re: OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist

2022-02-01 Thread Anonymous via Users
/var/tmp should be at least 8 GB.

I've opened a GitHub pull request to the docs repo. The image that is being
installed is ~6.7 GB, so with 5 GB you don't have the space for it.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBUTBZU5TJBNULTJUPMDGZTA4DNBA6IR/


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-17 Thread Sandro Bonazzola
On Fri, Sep 17, 2021 at 12:02 Sandro Bonazzola <sbona...@redhat.com> wrote:

>
>
> On Fri, Sep 17, 2021 at 11:34 Chen Shao wrote:
>
>> ovirt-node-ng-installer-4.5.0-2021091610.el9.iso Sanity Testing - Failed
>>
>> Test scenarios:
>>
>> - 1. ISO check - PASS
>> - 2. GUI Install oVirt-node 4.5 - Failed (met Bug 2002640 - no way to have
>>   CentOS Stream 9 booting on an installed system with /home on an md device)
>>   Workaround:
>>     - lvchange -a y /dev/mapper/onn-home
>>     - eventually activate the remaining maps as well (grep mapper
>>       /etc/fstab to see them)
>>     - exit from the rescue shell
>> - 3. Install oVirt-node 4.5 on an HP UEFI machine - Failed (probably
>>   due to the kernel not being signed, see attachment "UEFI-Failed")
>> - 4. Cockpit UI Sanity Test - Pass
>> - 5. Upgrade Testing - Not covered (known limitations)
>> - 6. Engine & Hosted Engine - Not covered (known limitations)
>>
>>
>>
> Thanks Chen!
>
> A few updates on my side:
> attaching host to engine failed in 2 different places:
> - wrong repositories for oVirt Node optional packages (solved now via
> https://gerrit.ovirt.org/c/ovirt-release/+/116760 )
> - failing to setup network, reported here:
> https://bugzilla.redhat.com/show_bug.cgi?id=2005213
>

Thanks to Ales' workaround, the node is up and active in the engine.
There is some work to be done on the CentOS Stream 9 side, but it looks like we
can soon start working on the engine side to support the CentOS Stream 9
nodes properly.



> - vdsm is built with /usr/bin/python as interpreter, not sure why, still
> digging into it. Workaround: `ln -s /usr/bin/python3 /usr/bin/python` but
> this may be due to the custom vdsm build.
>
>
>
>>
>> On Fri, Sep 17, 2021 at 12:45 AM Sandro Bonazzola 
>> wrote:
>>
>>> lvchange -a y /dev/mapper/onn-home
>>>
>>> and eventually activating also remaining maps (grep mapper /etc/fstab to 
>>> see them)
>>>
>>> and then exit from the rescue shell leads to a node up and running for me.
>>>
>>>
>>>
>>> On Thu, Sep 16, 2021 at 18:34 Sandro Bonazzola <sbona...@redhat.com> wrote:
>>>
 Sounds like we hit https://bugzilla.redhat.com/show_bug.cgi?id=2002640
 , so it may be more complicated than expected to get the host up.

 On Thu, Sep 16, 2021 at 18:14 Sandro Bonazzola <sbona...@redhat.com> wrote:

>
>
> On Thu, Sep 16, 2021 at 18:06 Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
>
>> On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> I'm still working on it but I have a first ISO ready for giving a
>>> first run at
>>>
>>> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso
>>>
>>> Known limitations:
>>> - No hosted engine setup available
>>>
>>>
>> Nice!
>> If SHE not available, what would be the procedure to install the
>> standalone engine before deploying the host?
>> Or could I try to deploy the node using a 4.4.8 standalone engine in
>> its own DC/Cluster?
>>
>
> You can give it a run with a 4.4.8 standalone engine in its own
> DC/Cluster as a start.
> Or you can deploy a new engine as in the 4.4.8 flow but using
> https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm for
> providing the repositories
> Please note also these cases have never been tested yet.
>
>
>
>>
>> Gianluca
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need
> to answer this email out of your office hours.
> *
>
>
>

 --

 Sandro Bonazzola

 MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

 Red Hat EMEA 

 sbona...@redhat.com
 

 *Red Hat respects your work life balance. Therefore there is no need to
 answer this email out of your office hours.
 *



>>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA 
>>>
>>> sbona...@redhat.com
>>> 
>>>
>>> *Red Hat respects your work life balance. Therefore there is no need to
>>> answer this email out of your office hours.
>>> *
>>>
>>>
>>> ___
>>> Users mailing list -- 

[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-17 Thread Sandro Bonazzola
On Fri, Sep 17, 2021 at 11:34 Chen Shao wrote:

> ovirt-node-ng-installer-4.5.0-2021091610.el9.iso Sanity Testing - Failed
>
> Test scenarios:
>
> - 1. ISO check - PASS
> - 2. GUI Install oVirt-node 4.5 - Failed (met Bug 2002640 - no way to have
>   CentOS Stream 9 booting on an installed system with /home on an md device)
>   Workaround:
>     - lvchange -a y /dev/mapper/onn-home
>     - eventually activate the remaining maps as well (grep mapper
>       /etc/fstab to see them)
>     - exit from the rescue shell
> - 3. Install oVirt-node 4.5 on an HP UEFI machine - Failed (probably
>   due to the kernel not being signed, see attachment "UEFI-Failed")
> - 4. Cockpit UI Sanity Test - Pass
> - 5. Upgrade Testing - Not covered (known limitations)
> - 6. Engine & Hosted Engine - Not covered (known limitations)
>
>
>
Thanks Chen!

A few updates on my side:
attaching host to engine failed in 2 different places:
- wrong repositories for oVirt Node optional packages (solved now via
https://gerrit.ovirt.org/c/ovirt-release/+/116760 )
- failing to setup network, reported here:
https://bugzilla.redhat.com/show_bug.cgi?id=2005213
- vdsm is built with /usr/bin/python as interpreter, not sure why, still
digging into it. Workaround: `ln -s /usr/bin/python3 /usr/bin/python` but
this may be due to the custom vdsm build.
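
For reference, the same workaround can be made idempotent, or expressed through the EL8 alternatives mechanism; a minimal sketch (both forms assume a script that hard-codes /usr/bin/python):

```
# EL8 idiom: register python3 as the unversioned python
alternatives --set python /usr/bin/python3

# Or create the symlink only when the path is still unclaimed
[ -e /usr/bin/python ] || ln -s /usr/bin/python3 /usr/bin/python
```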



>
> On Fri, Sep 17, 2021 at 12:45 AM Sandro Bonazzola 
> wrote:
>
>> lvchange -a y /dev/mapper/onn-home
>>
>> and eventually activating also remaining maps (grep mapper /etc/fstab to see 
>> them)
>>
>> and then exit from the rescue shell leads to a node up and running for me.
>>
>>
>>
>> On Thu, Sep 16, 2021 at 18:34 Sandro Bonazzola <sbona...@redhat.com> wrote:
>>
>>> Sounds like we hit https://bugzilla.redhat.com/show_bug.cgi?id=2002640
>>> , so it may be more complicated than expected to get the host up.
>>>
>>> On Thu, Sep 16, 2021 at 18:14 Sandro Bonazzola <sbona...@redhat.com> wrote:
>>>


 On Thu, Sep 16, 2021 at 18:06 Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

> On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
> wrote:
>
>> Hi,
>> I'm still working on it but I have a first ISO ready for giving a
>> first run at
>>
>> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso
>>
>> Known limitations:
>> - No hosted engine setup available
>>
>>
> Nice!
> If SHE not available, what would be the procedure to install the
> standalone engine before deploying the host?
> Or could I try to deploy the node using a 4.4.8 standalone engine in
> its own DC/Cluster?
>

 You can give it a run with a 4.4.8 standalone engine in its own
 DC/Cluster as a start.
 Or you can deploy a new engine as in the 4.4.8 flow but using
 https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm for
 providing the repositories
 Please note also these cases have never been tested yet.



>
> Gianluca
>


 --

 Sandro Bonazzola

 MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

 Red Hat EMEA 

 sbona...@redhat.com
 

 *Red Hat respects your work life balance. Therefore there is no need to
 answer this email out of your office hours.
 *



>>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA 
>>>
>>> sbona...@redhat.com
>>> 
>>>
>>> *Red Hat respects your work life balance. Therefore there is no need to
>>> answer this email out of your office hours.
>>> *
>>>
>>>
>>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>> *Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>> *
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UAHV33VCJN2YDXYVZZAPSYRZLMF6DTJS/
>>
>
>
> --
> Thanks & Best regards,
> Chen
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-17 Thread Chen Shao
ovirt-node-ng-installer-4.5.0-2021091610.el9.iso Sanity Testing - Failed

Test scenarios:

   - 1. ISO check - PASS
   - 2. GUI Install oVirt-node 4.5 - Failed (met Bug 2002640 - no way to have
     CentOS Stream 9 booting on an installed system with /home on an md device)
     Workaround:
       - lvchange -a y /dev/mapper/onn-home
       - eventually activate the remaining maps as well (grep mapper
         /etc/fstab to see them)
       - exit from the rescue shell
   - 3. Install oVirt-node 4.5 on an HP UEFI machine - Failed (probably
     due to the kernel not being signed, see attachment "UEFI-Failed")
   - 4. Cockpit UI Sanity Test - Pass
   - 5. Upgrade Testing - Not covered (known limitations)
   - 6. Engine & Hosted Engine - Not covered (known limitations)



On Fri, Sep 17, 2021 at 12:45 AM Sandro Bonazzola 
wrote:

> lvchange -a y /dev/mapper/onn-home
>
> and eventually activating also remaining maps (grep mapper /etc/fstab to see 
> them)
>
> and then exit from the rescue shell leads to a node up and running for me.
>
>
>
> On Thu, Sep 16, 2021 at 18:34 Sandro Bonazzola <sbona...@redhat.com> wrote:
>
>> Sounds like we hit https://bugzilla.redhat.com/show_bug.cgi?id=2002640 ,
>> so it may be more complicated than expected to get the host up.
>>
>> On Thu, Sep 16, 2021 at 18:14 Sandro Bonazzola <sbona...@redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Sep 16, 2021 at 18:06 Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
>>>
 On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
 wrote:

> Hi,
> I'm still working on it but I have a first ISO ready for giving a
> first run at
>
> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso
>
> Known limitations:
> - No hosted engine setup available
>
>
 Nice!
 If SHE not available, what would be the procedure to install the
 standalone engine before deploying the host?
 Or could I try to deploy the node using a 4.4.8 standalone engine in
 its own DC/Cluster?

>>>
>>> You can give it a run with a 4.4.8 standalone engine in its own
>>> DC/Cluster as a start.
>>> Or you can deploy a new engine as in the 4.4.8 flow but using
>>> https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm for
>>> providing the repositories
>>> Please note also these cases have never been tested yet.
>>>
>>>
>>>

 Gianluca

>>>
>>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA 
>>>
>>> sbona...@redhat.com
>>> 
>>>
>>> *Red Hat respects your work life balance. Therefore there is no need to
>>> answer this email out of your office hours.
>>> *
>>>
>>>
>>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>> *Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>> *
>>
>>
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> *
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UAHV33VCJN2YDXYVZZAPSYRZLMF6DTJS/
>


-- 
Thanks & Best regards,
Chen
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5S77F5KFQRUXBIWY7KJHMHNXOCEY2FB2/


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-16 Thread Sandro Bonazzola
lvchange -a y /dev/mapper/onn-home

and eventually activating also remaining maps (grep mapper /etc/fstab
to see them)

and then exit from the rescue shell leads to a node up and running for me.
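
Spelled out as a sequence, that rescue-shell recovery might look like the following sketch; it assumes every mapper entry in fstab is an LV that lvchange can activate:

```
# Activate the home LV that failed to auto-activate (Bug 2002640)
lvchange -a y /dev/mapper/onn-home

# Activate any other device-mapper entries referenced by fstab
grep mapper /etc/fstab | awk '{print $1}' | while read -r dev; do
    lvchange -a y "$dev" || true
done

# Leave the rescue shell and let boot continue
exit
```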



On Thu, Sep 16, 2021 at 18:34 Sandro Bonazzola <sbona...@redhat.com> wrote:

> Sounds like we hit https://bugzilla.redhat.com/show_bug.cgi?id=2002640 ,
> so it may be more complicated than expected to get the host up.
>
> On Thu, Sep 16, 2021 at 18:14 Sandro Bonazzola <sbona...@redhat.com> wrote:
>
>>
>>
>> On Thu, Sep 16, 2021 at 18:06 Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
>>
>>> On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
>>> wrote:
>>>
 Hi,
 I'm still working on it but I have a first ISO ready for giving a first
 run at

 https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso

 Known limitations:
 - No hosted engine setup available


>>> Nice!
>>> If SHE not available, what would be the procedure to install the
>>> standalone engine before deploying the host?
>>> Or could I try to deploy the node using a 4.4.8 standalone engine in its
>>> own DC/Cluster?
>>>
>>
>> You can give it a run with a 4.4.8 standalone engine in its own
>> DC/Cluster as a start.
>> Or you can deploy a new engine as in the 4.4.8 flow but using
>> https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm for
>> providing the repositories
>> Please note also these cases have never been tested yet.
>>
>>
>>
>>>
>>> Gianluca
>>>
>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>> *Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>> *
>>
>>
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> *
>
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UAHV33VCJN2YDXYVZZAPSYRZLMF6DTJS/


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-16 Thread Sandro Bonazzola
Sounds like we hit https://bugzilla.redhat.com/show_bug.cgi?id=2002640 , so
it may be more complicated than expected to get the host up.

On Thu, Sep 16, 2021 at 18:14 Sandro Bonazzola <sbona...@redhat.com> wrote:

>
>
> On Thu, Sep 16, 2021 at 18:06 Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
>
>> On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> I'm still working on it but I have a first ISO ready for giving a first
>>> run at
>>>
>>> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso
>>>
>>> Known limitations:
>>> - No hosted engine setup available
>>>
>>>
>> Nice!
>> If SHE not available, what would be the procedure to install the
>> standalone engine before deploying the host?
>> Or could I try to deploy the node using a 4.4.8 standalone engine in its
>> own DC/Cluster?
>>
>
> You can give it a run with a 4.4.8 standalone engine in its own DC/Cluster
> as a start.
> Or you can deploy a new engine as in the 4.4.8 flow but using
> https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm for
> providing the repositories
> Please note also these cases have never been tested yet.
>
>
>
>>
>> Gianluca
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> *
>
>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L72ZW7AECD63XQXUK4M5TMYFGLWKCHAK/


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-16 Thread Sandro Bonazzola
On Thu, Sep 16, 2021 at 18:06 Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

> On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
> wrote:
>
>> Hi,
>> I'm still working on it but I have a first ISO ready for giving a first
>> run at
>>
>> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso
>>
>> Known limitations:
>> - No hosted engine setup available
>>
>>
> Nice!
> If SHE not available, what would be the procedure to install the
> standalone engine before deploying the host?
> Or could I try to deploy the node using a 4.4.8 standalone engine in its
> own DC/Cluster?
>

You can give it a run with a 4.4.8 standalone engine in its own DC/Cluster
as a start.
Or you can deploy a new engine as in the 4.4.8 flow, but using
https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm for
providing the repositories.
Please note also that these cases have never been tested yet.
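
Compressed into commands, that untested master-repo path might look like this sketch; it deliberately omits any module or dependency steps the regular 4.4.8 flow would include:

```
# Point the machine at the master snapshot repos
dnf install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm

# Install and configure a standalone engine, as in the 4.4.8 flow
dnf install -y ovirt-engine
engine-setup
```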



>
> Gianluca
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GOKUJ3QC2ST3H23M4V5RWSYD5XZLOJ22/


[ovirt-users] Re: oVirt Node - master - CentOS Stream 9 based ISO available for testing

2021-09-16 Thread Gianluca Cecchi
On Thu, Sep 16, 2021 at 5:35 PM Sandro Bonazzola 
wrote:

> Hi,
> I'm still working on it but I have a first ISO ready for giving a first
> run at
>
> https://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-node-ng-installer/4.5.0-2021091610/el9/ovirt-node-ng-installer-4.5.0-2021091610.el9.iso
>
> Known limitations:
> - No hosted engine setup available
>
>
Nice!
If SHE is not available, what would be the procedure to install the standalone
engine before deploying the host?
Or could I try to deploy the node using a 4.4.8 standalone engine in its
own DC/Cluster?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJY465PTKKOJMCPUQGJEKTUSUFZJGBJ4/


[ovirt-users] Re: Ovirt node 4.4.5 failure to upgrade to 4.4.6

2021-06-08 Thread Guillaume Pavese
Thank you, I'll try on the 3rd host if the same problem happens (likely).
In the meantime I managed to advance a bit on my 2nd host (but I did not
get the chance to try your suggestion of '--rpmverbosity').

The failure to install ovirt-node-ng-image-update seems to be linked to the
error messages that I reported previously about being unable to stop vdsmd.
On the hosts that were refusing to update or install the rpm, I was also
unable to stop vdsmd or supervdsmd :

[root@ps-inf-prd-kvm-fr-511 ~]# systemctl stop vdsmd
Job for vdsmd.service canceled.

[root@ps-inf-prd-kvm-fr-511 ~]# systemctl stop supervdsmd
Job for supervdsmd.service canceled.


I managed to stop them only after stopping ovirt-ha-broker & ovirt-ha-agent.
Once these services were down, the manual install of ovirt-node-ng-image-update
succeeded.
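
For anyone else stuck in the same loop, that stop order can be scripted; a minimal sketch for a hosted-engine host:

```
# Stop the HA services first so they cannot restart vdsmd mid-stop
systemctl stop ovirt-ha-agent ovirt-ha-broker
systemctl stop supervdsmd vdsmd

# Retry the layered image update once the services stay down
dnf reinstall ovirt-node-ng-image-update
```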

iSCSI multipath was not working after reboot; I had to manually rediscover
the targets.
After all that, I could finally do a successful Host Reinstall from oVirt.
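
Rediscovering the targets manually typically looks like the following sketch (the portal address is a placeholder):

```
# Rediscover the targets on the portal and log back in
iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260
iscsiadm -m node --login

# Reload the multipath maps so the restored paths are picked up
multipath -r
```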


Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Tue, Jun 8, 2021 at 2:24 PM Yedidyah Bar David  wrote:

> On Tue, Jun 8, 2021 at 8:01 AM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> Hello,
>>
>> I used the cluster upgrade feature that moves hosts in maintenance one by
>> one.
>> This is not a HCI cluster, my storage is on iSCSI multipath
>>
>> I managed to fully upgrade the 1st hosts after rebooting and fixing some
>> network/iSCSI errors.
>> However, now the second one is stuck at upgrading the ovirt-node layers
>> too but I can not succeed in upgrading that one.
>> On this 2nd host, the workaround of removing and reinstalling
>> ovirt-node-ng-image-update doesn't work. I only get the following error :
>>
>> [root@ps-inf-prd-kvm-fr-511 ~]# nodectl check
>> Status: OK
>> Bootloader ... OK
>>   Layer boot entries ... OK
>>   Valid boot entries ... OK
>> Mount points ... OK
>>   Separate /var ... OK
>>   Discard is used ... OK
>> Basic storage ... OK
>>   Initialized VG ... OK
>>   Initialized Thin Pool ... OK
>>   Initialized LVs ... OK
>> Thin storage ... OK
>>   Checking available space in thinpool ... OK
>>   Checking thinpool auto-extend ... OK
>> vdsmd ... OK
>>
>>
>> [root@ps-inf-prd-kvm-fr-511 ~]# nodectl info
>> bootloader:
>>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
>> (4.18.0-240.15.1.el8_3.x86_64)
>>   entries:
>> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>>   index: 0
>>   kernel:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>>   args: resume=/dev/mapper/onn-swap 
>> rd.lvm.lv=onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>> rd.lvm.lv=onn/swap rhgb quiet
>> boot=UUID=a676b18f-0f1b-4ad4-88e1-533fe61ff063 rootflags=discard
>> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1 intel_iommu=on
>>   root: /dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>   initrd:
>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
>> (4.18.0-240.15.1.el8_3.x86_64)
>>   blsid:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
>> layers:
>>   ovirt-node-ng-4.4.5.1-0.20210323.0:
>> ovirt-node-ng-4.4.5.1-0.20210323.0+1
>> current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>
>>
>> [root@ps-inf-prd-kvm-fr-511 ~]# yum remove ovirt-node-ng-image-update
>> [...]
>> Removing:
>>  ovirt-node-ng-image-updatenoarch
>>4.4.6.3-1.el8
>>   @ovirt-4.4886 M
>>   Erasing  : ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>>   Verifying: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>> Unpersisting: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>>
>> Removed:
>>   ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>> Complete!
>> [root@ps-inf-prd-kvm-fr-511 ~]#
>>
>>
>> [root@ps-inf-prd-kvm-fr-511 ~]#  yum install ovirt-node-ng-image-update
>> [...]
>> Installing:
>>  ovirt-node-ng-image-updatenoarch
>>4.4.6.3-1.el8
>>ovirt-4.4887 M
>>  [...]
>> ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>>
>>   23 MB/s | 887 MB 00:39
>> Running transaction check
>> Transaction check succeeded.
>> Running transaction test
>> Transaction test succeeded.
>> Running transaction
>>   Preparing:
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>>   Installing   : ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>> *warning: %post(ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch)
>> scriptlet failed, exit status 1*
>>
>> *Error in POSTIN scriptlet in rpm package ovirt-node-ng-image-update   *
>>   

[ovirt-users] Re: Ovirt node 4.4.5 failure to upgrade to 4.4.6

2021-06-07 Thread Yedidyah Bar David
On Tue, Jun 8, 2021 at 8:01 AM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> Hello,
>
> I used the cluster upgrade feature that moves hosts in maintenance one by
> one.
> This is not a HCI cluster, my storage is on iSCSI multipath
>
> I managed to fully upgrade the 1st hosts after rebooting and fixing some
> network/iSCSI errors.
> However, now the second one is stuck at upgrading the ovirt-node layers
> too but I can not succeed in upgrading that one.
> On this 2nd host, the workaround of removing and reinstalling
> ovirt-node-ng-image-update doesn't work. I only get the following error :
>
> [root@ps-inf-prd-kvm-fr-511 ~]# nodectl check
> Status: OK
> Bootloader ... OK
>   Layer boot entries ... OK
>   Valid boot entries ... OK
> Mount points ... OK
>   Separate /var ... OK
>   Discard is used ... OK
> Basic storage ... OK
>   Initialized VG ... OK
>   Initialized Thin Pool ... OK
>   Initialized LVs ... OK
> Thin storage ... OK
>   Checking available space in thinpool ... OK
>   Checking thinpool auto-extend ... OK
> vdsmd ... OK
>
>
> [root@ps-inf-prd-kvm-fr-511 ~]# nodectl info
> bootloader:
>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
> (4.18.0-240.15.1.el8_3.x86_64)
>   entries:
> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>   index: 0
>   kernel:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>   args: resume=/dev/mapper/onn-swap 
> rd.lvm.lv=onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
> rd.lvm.lv=onn/swap rhgb quiet
> boot=UUID=a676b18f-0f1b-4ad4-88e1-533fe61ff063 rootflags=discard
> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1 intel_iommu=on
>   root: /dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>   initrd:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
> (4.18.0-240.15.1.el8_3.x86_64)
>   blsid:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
> layers:
>   ovirt-node-ng-4.4.5.1-0.20210323.0:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1
> current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>
>
> [root@ps-inf-prd-kvm-fr-511 ~]# yum remove ovirt-node-ng-image-update
> [...]
> Removing:
>  ovirt-node-ng-image-updatenoarch
>4.4.6.3-1.el8
>   @ovirt-4.4886 M
>   Erasing  : ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>   Verifying: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
> Unpersisting: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>
> Removed:
>   ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
> Complete!
> [root@ps-inf-prd-kvm-fr-511 ~]#
>
>
> [root@ps-inf-prd-kvm-fr-511 ~]#  yum install ovirt-node-ng-image-update
> [...]
> Installing:
>  ovirt-node-ng-image-updatenoarch
>4.4.6.3-1.el8
>ovirt-4.4887 M
>  [...]
> ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>
>   23 MB/s | 887 MB 00:39
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:
>   Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>   Installing   : ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>   Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
> *warning: %post(ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch) scriptlet
> failed, exit status 1*
>
> *Error in POSTIN scriptlet in rpm package ovirt-node-ng-image-update   *
>   Verifying: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>
>
> Installed:
>
>
>   ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
> Complete!
> [root@ps-inf-prd-kvm-fr-511 ~]#
>
> [root@ps-inf-prd-kvm-fr-511 ~]# nodectl info
> bootloader:
>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
> (4.18.0-240.15.1.el8_3.x86_64)
>   entries:
> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>   index: 0
>   kernel:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>   args: resume=/dev/mapper/onn-swap 
> rd.lvm.lv=onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
> rd.lvm.lv=onn/swap rhgb quiet
> boot=UUID=a676b18f-0f1b-4ad4-88e1-533fe61ff063 rootflags=discard
> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1 intel_iommu=on
>   root: /dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>   initrd:
> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
> (4.18.0-240.15.1.el8_3.x86_64)
>   blsid:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
> layers:
>   ovirt-node-ng-4.4.5.1-0.20210323.0:
> ovirt-node-ng-4.4.5.1-0.20210323.0+1
> 

[ovirt-users] Re: Ovirt node 4.4.5 failure to upgrade to 4.4.6

2021-06-07 Thread Guillaume Pavese
Hello,

I used the cluster upgrade feature that moves hosts into maintenance one by
one.
This is not a HCI cluster, my storage is on iSCSI multipath

I managed to fully upgrade the 1st host after rebooting and fixing some
network/iSCSI errors.
However, the second one is now stuck at upgrading the ovirt-node layers too,
and I cannot succeed in upgrading that one.
On this 2nd host, the workaround of removing and reinstalling
ovirt-node-ng-image-update doesn't work. I only get the following error:

[root@ps-inf-prd-kvm-fr-511 ~]# nodectl check
Status: OK
Bootloader ... OK
  Layer boot entries ... OK
  Valid boot entries ... OK
Mount points ... OK
  Separate /var ... OK
  Discard is used ... OK
Basic storage ... OK
  Initialized VG ... OK
  Initialized Thin Pool ... OK
  Initialized LVs ... OK
Thin storage ... OK
  Checking available space in thinpool ... OK
  Checking thinpool auto-extend ... OK
vdsmd ... OK


[root@ps-inf-prd-kvm-fr-511 ~]# nodectl info
bootloader:
  default: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
  entries:
ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
  index: 0
  kernel:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
  args: resume=/dev/mapper/onn-swap
rd.lvm.lv=onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
rd.lvm.lv=onn/swap rhgb quiet
boot=UUID=a676b18f-0f1b-4ad4-88e1-533fe61ff063 rootflags=discard
img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1 intel_iommu=on
  root: /dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
  initrd:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
  title: ovirt-node-ng-4.4.5.1-0.20210323.0
(4.18.0-240.15.1.el8_3.x86_64)
  blsid:
ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
ovirt-node-ng-4.4.5.1-0.20210323.0+1
current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1


[root@ps-inf-prd-kvm-fr-511 ~]# yum remove ovirt-node-ng-image-update
[...]
Removing:
 ovirt-node-ng-image-updatenoarch
 4.4.6.3-1.el8
@ovirt-4.4886 M
  Erasing  : ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
  Verifying: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
Unpersisting: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm

Removed:
  ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
Complete!
[root@ps-inf-prd-kvm-fr-511 ~]#


[root@ps-inf-prd-kvm-fr-511 ~]#  yum install ovirt-node-ng-image-update
[...]
Installing:
 ovirt-node-ng-image-updatenoarch
 4.4.6.3-1.el8
 ovirt-4.4887 M
 [...]
ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm

23 MB/s | 887 MB 00:39
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing:
  Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
  Installing   : ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
  Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
*warning: %post(ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch) scriptlet
failed, exit status 1*

*Error in POSTIN scriptlet in rpm package ovirt-node-ng-image-update   *
  Verifying: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch


Installed:


  ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
Complete!
[root@ps-inf-prd-kvm-fr-511 ~]#

[root@ps-inf-prd-kvm-fr-511 ~]# nodectl info
bootloader:
  default: ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64)
  entries:
ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
  index: 0
  kernel:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
  args: resume=/dev/mapper/onn-swap
rd.lvm.lv=onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
rd.lvm.lv=onn/swap rhgb quiet
boot=UUID=a676b18f-0f1b-4ad4-88e1-533fe61ff063 rootflags=discard
img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1 intel_iommu=on
  root: /dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
  initrd:
/boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
  title: ovirt-node-ng-4.4.5.1-0.20210323.0
(4.18.0-240.15.1.el8_3.x86_64)
  blsid:
ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
layers:
  ovirt-node-ng-4.4.5.1-0.20210323.0:
ovirt-node-ng-4.4.5.1-0.20210323.0+1
current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
[root@ps-inf-prd-kvm-fr-511 ~]#

Is there any way to see where in the POSTIN scriptlet the installation fails?
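
One way to see what it runs, sketched below; the imgbased log path is an assumption about where oVirt Node normally writes it:

```
# Dump the %post scriptlet shipped in the package
rpm -q --scripts ovirt-node-ng-image-update | less

# Re-run the transaction with verbose scriptlet output
dnf reinstall ovirt-node-ng-image-update --setopt=rpmverbosity=debug

# The scriptlet drives imgbased, whose own log usually holds the real error
less /var/log/imgbased.log
```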

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Fri, Jun 4, 2021 at 8:44 PM Lev Veyde  wrote:

> Hi Guillaume,
>
> Have you moved the host to the maintenance before the 

[ovirt-users] Re: Ovirt node 4.4.5 failure to upgrade to 4.4.6

2021-06-04 Thread Lev Veyde
Hi Guillaume,

Have you moved the host to maintenance before the upgrade (making sure
that Gluster-related options are unchecked)?

Or did you start the upgrade directly?

Thanks in advance,

On Thu, Jun 3, 2021 at 10:50 AM wodel youchi  wrote:

> Hi,
>
> Is this an hci deployment?
>
> If yes :
>
> - try to boot using the old version 4.4.5
> - verify that your network configuration is still intact
> - verify that the gluster part is working properly
> # gluster peer status
> # gluster volume status
>
> - verify the your engine and your host can resolve each other hostname
>
> If all is ok try to bring the host at available state
>
>
> Regards.
>
> On Wed, Jun 2, 2021 at 08:21, Guillaume Pavese <guillaume.pav...@interactiv-group.com> wrote:
>
>> Maybe my problem is in part linked to an issue seen by Jayme earlier, but
>> then the resolution that worked for him did not succeed for me :
>>
>> I first upgraded my Self Hosted Engine from 4.4.5 to 4.4.6 and then
>> upgraded it to Centos-Stream and rebooted
>>
>> Then I tried to upgrade the cluster (3 ovirt-nodes on 4.4.5) but it
>> failed at the first host.
>> They are all ovirt-node hosts, originally first installed in 4.4.5
>>
>> In Host Event Logs I saw :
>>
>> ...
>> Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
>> Upgrade packages
>> Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
>> Check if image was updated.
>> Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
>> Check if image was updated.
>> Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
>> Check if image-updated file exists.
>> Failed to upgrade Host ps-inf-prd-kvm-fr-510.hostics.fr (User:
>> g...@hostics.fr).
>>
>>
>>
>> ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch was installed according
>> to yum,
>> I tried reinstalling it but got errors: "Error in POSTIN scriptlet" :
>>
>> Downloading Packages:
>> [SKIPPED] ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm: Already
>> downloaded
>> ...
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>>
>>   Reinstalling : ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>>
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>>
>> warning: %post(ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch) scriptlet
>> failed, exit status 1
>>
>> Error in POSTIN scriptlet in rpm package ovirt-node-ng-image-update
>> ---
>> Reinstalled:
>>   ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>>
>>
>>
>> nodectl still showed it was on 4.4.5 :
>>
>> [root@ps-inf-prd-kvm-fr-510 ~]# nodectl info
>> bootloader:
>>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
>> (4.18.0-240.15.1.el8_3.x86_64)
>>  ...
>>   current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>
>>
>>
>> I tried to upgrade the Host again from oVirt and this time there was no
>> error, and the host rebooted.
>> However, it did not pass active after rebooting and nodectl still shows
>> that it's 4.4.5 installed. Similar symptoms as OP
>>
>> So I removed ovirt-node-ng-image-update, then reinstalled it and got no
>> error this time.
>> nodectl info seemed to show that it was installed :
>>
>>
>> [root@ps-inf-prd-kvm-fr-510 yum.repos.d]# nodectl info
>> bootloader:
>>   default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>> ...
>>   current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>
>>
>> However, after reboot the Host was still shown as "unresponsive"
>> After Marking it as "Manually rebooted", passing it in maintenance mode
>> and trying to activate it, the Host was automatically fenced. And still
>> unresponsive after this new reboot.
>>
>> I passed it in maintenance mode again, And tried to reinstall it with
>> "Deploy Hosted Engine" selected
>> However if failed : "Task Stop services failed to execute."
>>
>> In
>> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20210602082519-ps-inf-prd-kvm-fr-510.hostics.fr-0565d681-9406-4fa7-a444-7ee34804579c.log
>> :
>>
>> "msg" : "Unable to stop service vdsmd.service: Job for vdsmd.service
>> canceled.\n", "failed" : true,
>>
>> "msg" : "Unable to stop service supervdsmd.service: Job for
>> supervdsmd.service canceled.\n", failed" : true,
>>
>> "stderr" : "Error:  ServiceOperationError: _systemctlStop failed\nb'Job
>> for vdsmd.service canceled.\\n' ",
>>
>> "stderr_lines" : [ "Error:  ServiceOperationError: _systemctlStop
>> failed", "b'Job for vdsmd.service canceled.\\n' " ],
>>
>>
>> If I try on the Host I get :
>>
>> [root@ps-inf-prd-kvm-fr-510 ~]# systemctl stop vdsmd
>> Job for vdsmd.service canceled.
>>
>> [root@ps-inf-prd-kvm-fr-510 ~]# systemctl status vdsmd
>> ● vdsmd.service - Virtual Desktop Server Manager
>>Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
>> preset: disabled)
>>Active: deactivating (stop-sigterm) since Wed 2021-06-02 08:49:21
>> CEST; 7s ago
>>   Process: 54037 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
>> --pre-start (code=exited, status=0/SUCCESS)
>> ...
>>
>> Jun 02 08:47:34 ps-inf-prd-kvm-fr-510.hostics.fr 

[ovirt-users] Re: Ovirt node 4.4.5 failure to upgrade to 4.4.6

2021-06-03 Thread wodel youchi
Hi,

Is this an hci deployment?

If yes :

- try to boot using the old version 4.4.5
- verify that your network configuration is still intact
- verify that the gluster part is working properly
# gluster peer status
# gluster volume status

- verify that your engine and your host can resolve each other's hostname
  (a quick check is sketched below)

If all is OK, try to bring the host back to an available state.
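
A quick way to run that resolution check on both sides, sketched with placeholder hostnames:

```
# Run on the host, and the mirror-image check on the engine VM
getent hosts engine.example.local
getent hosts node1.example.local

# The resolved addresses should match what `ip a` shows on each machine
```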


Regards.

On Wed, Jun 2, 2021 at 08:21, Guillaume Pavese <guillaume.pav...@interactiv-group.com> wrote:

> Maybe my problem is in part linked to an issue seen by Jayme earlier, but
> then the resolution that worked for him did not succeed for me :
>
> I first upgraded my Self Hosted Engine from 4.4.5 to 4.4.6 and then
> upgraded it to Centos-Stream and rebooted
>
> Then I tried to upgrade the cluster (3 ovirt-nodes on 4.4.5) but it failed
> at the first host.
> They are all ovirt-node hosts, originally first installed in 4.4.5
>
> In Host Event Logs I saw :
>
> ...
> Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
> Upgrade packages
> Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
> Check if image was updated.
> Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
> Check if image was updated.
> Update of host ps-inf-prd-kvm-fr-510.hostics.fr.
> Check if image-updated file exists.
> Failed to upgrade Host ps-inf-prd-kvm-fr-510.hostics.fr (User:
> g...@hostics.fr).
>
>
>
> ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch was installed according to
> yum,
> I tried reinstalling it but got errors: "Error in POSTIN scriptlet" :
>
> Downloading Packages:
> [SKIPPED] ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm: Already
> downloaded
> ...
>   Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>
>   Reinstalling : ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>
>   Running scriptlet: ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>
> warning: %post(ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch) scriptlet
> failed, exit status 1
>
> Error in POSTIN scriptlet in rpm package ovirt-node-ng-image-update
> ---
> Reinstalled:
>   ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>
>
>
> nodectl still showed it was on 4.4.5 :
>
> [root@ps-inf-prd-kvm-fr-510 ~]# nodectl info
> bootloader:
>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
> (4.18.0-240.15.1.el8_3.x86_64)
>  ...
>   current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>
>
>
> I tried to upgrade the Host again from oVirt and this time there was no
> error, and the host rebooted.
> However, it did not pass active after rebooting and nodectl still shows
> that it's 4.4.5 installed. Similar symptoms as OP
>
> So I removed ovirt-node-ng-image-update, then reinstalled it and got no
> error this time.
> nodectl info seemed to show that it was installed :
>
>
> [root@ps-inf-prd-kvm-fr-510 yum.repos.d]# nodectl info
> bootloader:
>   default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
> ...
>   current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>
>
> However, after reboot the Host was still shown as "unresponsive"
> After Marking it as "Manually rebooted", passing it in maintenance mode
> and trying to activate it, the Host was automatically fenced. And still
> unresponsive after this new reboot.
>
> I passed it in maintenance mode again, And tried to reinstall it with
> "Deploy Hosted Engine" selected
> However if failed : "Task Stop services failed to execute."
>
> In
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20210602082519-ps-inf-prd-kvm-fr-510.hostics.fr-0565d681-9406-4fa7-a444-7ee34804579c.log
> :
>
> "msg" : "Unable to stop service vdsmd.service: Job for vdsmd.service
> canceled.\n", "failed" : true,
>
> "msg" : "Unable to stop service supervdsmd.service: Job for
> supervdsmd.service canceled.\n", failed" : true,
>
> "stderr" : "Error:  ServiceOperationError: _systemctlStop failed\nb'Job
> for vdsmd.service canceled.\\n' ",
>
> "stderr_lines" : [ "Error:  ServiceOperationError: _systemctlStop failed",
> "b'Job for vdsmd.service canceled.\\n' " ],
>
>
> If I try on the Host I get :
>
> [root@ps-inf-prd-kvm-fr-510 ~]# systemctl stop vdsmd
> Job for vdsmd.service canceled.
>
> [root@ps-inf-prd-kvm-fr-510 ~]# systemctl status vdsmd
> ● vdsmd.service - Virtual Desktop Server Manager
>Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
> preset: disabled)
>Active: deactivating (stop-sigterm) since Wed 2021-06-02 08:49:21 CEST;
> 7s ago
>   Process: 54037 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
> --pre-start (code=exited, status=0/SUCCESS)
> ...
>
> Jun 02 08:47:34 ps-inf-prd-kvm-fr-510.hostics.fr vdsm[54100]: WARN Failed
> to retrieve Hosted Engine HA info, is Hosted Engine setup finished?
> ...
> Jun 02 08:48:31 ps-inf-prd-kvm-fr-510.hostics.fr vdsm[54100]: WARN Worker
> blocked:  '2.0', 'method': 'StoragePool.connectStorageServer', 'params': {'storage>
>   File:
> "/usr/lib64/python3.6/threading.py", line 884, in _bootstrap
>
> 

[ovirt-users] Re: oVirt Node

2021-04-20 Thread Strahil Nikolov via Users
As far as I know, oVirt Node and regular hosts have the same purpose and are
interchangeable (with some slight differences). It shouldn't be a problem at
all.

Best Regards,
Strahil Nikolov






On Tuesday, 20 April 2021 at 11:27:08 GMT+3, KSNull Zero
wrote:





Hello!
We want to switch our OS-based oVirt installation to an oVirt-Node-based one.
Is it safe to have OS-based hosts and oVirt-Node hosts in the same cluster
(with FC shared storage) during the transition?
Thank you.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M3TCXZYIP6Q4KA6XBOHGO6OXMQGJAMBN/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KMS7GXKSMVQ3LZQEDH2IA7NGBQOSHWRY/


[ovirt-users] Re: Ovirt Node (iso) - OK to enable Centos-BaseOS/Centos-Appstream repos ?

2021-03-25 Thread Vojtech Juranek
On Wednesday, 24 March 2021 15:20:21 CET jb wrote:
> On 23.03.21 at 12:45, Vojtech Juranek wrote:
> 
> > On Tuesday, 23 March 2021 11:56:26 CET morgan cox wrote:
> > 
> >> Hi.
> >>
> >> I have installed Ovirt nodes via the ovirt-node iso (centos8 based) - on a
> >> fresh install the standard CentOS repos are disabled (the ovirt 4-4 repo
> >> is enabled)
> >>
> >> As part of our company hardening we need to install a few packages from
> >> the CentOS repos.
> >>
> >> Can I enable the CentOS-Linux-AppStream.repo + CentOS-Linux-BaseOS.repo
> >> repos or will this cause issues when we update the node?
> > 
> > AFAIK it shouldn't break anything. ovirt repos have newer versions, so
> > anything required by ovirt should be installed from ovirt repo during
> > upgrade.
> I made the experience that after installing an upgrade, everything which
> I installed from those repos disappears.

Yes, AFAIK this happens when you install the host as oVirt Node. If you install
from rpm (enterprise host), this shouldn't happen.

> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/ List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CTW2PKGCC62IV
> IZTTQG3BQPQUFERZBB2/



signature.asc
Description: This is a digitally signed message part.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFVINXBMVGPPYDEEGIIEXK5NK7BNTUE3/


[ovirt-users] Re: Ovirt Node (iso) - OK to enable Centos-BaseOS/Centos-Appstream repos ?

2021-03-24 Thread jb


On 23.03.21 at 12:45, Vojtech Juranek wrote:

> On Tuesday, 23 March 2021 11:56:26 CET morgan cox wrote:
> 
> > Hi.
> >
> > I have installed Ovirt nodes via the ovirt-node iso (centos8 based) - on a
> > fresh install the standard CentOS repos are disabled (the ovirt 4-4 repo
> > is enabled)
> >
> > As part of our company hardening we need to install a few packages from the
> > CentOS repos.
> >
> > Can I enable the CentOS-Linux-AppStream.repo + CentOS-Linux-BaseOS.repo
> > repos or will this cause issues when we update the node?
> 
> AFAIK it shouldn't break anything. ovirt repos have newer versions, so
> anything required by ovirt should be installed from ovirt repo during upgrade.
I made the experience that after installing an upgrade, everything which
I installed from those repos disappears.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CTW2PKGCC62IVIZTTQG3BQPQUFERZBB2/


[ovirt-users] Re: Ovirt Node (iso) - OK to enable Centos-BaseOS/Centos-Appstream repos ?

2021-03-24 Thread morgan cox
Thank you for confirming.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2QRVPYWQN3XFOXD455F4SL7KBBUR6VC/


[ovirt-users] Re: Ovirt Node (iso) - OK to enable Centos-BaseOS/Centos-Appstream repos ?

2021-03-23 Thread Vojtech Juranek
On Tuesday, 23 March 2021 11:56:26 CET morgan cox wrote:
> Hi.
> 
> I have installed Ovirt nodes via the ovirt-node iso (centos8 based) - on a
> fresh install the standard CentOS repos are disabled (the ovirt 4-4  repo
> is enabled)
 
> As part of our company hardening we need to install a few packages from the
> Centos repos.
 
> Can I enable the CentOS-Linux-AppStream.repo + CentOS-Linux-BaseOS.repo
> repos or will this cause issues when we update the node ?

AFAIK it shouldn't break anything. ovirt repos have newer versions, so 
anything required by ovirt should be installed from ovirt repo during upgrade.


> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/ List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JITF2KTDAE7V2
> FMEYRJZOMUDCNKBGL56/



signature.asc
Description: This is a digitally signed message part.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3FWG6DQOQ5ZSO2SMGUNX2BYUY4NUVARH/


[ovirt-users] Re: oVirt Node install with Foreman VG issue

2021-02-28 Thread Simon Scott

Thanks Strahil,

Unfortunately changing the filter is done after the initial install.

We manually partition sda so that sdb isn’t touched during install.

The issues with multipath grabbing sdb are ongoing with a possible manual fix 
being tested now.

Testing has paused at the moment as we have lost gluster on the arbiter node as 
per 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/U64MGWSUCFJRIVAH5EOFCQFVIPZI77PL/
 which looks like a full oVirt rebuild again.

Any help on that thread would be appreciated.

Thanks again

Shimme

On 28 Feb 2021, at 08:35, Strahil Nikolov  wrote:

Most probably there is an LVM filter.
As stated in the /etc/multipath.conf , use a special file to blacklist the 
local disks without modifying /etc/multipath.conf

Best Regards,
Strahil Nikolov

Hi All,

I have a server with a RAID1 disk for sda and RAID 5 disk for sdb.

Following default install, prior to Cockpit Gluster and Engine wizards there is 
only a single Volume Group which doesn’t allow me to continue.

If I manually configure the install and deselect sdb it gives other issues with 
multipath but at least I can resolve those.

> Is there a specific kickstart configuration that should be used?

Kind Regards

Shimme
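
A minimal kickstart sketch for this scenario (device names are assumptions;
the idea is to keep the installer off sdb entirely and use the LVM thin
layout oVirt Node expects):

  ignoredisk --only-use=sda                  # never touch sdb during install
  clearpart --all --drives=sda --initlabel
  autopart --type=thinp                      # LVM thin provisioning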
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFZS3XVTVMP5GFT2ZHZNLZ4XUPXBNCZC/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KX4WNCRB2FVE7M5H7PHEZTM4TQ37MMHY/


[ovirt-users] Re: oVirt Node install with Foreman VG issue

2021-02-28 Thread Strahil Nikolov via Users
Most probably there is an LVM filter. As stated in /etc/multipath.conf, use 
a special file to blacklist the local disks without modifying 
/etc/multipath.conf
Best Regards,
Strahil Nikolov
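
A sketch of that drop-in approach (the WWID below is a placeholder; find the
real one with "multipath -ll" or "/lib/udev/scsi_id -g -u /dev/sdb";
multipathd reads /etc/multipath/conf.d/*.conf):

  # /etc/multipath/conf.d/local.conf
  blacklist {
      wwid "3600508b1001c0123456789abcdef0123"
  }

  multipathd reconfigure    # or restart multipathd to apply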
 
 
Hi All,

I have a server with a RAID1 disk for sda and RAID 5 disk for sdb.

Following default install, prior to Cockpit Gluster and Engine wizards there is 
only a single Volume Group which doesn’t allow me to continue.

If I manually configure the install and deselect sdb it gives other issues with 
multipath but at least I can resolve those.

Is there a specific kickstart configuration that should be used?

Kind Regards

Shimme
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFZS3XVTVMP5GFT2ZHZNLZ4XUPXBNCZC/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CQBWVY2INZUCHUR5HCIQCHTTTQDZNLCX/


[ovirt-users] Re: ovirt node ng 4.4 crash in anaconda when setting ntp

2021-01-29 Thread Gianluca Cecchi
On Thu, Jan 28, 2021 at 5:20 PM Gianluca Cecchi 
wrote:

> Hello,
> when installing ovirt node ng 4.4 on a Dell M620 I remember I had a crash
> in anaconda if I try to set up ntp.
> Using ovirt-node-ng-installer-4.4.4-2020122111.el8.iso
> As soon as I select it and try to type the hostname to use, all stops and
> then anaconda aborts.
> Just today I had the same with the latest RHVH 4.4 iso:
> RHVH-4.4-20201210.0-RHVH-x86_64-dvd1.iso on an R630.
> Quite disappointing because I also have to fight with iDRAC8 to install
> the OS: it is slow to die.
> In practice I waste about one hour
>
> Is anyone aware of it or able to reproduce on another platform so that
> eventually I'm going to open a bug/case for it?
> My config is default one in anaconda accepting default and creating a
> bonded connection (LACP).
> Then as the last step I go into the set time/date and click on the ntp
> button and I get the problem as soon as I try to type inside the text box.
>
> Thanks,
> Gianluca
>
As I also had it with RHV I opened a case and Red Hat support was able to
reproduce and opened a bugzilla for it:
https://bugzilla.redhat.com/show_bug.cgi?id=1922206

I think an upstream clone bugzilla should be created too. I put my comment
on it

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IEZQPVP4F6OAQ3KG457GQEXSM4YVTA6P/


[ovirt-users] Re: ovirt node based on centos 8 stream

2021-01-25 Thread Sandro Bonazzola
On Fri, Jan 22, 2021 at 10:34 Nathanaël Blanchet <
blanc...@abes.fr> wrote:

> Hi all,
>
> I project to upgrade from 4.3 to 4.4 hosts in the next few days, and I
> wonder if ovirt node based on centos 8.3 will be upgradable to ovirt
> node based on centos stream.
>

We have not finished properly testing oVirt on CentOS Stream, so oVirt Node
will likely stay on CentOS Linux 8.3 also for oVirt 4.4.5.
From oVirt Node's point of view, upgrading from CentOS Linux 8.3 to CentOS
Stream shouldn't differ from an upgrade from RHEL 8.3 to RHEL 8.4, so there
shouldn't be issues.



>
> If not, I will wait to upgrade directly to ovirt node centos stream
> based when available.
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> SIRE
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/R2FAFKH2Q7U23FUSNO6X2E2OIEO33BXD/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MMRZSFZ6BXMSRJ74HU5D5GRVGBQ4SGIC/


[ovirt-users] Re: oVirt Node Crash

2020-11-19 Thread Sandro Bonazzola
On Thu, Nov 19, 2020 at 10:48 Anton Louw <
anton.l...@voxtelecom.co.za> wrote:

>
>
> Hi Sandro,
>
>
>
> Thanks for the response.
>
>
>
> If I upgrade my datacenter to 4.4.3, will I first need to upgrade my
> engine? I see my only options now in the datacenter is:
>
>
>
>
>
> Also, if the data center is upgraded, will it still be compatible with my
> other hosts, some running 4.3.3?
>

4.3.3 should be able to run cluster compatibility 4.3 :-)
In general, it would be better to align the datacenter to the latest
version as soon as practical.



>
>
> Thanks
>
>
>
> *Anton Louw*
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> --
> *T:*  087 805  | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.l...@voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
> *From:* Sandro Bonazzola 
> *Sent:* 19 November 2020 10:00
> *To:* Anton Louw 
> *Cc:* Arik Hadas ; Dominik Holler ;
> users@ovirt.org; Johan Koen 
> *Subject:* Re: [ovirt-users] oVirt Node Crash
>
>
>
>
>
>
>
> On Tue, Nov 17, 2020 at 16:01 Anton Louw <
> anton.l...@voxtelecom.co.za> wrote:
>
>
>
> Hi Sandro,
>
>
>
> Have you perhaps seen anything in the SOS report that could shed some
> light on the issues?
>
>
>
> Sadly no. I see it's oVirt Node 4.3.8, I can suggest upgrading to 4.3.10
> at least and considering an upgrade of the whole datacenter to 4.4.3.
>
> I had the feeling watchdog was the trigger of the reboot but couldn't find
> any evidence.
>
> I also don't see anything suspicious in the logs.
>
>
>
>
>
>
>
>
>
> Thanks
>
>
>
>
>
> *Anton Louw*
>
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> --
>
> *T:*  087 805  | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.l...@voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
>
>
>
>
>
>
> *From:* Anton Louw
> *Sent:* 16 November 2020 07:30
> *To:* Sandro Bonazzola ; Arik Hadas <
> aha...@redhat.com>; Dominik Holler 
> *Cc:* users@ovirt.org; Johan Koen 
> *Subject:* RE: [ovirt-users] oVirt Node Crash
>
>
>
> I have also attached the SOS report as requested
>
>
>
> *From:* Anton Louw
> *Sent:* 16 November 2020 06:54
> *To:* Sandro Bonazzola ; Arik Hadas <
> aha...@redhat.com>; Dominik Holler 
> *Cc:* users@ovirt.org; Johan Koen 
> *Subject:* RE: [ovirt-users] oVirt Node Crash
>
>
>
> Hi Sandro,
>
>
>
> Thanks for the response. I logged onto oVirt this morning, and I see the
> node is in an “Unassigned” state. I can ping it, but cannot SSH, so there is
> something that is causing the host to be unresponsive.
>
>
>
> On Saturday after I sent the mail, I opened a console to the node, and I
> saw the below entries before logging in:
>
>
>
> audit:backlog limit exceeded
>
>
>
> I then tried the solution of increasing the buffer size in the audit.rules
> file in /etc/audit/rules.d/ , as per below, but it did not resolve the
> issue.
>
>
>
> ## First rule - delete all
>
> -D
>
>
>
> ## Increase the buffers to survive stress events.
>
> ## Make this bigger for busy systems
>
> -b 8192
>
>
>
> ## Set failure mode to syslog
>
> -f 1
>
>
>
> Is it possible to upgrade the node to 4.4 while the engine is still on 4.3?
>
>
>
> Thanks
>
>
>
> *From:* Sandro Bonazzola 
> *Sent:* 13 November 2020 18:39
> *To:* Anton Louw ; Arik Hadas <
> aha...@redhat.com>; Dominik Holler 
> *Cc:* users@ovirt.org; Johan Koen 
> *Subject:* Re: [ovirt-users] oVirt Node Crash
>
>
>
>
>
>
>
> On Fri, Nov 13, 2020 at 17:37 Sandro Bonazzola <
> sbona...@redhat.com> wrote:
>
>
>
>
>
> On Fri, Nov 13, 2020 at 13:38 Anton Louw via Users <
> users@ovirt.org> wrote:
>
>
>
> Hi Everybody,
>
>
>
> I have built a new host which has been running fine for the last couple of
> days. I noticed today that the host crashed, but it is not giving me a
> reason as to why.
>
>
>
> It happened at 13:45 today, but I have given time before that on the logs
> as well.
>
>
>
> Is there something I am missing here?
>
>
>
> Not related to the crash, but I see in the logs that 5 out of 20 guests
> have qemu guest agent not responding.
>
>
>
> Also you seem to have some issues with some firewalld rules. (Maybe +Dominik
> Holler  would like to have a look)
>
>
>
> I don't see anything explaining why the host got rebooted.
>
>
>
> Still related to guest agent I find a bit alarming the 

[ovirt-users] Re: oVirt Node Crash

2020-11-19 Thread Anton Louw via Users
Hi Sandro,

Thanks for the response.

If I upgrade my datacenter to 4.4.3, will I first need to upgrade my engine? I 
see my only options now in the datacenter are:


Also, if the data center is upgraded, will it still be compatible with my other 
hosts, some running 4.3.3?

Thanks


Anton Louw
Cloud Engineer: Storage and Virtualization
__
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.l...@voxtelecom.co.za

www.vox.co.za



From: Sandro Bonazzola 
Sent: 19 November 2020 10:00
To: Anton Louw 
Cc: Arik Hadas ; Dominik Holler ; 
users@ovirt.org; Johan Koen 
Subject: Re: [ovirt-users] oVirt Node Crash



On Tue, Nov 17, 2020 at 16:01 Anton Louw
<anton.l...@voxtelecom.co.za> wrote:

Hi Sandro,

Have you perhaps seen anything in the SOS report that could shed some light on 
the issues?

Sadly no. I see it's oVirt Node 4.3.8, I can suggest upgrading to 4.3.10 at 
least and considering an upgrade of the whole datacenter to 4.4.3.
I had the feeling watchdog was the trigger of the reboot but couldn't find any 
evidence.
I also don't see anything suspicious in the logs.




Thanks


Anton Louw
Cloud Engineer: Storage and Virtualization at Vox

T:  087 805  | D: 087 805 1572
M: N/A
E: anton.l...@voxtelecom.co.za
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
www.vox.co.za



From: Anton Louw
Sent: 16 November 2020 07:30
To: Sandro Bonazzola <sbona...@redhat.com>; Arik Hadas <aha...@redhat.com>;
Dominik Holler <dhol...@redhat.com>
Cc: users@ovirt.org; Johan Koen <johan.k...@voxtelecom.co.za>
Subject: RE: [ovirt-users] oVirt Node Crash

I have also attached the SOS report as requested

From: Anton Louw
Sent: 16 November 2020 06:54
To: Sandro Bonazzola <sbona...@redhat.com>; Arik Hadas <aha...@redhat.com>;
Dominik Holler <dhol...@redhat.com>
Cc: users@ovirt.org; Johan Koen <johan.k...@voxtelecom.co.za>
Subject: RE: [ovirt-users] oVirt Node Crash

Hi Sandro,

Thanks for the response. I logged onto oVirt this morning, and I see the node 
is in an “Unassigned” state. I can ping it, but cannot SSH, so there is 
something that is causing the host to be unresponsive.

On Saturday after I sent the mail, I opened a console to the node, and I saw 
the below entries before logging in:

audit:backlog limit exceeded

I then tried the solution of increasing the buffer size in the audit.rules file 
in /etc/audit/rules.d/ , as per below, but it did not resolve the issue.

## First rule - delete all
-D

## Increase the buffers to survive stress events.
## Make this bigger for busy systems
-b 8192

## Set failure mode to syslog
-f 1
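
For reference, the runtime equivalents look roughly like this (assuming auditd
with auditctl/augenrules; a change under /etc/audit/rules.d/ only takes effect
once the rules are reloaded or after a reboot):

  auditctl -s         # current status, including backlog_limit and lost events
  auditctl -b 8192    # raise the backlog limit immediately
  augenrules --load   # regenerate and load rules from /etc/audit/rules.d/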

Is it possible to upgrade the node to 4.4 while the engine is still on 4.3?

Thanks

From: Sandro Bonazzola <sbona...@redhat.com>
Sent: 13 November 2020 18:39
To: Anton Louw <anton.l...@voxtelecom.co.za>; Arik Hadas <aha...@redhat.com>;
Dominik Holler <dhol...@redhat.com>
Cc: users@ovirt.org; Johan Koen <johan.k...@voxtelecom.co.za>
Subject: Re: [ovirt-users] oVirt Node Crash



On Fri, Nov 13, 2020 at 17:37 Sandro Bonazzola
<sbona...@redhat.com> wrote:


On Fri, Nov 13, 2020 at 13:38 Anton Louw via Users
<users@ovirt.org> wrote:

Hi Everybody,

I have built a new host which has been running fine for the last couple of 
days. I noticed today that the host crashed, but it is not giving me a reason 
as to why.

It happened at 13:45 today, but I have given time before that on the logs as 
well.

Is there something I am missing here?

Not related to the crash, but I see in the logs that 5 out of 20 guests have 
qemu guest agent not responding.

Also you seem to have some issues with some firewalld rules. (Maybe +Dominik 
Holler would like to have a look)

I don't see anything explaining why the host got rebooted.

Still related to guest agent I find a bit alarming the following lines:
Nov 13 13:29:34 jb2-node03 libvirtd: 2020-11-13 11:29:34.294+: 12603: error 
: qemuDomainAgentAvailable:9144 : Guest agent is not responding: QEMU guest 
agent is not connected
Nov 13 13:29:34 jb2-node03 vdsm[13843]: ERROR Shutdown by QEMU Guest Agent 
failed#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5304, in 
qemuGuestAgentShutdown#012
self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)#012  File 

[ovirt-users] Re: oVirt Node Crash

2020-11-19 Thread Sandro Bonazzola
On Tue, Nov 17, 2020 at 16:01 Anton Louw <
anton.l...@voxtelecom.co.za> wrote:

>
>
> Hi Sandro,
>
>
>
> Have you perhaps seen anything in the SOS report that could shed some
> light on the issues?
>

Sadly no. I see it's oVirt Node 4.3.8, I can suggest upgrading to 4.3.10
at least and considering an upgrade of the whole datacenter to 4.4.3.
I had the feeling watchdog was the trigger of the reboot but couldn't find
any evidence.
I also don't see anything suspicious in the logs.




>
>
> Thanks
>
>
>
> *Anton Louw*
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> --
> *T:*  087 805  | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.l...@voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
> *From:* Anton Louw
> *Sent:* 16 November 2020 07:30
> *To:* Sandro Bonazzola ; Arik Hadas <
> aha...@redhat.com>; Dominik Holler 
> *Cc:* users@ovirt.org; Johan Koen 
> *Subject:* RE: [ovirt-users] oVirt Node Crash
>
>
>
> I have also attached the SOS report as requested
>
>
>
> *From:* Anton Louw
> *Sent:* 16 November 2020 06:54
> *To:* Sandro Bonazzola ; Arik Hadas <
> aha...@redhat.com>; Dominik Holler 
> *Cc:* users@ovirt.org; Johan Koen 
> *Subject:* RE: [ovirt-users] oVirt Node Crash
>
>
>
> Hi Sandro,
>
>
>
> Thanks for the response. I logged onto oVirt this morning, and I see the
> node is in an “Unassigned” state. I can ping it, but cannot SSH, so there is
> something that is causing the host to be unresponsive.
>
>
>
> On Saturday after I sent the mail, I opened a console to the node, and I
> saw the below entries before logging in:
>
>
>
> audit:backlog limit exceeded
>
>
>
> I then tried the solution of increasing the buffer size in the audit.rules
> file in /etc/audit/rules.d/ , as per below, but it did not resolve the
> issue.
>
>
>
> ## First rule - delete all
>
> -D
>
>
>
> ## Increase the buffers to survive stress events.
>
> ## Make this bigger for busy systems
>
> -b 8192
>
>
>
> ## Set failure mode to syslog
>
> -f 1
>
>
>
> Is it possible to upgrade the node to 4.4 while the engine is still on 4.3?
>
>
>
> Thanks
>
>
>
> *From:* Sandro Bonazzola 
> *Sent:* 13 November 2020 18:39
> *To:* Anton Louw ; Arik Hadas <
> aha...@redhat.com>; Dominik Holler 
> *Cc:* users@ovirt.org; Johan Koen 
> *Subject:* Re: [ovirt-users] oVirt Node Crash
>
>
>
>
>
>
>
> On Fri, Nov 13, 2020 at 17:37 Sandro Bonazzola <
> sbona...@redhat.com> wrote:
>
>
>
>
>
> On Fri, Nov 13, 2020 at 13:38 Anton Louw via Users <
> users@ovirt.org> wrote:
>
>
>
> Hi Everybody,
>
>
>
> I have built a new host which has been running fine for the last couple of
> days. I noticed today that the host crashed, but it is not giving me a
> reason as to why.
>
>
>
> It happened at 13:45 today, but I have given time before that on the logs
> as well.
>
>
>
> Is there something I am missing here?
>
>
>
> Not related to the crash, but I see in the logs that 5 out of 20 guests
> have qemu guest agent not responding.
>
>
>
> Also you seem to have some issues with some firewalld rules. (Maybe +Dominik
> Holler  would like to have a look)
>
>
>
> I don't see anything explaining why the host got rebooted.
>
>
>
> Still related to guest agent I find a bit alarming the following lines:
>
> Nov 13 13:29:34 jb2-node03 libvirtd: 2020-11-13 11:29:34.294+: 12603:
> error : qemuDomainAgentAvailable:9144 : Guest agent is not responding: QEMU
> guest agent is not connected
> Nov 13 13:29:34 jb2-node03 vdsm[13843]: ERROR Shutdown by QEMU Guest Agent
> failed#012Traceback (most recent call last):#012  File
> "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5304, in
> qemuGuestAgentShutdown#012
>  self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in
> f#012ret = attr(*args, **kwargs)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line
> 131, in wrapper#012ret = f(*args, **kwargs)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in
> wrapper#012return func(inst, *args, **kwargs)#012  File
> "/usr/lib64/python2.7/site-packages/libvirt.py", line 2517, in
> shutdownFlags#012if ret == -1: raise libvirtError
> ('virDomainShutdownFlags() failed', dom=self)#012libvirtError: Guest agent
> is not responding: QEMU guest agent is not connected
> Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered
> disabled state
> Nov 13 13:29:42 jb2-node03 kernel: device vnet15 left promiscuous mode
> Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered
> disabled state
> Nov 13 13:29:42 

[ovirt-users] Re: oVirt Node Crash

2020-11-17 Thread Anton Louw via Users
Hi Sandro,

Have you perhaps seen anything in the SOS report that could shed some light on 
the issues?

Thanks


Anton Louw
Cloud Engineer: Storage and Virtualization
__
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.l...@voxtelecom.co.za

www.vox.co.za



From: Anton Louw
Sent: 16 November 2020 07:30
To: Sandro Bonazzola ; Arik Hadas ; 
Dominik Holler 
Cc: users@ovirt.org; Johan Koen 
Subject: RE: [ovirt-users] oVirt Node Crash

I have also attached the SOS report as requested

From: Anton Louw
Sent: 16 November 2020 06:54
To: Sandro Bonazzola <sbona...@redhat.com>; Arik Hadas <aha...@redhat.com>;
Dominik Holler <dhol...@redhat.com>
Cc: users@ovirt.org; Johan Koen <johan.k...@voxtelecom.co.za>
Subject: RE: [ovirt-users] oVirt Node Crash

Hi Sandro,

Thanks for the response. I logged onto oVirt this morning, and I see the node 
is in an “Unassigned” state. I can ping it, but cannot SSH, so there is 
something that is causing the host to be unresponsive.

On Saturday after I sent the mail, I opened a console to the node, and I saw 
the below entries before logging in:

audit:backlog limit exceeded

I then tried the solution of increasing the buffer size in the audit.rules file 
in /etc/audit/rules.d/ , as per below, but it did not resolve the issue.

## First rule - delete all
-D

## Increase the buffers to survive stress events.
## Make this bigger for busy systems
-b 8192

## Set failure mode to syslog
-f 1

Is it possible to upgrade the node to 4.4 while the engine is still on 4.3?

Thanks

From: Sandro Bonazzola <sbona...@redhat.com>
Sent: 13 November 2020 18:39
To: Anton Louw <anton.l...@voxtelecom.co.za>; Arik Hadas <aha...@redhat.com>;
Dominik Holler <dhol...@redhat.com>
Cc: users@ovirt.org; Johan Koen <johan.k...@voxtelecom.co.za>
Subject: Re: [ovirt-users] oVirt Node Crash



On Fri, Nov 13, 2020 at 17:37 Sandro Bonazzola
<sbona...@redhat.com> wrote:


On Fri, Nov 13, 2020 at 13:38 Anton Louw via Users
<users@ovirt.org> wrote:

Hi Everybody,

I have built a new host which has been running fine for the last couple of 
days. I noticed today that the host crashed, but it is not giving me a reason 
as to why.

It happened at 13:45 today, but I have given time before that on the logs as 
well.

Is there something I am missing here?

Not related to the crash, but I see in the logs that 5 out of 20 guests have 
qemu guest agent not responding.

Also you seem to have some issues with some firewalld rules. (Maybe +Dominik 
Holler would like to have a look)

I don't see anything explaining why the host got rebooted.

Still related to guest agent I find a bit alarming the following lines:
Nov 13 13:29:34 jb2-node03 libvirtd: 2020-11-13 11:29:34.294+: 12603: error 
: qemuDomainAgentAvailable:9144 : Guest agent is not responding: QEMU guest 
agent is not connected
Nov 13 13:29:34 jb2-node03 vdsm[13843]: ERROR Shutdown by QEMU Guest Agent 
failed#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5304, in 
qemuGuestAgentShutdown#012
self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)#012  File 
"/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in f#012   
 ret = attr(*args, **kwargs)#012  File 
"/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, 
in wrapper#012ret = f(*args, **kwargs)#012  File 
"/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in 
wrapper#012return func(inst, *args, **kwargs)#012  File 
"/usr/lib64/python2.7/site-packages/libvirt.py", line 2517, in 
shutdownFlags#012if ret == -1: raise libvirtError 
('virDomainShutdownFlags() failed', dom=self)#012libvirtError: Guest agent is 
not responding: QEMU guest agent is not connected
Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered disabled 
state
Nov 13 13:29:42 jb2-node03 kernel: device vnet15 left promiscuous mode
Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered disabled 
state
Nov 13 13:29:42 jb2-node03 NetworkManager[6027]:   [1605266982.6539] 
device (vnet15): state change: disconnected -> unmanaged (reason 'unmanaged', 
sys-iface-state: 'removed')
Nov 13 13:29:42 jb2-node03 NetworkManager[6027]:   [1605266982.6550] 
device (vnet15): released from master device vlan0077
Nov 13 13:29:42 jb2-node03 libvirtd: 2020-11-13 11:29:42.669+: 12557: error 
: qemuMonitorIO:718 : internal error: End of file from qemu monitor

+Arik Hadas any clue?

About the crash, can you please provide full sos report from the host? the log 
you provided is not enough to understand what caused the reported crash

Also, given python2 is used here, I assume you're on 4.3 or older. I would
recommend upgrading to 4.4 as soon as practical.

[ovirt-users] Re: oVirt Node Crash

2020-11-15 Thread Anton Louw via Users
Hi Sandro,

Thanks for the response. I logged onto oVirt this morning, and I see the node 
is in an “Unassigned” state. I can ping it, but cannot SSH, so there is 
something that is causing the host to be unresponsive.

On Saturday after I sent the mail, I opened a console to the node, and I saw 
the below entries before logging in:

audit:backlog limit exceeded

I then tried the solution of increasing the buffer size in the audit.rules file 
in /etc/audit/rules.d/ , as per below, but it did not resolve the issue.

## First rule - delete all
-D

## Increase the buffers to survive stress events.
## Make this bigger for busy systems
-b 8192

## Set failure mode to syslog
-f 1

Is it possible to upgrade the node to 4.4 while the engine is still on 4.3?

Thanks


Anton Louw
Cloud Engineer: Storage and Virtualization
__
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.l...@voxtelecom.co.za

www.vox.co.za



From: Sandro Bonazzola 
Sent: 13 November 2020 18:39
To: Anton Louw ; Arik Hadas ; 
Dominik Holler 
Cc: users@ovirt.org; Johan Koen 
Subject: Re: [ovirt-users] oVirt Node Crash



On Fri, Nov 13, 2020 at 17:37 Sandro Bonazzola
<sbona...@redhat.com> wrote:


On Fri, Nov 13, 2020 at 13:38 Anton Louw via Users
<users@ovirt.org> wrote:

Hi Everybody,

I have built a new host which has been running fine for the last couple of 
days. I noticed today that the host crashed, but it is not giving me a reason 
as to why.

It happened at 13:45 today, but I have given time before that on the logs as 
well.

Is there something I am missing here?

Not related to the crash, but I see in the logs that 5 out of 20 guests have 
qemu guest agent not responding.

Also you seem to have some issues with some firewalld rules. (Maybe +Dominik 
Holler would like to have a look)

I don't see anything explaining why the host got rebooted.

Still related to guest agent I find a bit alarming the following lines:
Nov 13 13:29:34 jb2-node03 libvirtd: 2020-11-13 11:29:34.294+: 12603: error 
: qemuDomainAgentAvailable:9144 : Guest agent is not responding: QEMU guest 
agent is not connected
Nov 13 13:29:34 jb2-node03 vdsm[13843]: ERROR Shutdown by QEMU Guest Agent 
failed#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5304, in 
qemuGuestAgentShutdown#012
self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)#012  File 
"/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in f#012   
 ret = attr(*args, **kwargs)#012  File 
"/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, 
in wrapper#012ret = f(*args, **kwargs)#012  File 
"/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in 
wrapper#012return func(inst, *args, **kwargs)#012  File 
"/usr/lib64/python2.7/site-packages/libvirt.py", line 2517, in 
shutdownFlags#012if ret == -1: raise libvirtError 
('virDomainShutdownFlags() failed', dom=self)#012libvirtError: Guest agent is 
not responding: QEMU guest agent is not connected
Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered disabled 
state
Nov 13 13:29:42 jb2-node03 kernel: device vnet15 left promiscuous mode
Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered disabled 
state
Nov 13 13:29:42 jb2-node03 NetworkManager[6027]:   [1605266982.6539] 
device (vnet15): state change: disconnected -> unmanaged (reason 'unmanaged', 
sys-iface-state: 'removed')
Nov 13 13:29:42 jb2-node03 NetworkManager[6027]:   [1605266982.6550] 
device (vnet15): released from master device vlan0077
Nov 13 13:29:42 jb2-node03 libvirtd: 2020-11-13 11:29:42.669+: 12557: error 
: qemuMonitorIO:718 : internal error: End of file from qemu monitor

+Arik Hadas any clue?

About the crash, can you please provide full sos report from the host? the log 
you provided is not enough to understand what caused the reported crash

Also, given python2 is used here, I assume you're on 4.3 or older. I would 
recommend upgrading to 4.4 as soon as practical.






Thanks

Anton Louw
Cloud Engineer: Storage and Virtualization at Vox

T:  087 805  | D: 087 805 1572
M: N/A
E: anton.l...@voxtelecom.co.za
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
www.vox.co.za




Disclaimer

The contents of this email are confidential to the sender and the intended 
recipient. Unless the contents are clearly and entirely of a personal nature, 
they 

[ovirt-users] Re: oVirt Node Crash

2020-11-13 Thread Sandro Bonazzola
On Fri, Nov 13, 2020 at 17:37 Sandro Bonazzola <
sbona...@redhat.com> wrote:

>
>
> On Fri, Nov 13, 2020 at 13:38 Anton Louw via Users <
> users@ovirt.org> wrote:
>
>>
>>
>> Hi Everybody,
>>
>>
>>
>> I have built a new host which has been running fine for the last couple
>> of days. I noticed today that the host crashed, but it is not giving me a
>> reason as to why.
>>
>>
>>
>> It happened at 13:45 today, but I have given time before that on the logs
>> as well.
>>
>>
>>
>> Is there something I am missing here?
>>
>
> Not related to the crash, but I see in the logs that 5 out of 20 guests
> have qemu guest agent not responding.
>
> Also you seem to have some issues with some firewalld rules. (Maybe +Dominik
> Holler  would like to have a look)
>
> I don't see anything explaining why the host got rebooted.
>
> Still related to guest agent I find a bit alarming the following lines:
> Nov 13 13:29:34 jb2-node03 libvirtd: 2020-11-13 11:29:34.294+: 12603:
> error : qemuDomainAgentAvailable:9144 : Guest agent is not responding: QEMU
> guest agent is not connected
> Nov 13 13:29:34 jb2-node03 vdsm[13843]: ERROR Shutdown by QEMU Guest Agent
> failed#012Traceback (most recent call last):#012  File
> "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5304, in
> qemuGuestAgentShutdown#012
>  self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in
> f#012ret = attr(*args, **kwargs)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line
> 131, in wrapper#012ret = f(*args, **kwargs)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in
> wrapper#012return func(inst, *args, **kwargs)#012  File
> "/usr/lib64/python2.7/site-packages/libvirt.py", line 2517, in
> shutdownFlags#012if ret == -1: raise libvirtError
> ('virDomainShutdownFlags() failed', dom=self)#012libvirtError: Guest agent
> is not responding: QEMU guest agent is not connected
> Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered
> disabled state
> Nov 13 13:29:42 jb2-node03 kernel: device vnet15 left promiscuous mode
> Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered
> disabled state
> Nov 13 13:29:42 jb2-node03 NetworkManager[6027]:   [1605266982.6539]
> device (vnet15): state change: disconnected -> unmanaged (reason
> 'unmanaged', sys-iface-state: 'removed')
> Nov 13 13:29:42 jb2-node03 NetworkManager[6027]:   [1605266982.6550]
> device (vnet15): released from master device vlan0077
> Nov 13 13:29:42 jb2-node03 libvirtd: 2020-11-13 11:29:42.669+: 12557:
> error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
>
> +Arik Hadas  any clue?
>
> About the crash, can you please provide full sos report from the host? the
> log you provided is not enough to understand what caused the reported crash
>

Also, given python2 is used here, I assume you're on 4.3 or older. I would
recommend upgrading to 4.4 as soon as practical.



>
>
>
>
>>
>>
>> Thanks
>>
>> *Anton Louw*
>> *Cloud Engineer: Storage and Virtualization* at *Vox*
>> --
>> *T:*  087 805  | *D:* 087 805 1572
>> *M:* N/A
>> *E:* anton.l...@voxtelecom.co.za
>> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
>> www.vox.co.za
>>
>>
>> *Disclaimer*
>>
>> The contents of this email are confidential to the sender and the
>> intended recipient. Unless the contents are clearly and entirely of a
>> personal nature, they are subject to copyright in favour of the holding
>> company of the Vox group of companies. Any recipient who receives this
>> email in error should immediately report the error to the sender and
>> permanently delete this email from all storage devices.
>>
>> This email has been scanned for viruses and malware, and may have been
>> automatically archived by *Mimecast Ltd*, an innovator in Software as a
>> Service (SaaS) for business. Providing a *safer* and *more useful* place
>> for your human generated data. Specializing in; Security, archiving and
>> compliance. To find out more Click Here
>> .
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> 

[ovirt-users] Re: oVirt Node Crash

2020-11-13 Thread Sandro Bonazzola
On Fri, Nov 13, 2020 at 13:38 Anton Louw via Users <
users@ovirt.org> wrote:

>
>
> Hi Everybody,
>
>
>
> I have built a new host which has been running fine for the last couple of
> days. I noticed today that the host crashed, but it is not giving me a
> reason as to why.
>
>
>
> It happened at 13:45 today, but I have given time before that on the logs
> as well.
>
>
>
> Is there something I am missing here?
>

Not related to the crash, but I see in the logs that 5 out of 20 guests
have qemu guest agent not responding.

Also you seem to have some issues with some firewalld rules. (Maybe +Dominik
Holler  would like to have a look)

I don't see anything explaining why the host got rebooted.

Still related to guest agent I find a bit alarming the following lines:
Nov 13 13:29:34 jb2-node03 libvirtd: 2020-11-13 11:29:34.294+: 12603:
error : qemuDomainAgentAvailable:9144 : Guest agent is not responding: QEMU
guest agent is not connected
Nov 13 13:29:34 jb2-node03 vdsm[13843]: ERROR Shutdown by QEMU Guest Agent
failed#012Traceback (most recent call last):#012  File
"/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5304, in
qemuGuestAgentShutdown#012
 self._dom.shutdownFlags(libvirt.VIR_DOMAIN_SHUTDOWN_GUEST_AGENT)#012  File
"/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in
f#012ret = attr(*args, **kwargs)#012  File
"/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line
131, in wrapper#012ret = f(*args, **kwargs)#012  File
"/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in
wrapper#012return func(inst, *args, **kwargs)#012  File
"/usr/lib64/python2.7/site-packages/libvirt.py", line 2517, in
shutdownFlags#012if ret == -1: raise libvirtError
('virDomainShutdownFlags() failed', dom=self)#012libvirtError: Guest agent
is not responding: QEMU guest agent is not connected
Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered
disabled state
Nov 13 13:29:42 jb2-node03 kernel: device vnet15 left promiscuous mode
Nov 13 13:29:42 jb2-node03 kernel: vlan0077: port 11(vnet15) entered
disabled state
Nov 13 13:29:42 jb2-node03 NetworkManager[6027]:   [1605266982.6539]
device (vnet15): state change: disconnected -> unmanaged (reason
'unmanaged', sys-iface-state: 'removed')
Nov 13 13:29:42 jb2-node03 NetworkManager[6027]:   [1605266982.6550]
device (vnet15): released from master device vlan0077
Nov 13 13:29:42 jb2-node03 libvirtd: 2020-11-13 11:29:42.669+: 12557:
error : qemuMonitorIO:718 : internal error: End of file from qemu monitor

+Arik Hadas  any clue?

About the crash, can you please provide full sos report from the host? the
log you provided is not enough to understand what caused the reported crash




>
>
> Thanks
>
> *Anton Louw*
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> --
> *T:*  087 805  | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.l...@voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
> *Disclaimer*
>
> The contents of this email are confidential to the sender and the intended
> recipient. Unless the contents are clearly and entirely of a personal
> nature, they are subject to copyright in favour of the holding company of
> the Vox group of companies. Any recipient who receives this email in error
> should immediately report the error to the sender and permanently delete
> this email from all storage devices.
>
> This email has been scanned for viruses and malware, and may have been
> automatically archived by *Mimecast Ltd*, an innovator in Software as a
> Service (SaaS) for business. Providing a *safer* and *more useful* place
> for your human generated data. Specializing in; Security, archiving and
> compliance. To find out more Click Here
> .
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMRUDMRBYZKUJQXVPPAEAJIP7N3JPRLY/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*

[ovirt-users] Re: Ovirt Node 4.4.2 install Odroid-H2 64GB eMMC

2020-10-15 Thread Sandro Bonazzola
On Wed, Oct 14, 2020 at 17:28  wrote:

> Hi All,
>
> I'm trying to install node 4.4.2 on an eMMC card, but when I get to the
> storage configuration of the installer, it doesn't save the settings (which
> is automatic configuration) I have chosen and displays failed to save
> storage configuration. I have deleted all partitions on the card before
> trying to install and I still get the same error. The only way I can get it
> to go is select manual configuration with LVM thin provisioning and
> automatically create. Am I doing something wrong. I can install Centos 8 no
> issues on this, but not oVirt node 4.4.2.
>

Can you please open a bug and attach anaconda logs?


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IFIEBBNGMDIGMGZOUO7C6QYCT3NIPY6T/


[ovirt-users] Re: oVirt-node

2020-10-13 Thread Philip Brown
What is the longevity status of cockpit-machines-ovirt?
I understand it was removed from being installed automatically on node, due to 
perceived redundancy.

But is the software package itself going away and/or not going to be usable in 
the future?


----- Original Message -----
From: "Sandro Bonazzola" 
To: "Budur Nagaraju" 
Cc: "users" 
Sent: Monday, October 12, 2020 5:05:01 AM
Subject: [ovirt-users] Re: oVirt-node

On Mon, Oct 12, 2020 at 12:36 Budur Nagaraju <nbud...@gmail.com> wrote:

Hi 

Is there a way to deploy vms on the ovirt node without using the oVirt engine? 

Hi, 
if you mean: 
"Can I use oVirt Node for running VMs without using oVirt Engine?" 
then yes, you can. 

oVirt Node is a CentOS Linux derivative and as such you can use virt-manager 
from your laptop to connect to it and manage VMs there as if it was a normal 
CentOS. 
You can also use cockpit for creating local VMs. 

If you mean: 
"Can I create VMs from oVirt Node and also manage them from the engine?" 
the short answer is no. 
The long answer is: you can still try using cockpit-machines-ovirt 
https://cockpit-project.org/guide/172/feature-ovirtvirtualmachines.html 
which was deprecated in oVirt 4.3 and removed in 4.4. 
Or run VMs on oVirt Node and try to make them visible to engine using KVM 
provider 
https://www.ovirt.org/documentation/administration_guide/#Adding_KVM_as_an_External_Provider 
But I wouldn't recommend using these flows. 

-- 

Sandro Bonazzola 

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV 

Red Hat EMEA <https://www.redhat.com/> 

sbona...@redhat.com 

*Red Hat respects your work life balance. Therefore there is no need to answer 
this email out of your office hours.*


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7RQZY3DCQ7TFFB4OHOO7EQOVYZCRCDJD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CYGFMNCKPW2I56FUV7IRQXHYK7IQNCDC/


[ovirt-users] Re: oVirt-node

2020-10-12 Thread Strahil Nikolov via Users
Hi Budur,

theoretically it's possible as oVirt is just a management layer.

You can use 'virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' 
as an alias of virsh and then you will be able to "virsh define yourVM.xml" & 
"virsh start yourVM".

It's also suitable for starting a VM during the engine's downtime.
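
A sketch of that flow (paths as given above; the domain XML file name is a
placeholder):

  alias virsh-he='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
  virsh-he list --all
  virsh-he define /root/yourVM.xml
  virsh-he start yourVM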

Best Regards,
Strahil Nikolov










On Monday, October 12, 2020 at 13:36:31 GMT+3, Budur Nagaraju 
 wrote: 





Hi 

Is there a way to deploy  vms on the ovirt node without using the oVirt engine?

Thanks,
Nagaraju
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FEGDT6G6P3D4GEPXFKWECUVO33H73YH5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GX67ICZ2QCUB246GNFOGAWZLR5WPVBLN/


[ovirt-users] Re: oVirt-node

2020-10-12 Thread Budur Nagaraju
Hi Sandro,

I have not installed a hosted engine nor an oVirt engine; I have just
installed an oVirt node on one of the bare metal servers and logged into the
server using Cockpit.
When I browse to the oVirt virtual machines page, "Create New VM" is greyed
out. Is oVirt node dependent on the oVirt engine?

Thanks,
Nagaraju


On Mon, Oct 12, 2020 at 5:54 PM Sandro Bonazzola 
wrote:

>
>
> On Mon, Oct 12, 2020 at 14:20 Budur Nagaraju
> wrote:
>
>> Have logged in using cockpit but unable to create vms, is the behavior
>> like that?
>>
>> We can't use cockpit to create vms?
>>
>
>
> yum install
> http://mirror.centos.org/centos/8/AppStream/x86_64/os/Packages/cockpit-machines-211.3-1.el8.noarch.rpm
> should give you the cockpit plugin for running VMs.
> Just be aware this is not a use case that involves oVirt bits, this is
> basically CentOS workflow.
>
>
>
>>
>> Thanks,
>> Nagaraju
>>
>> On Mon, Oct 12, 2020, 5:35 PM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Mon, Oct 12, 2020 at 12:36 Budur Nagaraju <
>>> nbud...@gmail.com> wrote:
>>>
 Hi

 Is there a way to deploy  vms on the ovirt node without using the oVirt
 engine?

>>>
>>> Hi,
>>> if you mean:
>>> "Can I use oVirt Node for running VMs without using oVirt Engine?"
>>> then yes, you can.
>>>
>>> oVirt Node is a CentOS Linux derivative and as such you can use
>>> virt-manager from your laptop to connect to it and manage VMs there as if
>>> it was a normal CentOS.
>>> You can also use cockpit for creating local VMs.
>>>
>>> If you mean:
>>> "Can I create VMs from oVirt Node and also manage them from the engine?"
>>> the short answer is no.
>>> The long answer is: you can still try using cockpit-machines-ovirt
>>> https://cockpit-project.org/guide/172/feature-ovirtvirtualmachines.html
>>> which was deprecated in oVirt 4.3 and removed in 4.4.
>>> Or run VMs on oVirt Node and try to make them visible to engine using
>>> KVM provider
>>> https://www.ovirt.org/documentation/administration_guide/#Adding_KVM_as_an_External_Provider
>>> But I wouldn't recommend using these flows.
>>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA 
>>>
>>> sbona...@redhat.com
>>> 
>>>
>>> *Red Hat respects your work life balance. Therefore there is no need to
>>> answer this email out of your office hours.*
>>>
>>>
>>>
>>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LMQ2J6CVXBYK735NGF3GS4BNT4YXE2EV/


[ovirt-users] Re: oVirt-node

2020-10-12 Thread Sandro Bonazzola
On Mon, Oct 12, 2020 at 14:20 Budur Nagaraju
wrote:

> Have logged in using cockpit but unable to create vms, is the behavior
> like that?
>
> We can't use cockpit to create vms?
>


yum install
http://mirror.centos.org/centos/8/AppStream/x86_64/os/Packages/cockpit-machines-211.3-1.el8.noarch.rpm
should give you the cockpit plugin for running VMs.
Just be aware this is not a use case that involves oVirt bits, this is
basically CentOS workflow.
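
If the AppStream repo is enabled (see the repo discussion earlier in this
digest), the same plugin can also be installed without the direct RPM URL (a
sketch):

  dnf install cockpit-machines
  systemctl enable --now cockpit.socket   # Cockpit UI on https://<host>:9090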



>
> Thanks,
> Nagaraju
>
> On Mon, Oct 12, 2020, 5:35 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Mon, Oct 12, 2020 at 12:36 Budur Nagaraju <
>> nbud...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> Is there a way to deploy  vms on the ovirt node without using the oVirt
>>> engine?
>>>
>>
>> Hi,
>> if you mean:
>> "Can I use oVirt Node for running VMs without using oVirt Engine?"
>> then yes, you can.
>>
>> oVirt Node is a CentOS Linux derivative and as such you can use
>> virt-manager from your laptop to connect to it and manage VMs there as if
>> it was a normal CentOS.
>> You can also use cockpit for creating local VMs.
>>
>> If you mean:
>> "Can I create VMs from oVirt Node and also manage them from the engine?"
>> the short answer is no.
>> The long answer is: you can still try using cockpit-machines-ovirt
>> https://cockpit-project.org/guide/172/feature-ovirtvirtualmachines.html
>> which was deprecated in oVirt 4.3 and removed in 4.4.
>> Or run VMs on oVirt Node and try to make them visible to engine using KVM
>> provider
>> https://www.ovirt.org/documentation/administration_guide/#Adding_KVM_as_an_External_Provider
>> But I wouldn't recommend using these flows.
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
>> *Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.*
>>
>>
>>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z3RPXNN4VD2K6D375CYQBAZPZ4VIDPFH/


[ovirt-users] Re: oVirt-node

2020-10-12 Thread Budur Nagaraju
Have logged in using cockpit but unable to create vms, is the behavior
like that?

We can't use cockpit to create vms?

Thanks,
Nagaraju

On Mon, Oct 12, 2020, 5:35 PM Sandro Bonazzola  wrote:

>
>
> On Mon, Oct 12, 2020 at 12:36 Budur Nagaraju
> wrote:
>
>> Hi
>>
>> Is there a way to deploy  vms on the ovirt node without using the oVirt
>> engine?
>>
>
> Hi,
> if you mean:
> "Can I use oVirt Node for running VMs without using oVirt Engine?"
> then yes, you can.
>
> oVirt Node is a CentOS Linux derivative and as such you can use
> virt-manager from your laptop to connect to it and manage VMs there as if
> it was a normal CentOS.
> You can also use cockpit for creating local VMs.
>
> If you mean:
> "Can I create VMs from oVirt Node and also manage them from the engine?"
> the short answer is no.
> The long answer is: you can still try using cockpit-machines-ovirt
> https://cockpit-project.org/guide/172/feature-ovirtvirtualmachines.html
> which was deprecated in oVirt 4.3 and removed in 4.4.
> Or run VMs on oVirt Node and try to make them visible to engine using KVM
> provider
> https://www.ovirt.org/documentation/administration_guide/#Adding_KVM_as_an_External_Provider
> But I wouldn't recommend using these flows.
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UEDJLMJGWTQ4KTODWSKMRK62LMDAVMJP/


[ovirt-users] Re: oVirt-node

2020-10-12 Thread Sandro Bonazzola
On Mon, Oct 12, 2020 at 12:36 Budur Nagaraju
wrote:

> Hi
>
> Is there a way to deploy  vms on the ovirt node without using the oVirt
> engine?
>

Hi,
if you mean:
"Can I use oVirt Node for running VMs without using oVirt Engine?"
then yes, you can.

oVirt Node is a CentOS Linux derivative and as such you can use
virt-manager from your laptop to connect to it and manage VMs there as if
it was a normal CentOS.
You can also use cockpit for creating local VMs.

If you mean:
"Can I create VMs from oVirt Node and also manage them from the engine?"
the short answer is no.
The long answer is: you can still try using cockpit-machines-ovirt
https://cockpit-project.org/guide/172/feature-ovirtvirtualmachines.html
which was deprecated in oVirt 4.3 and removed in 4.4.
Or run VMs on oVirt Node and try to make them visible to engine using KVM
provider
https://www.ovirt.org/documentation/administration_guide/#Adding_KVM_as_an_External_Provider
But I wouldn't recommend using these flows.

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7RQZY3DCQ7TFFB4OHOO7EQOVYZCRCDJD/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-06 Thread Martin Perina
Hi Gianluca,

please see my replies inline

On Tue, Oct 6, 2020 at 11:37 AM Gianluca Cecchi 
wrote:

> On Tue, Oct 6, 2020 at 11:25 AM Martin Perina  wrote:
>
>>
>>> You say to drive a command form the engine that is a VM that runs inside
>>> the host, but ask to shutdown VMs running on host before...
>>> This is a self hosted engine composed by only one single host.
>>> Normally I would use the procedure from the engine web admin gui, one
>>> host at a time, but with single host it is not possible.
>>>
>>
> >> We have said several times that it doesn't make sense to use oVirt on a
> >> single host system. So you either need to attach a 2nd host to your setup
> >> (preferred) or shut down all VMs and run a manual upgrade of your host OS
>>
>>
> We who?
>

So I've spent the past hour deeply investigating our upstream documentation
and you are right, we don't have any clear requirements about the minimal
number of hosts in upstream oVirt documentation.
But here are the facts:

1. To be able to upgrade a host either from UI/RESTAPI or manually using
SSH, the host always needs to be in Maintenance:

https://www.ovirt.org/documentation/administration_guide/#Updating_a_host_between_minor_releases

2. To perform Reinstall or Enroll certificate of a host, the host needs to
be in Maintenance mode

https://www.ovirt.org/documentation/administration_guide/#Reinstalling_Hosts_admin

3. When host is in Maintenance mode, there are no oVirt managed VMs running
on it

https://www.ovirt.org/documentation/administration_guide/#Moving_a_host_to_maintenance_mode

4. When engine is not running (either stopped or crashed), VMs running on
hypervisor hosts are unaffected (meaning they run independently of the
engine), but they are pretty much "pinned to the host they are running on"
(for example VMs cannot be migrated or started/stopped (of course you can
stop a VM from within) without a running engine)
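
As a hedged illustration of fact 1 above, moving a host to Maintenance and
upgrading it can also be scripted against the engine REST API, roughly like
this (engine FQDN, credentials and the host id 123 are placeholders; the
deactivate and upgrade actions exist in the oVirt 4 REST API, but verify the
exact endpoints against your version's API documentation):

# put the host into Maintenance
curl -k -u admin@internal:password -H "Content-Type: application/xml" \
  -X POST -d "<action/>" \
  https://engine.example.com/ovirt-engine/api/hosts/123/deactivate

# then trigger the host upgrade
curl -k -u admin@internal:password -H "Content-Type: application/xml" \
  -X POST -d "<action/>" \
  https://engine.example.com/ovirt-engine/api/hosts/123/upgrade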

So just using above facts here are logical conclusions:

1. Standalone engine installation with only one hypervisor host
- this means that engine runs on a bare metal host (for example
engine.domain.com) and a single hypervisor host is managed by it (for example
host1.domain.com)
- in this scenario the administrator is able to perform all
maintenance tasks (even though at the cost that VMs running on the hypervisor
need to be stopped before switching to Maintenance mode),
  because the engine runs independently of the hypervisor

2. Hosted engine installation with one hypervisor host
- this means that engine runs as a VM (for example engine.domain.com)
inside a single hypervisor host, which is managed by it (for example
host1.domain.com)
- in this scenario maintenance of the host is very limited:
- you cannot move the host to Maintenance, because the hosted engine VM
cannot be migrated to another host
- you can perform global Maintenance and then probably manually stop the
hosted engine VM, but then you don't have an engine able to perform
maintenance tasks (for example, Upgrade, Reinstall or Enroll certificates)

But in both above use cases you cannot use the biggest oVirt advantage, and
that's shared storage among hypervisor hosts, which allows you to perform
live migration of VMs. And thanks to that feature you can perform
maintenance tasks on the host(s) without interruption in providing VM
services.

*From the above it's obvious that we need to really clearly state that in a
production environment oVirt requires to have at least 2 hypervisor hosts
for full functionality.*

In old times there was the all-in-one setup that was superseded by
> single host HCI
>

The all-in-one feature has been deprecated in oVirt 3.6 and fully removed in
oVirt 4.0

> ... developers also put extra effort into the setup wizard to cover the
> single host scenario.
>

Yes, you are right, you can initially set up oVirt with just a single host,
but it's expected that you are going to add additional host(s) soon.

Obviously it is aimed at test bed / devel / home environments, not
> production ones.
>

Of course, for development use whatever you want, but for production you
care about your setup, because you want the services you offer to run
smoothly

> Do you want me to send you the list of bugzillas contributed by users using
> single host environments that helped Red Hat to have a better working RHV
> too?
>

It's clearly stated that at least 2 hypervisors are required for hosted
engine or standalone RHV installation:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/rhv_architecture
But as I mentioned above, we have a bug in the oVirt documentation, in that
such an important requirement is not clearly stated. And this is not a fault
of the community; this is a fault of the oVirt maintainers: we have forgotten
to mention such an important requirement in the oVirt documentation, and it's
clearly visible that it caused confusion for so many users.

But no matter what I 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-06 Thread Gianluca Cecchi
On Tue, Oct 6, 2020 at 11:25 AM Martin Perina  wrote:

>
> >> You say to drive a command from the engine that is a VM that runs inside
>> the host, but ask to shutdown VMs running on host before...
>> This is a self hosted engine composed by only one single host.
>> Normally I would use the procedure from the engine web admin gui, one
>> host at a time, but with single host it is not possible.
>>
>
> We have said several times that it doesn't make sense to use oVirt on a
> single host system. So you either need to attach a 2nd host to your setup
> (preferred) or shut down all VMs and run a manual upgrade of your host OS
>
>
We who?
In old times there was the all-in-one setup that was superseded by
single host HCI ... developers also put extra effort into the setup wizard
to cover the single host scenario.
Obviously it is aimed at test bed / devel / home environments, not
production ones.
Do you want me to send you the list of bugzillas contributed by users using
single host environments that helped Red Hat to have a better working RHV
too?

Please think more deeply next time, thanks

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TTT5IGNQV3VJMTCQNWKEFRXL45YKSULB/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-06 Thread Martin Perina
On Mon, Oct 5, 2020 at 3:25 PM Gianluca Cecchi 
wrote:

>
>
> On Mon, Oct 5, 2020 at 3:13 PM Dana Elfassy  wrote:
>
>> Can you shut down the VMs just for the upgrade process?
>>
>> On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi 
>> wrote:
>>
>>> On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy 
>>> wrote:
>>>
 In order to run the playbooks you would also need the parameters that
 they use - some are set on the engine side
 Why can't you upgrade the host from the engine admin portal?


>>> Because when you upgrade a host you put it into maintenance first.
>>> And this implies no VMs running on it.
>>> But if you are in a single-host environment you cannot
>>>
>>> Gianluca
>>>
>>
> we are talking about a chicken-and-egg problem.
>
> You say to drive a command from the engine that is a VM that runs inside
> the host, but ask to shutdown VMs running on host before...
> This is a self hosted engine composed by only one single host.
> Normally I would use the procedure from the engine web admin gui, one host
> at a time, but with single host it is not possible.
>

We have said several times that it doesn't make sense to use oVirt on a
single host system. So you either need to attach a 2nd host to your setup
(preferred) or shut down all VMs and run a manual upgrade of your host OS


> Gianluca
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3ZU43KQXYJO43CWTDDT733H4YZS4JA2U/
>


-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7EATX7RPVUOAQWKLHYOTSTRVJG4M2O6Q/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 3:13 PM Dana Elfassy  wrote:

> Can you shut down the VMs just for the upgrade process?
>
> On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi 
> wrote:
>
>> On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy  wrote:
>>
>>> In order to run the playbooks you would also need the parameters that
>>> they use - some are set on the engine side
>>> Why can't you upgrade the host from the engine admin portal?
>>>
>>>
>> Because when you upgrade a host you put it into maintenance first.
>> And this implies no VMs running on it.
>> But if you are in a single-host environment you cannot
>>
>> Gianluca
>>
>
we are talking about a chicken-and-egg problem.

You say to drive a command from the engine that is a VM that runs inside
the host, but ask to shutdown VMs running on host before...
This is a self hosted engine composed by only one single host.
Normally I would use the procedure from the engine web admin gui, one host
at a time, but with single host it is not possible.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3ZU43KQXYJO43CWTDDT733H4YZS4JA2U/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Dana Elfassy
Can you shut down the VMs just for the upgrade process?

On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi 
wrote:

> On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy  wrote:
>
>> In order to run the playbooks you would also need the parameters that
>> they use - some are set on the engine side
>> Why can't you upgrade the host from the engine admin portal?
>>
>>
> Because when you upgrade a host you put it into maintenance first.
> And this implies no VMs running on it.
> But if you are in a single-host environment you cannot
>
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RAYP3AWPDRH7JHDBUJQZWTXRKPLA6DWI/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy  wrote:

> In order to run the playbooks you would also need the parameters that they
> use - some are set on the engine side
> Why can't you upgrade the host from the engine admin portal?
>
>
Because when you upgrade a host you put it into maintenance first.
And this implies no VMs running on it.
But if you are in a single-host environment you cannot

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ROKDXJ7RIJPXOJXRMHSK7DGSYIELGKEN/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Dana Elfassy
In order to run the playbooks you would also need the parameters that they
use - some are set on the engine side
Why can't you upgrade the host from the engine admin portal?

On Mon, Oct 5, 2020 at 12:31 PM Gianluca Cecchi 
wrote:

> On Mon, Oct 5, 2020 at 10:37 AM Dana Elfassy  wrote:
>
>> Yes.
>> The additional main tasks that we execute during host upgrade besides
>> updating packages are certificate-related (checking certificate
>> validity, enrolling certificates), configuring advanced virtualization and
>> the lvm filter
>> Dana
>>
>>
> Thanks,
> What if I want to execute it directly on the host? Any command / pointer to
> run after "yum update"?
> This is to cover a scenario with single host, where I cannot drive it from
> the engine...
>
> Gianluca
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEBWRZOIT4Y2B2T2L2XYFJJV6VQ3VAOF/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Nir Soffer
On Mon, Oct 5, 2020 at 9:06 AM Gianluca Cecchi
 wrote:
>
> On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer  wrote:
>>
>> On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
>> >
>> >
>> >
>> > On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi  
>> > wrote:
>> >>
>> >> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>> >>>
>> >>>
>> >>>
>> >>> Since there wasn't a filter set on the node, the 4.4.2 update added the 
>> >>> default filter for the root-lv pv;
>> >>> if there was some filter set before the upgrade, it would not have been 
>> >>> added by the 4.4.2 update.
>> 
>> 
>> >>
>> >> Do you mean that I will get the same problem upgrading from 4.4.2 to an 
>> >> upcoming 4.4.3, as also now I don't have any filter set?
>> >> This would not be desirable
>> >
>> > Once you have got back into 4.4.2, it's recommended to set the lvm filter 
>> > to fit the pvs you use on your node;
>> > for the local root pv you can run
>> > # vdsm-tool config-lvm-filter -y
>> > For the gluster bricks you'll need to add their uuids to the filter as 
>> > well.
>>
>> vdsm-tool is expected to add all the devices needed by the mounted
>> logical volumes, so adding devices manually should not be needed.
>>
>> If this does not work please file a bug and include all the info to reproduce
>> the issue.
>>
>
> I don't know what exactly happened when I installed ovirt-node-ng in 4.4.0,
> but the effect was that no filter at all was set up in lvm.conf, hence the
> problem I had upgrading to 4.4.2.
> Any way to see related logs for 4.4.0? In which phase of the install (of the
> node itself or of the gluster-based wizard) is the vdsm-tool command
> supposed to run?
>
> Right now in 4.4.2 I get this output, so it seems it works in 4.4.2:
>
> "
> [root@ovirt01 ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Found these mounted logical volumes on this host:
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_data
>   mountpoint:  /gluster_bricks/data
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_engine
>   mountpoint:  /gluster_bricks/engine
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_vmstore
>   mountpoint:  /gluster_bricks/vmstore
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
>
>   logical volume:  /dev/mapper/onn-home
>   mountpoint:  /home
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1
>   mountpoint:  /
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-swap
>   mountpoint:  [SWAP]
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-tmp
>   mountpoint:  /tmp
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var
>   mountpoint:  /var
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_crash
>   mountpoint:  /var/crash
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_log
>   mountpoint:  /var/log
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
>   logical volume:  /dev/mapper/onn-var_log_audit
>   mountpoint:  /var/log/audit
>   devices: 
> /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
>
> This is the recommended LVM filter for this host:
>
>   filter = [ 
> "a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|", 
> "r|.*|" ]
>
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> To use the recommended filter we need to add multipath
> blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
>
>   blacklist {
>   wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
>   wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
>   }
>
>
> Configure host? [yes,NO]
>
> "
> Does this mean that answering "yes" I will get both lvm and multipath related 
> files modified?

Yes...

>
> Right now my multipath is configured this way:
>
> [root@ovirt01 ~]# grep -v "^#" /etc/multipath.conf | grep -v "^#" | grep 
> -v "^$"
> defaults {
> polling_interval5
> no_path_retry   4
> user_friendly_names no
> flush_on_last_del   yes
> 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 10:37 AM Dana Elfassy  wrote:

> Yes.
> The additional main tasks that we execute during host upgrade besides
> updating packages are certificate-related (checking certificate
> validity, enrolling certificates), configuring advanced virtualization and
> the lvm filter
> Dana
>
>
Thanks,
What if I want to execute it directly on the host? Any command / pointer to
run after "yum update"?
This is to cover a scenario with single host, where I cannot drive it from
the engine...

Gianluca
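
For reference, a rough manual equivalent on the host itself, sketched under
the assumption that certificate enrollment still needs a reachable engine
(the certificate path below is the usual vdsm location, an assumption you
should adjust to your host):

yum update                       # update the host packages
vdsm-tool config-lvm-filter -y   # reapply the recommended lvm filter
# check certificate validity manually
openssl x509 -enddate -noout -in /etc/pki/vdsm/certs/vdsmcert.pem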
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6BDBZMEDS4NDV7D6MGLF2C35M4G66V5K/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Dana Elfassy
Yes.
The additional main tasks that we execute during host upgrade besides
updating packages are certificate-related (checking certificate
validity, enrolling certificates), configuring advanced virtualization and
the lvm filter
Dana

On Mon, Oct 5, 2020 at 9:31 AM Sandro Bonazzola  wrote:

>
>
> On Sat, Oct 3, 2020 at 2:16 PM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Fri, Sep 25, 2020 at 3:32 PM Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>


 On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
 wrote:

> oVirt Node 4.4.2 is now generally available
>
> The oVirt project is pleased to announce the general availability of
> oVirt Node 4.4.2 , as of September 25th, 2020.
>
> This release completes the oVirt 4.4.2 release published on September
> 17th
>

 Thanks for the news!

 How to prevent hosts entering emergency mode after upgrade from oVirt
> 4.4.1
>
> Due to Bug 1837864
>  - Host enter
> emergency mode after upgrading to latest build
>
> If you have your root file system on a multipath device on your hosts
> you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
> your host entering emergency mode.
>
> In order to prevent this be sure to upgrade oVirt Engine first, then
> on your hosts:
>
>1.
>
>Remove the current lvm filter while still on 4.4.1, or in
>emergency mode (if rebooted).
>2.
>
>Reboot.
>3.
>
>Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
>4.
>
>Run vdsm-tool config-lvm-filter to confirm there is a new filter
>in place.
>5.
>
>Only if not using oVirt Node:
>- run "dracut --force --add multipath” to rebuild initramfs with
>the correct filter configuration
>6.
>
>Reboot.
>
>
>
 What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
 to follow the same steps as if I were in 4.4.1 or what?
 I would like to avoid going through 4.4.1 if possible.

>>>
>>> I don't think we had someone testing 4.4.0 to 4.4.2 but the above procedure
>>> should work for the same case.
>>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>>
>>> # grep '^filter = ' /etc/lvm/lvm.conf
>>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>>
>>>
>>>
>>>

 Thanks,
 Gianluca

>>>
>>>
>> OK, so I tried on my single host HCI installed with ovirt-node-ng 4.4.0
>> and the gluster wizard, and never updated until now.
>> Updated self hosted engine to 4.4.2 without problems.
>>
>> My host doesn't have any filter or global_filter set up in lvm.conf  in
>> 4.4.0.
>>
>> So I update it:
>>
>> [root@ovirt01 vdsm]# yum update
>>
>
> Please use the update command from the engine admin portal.
> The ansible code running from there also performs additional steps other
> than just yum update.
> +Dana Elfassy  can you elaborate on other steps
> performed during the upgrade?
>
>
>
>> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51
>> PM CEST.
>> Dependencies resolved.
>>
>> 
>>  Package ArchitectureVersion
>>   Repository  Size
>>
>> 
>> Installing:
>>  ovirt-node-ng-image-update  noarch  4.4.2-1.el8
>>   ovirt-4.4  782 M
>>  replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>>
>> Transaction Summary
>>
>> 
>> Install  1 Package
>>
>> Total download size: 782 M
>> Is this ok [y/N]: y
>> Downloading Packages:
>> ovirt-node-ng-image-update-4.4  27% [= ] 6.0 MB/s |
>> 145 MB 01:45 ETA
>>
>>
>> 
>> Total   5.3
>> MB/s | 782 MB 02:28
>> Running transaction check
>> Transaction check succeeded.
>> Running transaction test
>> Transaction test succeeded.
>> Running transaction
>>   Preparing:
>>1/1
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>>1/2
>>   Installing   : ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>>1/2
>>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>>1/2
>>   Obsoleting   :
>> ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch
>>  2/2
>>   

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 8:31 AM Sandro Bonazzola  wrote:

>
>> OK, so I tried on my single host HCI installed with ovirt-node-ng 4.4.0
>> and gluster wizard and never update until now.
>> Updated self hosted engine to 4.4.2 without problems.
>>
>> My host doesn't have any filter or global_filter set up in lvm.conf  in
>> 4.4.0.
>>
>> So I update it:
>>
>> [root@ovirt01 vdsm]# yum update
>>
>
> Please use the update command from the engine admin portal.
> The ansible code running from there also performs additional steps other
> than just yum update.
> +Dana Elfassy  can you elaborate on other steps
> performed during the upgrade?
>
>
Yes, in general.
But for single-host environments it is not possible, at least I think,
because you are upgrading the host where the engine is running...

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VCPWGUVJYERJJK7UY4K22U32L52ZXV5D/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Sandro Bonazzola
On Sat, Oct 3, 2020 at 2:16 PM Gianluca Cecchi <
gianluca.cec...@gmail.com> wrote:

> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Fri, Sep 25, 2020 at 3:32 PM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
>>> wrote:
>>>
 oVirt Node 4.4.2 is now generally available

 The oVirt project is pleased to announce the general availability of
 oVirt Node 4.4.2 , as of September 25th, 2020.

 This release completes the oVirt 4.4.2 release published on September
 17th

>>>
>>> Thanks for the news!
>>>
>>> How to prevent hosts entering emergency mode after upgrade from oVirt
 4.4.1

 Due to Bug 1837864
  - Host enter
 emergency mode after upgrading to latest build

 If you have your root file system on a multipath device on your hosts
 you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
 your host entering emergency mode.

 In order to prevent this be sure to upgrade oVirt Engine first, then on
 your hosts:

1.

Remove the current lvm filter while still on 4.4.1, or in emergency
mode (if rebooted).
2.

Reboot.
3.

Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
4.

Run vdsm-tool config-lvm-filter to confirm there is a new filter in
place.
5.

Only if not using oVirt Node:
- run "dracut --force --add multipath” to rebuild initramfs with
the correct filter configuration
6.

Reboot.



>>> What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
>>> to follow the same steps as if I were in 4.4.1 or what?
>>> I would like to avoid going through 4.4.1 if possible.
>>>
>>
>> I don't think we had someone testing 4.4.0 to 4.4.2 but the above procedure
>> should work for the same case.
>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>
>> # grep '^filter = ' /etc/lvm/lvm.conf
>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>
>>
>>
>>
>>>
>>> Thanks,
>>> Gianluca
>>>
>>
>>
> OK, so I tried on my single host HCI installed with ovirt-node-ng 4.4.0
> and the gluster wizard, and never updated until now.
> Updated self hosted engine to 4.4.2 without problems.
>
> My host doesn't have any filter or global_filter set up in lvm.conf  in
> 4.4.0.
>
> So I update it:
>
> [root@ovirt01 vdsm]# yum update
>

Please use the update command from the engine admin portal.
The ansible code running from there also performs additional steps other
than just yum update.
+Dana Elfassy  can you elaborate on other steps
performed during the upgrade?



> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM
> CEST.
> Dependencies resolved.
>
> 
>  Package ArchitectureVersion
> Repository  Size
>
> 
> Installing:
>  ovirt-node-ng-image-update  noarch  4.4.2-1.el8
> ovirt-4.4  782 M
>  replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>
> Transaction Summary
>
> 
> Install  1 Package
>
> Total download size: 782 M
> Is this ok [y/N]: y
> Downloading Packages:
> ovirt-node-ng-image-update-4.4  27% [= ] 6.0 MB/s |
> 145 MB 01:45 ETA
>
>
> 
> Total   5.3
> MB/s | 782 MB 02:28
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:
>  1/1
>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Installing   : ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Obsoleting   :
> ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch
>  2/2
>   Verifying: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>  1/2
>   Verifying:
> ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch
>  2/2
> Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch.rpm
>
> Installed:
>   ovirt-node-ng-image-update-4.4.2-1.el8.noarch
>
>
> Complete!
> [root@ovirt01 vdsm]# sync
> [root@ovirt01 vdsm]#
>
> I reboot and I'm offered 4.4.2 by default, with 4.4.0 available too.
> But 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-05 Thread Gianluca Cecchi
On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer  wrote:

> On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
> >
> >
> >
> > On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
> >>
> >> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
> >>>
> >>>
> >>>
> >>> Since there wasn't a filter set on the node, the 4.4.2 update added
> the default filter for the root-lv pv;
> >>> if there was some filter set before the upgrade, it would not have
> been added by the 4.4.2 update.
> 
> 
> >>
> >> Do you mean that I will get the same problem upgrading from 4.4.2 to an
> upcoming 4.4.3, as also now I don't have any filter set?
> >> This would not be desirable
> >
> > Once you have got back into 4.4.2, it's recommended to set the lvm
> filter to fit the pvs you use on your node;
> > for the local root pv you can run
> > # vdsm-tool config-lvm-filter -y
> > For the gluster bricks you'll need to add their uuids to the filter as
> well.
>
> vdsm-tool is expected to add all the devices needed by the mounted
> logical volumes, so adding devices manually should not be needed.
>
> If this does not work please file a bug and include all the info to
> reproduce
> the issue.
>
>
I don't know what exactly happened when I installed ovirt-node-ng in 4.4.0,
but the effect was that no filter at all was set up in lvm.conf, hence the
problem I had upgrading to 4.4.2.
Any way to see related logs for 4.4.0? In which phase of the install (of the
node itself or of the gluster-based wizard) is the vdsm-tool command
supposed to run?

Right now in 4.4.2 I get this output, so it seems it works in 4.4.2:

"
[root@ovirt01 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_data
  mountpoint:  /gluster_bricks/data
  devices:
/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_engine
  mountpoint:  /gluster_bricks/engine
  devices:
/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/gluster_vg_sda-gluster_lv_vmstore
  mountpoint:  /gluster_bricks/vmstore
  devices:
/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr

  logical volume:  /dev/mapper/onn-home
  mountpoint:  /home
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1
  mountpoint:  /
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-swap
  mountpoint:  [SWAP]
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-tmp
  mountpoint:  /tmp
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var
  mountpoint:  /var
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_crash
  mountpoint:  /var/crash
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_log
  mountpoint:  /var/log
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

  logical volume:  /dev/mapper/onn-var_log_audit
  mountpoint:  /var/log/audit
  devices:
/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7

This is the recommended LVM filter for this host:

  filter = [
"a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|",
"a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|",
"r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

To use the recommended filter we need to add multipath
blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:

  blacklist {
  wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
  wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
  }


Configure host? [yes,NO]

"
Does this mean that answering "yes" I will get both lvm and multipath
related files modified?

Right now my multipath is configured this way:

[root@ovirt01 ~]# grep -v "^#" /etc/multipath.conf | grep -v "^#" |
grep -v "^$"
defaults {
polling_interval5
no_path_retry   4
user_friendly_names no
flush_on_last_del   yes
fast_io_fail_tmo5
dev_loss_tmo30
max_fds 4096
}
blacklist {
protocol "(scsi:adt|scsi:sbp)"
}
overrides {
  no_path_retry4
}
[root@ovirt01 ~]#

with blacklist explicit on both disks but inside different files:

root disk:
[root@ovirt01 ~]# cat /etc/multipath/conf.d/vdsm_blacklist.conf
# This file is managed by vdsm, 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Nir Soffer
On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer  wrote:
>
>
>
> On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi  
> wrote:
>>
>> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>>>
>>>
>>>
>>> Since there wasn't a filter set on the node, the 4.4.2 update added the 
>>> default filter for the root-lv pv;
>>> if there was some filter set before the upgrade, it would not have been 
>>> added by the 4.4.2 update.


>>
>> Do you mean that I will get the same problem upgrading from 4.4.2 to an 
>> upcoming 4.4.3, as also now I don't have any filter set?
>> This would not be desirable
>
> Once you have got back into 4.4.2, it's recommended to set the lvm filter to 
> fit the pvs you use on your node;
> for the local root pv you can run
> # vdsm-tool config-lvm-filter -y
> For the gluster bricks you'll need to add their uuids to the filter as well.

vdsm-tool is expected to add all the devices needed by the mounted
logical volumes, so adding devices manually should not be needed.

If this does not work please file a bug and include all the info to reproduce
the issue.

> The next upgrade should not set a filter on its own if one is already set.
>
>>
>>


 Right now only two problems:

 1) a long-running problem: from the engine web admin all the volumes are
 seen as up, and the storage domains too, while only the hosted engine
 one is actually up and "data" and "vmstore" are down; as I can verify from the
 host, there is only one /rhev/data-center/ mount:

>> [snip]


 I already reported this, but I don't know if there is yet a bugzilla open 
 for it.
>>>
>>> Did you get any response for the original mail? Haven't seen it on the
>>> users-list.
>>
>>
>> I think it was this thread, related to the 4.4.0 release and a question
>> about auto-start of VMs.
>> A script from Derek that tested if domains were active and got false
>> positives, and my comments about the same observed behaviour:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UOEL2SNHUUC7M4WAJ5NO/
>>
>> But I think there was no answer on that particular item/problem.
>> Indeed I think you can easily reproduce it; I don't know if only with
>> Gluster or also with other storage domains.
>> I don't know if it plays a part that on the last host during a
>> whole shutdown (and the only host in the single-host case) you have to run
>> the script
>> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
>> otherwise you sometimes risk not getting a complete shutdown.
>> And perhaps this stop can have an influence on the following startup.
>> In any case the web admin GUI (and the API access) should not show the
>> domains as active when they are not. I think there is a bug in the code that
>> checks this.
>
> If it got no response so far, I think it could be helpful to file a bug with 
> the details of the setup and the steps involved here so it will get tracked.
>
>>
>>>

 2) I see that I cannot connect to the cockpit console of the node.

>> [snip]

 NOTE: the host is not resolved by DNS but I put an entry in my client's hosts file.
>>>
>>> Might be required to set DNS for authenticity, maybe other members on the 
>>> list could tell better.
>>
>>
>> It would be the first time I see it. The access to web admin GUI works ok 
>> even without DNS resolution.
>> I'm not sure if I had the same problem with the cockpit host console on 
>> 4.4.0.
>
> Perhaps +Yedidyah Bar David  could help regarding cockpit web access.
>
>>
>> Gianluca
>>
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VYWJPRKRESPBAR7I45QSVNTCVWNRZ5WQ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FRSYXNIUTNXC7S2B4ALAQIWECBKUCR4H/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Amit Bawer
On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi 
wrote:

> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:
>
>>
>>
>> Since there wasn't a filter set on the node, the 4.4.2 update added the
>> default filter for the root-lv pv;
>> if there was some filter set before the upgrade, it would not have been
>> added by the 4.4.2 update.
>>
>>>
>>>
> Do you mean that I will get the same problem upgrading from 4.4.2 to an
> upcoming 4.4.3, as also now I don't have any filter set?
> This would not be desirable
>
Once you have got back into 4.4.2, it's recommended to set the lvm filter
to fit the pvs you use on your node;
for the local root pv you can run
# vdsm-tool config-lvm-filter -y
For the gluster bricks you'll need to add their uuids to the filter as well.
The next upgrade should not set a filter on its own if one is already set.
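
For illustration, such a combined filter in /etc/lvm/lvm.conf would look
roughly like this (the two lvm-pv-uuid values are placeholders; use the ones
udevadm/lsblk report for your root pv and your gluster brick pvs):

filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-<root-pv-uuid>$|", "a|^/dev/disk/by-id/lvm-pv-uuid-<gluster-pv-uuid>$|", "r|.*|" ]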


>
>
>>
>>> Right now only two problems:
>>>
>>> 1) a long-running problem: from the engine web admin all the volumes are
>>> seen as up, and the storage domains too, while only the hosted engine
>>> one is actually up and "data" and "vmstore" are down; as I can verify from the
>>> host, there is only one /rhev/data-center/ mount:
>>>
>>> [snip]
>
>>
>>> I already reported this, but I don't know if there is yet a bugzilla
>>> open for it.
>>>
>> Did you get any response for the original mail? Haven't seen it on the
>> users-list.
>>
>
> I think it was this thread, related to the 4.4.0 release and a question
> about auto-start of VMs.
> A script from Derek that tested if domains were active and got false
> positives, and my comments about the same observed behaviour:
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UOEL2SNHUUC7M4WAJ5NO/
>
> But I think there was no answer on that particular item/problem.
> Indeed I think you can easily reproduce it; I don't know if only with
> Gluster or also with other storage domains.
> I don't know if it plays a part that on the last host during a
> whole shutdown (and the only host in the single-host case) you have to run
> the script
> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
> otherwise you sometimes risk not getting a complete shutdown.
> And perhaps this stop can have an influence on the following startup.
> In any case the web admin GUI (and the API access) should not show the
> domains as active when they are not. I think there is a bug in the code that
> checks this.
>
If it got no response so far, I think it could be helpful to file a bug
with the details of the setup and the steps involved here so it will get
tracked.


>
>>
>>> 2) I see that I cannot connect to the cockpit console of the node.
>>>
>>> [snip]
>
>> NOTE: the host is not resolved by DNS but I put an entry in my client's
>>> hosts file.
>>>
>> Might be required to set DNS for authenticity, maybe other members on the
>> list could tell better.
>>
>
> It would be the first time I see it. The access to web admin GUI works ok
> even without DNS resolution.
> I'm not sure if I had the same problem with the cockpit host console on
> 4.4.0.
>
Perhaps +Yedidyah Bar David   could help regarding cockpit
web access.
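
As a first-pass check from the node's shell (generic commands, nothing
oVirt-specific), one could verify cockpit is actually listening before
suspecting DNS:

systemctl status cockpit.socket   # cockpit is socket-activated on port 9090
ss -tlnp | grep 9090              # confirm a listener on port 9090
journalctl -u cockpit -b          # cockpit logs since boot, if any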


> Gianluca
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VYWJPRKRESPBAR7I45QSVNTCVWNRZ5WQ/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Gianluca Cecchi
On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer  wrote:

>
>
> Since there wasn't a filter set on the node, the 4.4.2 update added the
> default filter for the root-lv pv;
> if there was some filter set before the upgrade, it would not have been
> added by the 4.4.2 update.
>
>>
>>
Do you mean that I will get the same problem upgrading from 4.4.2 to an
upcoming 4.4.3, as also now I don't have any filter set?
This would not be desirable



>
>> Right now only two problems:
>>
>> 1) a long-running problem: from the engine web admin all the volumes are
>> seen as up, and the storage domains too, while only the hosted engine
>> one is actually up and "data" and "vmstore" are down; as I can verify from the
>> host, there is only one /rhev/data-center/ mount:
>>
>> [snip]

>
>> I already reported this, but I don't know if there is yet a bugzilla open
>> for it.
>>
> Did you get any response for the original mail? Haven't seen it on the
> users-list.
>

I think it was this thread, related to the 4.4.0 release and a question
about auto-start of VMs.
A script from Derek that tested if domains were active and got false
positives, and my comments about the same observed behaviour:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UOEL2SNHUUC7M4WAJ5NO/

But I think there was no answer on that particular item/problem.
Indeed I think you can easily reproduce it; I don't know if only with
Gluster or also with other storage domains.
I don't know if it plays a part that on the last host during a
whole shutdown (and the only host in the single-host case) you have to run
the script
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
otherwise you sometimes risk not getting a complete shutdown.
And perhaps this stop can have an influence on the following startup.
In any case the web admin GUI (and the API access) should not show the
domains as active when they are not. I think there is a bug in the code that
checks this.
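
For what it's worth, a minimal sketch of that shutdown sequence on the last
(or only) host:

# stop gluster processes cleanly before powering off
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
shutdown -h now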


>
>> 2) I see that I cannot connect to the cockpit console of the node.
>>
>> [snip]

> NOTE: the host is not resolved by DNS but I put an entry in my client's hosts file.
>>
> Might be required to set DNS for authenticity, maybe other members on the
> list could tell better.
>

It would be the first time I see it. The access to web admin GUI works ok
even without DNS resolution.
I'm not sure if I had the same problem with the cockpit host console on
4.4.0.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6CDZDF2PRXN27FAQML2J26LZWXMEYCQ/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-04 Thread Amit Bawer
On Sun, Oct 4, 2020 at 2:07 AM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 9:42 PM Amit Bawer  wrote:
>
>>
>>
>> On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:
>>
>>>
>>>
>>> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>>>
>>
>> Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
> >> maintenance mode;
>> if the fs is mounted as read only, try
>>
>> mount -o remount,rw /
>>
>> sync and try to reboot 4.4.2.
>>
>>
> Indeed if I run, when in the emergency shell in 4.4.2, the command:
>
> lvs --config 'devices { filter = [ "a|.*|" ] }'
>
> I see also all the gluster volumes, so I think the update injected the
> nasty filter.
> Possibly during update the command
> # vdsm-tool config-lvm-filter -y
> was executed and erroneously created the filter?
>
Since there wasn't a filter set on the node, the 4.4.2 update added the
default filter for the root-lv pv;
if there was some filter set before the upgrade, it would not have been
added by the 4.4.2 update.


> Anyway, remounting the root filesystem read-write, removing the filter
> line from lvm.conf and rebooting worked: 4.4.2 booted ok and I was able
> to exit global maintenance and have the engine up.
>
> Thanks Amit for the help and all the insights.
>
> Right now only two problems:
>
> 1) a long-running problem: from the engine web admin all the volumes are
> seen as up, and the storage domains too, while only the hosted engine
> one is actually up and "data" and "vmstore" are down; as I can verify from the
> host, there is only one /rhev/data-center/ mount:
>
> [root@ovirt01 ~]# df -h
> Filesystem  Size  Used Avail
> Use% Mounted on
> devtmpfs 16G 0   16G
> 0% /dev
> tmpfs16G   16K   16G
> 1% /dev/shm
> tmpfs16G   18M   16G
> 1% /run
> tmpfs16G 0   16G
> 0% /sys/fs/cgroup
> /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1  133G  3.9G  129G
> 3% /
> /dev/mapper/onn-tmp1014M   40M  975M
> 4% /tmp
> /dev/mapper/gluster_vg_sda-gluster_lv_engine100G  9.0G   91G
> 9% /gluster_bricks/engine
> /dev/mapper/gluster_vg_sda-gluster_lv_data  500G  126G  375G
>  26% /gluster_bricks/data
> /dev/mapper/gluster_vg_sda-gluster_lv_vmstore90G  6.9G   84G
> 8% /gluster_bricks/vmstore
> /dev/mapper/onn-home   1014M   40M  975M
> 4% /home
> /dev/sdb2   976M  307M  603M
>  34% /boot
> /dev/sdb1   599M  6.8M  593M
> 2% /boot/efi
> /dev/mapper/onn-var  15G  263M   15G
> 2% /var
> /dev/mapper/onn-var_log 8.0G  541M  7.5G
> 7% /var/log
> /dev/mapper/onn-var_crash10G  105M  9.9G
> 2% /var/crash
> /dev/mapper/onn-var_log_audit   2.0G   79M  2.0G
> 4% /var/log/audit
> ovirt01st.lutwyn.storage:/engine100G   10G   90G
>  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
> tmpfs   3.2G 0  3.2G
> 0% /run/user/1000
> [root@ovirt01 ~]#
>
> I can also wait 10 minutes with no change. The way I exit from this
> stalled situation is to power on a VM, which obviously fails:
> VM f32 is down with error. Exit message: Unable to get volume size for
> domain d39ed9a3-3b10-46bf-b334-e8970f5deca1 volume
> 242d16c6-1fd9-4918-b9dd-0d477a86424c.
> 10/4/20 12:50:41 AM
>
> and suddenly all the data storage domains are deactivated (from engine
> point of view, because actually they were not active...):
> Storage Domain vmstore (Data Center Default) was deactivated by system
> because it's not visible by any of the hosts.
> 10/4/20 12:50:31 AM
>
> and I can go in Data Centers --> Default --> Storage and activate
> "vmstore" and "data" storage domains and suddenly I get them activated and
> filesystems mounted.
>
> [root@ovirt01 ~]# df -h | grep rhev
> ovirt01st.lutwyn.storage:/engine100G   10G   90G
>  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
> ovirt01st.lutwyn.storage:/data  500G  131G  370G
>  27% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_data
> ovirt01st.lutwyn.storage:/vmstore90G  7.8G   83G
> 9% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_vmstore
> [root@ovirt01 ~]#
>
> and VM starts ok now.
>
> I already reported this, but I don't know if there is yet a bugzilla open
> for it.
>
Did you get any response for the original mail? Haven't seen it on the
users-list.


> 2) I see that I cannot connect to cockpit console 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
On Sat, Oct 3, 2020 at 9:42 PM Amit Bawer  wrote:

>
>
> On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:
>
>>
>>
>> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>>
>
> Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
>> maintenance mode;
> if the fs is mounted as read only, try
>
> mount -o remount,rw /
>
> sync and try to reboot 4.4.2.
>
>
Indeed if I run, when in the emergency shell in 4.4.2, the command:

lvs --config 'devices { filter = [ "a|.*|" ] }'

I see also all the gluster volumes, so I think the update injected the
nasty filter.
Possibly during update the command
# vdsm-tool config-lvm-filter -y
was executed and erroneously created the filter?

Anyway, remounting the root filesystem read-write, removing the filter
line from lvm.conf and rebooting worked: 4.4.2 booted ok and I was able
to exit global maintenance and have the engine up.
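
Spelled out as a sketch (assuming the emergency shell has / mounted read-only
and the injected filter line is the only filter line present):

mount -o remount,rw /                       # make the root fs writable
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak  # keep a backup
sed -i '/^filter = /d' /etc/lvm/lvm.conf    # drop the injected filter line
sync
reboot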

Thanks Amit for the help and all the insights.

Right now only two problems:

1) a long-running problem: from the engine web admin all the volumes are
seen as up, and the storage domains too, while only the hosted engine
one is actually up and "data" and "vmstore" are down; as I can verify from the
host, there is only one /rhev/data-center/ mount:

[root@ovirt01 ~]# df -h
Filesystem  Size  Used Avail
Use% Mounted on
devtmpfs 16G 0   16G
0% /dev
tmpfs16G   16K   16G
1% /dev/shm
tmpfs16G   18M   16G
1% /run
tmpfs16G 0   16G
0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1  133G  3.9G  129G
3% /
/dev/mapper/onn-tmp1014M   40M  975M
4% /tmp
/dev/mapper/gluster_vg_sda-gluster_lv_engine100G  9.0G   91G
9% /gluster_bricks/engine
/dev/mapper/gluster_vg_sda-gluster_lv_data  500G  126G  375G
 26% /gluster_bricks/data
/dev/mapper/gluster_vg_sda-gluster_lv_vmstore90G  6.9G   84G
8% /gluster_bricks/vmstore
/dev/mapper/onn-home   1014M   40M  975M
4% /home
/dev/sdb2   976M  307M  603M
 34% /boot
/dev/sdb1   599M  6.8M  593M
2% /boot/efi
/dev/mapper/onn-var  15G  263M   15G
2% /var
/dev/mapper/onn-var_log 8.0G  541M  7.5G
7% /var/log
/dev/mapper/onn-var_crash10G  105M  9.9G
2% /var/crash
/dev/mapper/onn-var_log_audit   2.0G   79M  2.0G
4% /var/log/audit
ovirt01st.lutwyn.storage:/engine100G   10G   90G
 10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
tmpfs   3.2G 0  3.2G
0% /run/user/1000
[root@ovirt01 ~]#

I can also wait 10 minutes with no change. The way I exit from this
stalled situation is to power on a VM, which obviously fails:
VM f32 is down with error. Exit message: Unable to get volume size for
domain d39ed9a3-3b10-46bf-b334-e8970f5deca1 volume
242d16c6-1fd9-4918-b9dd-0d477a86424c.
10/4/20 12:50:41 AM

and suddenly all the data storage domains are deactivated (from engine
point of view, because actually they were not active...):
Storage Domain vmstore (Data Center Default) was deactivated by system
because it's not visible by any of the hosts.
10/4/20 12:50:31 AM

and I can go in Data Centers --> Default --> Storage and activate "vmstore"
and "data" storage domains and suddenly I get them activated and
filesystems mounted.

[root@ovirt01 ~]# df -h | grep rhev
ovirt01st.lutwyn.storage:/engine100G   10G   90G
 10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
ovirt01st.lutwyn.storage:/data  500G  131G  370G
 27% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_data
ovirt01st.lutwyn.storage:/vmstore90G  7.8G   83G
9% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_vmstore
[root@ovirt01 ~]#

and VM starts ok now.

I already reported this, but I don't know if there is yet a bugzilla open
for it.

2) I see that I cannot connect to the cockpit console of the node.

In firefox (version 80) in my Fedora 31 I get:
"
Secure Connection Failed

An error occurred during a connection to ovirt01.lutwyn.local:9090.
PR_CONNECT_RESET_ERROR

The page you are trying to view cannot be shown because the
authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem.

Learn more…
"
In Chrome (build 85.0.4183.121)

"
Your connection is not private
Attackers might be trying to steal your information from
ovirt01.lutwyn.local (for example, 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer  wrote:

>
>
> On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi 
> wrote:
>
>> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>>
>>> From the info it seems that startup panics because gluster bricks cannot
>>> be mounted.
>>>
>>>
>> Yes, it is so
>> This is a testbed NUC I use for testing.
>> It has 2 disks, the one named sdb is where ovirt node has been installed.
>> The one named sda is where I configured gluster though the wizard,
>> configuring the 3 volumes for engine, vm, data
>>
>> The filter that you do have in the 4.4.2 screenshot should correspond to
>>> your root pv,
>>> you can confirm that by doing (replace the pv-uuid with the one from
>>> your filter):
>>>
>>> #udevadm info
>>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>> P:
>>> /devices/pci:00/:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>>> N: sda2
>>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>>
>>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>>
>>
>> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
>> special file created of type /dev/disk/by-id/
>>
> What does "udevadm info" show for /dev/sdb3 on 4.4.2?
>
>
>> See here for udevadm command on 4.4.0 that shows sdb3 that is the
>> partition corresponding to PV of root disk
>>
>> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>>
>>
>>
>>> Can you give the output of lsblk on your node?
>>>
>>
>> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>>
>> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>>
>> ANd here lsblk as seen from 4.4.2 with an empty sda:
>>
>> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>>
>>
>>> Can you check that the same filter is in initramfs?
>>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>>
>>
>> Here the command from 4.4.0 that shows no filter
>>
>> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>>
>> And here from 4.4.2 emergency mode, where I have to use the path
>> /boot/ovirt-node-ng-4.4.2-0/initramfs-
>> because no initrd file in /boot (in screenshot you also see output of "ll
>> /boot)
>>
>> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>>
>>
>>
>>> We have the following tool on the hosts
>>> # vdsm-tool config-lvm-filter -y
>>> it only sets the filter for local lvm devices, this is run as part of
>>> deployment and upgrade when done from
>>> the engine.
>>>
>>> If you have other volumes which have to be mounted as part of your
>>> startup
>>> then you should add their uuids to the filter as well.
>>>
>>
>> I didn't do anything special in 4.4.0: I installed the node on the intended
>> disk, which was seen as sdb, and then through the single node HCI wizard I
>> configured the gluster volumes on sda
>>
>> Any suggestion on what to do with the 4.4.2 initrd, or on running the
>> correct dracut command from 4.4.0 to fix the initramfs of 4.4.2?
>>
The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see
> what needs to be fixed in this case.
>
>
>> BTW: could I in the meantime, if necessary, also boot from 4.4.0 and let it
>> go with the engine on 4.4.2?
>>
> Might work, probably not too tested.
>
> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>

Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2
maintenance mode;
if the fs is mounted as read only, try

mount -o remount,rw /

sync and try to reboot 4.4.2.


>
>>
>>
>> Thanks,
>> Gianluca
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZK3JS7OUIPU4H5KJLGOW7C5IPPAIYPTM/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>
>> From the info it seems that startup panics because gluster bricks cannot
>> be mounted.
>>
>>
> Yes, it is so
> This is a testbed NUC I use for testing.
> It has 2 disks, the one named sdb is where ovirt node has been installed.
> The one named sda is where I configured gluster through the wizard,
> configuring the 3 volumes for engine, vm, data
>
> The filter that you do have in the 4.4.2 screenshot should correspond to
>> your root pv,
>> you can confirm that by doing (replace the pv-uuid with the one from your
>> filter):
>>
>> #udevadm info
>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>> P:
>> /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>> N: sda2
>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>
>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>
>
> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
> special file created of type /dev/disk/by-id/
>
What does "udevadm info" show for /dev/sdb3 on 4.4.2?


> See here for udevadm command on 4.4.0 that shows sdb3 that is the
> partition corresponding to PV of root disk
>
> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>
>
>
>> Can you give the output of lsblk on your node?
>>
>
> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>
> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>
> And here lsblk as seen from 4.4.2 with an empty sda:
>
> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>
>
>> Can you check that the same filter is in initramfs?
>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>
>
> Here the command from 4.4.0 that shows no filter
>
> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>
> And here from 4.4.2 emergency mode, where I have to use the path
> /boot/ovirt-node-ng-4.4.2-0/initramfs-
> because no initrd file in /boot (in screenshot you also see output of "ll
> /boot")
>
> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>
>
>
>> We have the following tool on the hosts
>> # vdsm-tool config-lvm-filter -y
>> it only sets the filter for local lvm devices, this is run as part of
>> deployment and upgrade when done from
>> the engine.
>>
>> If you have other volumes which have to be mounted as part of your startup
>> then you should add their uuids to the filter as well.
>>
>
> I didn't do anything special in 4.4.0: I installed node on the intended disk,
> that was seen as sdb and then through the single node hci wizard I
> configured the gluster volumes on sda
>
> Any suggestion on what to do with the 4.4.2 initrd, or the correct dracut
> command to run from 4.4.0 to fix the 4.4.2 initramfs?
>
The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see
what needs to be fixed in this case.


> BTW: could I, in the meantime, if necessary also boot from 4.4.0 and let it
> go with the engine in 4.4.2?
>
Might work, probably not too tested.

For the gluster bricks being filtered out in 4.4.2, this seems like [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
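
If the brick PVs need to be admitted manually in the meantime, the accept
entries use the same lvm-pv-uuid form as the root PV (a sketch; the UUIDs
below are placeholders you would take from pvs on the affected host):

# pvs -o pv_name,pv_uuid,vg_name

and then in /etc/lvm/lvm.conf, before the final reject:

filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-<root-pv-uuid>$|", "a|^/dev/disk/by-id/lvm-pv-uuid-<brick-pv-uuid>$|", "r|.*|"]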


>
>
> Thanks,
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDJHASPYE5PC2HFJC2LJDPGKV2JA7MAV/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
On Sat, Oct 3, 2020 at 6:33 PM Gianluca Cecchi 
wrote:

> Sorry, I see that there was an error in the lsinitrd command in 4.4.2,
> inverting the "-f" position.
> Here the screenshot that shows anyway no filter active:
>
> https://drive.google.com/file/d/19VmgvsHU2DhJCRzCbO9K_Xyr70x4BqXX/view?usp=sharing
>
> Gianluca
>
>
> On Sat, Oct 3, 2020 at 6:26 PM Gianluca Cecchi 
> wrote:
>
>> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>>
>>> From the info it seems that startup panics because gluster bricks cannot
>>> be mounted.
>>>
>>>
>> Yes, it is so
>> This is a testbed NUC I use for testing.
>> It has 2 disks, the one named sdb is where ovirt node has been installed.
>> The one named sda is where I configured gluster through the wizard,
>> configuring the 3 volumes for engine, vm, data
>>
>> The filter that you do have in the 4.4.2 screenshot should correspond to
>>> your root pv,
>>> you can confirm that by doing (replace the pv-uuid with the one from
>>> your filter):
>>>
>>> #udevadm info
>>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>> P:
>>> /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>>> N: sda2
>>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>>
>>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>>
>>
>> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
>> special file created of type /dev/disk/by-id/
>> See here for udevadm command on 4.4.0 that shows sdb3 that is the
>> partition corresponding to PV of root disk
>>
>> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>>
>>
>>
>>> Can you give the output of lsblk on your node?
>>>
>>
>> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>>
>> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>>
>> And here lsblk as seen from 4.4.2 with an empty sda:
>>
>> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>>
>>
>>> Can you check that the same filter is in initramfs?
>>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>>
>>
>> Here the command from 4.4.0 that shows no filter
>>
>> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>>
>> And here from 4.4.2 emergency mode, where I have to use the path
>> /boot/ovirt-node-ng-4.4.2-0/initramfs-
>> because no initrd file in /boot (in screenshot you also see output of "ll
>> /boot")
>>
>> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>>
>>
>>
>>> We have the following tool on the hosts
>>> # vdsm-tool config-lvm-filter -y
>>> it only sets the filter for local lvm devices, this is run as part of
>>> deployment and upgrade when done from
>>> the engine.
>>>
>>> If you have other volumes which have to be mounted as part of your
>>> startup
>>> then you should add their uuids to the filter as well.
>>>
>>
>> I didn't do anything special in 4.4.0: I installed node on the intended
>> disk, that was seen as sdb and then through the single node hci wizard I
>> configured the gluster volumes on sda
>>
>> Any suggestion on what to do with the 4.4.2 initrd, or the correct dracut
>> command to run from 4.4.0 to fix the 4.4.2 initramfs?
>>
>> BTW: could I, in the meantime, if necessary also boot from 4.4.0 and let it
>> go with the engine in 4.4.2?
>>
>> Thanks,
>> Gianluca
>>
>

Too many photos... ;-)

I used the 4.4.0 initramfs.
Here the output using the 4.4.2 initramfs

https://drive.google.com/file/d/1yLzJzokK5C1LHNuFbNoXWHXfzFncXe0O/view?usp=sharing

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UEWNQHRAMLKAL3XZOJGOOQ3J77DAMHFA/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
Sorry, I see that there was an error in the lsinitrd command in 4.4.2,
inverting the "-f" position.
Here the screenshot that shows anyway no filter active:
https://drive.google.com/file/d/19VmgvsHU2DhJCRzCbO9K_Xyr70x4BqXX/view?usp=sharing
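
For reference, when pointing lsinitrd at a specific image the file goes
after the image argument (a sketch; <version> is a placeholder for the
actual file name under /boot/ovirt-node-ng-4.4.2-0/):

# lsinitrd /boot/ovirt-node-ng-4.4.2-0/initramfs-<version>.img -f /etc/lvm/lvm.conf | grep filter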

Gianluca


On Sat, Oct 3, 2020 at 6:26 PM Gianluca Cecchi 
wrote:

> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:
>
>> From the info it seems that startup panics because gluster bricks cannot
>> be mounted.
>>
>>
> Yes, it is so
> This is a testbed NUC I use for testing.
> It has 2 disks, the one named sdb is where ovirt node has been installed.
> The one named sda is where I configured gluster through the wizard,
> configuring the 3 volumes for engine, vm, data
>
> The filter that you do have in the 4.4.2 screenshot should correspond to
>> your root pv,
>> you can confirm that by doing (replace the pv-uuid with the one from your
>> filter):
>>
>> #udevadm info
>>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>> P:
> /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
>> N: sda2
>> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
>> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>>
>> In this case sda2 is the partition of the root-lv shown by lsblk.
>>
>
> Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
> special file created of type /dev/disk/by-id/
> See here for udevadm command on 4.4.0 that shows sdb3 that is the
> partition corresponding to PV of root disk
>
> https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing
>
>
>
>> Can you give the output of lsblk on your node?
>>
>
> Here lsblk as seen by 4.4.0 with gluster volumes on sda:
>
> https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing
>
> And here lsblk as seen from 4.4.2 with an empty sda:
>
> https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing
>
>
>> Can you check that the same filter is in initramfs?
>> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>>
>
> Here the command from 4.4.0 that shows no filter
>
> https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing
>
> And here from 4.4.2 emergency mode, where I have to use the path
> /boot/ovirt-node-ng-4.4.2-0/initramfs-
> because no initrd file in /boot (in screenshot you also see output of "ll
> /boot")
>
> https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing
>
>
>
>> We have the following tool on the hosts
>> # vdsm-tool config-lvm-filter -y
>> it only sets the filter for local lvm devices, this is run as part of
>> deployment and upgrade when done from
>> the engine.
>>
>> If you have other volumes which have to be mounted as part of your startup
>> then you should add their uuids to the filter as well.
>>
>
> I didn't do anything special in 4.4.0: I installed node on the intended disk,
> that was seen as sdb and then through the single node hci wizard I
> configured the gluster volumes on sda
>
> Any suggestion on what to do with the 4.4.2 initrd, or the correct dracut
> command to run from 4.4.0 to fix the 4.4.2 initramfs?
>
> BTW: could I, in the meantime, if necessary also boot from 4.4.0 and let it
> go with the engine in 4.4.2?
>
> Thanks,
> Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GMP6UHTWIR3BCCNEJT6KU4QRORFSC5DB/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Gianluca Cecchi
On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer  wrote:

> From the info it seems that startup panics because gluster bricks cannot
> be mounted.
>
>
Yes, it is so
This is a testbed NUC I use for testing.
It has 2 disks, the one named sdb is where ovirt node has been installed.
The one named sda is where I configured gluster through the wizard,
configuring the 3 volumes for engine, vm, data

The filter that you do have in the 4.4.2 screenshot should correspond to
> your root pv,
> you can confirm that by doing (replace the pv-uuid with the one from your
> filter):
>
> #udevadm info
>  /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
> P:
> /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
> N: sda2
> S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
> S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
>
> In this case sda2 is the partition of the root-lv shown by lsblk.
>

Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no
special file created of type /dev/disk/by-id/
See here for udevadm command on 4.4.0 that shows sdb3 that is the partition
corresponding to PV of root disk
https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing



> Can you give the output of lsblk on your node?
>

Here lsblk as seen by 4.4.0 with gluster volumes on sda:
https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing

And here lsblk as seen from 4.4.2 with an empty sda:
https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing


> Can you check that the same filter is in initramfs?
> # lsinitrd -f  /etc/lvm/lvm.conf | grep filter
>

Here the command from 4.4.0 that shows no filter
https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing

And here from 4.4.2 emergency mode, where I have to use the path
/boot/ovirt-node-ng-4.4.2-0/initramfs-
because no initrd file in /boot (in screenshot you also see output of "ll
/boot")
https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing



> We have the following tool on the hosts
> # vdsm-tool config-lvm-filter -y
> it only sets the filter for local lvm devices, this is run as part of
> deployment and upgrade when done from
> the engine.
>
> If you have other volumes which have to be mounted as part of your startup
> then you should add their uuids to the filter as well.
>

I didn't do anything special in 4.4.0: I installed node on the intended disk,
that was seen as sdb and then through the single node hci wizard I
configured the gluster volumes on sda

Any suggestion on what to do with the 4.4.2 initrd, or the correct dracut
command to run from 4.4.0 to fix the 4.4.2 initramfs?

BTW: could I, in the meantime, if necessary also boot from 4.4.0 and let it go
with the engine in 4.4.2?

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VV4NAZ6XFITMYPRDMHRWVWOMFCASTKY6/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-10-03 Thread Amit Bawer
From the info it seems that startup panics because gluster bricks cannot be
mounted.

The filter that you do have in the 4.4.2 screenshot should correspond to
your root pv,
you can confirm that by doing (replace the pv-uuid with the one from your
filter):

#udevadm info
 /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
P:
/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
N: sda2
S: disk/by-id/ata-QEMU_HARDDISK_QM3-part2
S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ

In this case sda2 is the partition of the root-lv shown by lsblk.

Can you give the output of lsblk on your node?

Can you check that the same filter is in initramfs?
# lsinitrd -f  /etc/lvm/lvm.conf | grep filter

We have the following tool on the hosts
# vdsm-tool config-lvm-filter -y
it only sets the filter for local lvm devices, this is run as part of
deployment and upgrade when done from
the engine.

If you have other volumes which have to be mounted as part of your startup
then you should add their uuids to the filter as well.
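
Running the tool without -y first is a safe way to preview: it analyzes the
mounted logical volumes and should print the filter it would configure,
asking for confirmation before touching /etc/lvm/lvm.conf:

# vdsm-tool config-lvm-filter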


On Sat, Oct 3, 2020 at 3:19 PM Gianluca Cecchi 
wrote:

> On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> Il giorno ven 25 set 2020 alle ore 15:32 Gianluca Cecchi <
>> gianluca.cec...@gmail.com> ha scritto:
>>
>>>
>>>
>>> On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
>>> wrote:
>>>
 oVirt Node 4.4.2 is now generally available

 The oVirt project is pleased to announce the general availability of
 oVirt Node 4.4.2, as of September 25th, 2020.

 This release completes the oVirt 4.4.2 release published on September
 17th

>>>
>>> Thanks for the news!
>>>
>>> How to prevent hosts entering emergency mode after upgrade from oVirt
 4.4.1

 Due to Bug 1837864 - Host enter emergency mode after upgrading to latest build

 If you have your root file system on a multipath device on your hosts
 you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get
 your host entering emergency mode.

 In order to prevent this be sure to upgrade oVirt Engine first, then on
 your hosts:

   1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
   2. Reboot.
   3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
   4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
   5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild initramfs with the correct filter configuration.
   6. Reboot.



>>> What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have
>>> to follow the same steps as if I were in 4.4.1 or what?
>>> I would like to avoid going through 4.4.1 if possible.
>>>
>>
>> I don't think we had someone testing 4.4.0 to 4.4.2, but the above procedure
>> should work for the same case.
>> The problematic filter in /etc/lvm/lvm.conf looks like:
>>
>> # grep '^filter = ' /etc/lvm/lvm.conf
>> filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
>>
>>
>>
>>
>>>
>>> Thanks,
>>> Gianluca
>>>
>>
>>
> OK, so I tried on my single host HCI installed with ovirt-node-ng 4.4.0
> and the gluster wizard, and never updated until now.
> Updated self hosted engine to 4.4.2 without problems.
>
> My host doesn't have any filter or global_filter set up in lvm.conf  in
> 4.4.0.
>
> So I update it:
>
> [root@ovirt01 vdsm]# yum update
> Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM
> CEST.
> Dependencies resolved.
>
> ================================================================================
>  Package                     Architecture  Version      Repository        Size
> ================================================================================
> Installing:
>  ovirt-node-ng-image-update  noarch        4.4.2-1.el8  ovirt-4.4        782 M
>      replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
>
> Transaction Summary
> ================================================================================
> Install  1 Package
>
> Total download size: 782 M
> Is this ok [y/N]: y
> Downloading Packages:
> ovirt-node-ng-image-update-4.4  27% [=         ] 6.0 MB/s | 145 MB  01:45 ETA
>
> --------------------------------------------------------------------------------
> Total                                            5.3 MB/s | 782 MB      02:28
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:
>  1/1
>   Running scriptlet: 

[ovirt-users] Re: ovirt-node-4.4.2 grub is not reading new grub.cfg at boot

2020-10-01 Thread Strahil Nikolov via Users
Either use 'grub2-editenv' or 'grub2-editenv - unset kernelopts' + 
'grub2-mkconfig -o /boot/grub2/grub.cfg'

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-command-line-parameters_managing-monitoring-and-updating-the-kernel
 

https://access.redhat.com/solutions/3710121
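
A sketch of that sequence (listing first, to see what is currently pinned
in grubenv):

# grub2-editenv list | grep kernelopts
# grub2-editenv - unset kernelopts
# grub2-mkconfig -o /boot/grub2/grub.cfg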

Best Regards,
Strahil Nikolov





On Thursday, October 1, 2020 at 16:12:52 GMT+3, Mike Lindsay wrote:





Hey Folks,

I've got a bit of a strange one here. I downloaded and installed
ovirt-node-ng-installer-4.4.2-2020091810.el8.iso today on an old dev
laptop and to get it to install I needed to add acpi=off to the kernel
boot param to get the install to work (known issue with my old
laptop). After installation it was still booting with acpi=off, no
biggie (seen that happen with Centos 5,6,7 before on occasion) right,
just change the line in /etc/defaults/grub and run grub2-mkconfig (ran
for both efi and legacy for good measure even knowing EFI isn't used)
and reboot...done this hundreds of times without any problems.

But this time after rebooting if I hit 'e' to look at the kernel
params on boot, acpi=off is still there. Basically any changes to
/etc/default/grub are being ignored or over-ridden but I'll be damned
if I can't find where.

I know I'm missing something simple here, I do this all the time but
to be honest this is the first Centos 8 based install I've had time to
play with. Any suggestions would be greatly appreciated.

The drive layout is a bit weird but had no issues running fedora or
centos in the past. boot drive is a mSATA (/dev/sdb) and there is a
SSD data drive at /dev/sda...having sda installed or removed makes no
difference, and /boot is mounted where it should be, on /dev/sdb1... very
strange

Cheers,
Mike

[root@ovirt-node01 ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX='crashkernel=auto resume=/dev/mapper/onn-swap
rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap
noapic rhgb quiet'
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
GRUB_DISABLE_OS_PROBER='true'



[root@ovirt-node01 ~]# cat /boot/grub2/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
set pager=1

if [ -f ${config_directory}/grubenv ]; then
  load_env -f ${config_directory}/grubenv
elif [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
  set default="${next_entry}"
  set next_entry=
  save_env next_entry
  set boot_once=true
else
  set default="${saved_entry}"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

terminal_output console
if [ x$feature_timeout_style = xy ] ; then
  set timeout_style=menu
  set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
  set timeout=5
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/00_tuned ###
set tuned_params=""
set tuned_initrd=""
### END /etc/grub.d/00_tuned ###

### BEGIN /etc/grub.d/01_users ###
if [ -f ${prefix}/user.cfg ]; then
  source ${prefix}/user.cfg
  if [ -n "${GRUB2_PASSWORD}" ]; then
    set superusers="root"
    export superusers
    password_pbkdf2 root ${GRUB2_PASSWORD}
  fi
fi
### END /etc/grub.d/01_users ###

### BEGIN /etc/grub.d/08_fallback_counting ###
insmod increment
# Check if boot_counter exists and boot_success=0 to activate this behaviour.
if [ -n "${boot_counter}" -a "${boot_success}" = "0" ]; then
  # if countdown has ended, choose to boot rollback deployment,
  # i.e. default=1 on OSTree-based systems.
  if  [ "${boot_counter}" = "0" -o "${boot_counter}" = "-1" ]; then
    set default=1
    set boot_counter=-1
  # otherwise decrement boot_counter
  else
    decrement boot_counter
  fi
  save_env boot_counter
fi
### END /etc/grub.d/08_fallback_counting ###

### BEGIN /etc/grub.d/10_linux ###
insmod part_msdos
insmod ext2
set root='hd1,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1
--hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1
b6557c59-e11f-471b-8cb1-70c47b0b4b29
else
  search --no-floppy 

[ovirt-users] Re: ovirt-node-4.4.2 grub is not reading new grub.cfg at boot

2020-10-01 Thread Mike Lindsay
Wow, that's annoying...6 hours I spent trying to figure out what was
different with the Centos/RHEL 8 grub.cfg configuration and nothing
popped up about grubby ;p

Thanks very much for that, it's making for an interesting read.

Cheers,
Mike

On Thu, 1 Oct 2020 at 10:10, Amit Bawer  wrote:
>
>
>
> On Thu, Oct 1, 2020 at 4:12 PM Mike Lindsay  wrote:
>>
>> Hey Folks,
>>
>> I've got a bit of a strange one here. I downloaded and installed
>> ovirt-node-ng-installer-4.4.2-2020091810.el8.iso today on an old dev
>> laptop and to get it to install I needed to add acpi=off to the kernel
>> boot param to get the install to work (known issue with my old
>> laptop). After installation it was still booting with acpi=off, no
>> biggie (seen that happen with Centos 5,6,7 before on occasion) right,
>> just change the line in /etc/defaults/grub and run grub2-mkconfig (ran
>> for both efi and legacy for good measure even knowing EFI isn't used)
>> and reboot...done this hundreds of times without any problems.
>>
>> But this time after rebooting if I hit 'e' to look at the kernel
>> params on boot, acpi=off is still there. Basically any changes to
>> /etc/default/grub are being ignored or over-ridden but I'll be damned
>> if I can't find where.
>
>
> According to RHEL information [1] you should be using "grubby" to update grub 
> parameters,
> in your case:
>
> # grubby --args=acpi=off --update-kernel=ALL
>
> more acpi=off info in [2]
>
> [1] 
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-command-line-parameters_managing-monitoring-and-updating-the-kernel
> [2] 
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-acpi-ca
>
>>
>> I know I'm missing something simple here, I do this all the time but
>> to be honest this is the first Centos 8 based install I've had time to
>> play with. Any suggestions would be greatly appreciated.
>>
>> The drive layout is a bit weird but had no issues running fedora or
>> centos in the past. boot drive is a mSATA (/dev/sdb) and there is a
>> SSD data drive at /dev/sda...having sda installed or removed makes no
>> difference, and /boot is mounted where it should be, on /dev/sdb1... very
>> strange
>>
>> Cheers,
>> Mike
>>
>> [root@ovirt-node01 ~]# cat /etc/default/grub
>> GRUB_TIMEOUT=5
>> GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
>> GRUB_DEFAULT=saved
>> GRUB_DISABLE_SUBMENU=true
>> GRUB_TERMINAL_OUTPUT="console"
>> GRUB_CMDLINE_LINUX='crashkernel=auto resume=/dev/mapper/onn-swap
>> rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap
>> noapic rhgb quiet'
>> GRUB_DISABLE_RECOVERY="true"
>> GRUB_ENABLE_BLSCFG=true
>> GRUB_DISABLE_OS_PROBER='true'
>>
>>
>>
>> [root@ovirt-node01 ~]# cat /boot/grub2/grub.cfg
>> #
>> # DO NOT EDIT THIS FILE
>> #
>> # It is automatically generated by grub2-mkconfig using templates
>> # from /etc/grub.d and settings from /etc/default/grub
>> #
>>
>> ### BEGIN /etc/grub.d/00_header ###
>> set pager=1
>>
>> if [ -f ${config_directory}/grubenv ]; then
>>   load_env -f ${config_directory}/grubenv
>> elif [ -s $prefix/grubenv ]; then
>>   load_env
>> fi
>> if [ "${next_entry}" ] ; then
>>set default="${next_entry}"
>>set next_entry=
>>save_env next_entry
>>set boot_once=true
>> else
>>set default="${saved_entry}"
>> fi
>>
>> if [ x"${feature_menuentry_id}" = xy ]; then
>>   menuentry_id_option="--id"
>> else
>>   menuentry_id_option=""
>> fi
>>
>> export menuentry_id_option
>>
>> if [ "${prev_saved_entry}" ]; then
>>   set saved_entry="${prev_saved_entry}"
>>   save_env saved_entry
>>   set prev_saved_entry=
>>   save_env prev_saved_entry
>>   set boot_once=true
>> fi
>>
>> function savedefault {
>>   if [ -z "${boot_once}" ]; then
>> saved_entry="${chosen}"
>> save_env saved_entry
>>   fi
>> }
>>
>> function load_video {
>>   if [ x$feature_all_video_module = xy ]; then
>> insmod all_video
>>   else
>> insmod efi_gop
>> insmod efi_uga
>> insmod ieee1275_fb
>> insmod vbe
>> insmod vga
>> insmod video_bochs
>> insmod video_cirrus
>>   fi
>> }
>>
>> terminal_output console
>> if [ x$feature_timeout_style = xy ] ; then
>>   set timeout_style=menu
>>   set timeout=5
>> # Fallback normal timeout code in case the timeout_style feature is
>> # unavailable.
>> else
>>   set timeout=5
>> fi
>> ### END /etc/grub.d/00_header ###
>>
>> ### BEGIN /etc/grub.d/00_tuned ###
>> set tuned_params=""
>> set tuned_initrd=""
>> ### END /etc/grub.d/00_tuned ###
>>
>> ### BEGIN /etc/grub.d/01_users ###
>> if [ -f ${prefix}/user.cfg ]; then
>>   source ${prefix}/user.cfg
>>   if [ -n "${GRUB2_PASSWORD}" ]; then
>> set superusers="root"
>> export superusers
>> password_pbkdf2 root ${GRUB2_PASSWORD}
>>   fi
>> fi
>> ### END /etc/grub.d/01_users ###
>>
>> ### BEGIN /etc/grub.d/08_fallback_counting ###
>> insmod increment
>> # Check 

[ovirt-users] Re: ovirt-node-4.4.2 grub is not reading new grub.cfg at boot

2020-10-01 Thread Amit Bawer
On Thu, Oct 1, 2020 at 4:12 PM Mike Lindsay  wrote:

> Hey Folks,
>
> I've got a bit of a strange one here. I downloaded and installed
> ovirt-node-ng-installer-4.4.2-2020091810.el8.iso today on an old dev
> laptop and to get it to install I needed to add acpi=off to the kernel
> boot param to get the install to work (known issue with my old
> laptop). After installation it was still booting with acpi=off, no
> biggie (seen that happen with Centos 5,6,7 before on occasion) right,
> just change the line in /etc/defaults/grub and run grub2-mkconfig (ran
> for both efi and legacy for good measure even knowing EFI isn't used)
> and reboot...done this hundreds of times without any problems.
>
> But this time after rebooting if I hit 'e' to look at the kernel
> params on boot, acpi=off is still there. Basically any changes to
> /etc/default/grub are being ignored or over-ridden but I'll be damned
> if I can't find where.
>

According to RHEL information [1] you should be using "grubby" to update
grub parameters,
in your case:

# grubby --args=acpi=off --update-kernel=ALL

more acpi=off info in [2]

[1]
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-command-line-parameters_managing-monitoring-and-updating-the-kernel
[2]
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-acpi-ca
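
Since in this case the goal is to drop the stale parameter rather than add
one, the inverse form should do it (a sketch; verify afterwards with
--info):

# grubby --remove-args="acpi=off" --update-kernel=ALL
# grubby --info=ALL | grep args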


> I know I'm missing something simple here, I do this all the time but
> to be honest this is the first Centos 8 based install I've had time to
> play with. Any suggestions would be greatly appreciated.
>
> The drive layout is a bit weird but had no issues running fedora or
> centos in the past. boot drive is a mSATA (/dev/sdb) and there is a
> SSD data drive at /dev/sda...having sda installed or removed makes no
> difference, and /boot is mounted where it should be, on /dev/sdb1... very
> strange
>
> Cheers,
> Mike
>
> [root@ovirt-node01 ~]# cat /etc/default/grub
> GRUB_TIMEOUT=5
> GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
> GRUB_DEFAULT=saved
> GRUB_DISABLE_SUBMENU=true
> GRUB_TERMINAL_OUTPUT="console"
> GRUB_CMDLINE_LINUX='crashkernel=auto resume=/dev/mapper/onn-swap
> rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap
> noapic rhgb quiet'
> GRUB_DISABLE_RECOVERY="true"
> GRUB_ENABLE_BLSCFG=true
> GRUB_DISABLE_OS_PROBER='true'
>
>
>
> [root@ovirt-node01 ~]# cat /boot/grub2/grub.cfg
> #
> # DO NOT EDIT THIS FILE
> #
> # It is automatically generated by grub2-mkconfig using templates
> # from /etc/grub.d and settings from /etc/default/grub
> #
>
> ### BEGIN /etc/grub.d/00_header ###
> set pager=1
>
> if [ -f ${config_directory}/grubenv ]; then
>   load_env -f ${config_directory}/grubenv
> elif [ -s $prefix/grubenv ]; then
>   load_env
> fi
> if [ "${next_entry}" ] ; then
>set default="${next_entry}"
>set next_entry=
>save_env next_entry
>set boot_once=true
> else
>set default="${saved_entry}"
> fi
>
> if [ x"${feature_menuentry_id}" = xy ]; then
>   menuentry_id_option="--id"
> else
>   menuentry_id_option=""
> fi
>
> export menuentry_id_option
>
> if [ "${prev_saved_entry}" ]; then
>   set saved_entry="${prev_saved_entry}"
>   save_env saved_entry
>   set prev_saved_entry=
>   save_env prev_saved_entry
>   set boot_once=true
> fi
>
> function savedefault {
>   if [ -z "${boot_once}" ]; then
> saved_entry="${chosen}"
> save_env saved_entry
>   fi
> }
>
> function load_video {
>   if [ x$feature_all_video_module = xy ]; then
> insmod all_video
>   else
> insmod efi_gop
> insmod efi_uga
> insmod ieee1275_fb
> insmod vbe
> insmod vga
> insmod video_bochs
> insmod video_cirrus
>   fi
> }
>
> terminal_output console
> if [ x$feature_timeout_style = xy ] ; then
>   set timeout_style=menu
>   set timeout=5
> # Fallback normal timeout code in case the timeout_style feature is
> # unavailable.
> else
>   set timeout=5
> fi
> ### END /etc/grub.d/00_header ###
>
> ### BEGIN /etc/grub.d/00_tuned ###
> set tuned_params=""
> set tuned_initrd=""
> ### END /etc/grub.d/00_tuned ###
>
> ### BEGIN /etc/grub.d/01_users ###
> if [ -f ${prefix}/user.cfg ]; then
>   source ${prefix}/user.cfg
>   if [ -n "${GRUB2_PASSWORD}" ]; then
> set superusers="root"
> export superusers
> password_pbkdf2 root ${GRUB2_PASSWORD}
>   fi
> fi
> ### END /etc/grub.d/01_users ###
>
> ### BEGIN /etc/grub.d/08_fallback_counting ###
> insmod increment
> # Check if boot_counter exists and boot_success=0 to activate this
> behaviour.
> if [ -n "${boot_counter}" -a "${boot_success}" = "0" ]; then
>   # if countdown has ended, choose to boot rollback deployment,
>   # i.e. default=1 on OSTree-based systems.
>   if  [ "${boot_counter}" = "0" -o "${boot_counter}" = "-1" ]; then
> set default=1
> set boot_counter=-1
>   # otherwise decrement boot_counter
>   else
> decrement boot_counter
>   fi
>   save_env 

[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-09-25 Thread Sandro Bonazzola
Il giorno ven 25 set 2020 alle ore 15:32 Gianluca Cecchi <
gianluca.cec...@gmail.com> ha scritto:

>
>
> On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
> wrote:
>
>> oVirt Node 4.4.2 is now generally available
>>
>> The oVirt project is pleased to announce the general availability of
>> oVirt Node 4.4.2, as of September 25th, 2020.
>>
>> This release completes the oVirt 4.4.2 release published on September 17th
>>
>
> Thanks for the news!
>
> How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
>>
>>
>> Due to Bug 1837864 - Host enter emergency mode after upgrading to latest build
>>
>> If you have your root file system on a multipath device on your hosts you
>> should be aware that after upgrading from 4.4.1 to 4.4.2 you may get your
>> host entering emergency mode.
>>
>> In order to prevent this be sure to upgrade oVirt Engine first, then on
>> your hosts:
>>
>> 1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
>> 2. Reboot.
>> 3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
>> 4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
>> 5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild initramfs with the correct filter configuration.
>> 6. Reboot.
>>
>>
>>
> What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have to
> follow the same steps as if I were in 4.4.1 or what?
> I would like to avoid going through 4.4.1 if possible.
>

I don't think we had someone testing 4.4.0 to 4.4.2, but the above procedure
should work for the same case.
The problematic filter in /etc/lvm/lvm.conf looks like:

# grep '^filter = ' /etc/lvm/lvm.conf
filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
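
To check whether a given host is affected before upgrading (a sketch):

# findmnt -no SOURCE /    (is the root fs really on a multipath device?)
# grep '^filter = ' /etc/lvm/lvm.conf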




>
> Thanks,
> Gianluca
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TNHBFXEE5W3NTR3BPPNZXH2QQAO4MJD6/


[ovirt-users] Re: oVirt Node 4.4.2 is now generally available

2020-09-25 Thread Gianluca Cecchi
On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola 
wrote:

> oVirt Node 4.4.2 is now generally available
>
> The oVirt project is pleased to announce the general availability of oVirt
> Node 4.4.2, as of September 25th, 2020.
>
> This release completes the oVirt 4.4.2 release published on September 17th
>

Thanks for the news!

How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
>
>
> Due to Bug 1837864 - Host enter emergency mode after upgrading to latest build
>
> If you have your root file system on a multipath device on your hosts you
> should be aware that after upgrading from 4.4.1 to 4.4.2 you may get your
> host entering emergency mode.
>
> In order to prevent this be sure to upgrade oVirt Engine first, then on
> your hosts:
>
> 1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
> 2. Reboot.
> 3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
> 4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
> 5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild initramfs with the correct filter configuration.
> 6. Reboot.
>
>
>
What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have to
follow the same steps as if I were in 4.4.1 or what?
I would like to avoid going through 4.4.1 if possible.

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PI2LS3NCULH3FXQKBSB4IGXLKUBXE6UL/


[ovirt-users] Re: Ovirt Node 4.4.2 Engine Deployment - Notification Settings

2020-08-29 Thread Strahil Nikolov via Users
It's more focused towards the enterprise.
You can reconfigure this by ssh-ing to the HE and updating your postfix 
configuration.
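
If you would rather have the engine VM relay through an authenticated
smarthost, the usual postfix knobs apply (a sketch; host, port and the
credentials file are placeholders):

In /etc/postfix/main.cf:

relayhost = [smtp.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

Put "[smtp.example.com]:587 user:password" in /etc/postfix/sasl_passwd, then:

# postmap /etc/postfix/sasl_passwd
# systemctl restart postfix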

Best Regards,
Strahil Nikolov






On Saturday, August 29, 2020 at 17:55:53 GMT+3, David White via Users wrote:





I finally got oVirt node installed with gluster on a single node. 
So that's great progress!

Once that step was complete...
I noticed that the Engine Deployment wizard asks for SMTP settings for where to 
send notifications. I was kind of surprised that it doesn't allow one to enter 
any credentials. It looks like this requires an unauthenticated local relay. I 
don't like that. :) See attached screenshot.

Has there been any talk about adding this into the wizard deployment?


Sent with ProtonMail Secure Email.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4S5S6KNZOUA4WKPCFF6MYGXCFPP4AR7Z/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LFBZ7GRBZUWF5JCQP5AOHZRFUABUOR6C/


[ovirt-users] Re: oVirt node 4.4.1 deploy FQDN not reachable

2020-08-25 Thread David White via Users
I ran out of time to finish properly testing things Saturday evening (I'm in 
Eastern Time in the States), and wasn't able to spend any time on it Sunday or 
Monday.

I intend to finish testing this evening (for the RC2 image that Roberto found 
worked), and will update the list at that point. I want to make sure the 
problem isn't me. 



Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Tuesday, August 25, 2020 3:47 AM, Yedidyah Bar David  wrote:

> On Sun, Aug 23, 2020 at 3:25 AM David White via Users users@ovirt.org wrote:
> 

> > Getting the same problem on 4.4.2-2020081922.
> > I'll try the image that Roberto found to work, and will report back.
> 

> Thanks.
> 

> Perhaps one of you would like to open a bug, and/or check/share
> relevant logs when this happens?
> 

> Best regards,
> 

> > Perhaps I'm still too new to this. :)
> > Sent with ProtonMail Secure Email.
> > ‐‐‐ Original Message ‐‐‐
> > On Saturday, August 22, 2020 7:12 PM, David White via Users users@ovirt.org 
> > wrote:
> > I'm running into the same problem.
> > I just wiped my CentOS 8.2 system, and in place of that, installed oVirt 
> > Node 4.4.1.
> > I'm downloading 4.4.2-2020081922 now.
> > Sent with ProtonMail Secure Email.
> > ‐‐‐ Original Message ‐‐‐
> > On Friday, August 7, 2020 11:55 AM, Roberto Nunin robnu...@gmail.com wrote:
> > On Friday, August 7, 2020 at 12:59, Roberto Nunin robnu...@gmail.com wrote:
> > 

> > > Hi all
> > > I have an issue while trying to deploy hyperconverged solution on three 
> > > ovirt node boxes.
> > > ISO used is ovirt-node-ng-installer-4.4.1-2020072310.el8.iso.
> > > When from cockpit I choose gluster deployment, I have a form where I can 
> > > insert both gluster fqdn names and public fqdn names (that is what I 
> > > need, due to distinct network cards & networks)
> > > If I insert the right names, which are resolved by the nodes, I still get
> > > "FQDN is not reachable" below the Host1 entries.
> > > As already stated, these names are certainly resolved by DNS used.
> > > Any hints about ?
> > 

> > Using ovirt-node-ng-installer-4.4.2-2020080612.el8.iso (4.4.2 RC2) the same 
> > issue does not happen.
> > --
> > Roberto Nunin
> > 

> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/FUJOBNZDE2B6H6J2GNXXYG7X7GQHJRSH/
> 

> --
> 

> Didi



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMJGD4MTHRJ7AA6IE4XERTZSQ4NVT6QD/

