[ovirt-users] Re: Instability after update

2022-01-06 Thread Ritesh Chikatwar
Try downgrading on all the hosts and see if that helps.
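Something along these lines on each host should do it (rough sketch; assumes the 6.0.0-33.el8s builds are still available from the enabled repos, and the host is in maintenance mode first):

# on each host, after putting it into maintenance from the engine
dnf downgrade qemu-kvm-6.0.0-33.el8s qemu-kvm-core-6.0.0-33.el8s qemu-img-6.0.0-33.el8s
# if dnf refuses because of the virt module stream, resetting it first may help
# dnf module reset virt
# restart vdsm (running VMs keep the old qemu until they are restarted or migrated)
systemctl restart vdsmd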

On Thu, Jan 6, 2022 at 10:05 PM Andrea Chierici <
andrea.chier...@cnaf.infn.it> wrote:

> Ritesh,
> I downgraded one host to 6.0.0 as you said:
>
> # rpm -qa|grep qemu
> qemu-kvm-block-curl-6.0.0-33.el8s.x86_64
> qemu-kvm-common-6.0.0-33.el8s.x86_64
> ipxe-roms-qemu-20181214-8.git133f4c47.el8.noarch
> qemu-img-6.0.0-33.el8s.x86_64
> libvirt-daemon-driver-qemu-7.10.0-1.module_el8.6.0+1046+bd8eec5e.x86_64
> qemu-kvm-block-ssh-6.0.0-33.el8s.x86_64
> qemu-kvm-block-gluster-6.0.0-33.el8s.x86_64
> qemu-kvm-6.0.0-33.el8s.x86_64
> qemu-kvm-ui-opengl-6.0.0-33.el8s.x86_64
> qemu-kvm-docs-6.0.0-33.el8s.x86_64
> qemu-kvm-block-rbd-6.0.0-33.el8s.x86_64
> qemu-kvm-core-6.0.0-33.el8s.x86_64
> qemu-kvm-hw-usbredir-6.0.0-33.el8s.x86_64
> qemu-kvm-block-iscsi-6.0.0-33.el8s.x86_64
> qemu-kvm-ui-spice-6.0.0-33.el8s.x86_64
>
> But now, if I try to migrate any VM to that host, the operation fails:
>
> Andrea
>
>
>
>
> On 06/01/2022 17:27, Andrea Chierici wrote:
>
> On 05/01/2022 19:08, Ritesh Chikatwar wrote:
>
> Hello
>
> What's the qemu version? If it's greater than 6.0.0,
> can you please try downgrading qemu to 6.0.0 and see if it helps?
>
>
> Dear,
> here is the situation:
>
> # rpm -qa|grep qemu
> qemu-kvm-block-iscsi-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-kvm-hw-usbredir-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> ipxe-roms-qemu-20181214-8.git133f4c47.el8.noarch
> qemu-kvm-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> libvirt-daemon-driver-qemu-7.10.0-1.module_el8.6.0+1046+bd8eec5e.x86_64
> qemu-kvm-common-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-kvm-docs-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-kvm-block-rbd-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-kvm-block-curl-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-kvm-ui-spice-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-kvm-core-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-kvm-ui-opengl-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-kvm-block-ssh-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-kvm-block-gluster-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
> qemu-img-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
>
> So should I downgrade all these rpms?
> Thanks,
>
> Andrea
>
> --
> Andrea Chierici - INFN-CNAF   
> Viale Berti Pichat 6/2, 40127 BOLOGNA
> Office Tel: +39 051 2095463   
> SkypeID ataruz
> --
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NX62H7S4NZRJUDQACLAFIMO5G2EU45FC/
>
>
>
> --
> Andrea Chierici - INFN-CNAF   
> Viale Berti Pichat 6/2, 40127 BOLOGNA
> Office Tel: +39 051 2095463   
> SkypeID ataruz
> --
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XQ3GDZQXYUMNOE4HITTWKADXRZCG2RVR/


[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-06 Thread Andy via Users
Sir,
I can see the data domain written to
"/rhev/data-center/mnt/glusterSD/vstore00:_engine" on the host I am trying to
deploy from. When I attempt to write to the directory with:

sudo -u vdsm dd if=/dev/zero \
  of=/rhev/data-center/mnt/glusterSD/vstore00:_engine/test.txt \
  oflag=direct bs=512 count=10

I am able to create the file. Is that the command you were referring to?
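If it matters, the ownership and mode on that mount can be double-checked, and the test file cleaned up, roughly like this (a sketch; the path is taken from above):

# confirm the domain mount is owned by vdsm:kvm (36:36) and check its mode
stat -c '%U:%G %a' /rhev/data-center/mnt/glusterSD/vstore00:_engine
# remove the dd test file again
sudo -u vdsm rm /rhev/data-center/mnt/glusterSD/vstore00:_engine/test.txt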

thanks



On Thursday, January 6, 2022, 07:19:21 PM EST, Strahil Nikolov 
 wrote:  
 
Can you write on the storage domain like this:

sudo -u vdsm dd if=/dev/zero of=/rhev/<full/path/to/file> oflag=direct bs=512 count=10

Best Regards,
Strahil Nikolov
 
  On Fri, Jan 7, 2022 at 0:19, Andy via Users wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EKURCFRH66GOIYGDMBOOLHJKV3O7GD2G/
  
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DQ7YN6TI7W5CWPR5HNLRT43COHZE2HZH/


[ovirt-users] Re: After upgrade to vdsm-4.40.90.4-1.el8 - Internal JSON-RPC error - how to fix?

2022-01-06 Thread Adam Xu


在 2022/1/6 15:53, Liran Rotenberg 写道:



On Thu, Jan 6, 2022 at 9:20 AM Adam Xu  wrote:

I also got the error when I tried to import an OVA from VMware to my
oVirt cluster using a SAN storage domain.

I resolved this by importing the OVA to a standalone host that
uses its local storage.

在 2021/11/18 17:41, John Mortensen 写道:

Hi,
After we upgraded to vdsm-4.40.90.4-1.el8 on our two-node cluster, two
things have happened:
1. The first node that was upgraded now continuously logs this error:
VDSM  command Get Host Statistics failed: Internal JSON-RPC error: 
{'reason': "'str' object has no attribute 'decode'"}


Hi,
Looks like [1]; however, in that report the import process succeeded
despite those errors.

Can you please share the import logs?


When I try to import the OVA to SAN storage, there are tons of log lines like:

2022-01-06 09:15:55,058+08 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-8) 
[] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirt4.adagene.cn 
command Get Host Statistics failed: Internal JSON-RPC error: {'reason': 
"'str' object has no attribute 'decode'"}
2022-01-06 09:15:55,059+08 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-8) 
[] Unable to GetStats: VDSErrorException: VDSGenericException: 
VDSErrorException: Failed to Get Host Statistics, error = Internal 
JSON-RPC error: {'reason': "'str' object has no attribute 'decode'"}, 
code = -32603
2022-01-06 09:15:58,347+08 INFO 
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-93) 
[b8ca5e99-e3e5-4a09-8b66-e225f45fdf77] Command 'ImportVmFromOva' (id: 
'a1ef50bc-5dba-40af-9ebd-48bb421db495') waiting on child command id: 
'7943bb90-5964-4458-a1b8-d957cff94f09' type:'ConvertOva' to complete
2022-01-06 09:16:08,358+08 INFO 
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-88) 
[b8ca5e99-e3e5-4a09-8b66-e225f45fdf77] Command 'ImportVmFromOva' (id: 
'a1ef50bc-5dba-40af-9ebd-48bb421db495') waiting on child command id: 
'7943bb90-5964-4458-a1b8-d957cff94f09' type:'ConvertOva' to complete
2022-01-06 09:16:10,090+08 WARN 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsAsyncVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-13) 
[] Unexpected return value: Status [code=-32603, message=Internal 
JSON-RPC error: {'reason': "'str' object has no attribute 'decode'"}]
2022-01-06 09:16:10,090+08 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsAsyncVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-13) 
[] Failed in 'Get Host Statistics' method
2022-01-06 09:16:10,090+08 WARN 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsAsyncVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-13) 
[] Unexpected return value: Status [code=-32603, message=Internal 
JSON-RPC error: {'reason': "'str' object has no attribute 'decode'"}]
2022-01-06 09:16:10,092+08 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-13) 
[] Unable to GetStats: VDSErrorException: VDSGenericException: 
VDSErrorException: Failed to Get Host Statistics, error = Internal 
JSON-RPC error: {'reason': "'str' object has no attribute 'decode'"}, 
code = -32603
2022-01-06 09:16:18,367+08 INFO 
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-30) 
[b8ca5e99-e3e5-4a09-8b66-e225f45fdf77] Command 'ImportVmFromOva' (id: 
'a1ef50bc-5dba-40af-9ebd-48bb421db495') waiting on child command id: 
'7943bb90-5964-4458-a1b8-d957cff94f09' type:'ConvertOva' to complete
2022-01-06 09:16:25,099+08 WARN 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsAsyncVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool
-Thread-34) [] Unexpected return value: Status [code=-32603, 
message=Internal JSON-RPC error: {'reason': "'str' object has no 
attribute 'decode'"}]
2022-01-06 09:16:25,099+08 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsAsyncVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34)
[] Failed in 'Get Host Statistics' method
2022-01-06 09:16:25,099+08 WARN 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsAsyncVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool
-Thread-34) [] Unexpected return value: Status [code=-32603, 
message=Internal JSON-RPC error: {'reason': "'str' object has no 
attribute 'decode'"}]


These logs seem endless; I had to stop the import tasks after several
hours of waiting.
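If it helps, the vdsm-side traceback behind that 'decode' error can be pulled from the host like this (a sketch; default vdsm log path):

# show some context around the failing Get Host Statistics call
grep -B 20 "has no attribute 'decode'" /var/log/vdsm/vdsm.log | tail -n 60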



[ovirt-users] Re: Linux VMs cannot boot from software raid0

2022-01-06 Thread Strahil Nikolov via Users

To be honest, in grub rescue I can see only hd0, which led me to the issue
(and qemu 6.2+ has a fix for it):
https://bugzilla.proxmox.com/show_bug.cgi?id=3010

Can someone also test creating a Linux VM with /boot being a raid0 software MD
device?
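A minimal sketch of the layout I have in mind for the test (the /dev/vda1 and /dev/vdb1 partition names are assumptions; use whatever disks the VM actually gets):

# inside the test VM: build a raid0 MD device and put /boot on it
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vda1 /dev/vdb1
mkfs.xfs /dev/md0
# install with /boot on /dev/md0, then check whether grub drops to the
# rescue shell on first boot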

Best Regards,
Strahil Nikolov
 
 
  On Fri, Jan 7, 2022 at 2:15, Strahil Nikolov via Users wrote:

Hi All,
I recently migrated from 4.3.10 to 4.4.9 and it seems that booting from
software raid0 (I have multiple gluster volumes) is not possible with Cluster
compatibility 4.6.
I've tested creating a fresh VM and it also suffers from the problem. Changing
various options (virtio-scsi to virtio, chipset, VM type) did not help.
Booting from rescue media shows that the data is still there, but grub always
drops to rescue.
Any hints are welcome.
Host: CentOS Stream 8 with qemu-6.0.0
oVirt 4.4.9 (latest)
VM OS: RHEL7.9/RHEL8.5
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OTMKYD3B5JTK5OUJVO2BN6J3BDFYJO6O/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QO55CYMZX7CFO4EOOJLVY6CZ7F5LASXH/


[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-06 Thread Strahil Nikolov via Users
Can you write on the storage domain like this:

sudo -u vdsm dd if=/dev/zero of=/rhev/<full/path/to/file> oflag=direct bs=512 count=10

Best Regards,
Strahil Nikolov
 
  On Fri, Jan 7, 2022 at 0:19, Andy via Users wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EKURCFRH66GOIYGDMBOOLHJKV3O7GD2G/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WOM6UEXE3ADZCNL7VTN5UKCLWWJA73VR/


[ovirt-users] Linux VMs cannot boot from software raid0

2022-01-06 Thread Strahil Nikolov via Users
Hi All,
I recently migrated from 4.3.10 to 4.4.9 and it seems that booting from
software raid0 (I have multiple gluster volumes) is not possible with Cluster
compatibility 4.6.
I've tested creating a fresh VM and it also suffers from the problem. Changing
various options (virtio-scsi to virtio, chipset, VM type) did not help.
Booting from rescue media shows that the data is still there, but grub always
drops to rescue.
Any hints are welcome.
Host: CentOS Stream 8 with qemu-6.0.0
oVirt 4.4.9 (latest)
VM OS: RHEL7.9/RHEL8.5
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OTMKYD3B5JTK5OUJVO2BN6J3BDFYJO6O/


[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-06 Thread sohail_akhter3
Hi Didi

Downgrading qemu-kvm fixed the issue. What is the reason it is not working with
version 6.1.0? Currently this is the version installed on my host:

#yum info qemu-kvm
Last metadata expiration check: 2:03:58 ago on Thu 06 Jan 2022 03:18:40 PM UTC.
Installed Packages
Name : qemu-kvm
Epoch: 15
Version  : 6.0.0
Release  : 33.el8s
Architecture : x86_64
Size : 0.0  
Source   : qemu-kvm-6.0.0-33.el8s.src.rpm
Repository   : @System
From repo: ovirt-4.4-centos-stream-advanced-virtualization
Summary  : QEMU is a machine emulator and virtualizer
URL  : http://www.qemu.org/
License  : GPLv2 and GPLv2+ and CC-BY
Description  : qemu-kvm is an open source virtualizer that provides hardware
             : emulation for the KVM hypervisor. qemu-kvm acts as a virtual
             : machine monitor together with the KVM kernel modules, and
             : emulates the hardware for a full system such as a PC and its
             : associated peripherals.

Available Packages
Name : qemu-kvm
Epoch: 15
Version  : 6.1.0
Release  : 5.module_el8.6.0+1040+0ae94936
Architecture : x86_64
Size : 156 k
Source   : qemu-kvm-6.1.0-5.module_el8.6.0+1040+0ae94936.src.rpm
Repository   : appstream
Summary  : QEMU is a machine emulator and virtualizer
URL  : http://www.qemu.org/
License  : GPLv2 and GPLv2+ and CC-BY
Description  : qemu-kvm is an open source virtualizer that provides hardware
             : emulation for the KVM hypervisor. qemu-kvm acts as a virtual
             : machine monitor together with the KVM kernel modules, and
             : emulates the hardware for a full system such as a PC and its
             : associated peripherals.
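For reference, the downgraded packages can be pinned so that a later "dnf update" does not pull 6.1.0 back in; a rough sketch, assuming the versionlock plugin is acceptable in this environment:

# pin the working 6.0.0 build so a later update does not reinstall 6.1.0
dnf install python3-dnf-plugin-versionlock
dnf versionlock add qemu-kvm-6.0.0-33.el8s qemu-kvm-core-6.0.0-33.el8s
dnf versionlock list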

Many thanks for your help

Regards
Sohail
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P6S4CYOYQLD3M5YBGPPWB7Z7OK5BKVHE/


[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-06 Thread Andy via Users
The latest on this: I downgraded qemu-kvm to the lowest version in the CentOS 8
Stream/oVirt repo:

qemu-kvm-common-6.0.0-26.el8s.x86_64
qemu-kvm-block-ssh-6.0.0-26.el8s.x86_64
qemu-kvm-block-gluster-6.0.0-26.el8s.x86_64
qemu-kvm-6.0.0-26.el8s.x86_64
qemu-kvm-ui-opengl-6.0.0-26.el8s.x86_64
qemu-kvm-docs-6.0.0-26.el8s.x86_64
qemu-kvm-block-rbd-6.0.0-26.el8s.x86_64
qemu-kvm-block-curl-6.0.0-26.el8s.x86_64
qemu-kvm-core-6.0.0-26.el8s.x86_64
qemu-kvm-hw-usbredir-6.0.0-26.el8s.x86_64
qemu-kvm-block-iscsi-6.0.0-26.el8s.x86_64
qemu-kvm-ui-spice-6.0.0-26.el8s.x86_64

When the install exposes the engine, I am able to log in, see the storage
domain get attached, see the host become active, and see the disks copied to
the hosted_engine storage domain. However, as with every attempted install, it
fails at:

2022-01-06 17:03:16,097-0500 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:106
{'results': [{'msg': 'non-zero return code', 'cmd': ['virt-copy-out', '-a',
'/var/tmp/localvmjgnfgb1f/images/31647f4a-5dd8-4c24-9b36-82427e9f9d54/5445e40c-290f-4ea5-af25-9a86d3d72956',
'/var/log', '/var/log/ovirt-hosted-engine-setup/engine-logs-2022-01-06T22:02:55Z/'],
'stdout': '', 'stderr': "libguestfs: error: appliance closed the connection
unexpectedly.\nThis usually means the libguestfs appliance crashed.
I have attempted to downgrade libguestfs, however it won't, as it appears to
want libguestfs-appliance-1:1.44.0-4.module_el8.6.0+983+a7505f3f.x86_64.
All libguestfs versions installed are 1.44.0-4.
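If more detail on the appliance crash would help, the standard libguestfs debug switches can be used (a sketch):

# run the libguestfs self-test with full debug/trace output
export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1
libguestfs-test-tool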
thanks

   On Thursday, January 6, 2022, 04:12:44 PM EST, Andy via Users 
 wrote:  
 
  Attached is every log from the engine setup.  Thanks

On Thursday, January 6, 2022, 02:41:53 PM EST, Andy  
wrote:  
 
I removed all network bridges created by previous installs and reran the install,
which produced the same results. Attached are the latest setup logs. Thanks

On Thursday, January 6, 2022, 01:23:39 PM EST, Andy via Users 
 wrote:  
 
Attached is an updated install log with the same error. It appears that when the
network config information is being "injected" it fails to copy, and/or the engine
just disappears. Since this install has been attempted a few times and the
ovirtmgmt bridge is already created, would that affect this? I am going to
remove it and attempt another install. If anyone has any insight into this I'd
appreciate it.

Thanks
AK

On Thursday, January 6, 2022, 12:26:19 PM EST, Andy via Users 
 wrote:  
 
  Here are the configured options for the gluster volume:
Options Reconfigured:
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on


 
On Tuesday, January 4, 2022, 07:23:24 PM EST, AK via Users 
 wrote:  
 
Yes sir, and I do see the storage domain being created. I also validated that the
UID, GID, and the folder for the brick are all owned by 36.
Thanks 
On Jan 3, 2022 4:06 PM, Darrell Budic  wrote:

Did you confirm that vdsm:kvm (36:36) has full permissions to the selected 
storage?


On Jan 2, 2022, at 10:43 AM, Andy via Users  wrote:
 Attached are the setup logs

On Sunday, January 2, 2022, 11:20:34 AM EST, AK via Users  
wrote:  
 
Also, I didn't know if this was an SELinux problem, as it is set to permissive,
which produces the same error. Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XSM4OER6J4K5SBCTA2VRN22VFE2UQ4Y2/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LZIXVEJHYGI6EWE5QU2OG7UKRH3AYQZ7/




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy 

[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-06 Thread Andy via Users
 Here are the configured options for the gluster volume:
Options Reconfigured:
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
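If useful, the owner options can be verified on the volume and on the brick itself with something like this (a sketch; the volume name 'engine' is assumed from the vstore00:_engine mount path, and the brick path is a placeholder):

# confirm the uid/gid options took effect on the volume
gluster volume get engine storage.owner-uid
gluster volume get engine storage.owner-gid
# confirm the brick directory is owned by vdsm:kvm (36:36)
stat -c '%u:%g %a' /gluster/brick/engine   # replace with the real brick path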


 
On Tuesday, January 4, 2022, 07:23:24 PM EST, AK via Users 
 wrote:  
 
Yes sir, and I do see the storage domain being created. I also validated that the
UID, GID, and the folder for the brick are all owned by 36.
Thanks 
On Jan 3, 2022 4:06 PM, Darrell Budic  wrote:

Did you confirm that vdsm:kvm (36:36) has full permissions to the selected 
storage?


On Jan 2, 2022, at 10:43 AM, Andy via Users  wrote:
 Attached are the setup logs

On Sunday, January 2, 2022, 11:20:34 AM EST, AK via Users  
wrote:  
 
Also, I didn't know if this was an SELinux problem, as it is set to permissive,
which produces the same error. Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XSM4OER6J4K5SBCTA2VRN22VFE2UQ4Y2/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LZIXVEJHYGI6EWE5QU2OG7UKRH3AYQZ7/




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDCLILB54EMJELG2HQTYSVSCPH4KLHK7/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M4KZWPZXMMMR3GRLKEGHUOLPYFV52YQP/


[ovirt-users] How to find out I/O usage from servers

2022-01-06 Thread ovirt . org
Hi everyone, I would like to pull information about I/O usage by individual
servers via the API, or directly from the Postgres database. But I have the data
warehouse turned off for performance reasons. Is this information collected
somewhere so that I can gather it into my external database, or read it from
the oVirt database?

Thanks for the information.
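A sketch of what this could look like via the REST API (the engine FQDN, credentials and IDs are placeholders; the per-disk statistics sub-collection is assumed to be available in this 4.x API version):

# list VMs and their disk attachments to find the disk IDs
curl -s -k -u 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/vms'
# current counters for one disk (read/write rates among them, where exposed)
curl -s -k -u 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/disks/DISK-ID/statistics'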
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXYYXEXOR2ZGRBQ6PE425L3NFJ3IBIWP/


[ovirt-users] Re: Instability after update

2022-01-06 Thread Andrea Chierici

Ritesh,
I downgraded one host to 6.0.0 as you said:

# rpm -qa|grep qemu
qemu-kvm-block-curl-6.0.0-33.el8s.x86_64
qemu-kvm-common-6.0.0-33.el8s.x86_64
ipxe-roms-qemu-20181214-8.git133f4c47.el8.noarch
qemu-img-6.0.0-33.el8s.x86_64
libvirt-daemon-driver-qemu-7.10.0-1.module_el8.6.0+1046+bd8eec5e.x86_64
qemu-kvm-block-ssh-6.0.0-33.el8s.x86_64
qemu-kvm-block-gluster-6.0.0-33.el8s.x86_64
qemu-kvm-6.0.0-33.el8s.x86_64
qemu-kvm-ui-opengl-6.0.0-33.el8s.x86_64
qemu-kvm-docs-6.0.0-33.el8s.x86_64
qemu-kvm-block-rbd-6.0.0-33.el8s.x86_64
qemu-kvm-core-6.0.0-33.el8s.x86_64
qemu-kvm-hw-usbredir-6.0.0-33.el8s.x86_64
qemu-kvm-block-iscsi-6.0.0-33.el8s.x86_64
qemu-kvm-ui-spice-6.0.0-33.el8s.x86_64

But now, if I try to migrate any VM to that host, the operation fails:
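If it helps, the actual migration error should be visible in the logs with something like this (a sketch, standard log paths); note that a VM started under qemu 6.1.0 may simply refuse to migrate to a host running 6.0.0:

# on the destination host: why the migration was rejected or aborted
grep -i migrat /var/log/vdsm/vdsm.log | tail -n 50
# on the engine: the reported failure reason
grep -i migrat /var/log/ovirt-engine/engine.log | tail -n 50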

Andrea




On 06/01/2022 17:27, Andrea Chierici wrote:

On 05/01/2022 19:08, Ritesh Chikatwar wrote:

Hello

What's the qemu version? If it's greater than 6.0.0,
can you please try downgrading qemu to 6.0.0 and see if it helps?


Dear,
here is the situation:

# rpm -qa|grep qemu
qemu-kvm-block-iscsi-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-hw-usbredir-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
ipxe-roms-qemu-20181214-8.git133f4c47.el8.noarch
qemu-kvm-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
libvirt-daemon-driver-qemu-7.10.0-1.module_el8.6.0+1046+bd8eec5e.x86_64
qemu-kvm-common-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-docs-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-block-rbd-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-block-curl-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-ui-spice-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-core-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-ui-opengl-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-block-ssh-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-block-gluster-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-img-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64

So should I downgrade all these rpms?
Thanks,

Andrea
--
Andrea Chierici - INFN-CNAF 
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463 
SkypeID ataruz
--

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NX62H7S4NZRJUDQACLAFIMO5G2EU45FC/



--
Andrea Chierici - INFN-CNAF 
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463 
SkypeID ataruz
--


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FI7TLDO3DSZDT7ITIEPGYFZCTPLLMC6D/


[ovirt-users] Re: Instability after update

2022-01-06 Thread Andrea Chierici

On 05/01/2022 19:08, Ritesh Chikatwar wrote:

Hello

What's the qemu version? If it's greater than 6.0.0,
can you please try downgrading qemu to 6.0.0 and see if it helps?


Dear,
here is the situation:

# rpm -qa|grep qemu
qemu-kvm-block-iscsi-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-hw-usbredir-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
ipxe-roms-qemu-20181214-8.git133f4c47.el8.noarch
qemu-kvm-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
libvirt-daemon-driver-qemu-7.10.0-1.module_el8.6.0+1046+bd8eec5e.x86_64
qemu-kvm-common-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-docs-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-block-rbd-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-block-curl-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-ui-spice-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-core-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-ui-opengl-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-block-ssh-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-kvm-block-gluster-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64
qemu-img-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_64

So should I downgrade all these rpms?
Thanks,

Andrea

--
Andrea Chierici - INFN-CNAF 
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463 
SkypeID ataruz
--


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NX62H7S4NZRJUDQACLAFIMO5G2EU45FC/


[ovirt-users] Re: sanlock issues after 4.3 to 4.4 migration

2022-01-06 Thread Strahil Nikolov via Users
 
The engine was not starting until I downgraded to the 6.0.0 qemu rpms from the
Advanced Virtualization repo.

Best Regards,
Strahil Nikolov

On Thursday, 6 January 2022, 11:51:27 GMT+2, Strahil Nikolov via Users wrote:
 
  It seems that after the last attempt I managed to move forward:

systemctl start ovirt-ha-agent ovirt-ha-broker

then stopped the ovirt-ha-agent and run "hosted-engine --reinitialize-lockspace"

Now the situation changed a little bit:
# sanlock client status
daemon 5f37f400-b865-11dc-a4f5-2c4d54502372
p -1 helper
p -1 listener
p 89795 HostedEngine
p -1 status
s 
hosted-engine:1:/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1:0
s 
ca3807b9-5afc-4bcd-a557-aacbcc53c340:1:/rhev/data-center/mnt/glusterSD/ovirt2\:_engine44/ca3807b9-5afc-4bcd-a557-aacbcc53c340/dom_md/ids:0
r 
ca3807b9-5afc-4bcd-a557-aacbcc53c340:292c2cac-8dad-4229-a9a3-e64811f4b34e:/rhev/data-center/mnt/glusterSD/ovirt2\:_engine44/ca3807b9-5afc-4bcd-a557-aacbcc53c340/images/1deecc6a-0584-4758-8fbb-6386662a8075/292c2cac-8dad-4229-a9a3-e64811f4b34e.lease:0:1
 p 89795

And the engine is running:
--== Host ovirt2.localdomain (id: 1) status ==--

Host ID : 1
Host timestamp : 31136
Score : 3400
Engine status : {"vm": "up", "health": "bad", "detail": "Up", "reason": "failed 
liveliness check"}
Hostname : ovirt2.localdomain
Local maintenance : False
stopped : False
crc32 : 5f5bbd94
conf_on_shared_storage : True
local_conf_timestamp : 31136
Status up-to-date : True
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=31136 (Thu Jan 6 11:46:23 2022)
 host-id=1
 score=3400
 vm_conf_refresh_time=31136 (Thu Jan 6 11:46:23 2022)
 conf_on_shared_storage=True
 maintenance=False
 state=EngineStarting
 stopped=False


I will leave it for a while before trying to troubleshoot.
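Before that, a read-only look at what sanlock sees on the lockspace volume might be useful (a sketch; the path is copied from the client status output above):

# dump the hosted-engine lockspace (read-only) and check the host entries
sanlock direct dump /run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1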

Best Regards,
Strahil Nikolov
On Thursday, 6 January 2022, 09:23:11 GMT+2, Strahil Nikolov via
Users wrote:
 
 Hello All,

I was trying to upgrade my single node setup (Actually it used to be 2+1 
arbiter, but one of the data nodes died) from 4.3.10 to 4.4.? 

The deployment failed on 'hosted-engine --reinitialize-lockspace --force' and 
it seems that sanlock fails to obtain a lock:

# hosted-engine --reinitialize-lockspace --force
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py",
 line 30, in 
    ha_cli.reset_lockspace(force)
  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 286, in reset_lockspace
    stats = broker.get_stats_from_storage()
  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 148, in get_stats_from_storage
    result = self._proxy.get_stats()
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python3.6/http/client.py", line 1268, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1044, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 982, in send
    self.connect()
  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 
74, in connect
    self.sock.connect(base64.b16decode(self.host))
FileNotFoundError: [Errno 2] No such file or directory

# grep sanlock /var/log/messages | tail
Jan  6 08:29:48 ovirt2 sanlock[1269]: 2022-01-06 08:29:48 19341 [77108]: s1777 
failed to read device to find sector size error -223 
/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
Jan  6 08:29:49 ovirt2 sanlock[1269]: 2022-01-06 08:29:49 19342 [1310]: s1777 
add_lockspace fail result -223
Jan  6 08:29:54 ovirt2 sanlock[1269]: 2022-01-06 08:29:54 19347 [77113]: s1778 
failed to read device to find sector size error -223 
/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
Jan  6 08:29:55 

[ovirt-users] Re: Lots of storage.MailBox.SpmMailMonitor

2022-01-06 Thread Petr Kyselák
Hello Nir,
I recently upgraded the oVirt engine to 4.4.9 from 4.3.10 (hosts will follow ASAP).
I found some strange messages in vdsm.log:
2022-01-06 10:35:41,333+0100 ERROR (mailbox-spm) 
[storage.MailBox.SpmMailMonitor] mailbox 65 checksum failed, not clearing 
mailbox, clearing new mail (data='\xff\xff\xff\xff\...lot of 
data...\x00\x00\x00', checksum=, 
expected='\xbfG\x00\x00') (mailbox:603)
2022-01-06 10:35:41,334+0100 ERROR (mailbox-spm) 
[storage.MailBox.SpmMailMonitor] mailbox 66 checksum failed, not clearing 
mailbox, clearing new mail (data='\x00\x00\x00\x00\...lot of data...\xff\xff', 
checksum=, expected='\x04\xf0\x0b\x00') 
(mailbox:603)

We have 7 iSCSI and 1 NFS (old export domain).

lvscan | grep inbox
  ACTIVE'/dev/8ee251ed-a50b-4235-a279-0829d7e8e9a0/inbox' [128.00 
MiB] inherit
  ACTIVE'/dev/dfd0134d-2d63-432f-af9f-b60aa6e1fefb/inbox' [128.00 
MiB] inherit
  ACTIVE'/dev/0633a601-d73a-4750-8ff6-c893fe064469/inbox' [128.00 
MiB] inherit
  ACTIVE'/dev/47814a07-b6bc-4d1f-b01d-1919c07878a6/inbox' [128.00 
MiB] inherit
  ACTIVE'/dev/1c61030e-91a5-4e17-af37-92e1dada7c19/inbox' [128.00 
MiB] inherit
  ACTIVE'/dev/a74c32e3-ddd5-4f06-9d9c-3ba7aa153d98/inbox' [128.00 
MiB] inherit
  ACTIVE'/dev/333a7e7e-0da9-4db8-b486-fea1f1ee8171/inbox' [128.00 
MiB] inherit

I am not fully sure which inbox/outbox I should try to clear manually. Can you
help me with this, please?
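In the meantime, a read-only way to see which inbox LVs actually carry stale data (a sketch; this only reads the LVs, it does not clear anything):

# hexdump the start of each inbox LV; an unused one should be all zeroes
for lv in $(lvscan | awk -F"'" '/inbox/ {print $2}'); do
  echo "== $lv"
  dd if="$lv" bs=1M count=1 iflag=direct 2>/dev/null | hexdump -C | head -n 5
done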
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L7WD2FY25XJCNMB3YMTA4ASKMZGKCDZM/


[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-06 Thread Yedidyah Bar David
On Thu, Jan 6, 2022 at 11:47 AM  wrote:
>
> Hi Didi,
>
> Apologies as this is my first post. I am referring to the issue mentioned in
> the Red Hat solution linked in this thread.
> https://access.redhat.com/solutions/4462431
> I am trying to deploy the hosted-engine VM. I tried via the cockpit GUI and
> through the CLI. In both cases the deployment fails with an error message.
> From the error message below I can see the VM is in a powering-down state and
> the health status is bad.
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 180, "changed": true, 
> "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.162941", 
> "end": "2022-01-06 00:43:07.060659", "rc": 0, "start": "2022-01-06 
> 00:43:06.897718", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": 
> {\"host-id\": 1, \"host-ts\": 117459, \"score\": 3400, \"engine-status\": 
> {\"vm\": \"up\", \"health\": \"bad\", \"detail\": \"Powering down\", 
> \"reason\": \"failed liveliness check\"}, \"hostname\": 
> \"seliics00123.ovirt4.fl.dselab.seli.gic.ericsson.se\", \"maintenance\": 
> false, \"stopped\": false, \"crc32\": \"d889fd9b\", 
> \"conf_on_shared_storage\": true, \"local_conf_timestamp\": 117459, 
> \"extra\": 
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=117459 
> (Thu Jan  6 00:43:02 
> 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=117459 (Thu Jan  6 
> 00:43:02 
> 2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Fri
>  Jan  2 08:38:08 1970
>  \\n\", \"live-data\": true}, \"global_maintenance\": false}", 
> "stdout_lines": ["{\"1\": {\"host-id\": 1, \"host-ts\": 117459, \"score\": 
> 3400, \"engine-status\": {\"vm\": \"up\", \"health\": \"bad\", \"detail\": 
> \"Powering down\", \"reason\": \"failed liveliness check\"}, \"hostname\": 
> \"seliics00123.ovirt4.fl.dselab.seli.gic.ericsson.se\", \"maintenance\": 
> false, \"stopped\": false, \"crc32\": \"d889fd9b\", 
> \"conf_on_shared_storage\": true, \"local_conf_timestamp\": 117459, 
> \"extra\": 
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=117459 
> (Thu Jan  6 00:43:02 
> 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=117459 (Thu Jan  6 
> 00:43:02 
> 2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Fri
>  Jan  2 08:38:08 1970\\n\", \"live-data\": true}, \"global_maintenance\": 
> false}"]}
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check VM status at virt 
> level]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if engine VM is not 
> running]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get target engine VM IP 
> address]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get VDSM's target engine VM 
> stats]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Convert stats to JSON 
> format]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get target engine VM IP 
> address from VDSM stats]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if Engine IP is 
> different from engine's he_fqdn resolved IP]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM 
> IP address is  while the engine's he_fqdn 
> manager-ovirt4.fl.dselab.seli.gic.ericsson.se resolves to 10.228.170.36. If 
> you are using DHCP, check your DHCP reservation configuration"}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing 
> ansible-playbook
>
> VM is running
> [root@host]# virsh -c 
> qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf list
>  Id   Name   State
> --
>  37   HostedEngine   running
>
>
> Please let me know if you need any further output or log file.

Can you check qemu version, and try downgrading it to 6.0.0 if
applicable? Thanks.
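For reference, something like this shows the installed qemu-kvm and what the enabled repos offer (a sketch):

# installed version
rpm -q qemu-kvm
# all versions available from the enabled repos
dnf list --showduplicates qemu-kvm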

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MJUJZNFYEHYXL5OQTNOLT35NVRK6KTKY/


[ovirt-users] Re: sanlock issues after 4.3 to 4.4 migration

2022-01-06 Thread Strahil Nikolov via Users
 It seems that after the last attempt I managed to move forward:

systemctl start ovirt-ha-agent ovirt-ha-broker

then stopped the ovirt-ha-agent and run "hosted-engine --reinitialize-lockspace"

Now the situation changed a little bit:
# sanlock client status
daemon 5f37f400-b865-11dc-a4f5-2c4d54502372
p -1 helper
p -1 listener
p 89795 HostedEngine
p -1 status
s 
hosted-engine:1:/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1:0
s 
ca3807b9-5afc-4bcd-a557-aacbcc53c340:1:/rhev/data-center/mnt/glusterSD/ovirt2\:_engine44/ca3807b9-5afc-4bcd-a557-aacbcc53c340/dom_md/ids:0
r 
ca3807b9-5afc-4bcd-a557-aacbcc53c340:292c2cac-8dad-4229-a9a3-e64811f4b34e:/rhev/data-center/mnt/glusterSD/ovirt2\:_engine44/ca3807b9-5afc-4bcd-a557-aacbcc53c340/images/1deecc6a-0584-4758-8fbb-6386662a8075/292c2cac-8dad-4229-a9a3-e64811f4b34e.lease:0:1
 p 89795

And the engine is running:
--== Host ovirt2.localdomain (id: 1) status ==--

Host ID : 1
Host timestamp : 31136
Score : 3400
Engine status : {"vm": "up", "health": "bad", "detail": "Up", "reason": "failed 
liveliness check"}
Hostname : ovirt2.localdomain
Local maintenance : False
stopped : False
crc32 : 5f5bbd94
conf_on_shared_storage : True
local_conf_timestamp : 31136
Status up-to-date : True
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=31136 (Thu Jan 6 11:46:23 2022)
 host-id=1
 score=3400
 vm_conf_refresh_time=31136 (Thu Jan 6 11:46:23 2022)
 conf_on_shared_storage=True
 maintenance=False
 state=EngineStarting
 stopped=False


I will leave it for a while before trying to troubleshoot.

Best Regards,
Strahil Nikolov
On Thursday, 6 January 2022, 09:23:11 GMT+2, Strahil Nikolov via
Users wrote:
 
 Hello All,

I was trying to upgrade my single node setup (Actually it used to be 2+1 
arbiter, but one of the data nodes died) from 4.3.10 to 4.4.? 

The deployment failed on 'hosted-engine --reinitialize-lockspace --force' and 
it seems that sanlock fails to obtain a lock:

# hosted-engine --reinitialize-lockspace --force
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py",
 line 30, in 
    ha_cli.reset_lockspace(force)
  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 286, in reset_lockspace
    stats = broker.get_stats_from_storage()
  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 148, in get_stats_from_storage
    result = self._proxy.get_stats()
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python3.6/http/client.py", line 1268, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1044, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 982, in send
    self.connect()
  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 
74, in connect
    self.sock.connect(base64.b16decode(self.host))
FileNotFoundError: [Errno 2] No such file or directory

# grep sanlock /var/log/messages | tail
Jan  6 08:29:48 ovirt2 sanlock[1269]: 2022-01-06 08:29:48 19341 [77108]: s1777 
failed to read device to find sector size error -223 
/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
Jan  6 08:29:49 ovirt2 sanlock[1269]: 2022-01-06 08:29:49 19342 [1310]: s1777 
add_lockspace fail result -223
Jan  6 08:29:54 ovirt2 sanlock[1269]: 2022-01-06 08:29:54 19347 [77113]: s1778 
failed to read device to find sector size error -223 
/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
Jan  6 08:29:55 ovirt2 sanlock[1269]: 2022-01-06 08:29:55 19348 [1310]: s1778 
add_lockspace fail result -223
Jan  6 08:30:00 ovirt2 sanlock[1269]: 2022-01-06 08:30:00 19353 [77138]: s1779 
failed to read device to find sector size error -223 

[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-06 Thread sohail_akhter3
Hi Didi,

Apologies as this is my first post. I am referring to the issue mentioned in
the Red Hat solution linked in this thread:
https://access.redhat.com/solutions/4462431
I am trying to deploy the hosted-engine VM. I tried via the cockpit GUI and
through the CLI. In both cases the deployment fails with an error message.
From the error message below I can see the VM is in a powering-down state and
the health status is bad.

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check engine VM health]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 180, "changed": true, 
"cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.162941", 
"end": "2022-01-06 00:43:07.060659", "rc": 0, "start": "2022-01-06 
00:43:06.897718", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": 
{\"host-id\": 1, \"host-ts\": 117459, \"score\": 3400, \"engine-status\": 
{\"vm\": \"up\", \"health\": \"bad\", \"detail\": \"Powering down\", 
\"reason\": \"failed liveliness check\"}, \"hostname\": 
\"seliics00123.ovirt4.fl.dselab.seli.gic.ericsson.se\", \"maintenance\": false, 
\"stopped\": false, \"crc32\": \"d889fd9b\", \"conf_on_shared_storage\": true, 
\"local_conf_timestamp\": 117459, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=117459 (Thu 
Jan  6 00:43:02 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=117459 
(Thu Jan  6 00:43:02 
2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Fri
 Jan  2 08:38:08 1970
 \\n\", \"live-data\": true}, \"global_maintenance\": false}", "stdout_lines": 
["{\"1\": {\"host-id\": 1, \"host-ts\": 117459, \"score\": 3400, 
\"engine-status\": {\"vm\": \"up\", \"health\": \"bad\", \"detail\": \"Powering 
down\", \"reason\": \"failed liveliness check\"}, \"hostname\": 
\"seliics00123.ovirt4.fl.dselab.seli.gic.ericsson.se\", \"maintenance\": false, 
\"stopped\": false, \"crc32\": \"d889fd9b\", \"conf_on_shared_storage\": true, 
\"local_conf_timestamp\": 117459, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=117459 (Thu 
Jan  6 00:43:02 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=117459 
(Thu Jan  6 00:43:02 
2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Fri
 Jan  2 08:38:08 1970\\n\", \"live-data\": true}, \"global_maintenance\": 
false}"]}
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check VM status at virt level]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if engine VM is not 
running]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get target engine VM IP 
address]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get VDSM's target engine VM 
stats]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Convert stats to JSON format]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get target engine VM IP 
address from VDSM stats]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if Engine IP is 
different from engine's he_fqdn resolved IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM 
IP address is  while the engine's he_fqdn 
manager-ovirt4.fl.dselab.seli.gic.ericsson.se resolves to 10.228.170.36. If you 
are using DHCP, check your DHCP reservation configuration"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing 
ansible-playbook

VM is running
[root@host]# virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf list
 Id   Name   State
--
 37   HostedEngine   running


Please let me know if you need any further output or log file. 

Regards
Sohail
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UXSUW6XMTQJM2J2SNOUWGZXTFKBFY2V2/