[ovirt-users] Bond creation issue via hosted engine

2020-11-06 Thread Harry O
Hi guys and girls,

Every time I try to create a bond with my two onboard Intel NICs (any type of 
bond) via the hosted engine, it fails with the following error: "Error while 
executing action HostSetupNetworks: Unexpected exception". The bond still gets 
created on the node, but then the engine's view of the host capabilities goes out 
of sync with the node and can't refresh. I need to delete the bond manually on the 
node for the engine to be happy again. Does anyone know why?
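For reference, this is roughly what I run on the node to clean up the half-created 
bond so the engine can refresh the host capabilities again (just a sketch; it 
assumes the bond is named bond0 and that vdsm-client is installed):

# remove the stale bond that the engine no longer knows about
nmcli connection delete bond0 2>/dev/null || ip link delete bond0
# ask vdsm for the host capabilities again so the engine can resync
vdsm-client Host getCapabilities > /dev/null && echo capabilities OK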

Here are some logs from the engine log file:

[root@ovirt1-engine ~]# cat /var/log/ovirt-engine/engine.log
2020-11-06 08:29:56,230+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.OvfDataUpdater] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [] 
Attempting to update VMs/Templates Ovf.
2020-11-06 08:29:56,236+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Before acquiring and wait lock 
'EngineLock:{exclusiveLocks='[b150e472-1f45-11eb-8e70-00163e78288d=OVF_UPDATE]',
 sharedLocks=''}'
2020-11-06 08:29:56,236+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Lock-wait acquired to object 
'EngineLock:{exclusiveLocks='[b150e472-1f45-11eb-8e70-00163e78288d=OVF_UPDATE]',
 sharedLocks=''}'
2020-11-06 08:29:56,237+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Running command: ProcessOvfUpdateForStoragePoolCommand internal: 
true. Entities affected :  ID: b150e472-1f45-11eb-8e70-00163e78288d Type: 
StoragePool
2020-11-06 08:29:56,242+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Attempting to update VM OVFs in Data Center 'Default'
2020-11-06 08:29:56,243+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Successfully updated VM OVFs in Data Center 'Default'
2020-11-06 08:29:56,243+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Attempting to update template OVFs in Data Center 'Default'
2020-11-06 08:29:56,243+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Successfully updated templates OVFs in Data Center 'Default'
2020-11-06 08:29:56,243+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Attempting to remove unneeded template/vm OVFs in Data Center 
'Default'
2020-11-06 08:29:56,245+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Successfully removed unneeded template/vm OVFs in Data Center 
'Default'
2020-11-06 08:29:56,245+01 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) 
[7094b20b] Lock freed to object 
'EngineLock:{exclusiveLocks='[b150e472-1f45-11eb-8e70-00163e78288d=OVF_UPDATE]',
 sharedLocks=''}'
2020-11-06 08:29:56,284+01 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-22513) [] START, 
GetStorageDeviceListVDSCommand(HostName = ovirtn3.5ervers.lan, 
VdsIdVDSCommandParametersBase:{hostId='4ec53a62-5cf3-479a-baf5-44c5b7624d39'}), 
log id: 59a1c9e
2020-11-06 08:29:56,284+01 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-22514) [] START, 
GetStorageDeviceListVDSCommand(HostName = ovirtn2.5ervers.lan, 
VdsIdVDSCommandParametersBase:{hostId='a4904c7c-92d7-4e4f-adf7-755f3c17335d'}), 
log id: 735f6d30
2020-11-06 08:29:57,020+01 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-22513) [] Unexpected return value: 
Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' 
object has no attribute 'iface'"}]
2020-11-06 08:29:57,020+01 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-22513) [] Unexpected return value: 
Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' 
object has no attribute 'iface'"}]
2020-11-06 08:29:57,020+01 E

[ovirt-users] Re: Bond creation issue via hosted engine

2020-11-06 Thread Harry O
Here it is 
cat /var/log/vdsm/supervdsm.log
MainProcess|mpathhealth::DEBUG::2020-11-06 
09:49:53,528::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call 
dmsetup_run_status with ('multipath',) {}
MainProcess|mpathhealth::DEBUG::2020-11-06 
09:49:53,528::commands::153::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-23 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess|mpathhealth::DEBUG::2020-11-06 
09:49:53,559::commands::98::common.commands::(run) SUCCESS:  = b'';  = 0
MainProcess|mpathhealth::DEBUG::2020-11-06 
09:49:53,559::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) return 
dmsetup_run_status with b'ST4000NM0033-9ZM170_Z1Z8JNPX: 0 7814037168 multipath 
2 0 0 0 1 1 A 0 1 2 8:16 A 0 0 1 \n'
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,880::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call 
setupNetworks with ({}, {'bond0': {'nics': ['eno1', 'enp9s0'], 'options': 
'mode=4 miimon=100 xmit_hash_policy=2', 'switch': 'legacy'}}, 
{'connectivityTimeout': 120, 'commitOnSuccess': True, 'connectivityCheck': 
'true'}) {}
MainProcess|jsonrpc/1::INFO::2020-11-06 
09:49:59,880::api::220::root::(setupNetworks) Setting up network according to 
configuration: networks:{}, bondings:{'bond0': {'nics': ['eno1', 'enp9s0'], 
'options': 'mode=4 miimon=100 xmit_hash_policy=2', 'switch': 'legacy'}}, 
options:{'connectivityTimeout': 120, 'commitOnSuccess': True, 
'connectivityCheck': 'true'}
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,887::routes::115::root::(get_gateway) The gateway IP-ADDR1 is 
duplicated for the device ovirtmgmt
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,888::routes::115::root::(get_gateway) The gateway IP-ADDR1 is 
duplicated for the device ovirtmgmt
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,889::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,896::cmdutils::138::root::(exec_cmd) SUCCESS:  = b'';  = 0
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,897::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev 
enp0s29u1u1 classid 0:1388 (cwd None)
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,902::cmdutils::138::root::(exec_cmd) SUCCESS:  = b'';  = 0
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,955::vsctl::74::root::(commit) Executing commands: /usr/bin/ovs-vsctl 
--timeout=5 --oneline --format=json -- list Bridge -- list Port -- list 
Interface
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,955::cmdutils::130::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 
--oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,968::cmdutils::138::root::(exec_cmd) SUCCESS:  = b'';  = 0
MainProcess|jsonrpc/1::INFO::2020-11-06 
09:49:59,976::netconfpersistence::58::root::(setNetwork) Adding network 
ovirtmgmt({'bridged': True, 'stp': False, 'mtu': 1500, 'nic': 'enp0s29u1u1', 
'defaultRoute': True, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': 
False, 'ipaddr': 'IP-ADDR126', 'netmask': '255.255.255.0', 'gateway': 
'IP-ADDR1', 'ipv6addr': '2001:470:df4e:2:fe4d:d4ff:fe3e:fb86/64', 
'ipv6gateway': 'fe80::250:56ff:fe8b:6e21', 'switch': 'legacy', 'nameservers': 
['IP-ADDR4']})
MainProcess|jsonrpc/1::INFO::2020-11-06 
09:49:59,977::netconfpersistence::69::root::(setBonding) Adding bond0({'nics': 
['eno1', 'enp9s0'], 'options': 'mode=4 miimon=100 xmit_hash_policy=2', 
'switch': 'legacy'})
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:49:59,979::commands::153::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-23 /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe (cwd None)
MainProcess|jsonrpc/1::INFO::2020-11-06 
09:50:00,355::hooks::122::root::(_runHooksDir) 
/usr/libexec/vdsm/hooks/before_network_setup/50_fcoe: rc=0 err=b''
MainProcess|jsonrpc/1::INFO::2020-11-06 
09:50:00,356::configurator::195::root::(_setup_nmstate) Processing setup 
through nmstate
MainProcess|jsonrpc/1::INFO::2020-11-06 
09:50:00,384::configurator::197::root::(_setup_nmstate) Desired state: 
{'interfaces': [{'name': 'bond0', 'type': 'bond', 'state': 'up', 
'link-aggregation': {'slaves': ['eno1', 'enp9s0'], 'options': {'miimon': '100', 
'xmit_hash_policy': '2'}, 'mode': '802.3ad'}}, {'name': 'ovirtmgmt', 'mtu': 
1500}]}
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:50:00,439::checkpoint::121::root::(create) Checkpoint 
/org/freedesktop/NetworkManager/Checkpoint/40 created for all devices: 60
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:50:00,439::netapplier::239::root::(_add_interfaces) Adding new interfaces: 
['bond0']
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:50:00,442::netapplier::251::root::(_edit_interfaces) Editing interfaces: 
['ovirtmgmt', 'eno1', 'enp9s0']
MainProcess|jsonrpc/1::WARNING::2020-11-06 
09:50:00,443::ipv6::188::root::(_set_static) IPv6 link local address 
fe80::64c3:73ff:fe2f:10d7/64 is ignored when applying desired state
MainProcess|jsonrpc/1::DEBUG::2020-11-06 
09:50:00,

[ovirt-users] Re: Bond creation issue via hosted engine

2020-11-06 Thread Harry O
I get this output on the node:
cat /var/log/vdsm/vdsm.log | grep "I am the actual vdsm"
2020-11-05 10:32:24,495+0100 INFO  (MainThread) [vds] (PID: 15547) I am the 
actual vdsm 4.40.26.3.1 ovirtn1.5ervers.lan (4.18.0-193.19.1.el8_2.x86_64) 
(vdsmd:155)


[ovirt-users] Re: Bond creation issue via hosted engine

2020-11-06 Thread Harry O
I also have this:

systemctl status network.service -l
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; generated)
   Active: failed (Result: exit-code) since Fri 2020-11-06 11:23:24 CET; 21s ago
 Docs: man:systemd-sysv-generator(8)
  Process: 498501 ExecStart=/etc/rc.d/init.d/network start (code=exited, 
status=1/FAILURE)

Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File 
exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File 
exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File 
exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File 
exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File 
exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File 
exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File 
exists
Nov 06 11:23:24 ovirtn1.5ervers.lan systemd[1]: network.service: Control 
process exited, code=exited status=1
Nov 06 11:23:24 ovirtn1.5ervers.lan systemd[1]: network.service: Failed with 
result 'exit-code'.
Nov 06 11:23:24 ovirtn1.5ervers.lan systemd[1]: Failed to start LSB: Bring 
up/down networking.


[ovirt-users] Re: Bond creation issue via hosted engine

2020-11-06 Thread Harry O
It's a brand new cluster. I want to set up bonding and trunking for the first 
time on the hosts and cluster, so no upgrades have been done.

ls /usr/lib/python3.6/site-packages/vdsm/network/link/bond/ -a
.  ..  bond_speed.py  __init__.py  __pycache__  speed.py  sysfs_driver.py  
sysfs_options_mapper.py  sysfs_options.py

Restarting the vdsmd and supervdsmd services and then
requesting the capabilities again didn't work.
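For completeness, this is what I ran (standard service names assumed):

systemctl restart supervdsmd vdsmd
vdsm-client Host getCapabilities | head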


[ovirt-users] Re: Bond creation issue via hosted engine

2020-11-06 Thread Harry O
host:

OS Version:
RHEL - 8.2 - 2.2004.0.2.el8
OS Description:
CentOS Linux 8 (Core)
Kernel Version:
4.18.0 - 193.19.1.el8_2.x86_64
KVM Version:
4.2.0 - 29.el8.3
LIBVIRT Version:
libvirt-6.0.0-25.2.el8
VDSM Version:
vdsm-4.40.26.3-1.el8
SPICE Version:
0.14.2 - 1.el8
GlusterFS Version:
glusterfs-7.8-1.el8
CEPH Version:
librbd1-12.2.7-9.el8
Open vSwitch Version:
[N/A]
Nmstate Version:
nmstate-0.2.10-1.el8
Kernel Features:
MDS: (Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable), 
L1TF: (Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT 
vulnerable), SRBDS: (Not affected), MELTDOWN: (Mitigation: PTI), SPECTRE_V1: 
(Mitigation: usercopy/swapgs barriers and __user pointer sanitization), 
SPECTRE_V2: (Mitigation: Full generic retpoline, STIBP: disabled, RSB filling), 
ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages), TSX_ASYNC_ABORT: (Not 
affected), SPEC_STORE_BYPASS: (Vulnerable)
VNC Encryption:
Disabled
FIPS mode enabled:
Disabled


[ovirt-users] Re: Bond creation issue via hosted engine

2020-11-06 Thread Harry O
It works, thanks! You can do magic.


[ovirt-users] Gluster volume slower than raid1 zpool speed

2020-11-23 Thread Harry O
Hi,
Can anyone help me with the performance of my 3-node gluster on ZFS (it is 
set up with one arbiter)?
The write performance of the single VM I have on it (with the engine) is 50% worse 
than a single bare-metal disk.
I have enabled "Optimize for virt store".
I run a 1 Gbps, 1500 MTU network; could this be the write performance killer?
Is this to be expected from a 2x HDD ZFS RAID 1 on each node, with a 3-node 
arbiter setup?
Maybe I should move to RAID 5 or 6?
Maybe I should add an SSD cache to the RAID 1 ZFS zpools?
What are your thoughts? What should I do to optimize this setup?
I would like to run ZFS with gluster and I can deal with a little performance 
loss, but not that much.
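For measuring, something like this from inside a VM and directly on a brick should 
give comparable numbers (a sketch; fio must be installed and the test file path is 
just an example):

fio --name=seqwrite --rw=write --bs=1M --size=1G --numjobs=1 --ioengine=libaio --direct=1 --filename=/tmp/fio-test.img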


[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-23 Thread Harry O
Thanks for looking into this. I will try the stuff out.


[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-25 Thread Harry O
Unfortunately I didn't get any improvement by upgrading the network.

Bare metal (zfs raid1 zvol):
dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.6471 s, 68.6 MB/s

Centos VM on gluster volume:
dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 36.8618 s, 29.1 MB/s

Does this performance look normal?


[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-26 Thread Harry O
I would love to see something similar to your performance numbers, WK.
Here are my gluster volume options and info:
[root@ovirtn1 ~]# gluster v info vmstore
 
Volume Name: vmstore
Type: Replicate
Volume ID: stuff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirtn1.5ervers.lan:/gluster_bricks/vmstore/vmstore
Brick2: ovirtn2.5ervers.lan:/gluster_bricks/vmstore/vmstore
Brick3: ovirtn3.5ervers.lan:/gluster_bricks/vmstore/vmstore (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on

Does it look like sharding is on, Strahil Nikolov?

Running "gluster volume set vmstore group virt" had no effect.

I don't know why I ended up using the dsync flag.
For a real-world test, I ran CrystalDiskMark on a Windows VM; these are the results:
https://gofile.io/d/7nOeEL
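In case it helps, this is how I would double-check which options actually ended up 
applied on the volume (a sketch; same volume name as above, and the 'virt' group 
file path is the distribution default):

gluster volume get vmstore features.shard
gluster volume get vmstore features.shard-block-size
# the 'virt' group is just a bundle of options; this is the list it would apply
cat /var/lib/glusterd/groups/virt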


[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-26 Thread Harry O
New results from centos vm on vmstore:
[root@host2 ~]# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 26.6353 s, 40.3 MB/s
[root@host2 ~]# rm -rf /test12.img
[root@host2 ~]#
[root@host2 ~]# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 61.4851 s, 17.5 MB/s
[root@host2 ~]# rm -rf /test12.img
[root@host2 ~]#
[root@host2 ~]# dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 28.2097 s, 38.1 MB/s


[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2020-11-26 Thread Harry O
So are my gluster performance results expected?


[ovirt-users] Re: Gluster volume slower than raid1 zpool speed

2021-01-28 Thread Harry O
OK guys, now my setup is like this:
2 x servers, each with 5 x 4 TB 7200 RPM drives in raidz1 and a 10G storage network 
(MTU 9000) - these hold my gluster_bricks folders
1 x SFF workstation with 2 x 50 GB SSDs in a ZFS mirror - my gluster_bricks 
folder for the arbiter
My gluster vol info looks like this:
Volume Name: vmstore
Type: Replicate
Volume ID: 7deac39b-3109-4229-b99f-afa50fc8d5a1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirtn1.5erverssan.lan:/gluster_bricks/vmstore/vmstore
Brick2: ovirtn2.5erverssan.lan:/gluster_bricks/vmstore/vmstore
Brick3: ovirtn3.5erverssan.lan:/gluster_bricks/vmstore/vmstore (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: off
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on


And my test results look like this:
starting on engine
/tmp
50M
dd: error writing './junk': No space left on device
40+0 records in
39+0 records out
2044723200 bytes (2.0 GB, 1.9 GiB) copied, 22.1341 s, 92.4 MB/s
starting
/tmp
10M
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 11.4612 s, 91.5 MB/s
starting
/tmp
1M
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.602421 s, 174 MB/s




starting on node1
/gluster_bricks
50M
100+0 records in
100+0 records out
524288 bytes (5.2 GB, 4.9 GiB) copied, 40.8802 s, 128 MB/s
starting
/gluster_bricks
10M
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 7.49434 s, 140 MB/s
starting
/gluster_bricks
1M
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.164098 s, 639 MB/s




starting on node2
/gluster_bricks
50M
100+0 records in
100+0 records out
524288 bytes (5.2 GB, 4.9 GiB) copied, 22.0764 s, 237 MB/s
starting
/gluster_bricks
10M
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 4.32239 s, 243 MB/s
starting
/gluster_bricks
1M
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0584058 s, 1.8 GB/s

I don't know why my ZFS arrays perform differently; it's the same drives with the 
same config.
Is this performance normal or bad? I think it is too bad, hmm... Any tips or 
tricks for this?
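To compare the two nodes I would dump the relevant pool and dataset settings side 
by side (a sketch; POOL/DATASET is a placeholder for whatever backs /gluster_bricks):

zpool status -v
zfs get compression,sync,recordsize,atime,xattr POOL/DATASET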


[ovirt-users] Ooops! in last step of Hyperconverged deployment

2021-05-11 Thread Harry O
Hi,

On the second engine deployment run of the hyperconverged deployment I get a red 
"Ooops!" in Cockpit.
I think it fails on some networking setup.
The first oVirt node says "Hosted Engine is up!", but the other nodes are not 
added to the HostedEngine yet.
There is no network connectivity to the engine outside node1; I can SSH to the 
engine from node1 on the right IP address.
Please tell me which logs I should pull.


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-11 Thread Harry O
Thanks Strahil, I think we found the relevant error in the logs:

[  00:03  ] Make the engine aware that the external VM is stopped
[  00:01  ] Wait for the local bootstrap VM to be down at engine eyes
[  00:02  ] Remove bootstrap external VM from the engine
[  00:04  ] Remove ovirt-engine-appliance rpm
[ < 1 sec ] Include custom tasks for after setup customization
[ < 1 sec ] Include Host vars
[ FAILED  ] Set Engine public key as authorized key without validating the 
TLS/SSL certificates
2021-05-11 11:15:34,420+0200 DEBUG ansible on_any args 
  kwargs
[root@hej1 ~]# cat 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_target_vm-2021050328-fcgvwi.log

Can you recognize a known issue from all the info I have provided, or do I 
need to dig further?


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-14 Thread Harry O
Like this?

  - name: Detect VLAN ID
    shell: ip -d link show {{ he_bridge_if }} | grep 'vlan ' | grep -Po 'id \K[\d]+' | cat
    environment: "{{ he_cmd_lang }}"
    register: vlan_id_out
    changed_when: true
  - debug: var=vlan_id_out
  - name: Set Engine public key as authorized key without validating the TLS/SSL certificates
    authorized_key:
      user: root
      state: present
      key: https://{{ he_fqdn }}/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY
      validate_certs: false
    register: output
    failed_when: never
  - include_tasks: auth_sso.yml
  - name: DEBUG
    debug:
      var: 'output'
  - name: Ensure that the target datacenter is present
    ovirt_datacenter:
      state: present


[ovirt-users] Create Brick from Engine host view

2021-05-14 Thread Harry O
When I try to create a single-disk brick via the host view "Storage Devices" on 
the engine, I get the following error:
Error while executing action Create Brick: Internal Engine Error
Failed to create brick lalaf on host hej1.5ervers.lan of cluster Clu1.
I want the brick to be a single disk, no RAID, no cache. Is there a way to create 
it via the CLI? Do I need to pull some logs?
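If there is a manual way, I guess it would look roughly like this (only a sketch; 
/dev/sdX is a placeholder and this skips the LVM/cache layers the engine's Create 
Brick normally sets up):

mkfs.xfs -i size=512 /dev/sdX
mkdir -p /gluster_bricks/lalaf
mount /dev/sdX /gluster_bricks/lalaf
mkdir /gluster_bricks/lalaf/lalaf
chown 36:36 /gluster_bricks/lalaf/lalaf   # matches storage.owner-uid/gid 36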


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-17 Thread Harry O
The error still persists after the change in the following file; is this the wrong 
place? I couldn't do it under the HCI setup: 
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml

Change:
---
- name: Add host
  block:
  - name: Wait for ovirt-engine service to start
uri:
  url: http://{{ he_fqdn }}/ovirt-engine/services/health
  return_content: true
register: engine_status
until: "'DB Up!Welcome to Health Status!' in engine_status.content"
retries: 30
delay: 20
  - debug: var=engine_status
  - name: Open a port on firewalld
firewalld:
  port: "{{ he_webui_forward_port }}/tcp"
  permanent: false
  immediate: true
  state: enabled
  - name: Expose engine VM webui over a local port via ssh port forwarding
command: >-
  sshpass -e ssh -tt -o ServerAliveInterval=5 -o StrictHostKeyChecking=no 
-o UserKnownHostsFile=/dev/null -g -L
  {{ he_webui_forward_port }}:{{ he_fqdn }}:443 {{ he_fqdn }}
environment:
  - "{{ he_cmd_lang }}"
  - SSHPASS: "{{ he_appliance_password }}"
changed_when: true
async: 86400
poll: 0
register: sshpf
  - debug: var=sshpf
  - name: Evaluate temporary bootstrap engine URL
set_fact: bootstrap_engine_url="https://{{ he_host_address }}:{{ 
he_webui_forward_port }}/ovirt-engine/"
  - debug:
  msg: >-
The bootstrap engine is temporary accessible over {{ 
bootstrap_engine_url }}
  - name: Detect VLAN ID
shell: ip -d link show {{ he_bridge_if }} | grep 'vlan ' | grep -Po 'id 
\K[\d]+' | cat
environment: "{{ he_cmd_lang }}"
register: vlan_id_out
changed_when: true
  - debug: var=vlan_id_out
  - name: Set Engine public key as authorized key without validating the 
TLS/SSL certificates
authorized_key:
  user: root
  state: present
  key: https://{{ he_fqdn 
}}/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY
  validate_certs: false
  register: output
  failed_when: never
  - name: DEBUG
  debug:
  var: 'output'
  - include_tasks: auth_sso.yml
  - name: Ensure that the target datacenter is present
ovirt_datacenter:
  state: present
  name: "{{ he_data_center }}"
  wait: true
  local: false
  auth: "{{ ovirt_auth }}"
register: dc_result_presence
  - name: Ensure that the target cluster is present in the target datacenter
ovirt_cluster:
  state: present
  name: "{{ he_cluster }}"
  data_center: "{{ he_data_center }}"
  cpu_type: "{{ he_cluster_cpu_type | default(omit) }}"
  wait: true
  auth: "{{ ovirt_auth }}"
register: cluster_result_presence
  - name: Check actual cluster location
fail:
  msg: >-
A cluster named '{{ he_cluster }}' has been created earlier in a 
different
datacenter and cluster moving is still not supported.
You can avoid this specifying a different cluster name;
please fix accordingly and try again.
when: cluster_result_presence.cluster.data_center.id != 
dc_result_presence.datacenter.id
  - name: Enable GlusterFS at cluster level
ovirt_cluster:
  data_center: "{{ he_data_center }}"
  name: "{{ he_cluster }}"
  auth: "{{ ovirt_auth }}"
  virt: true
  gluster: true
  fence_skip_if_gluster_bricks_up: true
  fence_skip_if_gluster_quorum_not_met: true
when: he_enable_hc_gluster_service is defined and 
he_enable_hc_gluster_service
  - name: Set VLAN ID at datacenter level
ovirt_network:
  data_center: "{{ he_data_center }}"
  name: "{{ he_mgmt_network }}"
  vlan_tag: "{{ vlan_id_out.stdout }}"
  auth: "{{ ovirt_auth }}"
when: vlan_id_out.stdout|length > 0
  - name: Get active list of active firewalld zones
shell: set -euo pipefail && firewall-cmd --get-active-zones | grep -v 
"^\s*interfaces"
environment: "{{ he_cmd_lang }}"
register: active_f_zone
changed_when: true
  - name: Configure libvirt firewalld zone
firewalld:
  zone: libvirt
  service: "{{ service_item }}"
  permanent: true
  immediate: true
  state: enabled
with_items:
  - vdsm
  - libvirt-tls
  - ovirt-imageio
  - ovirt-vmconsole
  - ssh
  - vdsm
loop_control:
  loop_var: service_item
when: "'libvirt' in active_f_zone.stdout_lines"
  - name: Add host
ovirt_host:
  cluster: "{{ he_cluster }}"
  name: "{{ he_host_name }}"
  state: present
  public_key: true
  address: "{{ he_host_address }}"
  auth: "{{ ovirt_auth }}"
async: 1
poll: 0
  - name: Pause the execution to let the user interactively reconfigure the host
block:
  - name: Let the user connect to the bootstrap engine to manually fix host 
configuration
debug:
  msg: >-
You can now connect to {{ bootstrap_engine_url }} and check the 
status of this host and
eventually remediate it, please continue only when the host is 
listed as 'up'

[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-17 Thread Harry O
Would that be this log file you need?
cat 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_target_vm-20210517144031-ebmk45.log
Thanks for helping me :)


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-17 Thread Harry O
Cockpit crashes with an Ooops! and therefore closes the Ansible output console, 
so we need to find the file with that output.
/ovirt-dashboard just shows a blank white screen.
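For reference, these are the places I am checking for that output (default paths 
assumed):

ls -lrt /var/log/ovirt-hosted-engine-setup/
journalctl -u cockpit --since "1 hour ago"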


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-21 Thread Harry O
Do you know what I mean?


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-23 Thread Harry O
I have found some logs here; hope it is usable.

journalctl | grep ovirt:
May 23 21:25:17 hej1.5ervers.lan ovirt-ha-broker[72235]: ovirt-ha-broker 
mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
May 23 21:25:20 hej1.5ervers.lan platform-python[77432]: ansible-dnf Invoked 
with name=['ovirt-engine-appliance'] state=absent allow_downgrade=False 
autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] 
disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] 
installroot=/ install_repoquery=True install_weak_deps=True security=False 
skip_broken=False update_cache=False update_only=False validate_certs=True 
lock_timeout=30 conf_file=None disable_excludes=None download_dir=None 
list=None releasever=None
May 23 21:25:27 hej1.5ervers.lan ovirt-ha-broker[72235]: ovirt-ha-broker 
mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result




journalctl | grep cockpit:
May 23 20:19:49 hej1.5ervers.lan dbus-daemon[576]: [system] Activating via 
systemd: service name='org.freedesktop.hostname1' 
unit='dbus-org.freedesktop.hostname1.service' requested by ':1.465' (uid=0 
pid=2340 comm="cockpit-bridge " 
label="unconfined_u:unconfined_r:unconfined_t:s0")
May 23 20:24:10 hej1.5ervers.lan cockpit-bridge[2340]: [WARNING]: Consider 
using the yum, dnf or zypper module rather than running
May 23 20:24:10 hej1.5ervers.lan cockpit-bridge[2340]: 'rpm'.  If you need to 
use command because yum, dnf or zypper is insufficient
May 23 20:24:10 hej1.5ervers.lan cockpit-bridge[2340]: you can add 'warn: 
false' to this command task or set 'command_warnings=False'
May 23 20:24:10 hej1.5ervers.lan cockpit-bridge[2340]: in ansible.cfg to get 
rid of this message.
May 23 20:27:46 hej1.5ervers.lan cockpit-bridge[2340]: ls: cannot access 
'/var/log/cockpit/ovirt-dashboard': No such file or directory
May 23 20:27:59 hej1.5ervers.lan cockpit-bridge[2340]: [WARNING]: provided 
hosts list is empty, only localhost is available. Note that
May 23 20:27:59 hej1.5ervers.lan cockpit-bridge[2340]: the implicit localhost 
does not match 'all'
May 23 21:08:50 hej1.5ervers.lan python3[64721]: ansible-firewalld Invoked with 
service=cockpit permanent=True immediate=True state=enabled timeout=0 
icmp_block=None icmp_block_inversion=None port=None rich_rule=None zone=None 
source=None interface=None masquerade=None offline=None
May 23 21:26:02 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:26:02 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:26:03 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:26:03 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:26:03 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:26:03 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:26:03 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:26:03 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:28:57 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:28:57 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:28:57 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:28:57 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:28:57 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:28:57 hej1.5ervers.lan cockpit-tls[2274]: cockpit-tls: 
gnutls_handshake failed: A TLS fatal alert has been received.
May 23 21:28:57 hej1.5ervers.lan cockpit-ws[2287]: New connection to session 1
May 23 21:28:58 hej1.5ervers.lan dbus-daemon[576]: [system] Activating via 
systemd: service name='org.freedesktop.hostname1' 
unit='dbus-org.freedesktop.hostname1.service' requested by ':1.465' (uid=0 
pid=2340 comm="cockpit-bridge " 
label="unconfined_u:unconfined_r:unconfined_t:s0")




cockpit js console:
system/services#/?type=service:1 Refused to apply style from 
'https://hej1:9090/cockpit/$42573f990c732f5837e2f88e49b898a696f905eb9ac77b058eb1335cafc7082a/shell/nav.css'
 because its MIME type ('text/html') is not a supported stylesheet MIME type, 
and strict MIME checking is enabled.
cockpit.js:621 grep: /sys/class/dmi/id/power/autosuspend_delay_ms: Input/output 
error

p @ cockpit.js:621
cockpit.js:621 grep: /et

[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-24 Thread Harry O
It's the same issue in other browsers.


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-27 Thread Harry O
Hi,

This did the job for the first issue, thanks.
vi  /usr/lib/systemd/system/cockpit.service
ExecStart=/usr/libexec/cockpit-tls --idle-timeout 180

Now I get this:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if Engine IP is different 
from engine's he_fqdn resolved IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM 
IP address is while the engine's he_fqdn hej.5ervers.lan resolves to 
192.168.4.144. If you are using DHCP, check your DHCP reservation 
configuration"}

But the information is correct.


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-05-27 Thread Harry O
Nope, it didn't fix it, I just typed in the wrong IP address.


[ovirt-users] Hosted-Engine import

2021-06-06 Thread Harry O
Hi,

Is it possible to import the hosted engine VM from the VM files on gluster only?
If yes, how?


[ovirt-users] Re: Hosted-Engine import

2021-06-06 Thread Harry O
How to 'tune' it?


[ovirt-users] Re: Hosted-Engine import

2021-06-06 Thread Harry O
That file does not exist.


[ovirt-users] Re: Hosted-Engine import

2021-06-07 Thread Harry O
I think there is no hosted-engine configuration on the node, but I only have a 
backup of the hosted-engine config files from the node, not a real hosted-engine 
backup. I also have access to the engine gluster volume it ran on. Can I combine 
the hosted-engine config files from the node with the engine VM gluster data to 
get the HostedEngine up and running?


[ovirt-users] hosted-engine --vm-start not working

2021-06-21 Thread Harry O
Hi,
When I run hosted-engine --vm-start I get this:
VM exists and is Down, cleaning up and restarting
VM in WaitForLaunch

But the VM never starts:
virsh list --all
 Id   Name           State
-----------------------------
 -    HostedEngine   shut off


systemctl status -l ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring 
Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; disabled; 
vendor preset: disabled)
   Active: active (running) since Wed 2021-06-16 13:27:27 CEST; 3min 26s ago
 Main PID: 79702 (ovirt-ha-agent)
Tasks: 2 (limit: 198090)
   Memory: 28.3M
   CGroup: /system.slice/ovirt-ha-agent.service
   └─79702 /usr/libexec/platform-python 
/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Jun 16 13:27:27 hej1.5ervers.lan systemd[1]: ovirt-ha-agent.service: Succeeded.
Jun 16 13:27:27 hej1.5ervers.lan systemd[1]: Stopped oVirt Hosted Engine High 
Availability Monitoring Agent.
Jun 16 13:27:27 hej1.5ervers.lan systemd[1]: Started oVirt Hosted Engine High 
Availability Monitoring Agent.
Jun 16 13:29:42 hej1.5ervers.lan ovirt-ha-agent[79702]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped 
on localhost





hosted-engine --vm-status


--== Host hej1.5ervers.lan (id: 1) status ==--

Host ID: 1
Host timestamp : 3547
Score  : 3400
Engine status  : {"vm": "down", "health": "bad", "detail": 
"Down", "reason": "bad vm status"}
Hostname   : hej1.5ervers.lan
Local maintenance  : False
stopped: False
crc32  : f35899f8
conf_on_shared_storage : True
local_conf_timestamp   : 3547
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3547 (Wed Jun 16 13:32:12 2021)
host-id=1
score=3400
vm_conf_refresh_time=3547 (Wed Jun 16 13:32:12 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host hej2.5ervers.lan (id: 2) status ==--

Host ID: 2
Host timestamp : 94681
Score  : 0
Engine status  : {"vm": "down_unexpected", "health": "bad", 
"detail": "Down", "reason": "bad vm status"}
Hostname   : hej2.5ervers.lan
Local maintenance  : False
stopped: False
crc32  : 40a3f809
conf_on_shared_storage : True
local_conf_timestamp   : 94681
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=94681 (Wed Jun 16 13:32:05 2021)
host-id=2
score=0
vm_conf_refresh_time=94681 (Wed Jun 16 13:32:05 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan  2 03:23:40 1970


--== Host hej3.5ervers.lan (id: 3) status ==--

Host ID: 3
Host timestamp : 94666
Score  : 0
Engine status  : {"vm": "down_unexpected", "health": "bad", 
"detail": "Down", "reason": "bad vm status"}
Hostname   : hej3.5ervers.lan
Local maintenance  : False
stopped: False
crc32  : a50c2b3e
conf_on_shared_storage : True
local_conf_timestamp   : 94666
Status up-to-date  : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=94666 (Wed Jun 16 13:32:09 2021)
host-id=3
score=0
vm_conf_refresh_time=94666 (Wed Jun 16 13:32:09 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan  2 03:23:16 1970


[ovirt-users] Re: hosted-engine can't communicate with vm

2021-06-21 Thread Harry O
Hi Timothy,

My HostedEngine VM is running OK via virsh, libvirt and vdsm, but my only issue 
is that the ovirt-ha-agent on the hosts can't work with the VM anymore. Did you 
get your VM up and running?


[ovirt-users] Re: Hosted-Engine import

2021-06-21 Thread Harry O
I get:
vdsmd_init_common.sh[85756]: libvirt: XML-RPC error : Failed to connect socket 
to '/var/run/libvirt/libvirt-sock': Connection refused


[ovirt-users] hosted-engine can't communicate with vm

2021-06-21 Thread Harry O
The VM is up and the IDs match, but it fails when I try anything.
hosted-engine --vm-shutdown
Command VM.shutdown with args {'vmID': '350a168a-beb9-4417-9fbd-5a8121863a57', 
'delay': '120', 'message': 'VM is shutting down!'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 
'350a168a-beb9-4417-9fbd-5a8121863a57'})


virsh list
 Id   Name           State
-----------------------------
 2    HostedEngine   running


virsh domuuid HostedEngine
350a168a-beb9-4417-9fbd-5a8121863a57
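For reference, this should show whether vdsm itself still tracks that VM id (a 
sketch; vdsm-client is assumed to be installed on the host):

vdsm-client Host getVMList
vdsm-client VM getStats vmID=350a168a-beb9-4417-9fbd-5a8121863a57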







MainThread::ERROR::2021-06-17 
09:58:54,537::hosted_engine::953::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
 Failed to stop engine VM: Command VM.destroy with args {'vmID': 
'350a168a-beb9-4417-9fbd-5a8121863a57'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 
'350a168a-beb9-4417-9fbd-5a8121863a57'})

MainThread::INFO::2021-06-17 
09:58:54,563::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition 
(EngineForceStop-ReinitializeFSM) sent? ignored
MainThread::INFO::2021-06-17 
09:58:54,569::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state ReinitializeFSM (score: 0)
MainThread::INFO::2021-06-17 
09:59:04,654::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition (ReinitializeFSM-EngineDown) 
sent? ignored
MainThread::INFO::2021-06-17 
09:59:04,738::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineDown (score: 3400)
MainThread::INFO::2021-06-17 
09:59:13,759::states::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Engine down and local host has best score (3400), attempting to start engine VM
MainThread::INFO::2021-06-17 
09:59:13,796::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition (EngineDown-EngineStart) sent? 
ignored
MainThread::INFO::2021-06-17 
09:59:13,888::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineStart (score: 3400)
MainThread::INFO::2021-06-17 
09:59:13,903::hosted_engine::895::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state)
 Ensuring VDSM state is clear for engine VM
MainThread::INFO::2021-06-17 
09:59:13,909::hosted_engine::907::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state)
 Vdsm state for VM clean
MainThread::INFO::2021-06-17 
09:59:13,909::hosted_engine::853::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2021-06-17 
09:59:14,444::hosted_engine::862::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 stdout: VM in WaitForLaunch

MainThread::INFO::2021-06-17 
09:59:14,444::hosted_engine::863::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 stderr: Command VM.getStats with args {'vmID': 
'350a168a-beb9-4417-9fbd-5a8121863a57'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 
'350a168a-beb9-4417-9fbd-5a8121863a57'})

MainThread::INFO::2021-06-17 
09:59:14,444::hosted_engine::875::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
 Engine VM started on localhost
MainThread::INFO::2021-06-17 
09:59:14,472::brokerlink::73::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
 Success, was notification of state_transition (EngineStart-EngineStarting) 
sent? ignored
MainThread::INFO::2021-06-17 
09:59:14,479::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop)
 Current state EngineStarting (score: 3400)


[ovirt-users] Re: Hosted-Engine import

2021-06-21 Thread Harry O
I will just try harder then xD


[ovirt-users] Re: Hosted-Engine import

2021-06-21 Thread Harry O
Does anyone know why I get this:
virsh list --all
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection 
refused


[ovirt-users] Host reinstall from engine

2021-06-28 Thread Harry O
Hi,
Should the engine not deploy gluster when a host reinstall is run? How do I 
deploy my gluster setup on a replacement node that replaces a dead node?


[ovirt-users] Re: hosted-engine can't communicate with vm

2021-06-28 Thread Harry O
I think it was because the VM was not running from "/run/vdsm/storage/"


[ovirt-users] Re: Host reinstall from engine

2021-06-28 Thread Harry O
Thanks, lost track there.


[ovirt-users] migrate hosted engine

2021-06-29 Thread Harry O
Hi,
I get the following when trying to migrate my HostedEngine VM to a new node in 
the cluster. I just did a node reinstall via the HostedEngine and a rebuild of 
the gluster array on that node, because it's a replacement for a crashed, dead 
old node.
ID: 120
Migration failed due to an Error: Failed to connect to remote libvirt URI 
qemu+tls://hej3.5ervers.lan/system: Cannot read CA certificate 
'/etc/pki/CA/cacert.pem': No such file or directory (VM: HostedEngine, Source: 
hej1.5ervers.lan, Destination: hej3.5ervers.lan)
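For comparison, a quick check of the certificate files on the old and the new node 
(the paths are the one from the error plus the usual oVirt host PKI locations):

ls -l /etc/pki/CA/cacert.pem /etc/pki/vdsm/certs/ /etc/pki/libvirt/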


[ovirt-users] Re: migrate hosted engine

2021-07-01 Thread Harry O
When I put the "/etc/pki/CA/cacert.pem" from the old node in place, I just get 
the next error, as follows:
Migration failed due to an Error: Fatal error during migration (VM: 
HostedEngine, Source: hej1.5ervers.lan, Destination: hej2.5ervers.lan).
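
The generic "Fatal error during migration" usually hides the real cause in the host-side logs, so the next step (nothing oVirt-specific assumed here) would typically be to look on both the source and destination hosts:

journalctl -u libvirtd --since "1 hour ago"
tail -n 200 /var/log/vdsm/vdsm.log
grep -i migration /var/log/vdsm/vdsm.log | tail -n 50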
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LFPPS4WXSRGM54VLJDRYF723RSTPNBOO/


[ovirt-users] Hosted Engine Deployment

2021-07-07 Thread Harry O
Why does the HE deployment not create a hosted-engine data center and cluster 
whose version fits the host?
My hosted-engine deployment now fails again because of "Host hej1.5ervers.lan 
is compatible with versions (4.2,4.3,4.4,4.5) and cannot join Cluster Default 
which is set to version 4.6."
I think the deployment should create a DC and cluster that fit the host used 
to deploy; otherwise it's doomed to fail.
Is there a process for fixing this? I can't change the version from the HE UI 
as I'm instructed to; there are no options on the data center other than 4.6 
(one possible workaround is sketched after the log below):
[ INFO ] You can now connect to https://hej1.5ervers.lan:6900/ovirt-engine/ and 
check the status of this host and eventually remediate it, please continue only 
when the host is listed as 'up'
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until 
/tmp/ansible.z_g6jh7h_he_setup_lock is removed, delete it once ready to proceed]
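
If the deployment is paused at this point, one possible workaround - a hedged sketch, not an official flow: it assumes the temporary engine answers on the 6900 port shown above, that basic auth with admin@internal works, and that PASSWORD and CLUSTER_ID are filled in (the id can be looked up with the first call) - is to lower the Default cluster's compatibility version through the REST API to one the host supports, e.g. 4.5:

# list clusters to find the Default cluster's id (PASSWORD / CLUSTER_ID are placeholders)
curl -k -u admin@internal:PASSWORD https://hej1.5ervers.lan:6900/ovirt-engine/api/clusters
# set the cluster compatibility version to 4.5
curl -k -u admin@internal:PASSWORD -X PUT \
     -H "Content-Type: application/xml" \
     -d '<cluster><version><major>4</major><minor>5</minor></version></cluster>' \
     https://hej1.5ervers.lan:6900/ovirt-engine/api/clusters/CLUSTER_ID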
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MTMATAQJBGAEQKCL5E74OZVQ4SHJMD7T/


[ovirt-users] COarse-grained LOck-stepping for oVirt

2021-09-24 Thread Harry O
Hi,
Will COLO be implemented in oVirt?
Is it possible to do it myself? I see qemu-kvm and lots of other QEMU packages 
installed on my oVirt nodes.
It's in QEMU upstream (v4.0):
https://wiki.qemu.org/Features/COLO
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6HTRAOVE3NKPTCB4MUVDK6I5MW3A5W2T/


[ovirt-users] Re: Ooops! in last step of Hyperconverged deployment

2021-09-24 Thread Harry O
I had conflicting IPs.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D3ZLIF2NXSGDGBUGUJUPZWT7ZKZHTHAA/


[ovirt-users] Re: COarse-grained LOck-stepping for oVirt

2021-09-24 Thread Harry O
Thanks for the reply.
What is the alternative to this in oVirt?
The goal is achieving non-stop service for a legacy stateful application that 
doesn't support it by itself.
The virtualization layer is the perfect place to implement this functionality.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SGWPGKQJDMRISBFSZKDSRASQD264BLTY/


[ovirt-users] Re: COarse-grained LOck-stepping for oVirt

2021-10-05 Thread Harry O
Does anyone know how COLO would be implemented manually?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O4JKT5ASYL6TJGCAXONGCOXCWW7RWZVF/


[ovirt-users] oVirt nodes keeps updating and rebooting

2022-01-17 Thread Harry O
Hi,

After HE deployment my oVirt nodes keep updating and rebooting, even though I 
disabled node updates at the beginning of the deployment wizard.
Why is this, and how can I stop it from happening?
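
To see what is actually triggering the updates - purely an assumption here that a systemd timer such as dnf-automatic is behind it; adjust to whatever timer actually shows up - one could check on a node:

systemctl list-timers --all | grep -Ei 'dnf|update'
# hypothetical: only if an automatic-update timer is found above
systemctl disable --now dnf-automatic.timer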
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LDFICZ2R7WYN6UDD2SBC5PWTMLD7LSNU/


[ovirt-users] Deployment suddenly fails at engine check

2022-04-22 Thread Harry O
Hi,
After the new update, my deployment fails at the engine health check.
What can I do to debug?

[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Check if Engine health page is up]
[ ERROR ] fatal: [localhost -> 192.168.222.12]: FAILED! => {"attempts": 30, 
"changed": false, "connection": "close", "content": 
"Error500 - Internal Server 
Error", "content_encoding": "identity", "content_length": "86", 
"content_type": "text/html; charset=UTF-8", "date": "Fri, 22 Apr 2022 16:02:04 
GMT", "elapsed": 0, "msg": "Status code was 500 and not [200]: HTTP Error 500: 
Internal Server Error", "redirected": false, "server": "Apache/2.4.37 (centos) 
OpenSSL/1.1.1k mod_auth_gssapi/1.6.1 mod_wsgi/4.6.4 Python/3.6", "status": 500, 
"url": "http://localhost/ovirt-engine/services/health"}
[ INFO ] TASK [ovirt.ovirt.engine_setup : Clean temporary files]
[ INFO ] changed: [localhost -> 192.168.222.12]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost -> 192.168.222.12]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush 
dirty buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a 
failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a 
failure deploying the engine on the local engine VM. The system may not be 
provisioned according to the playbook results: please check the logs for the 
issue, fix accordingly or re-deploy from scratch.\n"}
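
Since the health page is served by the engine itself, the usual place to look (a generic suggestion, no specific cause assumed) is on the local engine VM that the playbook provisions:

# from the host, during deployment the local engine VM is reachable on 192.168.222.12
ssh root@192.168.222.12
less /var/log/ovirt-engine/engine.log
less /var/log/ovirt-engine/server.log
journalctl -u ovirt-engine --since "1 hour ago"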
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MDMO5CPVXFBXJPQYIL3FFNB4FVNSCLYJ/


[ovirt-users] Re: Deployment suddenly fails at engine check

2022-04-30 Thread Harry O
Hi Martin,
Thanks for your reply,
Here is some output from the process:


[root@hej ~]# dnf downgrade postgresql-jdbc
Last metadata expiration check: 0:37:09 ago on Sat 30 Apr 2022 09:42:39 AM CEST.
Dependencies resolved.
================================================================================
 Package            Architecture   Version            Repository        Size
================================================================================
Downgrading:
 postgresql-jdbc    noarch         42.2.3-3.el8_2     appstream        710 k

Transaction Summary
================================================================================
Downgrade  1 Package

Total download size: 710 k
Is this ok [y/N]: y
Downloading Packages:
postgresql-jdbc-42.2.3-3.el8_2.noarch.rpm          795 kB/s | 710 kB     00:00
--------------------------------------------------------------------------------
Total                                              710 kB/s | 710 kB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing    :                                                          1/1
  Downgrading  : postgresql-jdbc-42.2.3-3.el8_2.noarch                    1/2
  Cleanup      : postgresql-jdbc-42.2.14-1.el8.noarch                     2/2
  Verifying    : postgresql-jdbc-42.2.3-3.el8_2.noarch                    1/2
  Verifying    : postgresql-jdbc-42.2.14-1.el8.noarch                     2/2

Downgraded:
  postgresql-jdbc-42.2.3-3.el8_2.noarch

Complete!
[root@hej ~]# systemctl restart ovirt-engine


[root@hej ~]# 
[root@hej ~]# curl http://127.0.0.1:8706/management
Error 401 - Unauthorized
[root@hej ~]#
[root@hej ~]# 
[root@hej ~]# curl http://127.0.0.1:8706
[root@hej ~]#

[root@hej ~]# systemctl status ovirt-engine
● ovirt-engine.service - oVirt Engine
   Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled; 
vendor preset: disabled)
   Active: active (running) since S

[ovirt-users] Re: Deployment suddenly fails at engine check

2022-04-30 Thread Harry O
When I time the postgresql-jdbc downgrade right with the engine deployment, it 
runs perfectly - thanks again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AF6HKJP2E23KVBN7WDUN3QN5QGZC46WF/


[ovirt-users] Re: Deployment suddenly fails at engine check

2022-05-16 Thread Harry O
I'm not doing an oVirt update, I'm just doing a fresh deployment.

I just SSH to the engine via the IP in /etc/hosts, and then run the downgrade 
command after the wizard has run its upgrade.
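
Pulling the thread's workaround together as one sequence (a sketch based only on what was posted above: the 192.168.222.12 address from the deployment log, and the downgrade and restart shown earlier), run while the deployment wizard is paused:

ssh root@192.168.222.12          # local engine VM, as listed in /etc/hosts during deployment
dnf downgrade -y postgresql-jdbc
systemctl restart ovirt-engine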
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQMC4MQE56PABXN6VQ7XBPXQ4NKYCWMK/