[ovirt-users] Resize hosted-engine disk

2022-09-15 Thread Jorge Visentini
Hi all.
Is it possible to resize the disk of the HostedEngine VM?
I resized the LUN on Storage, but I can't resize the disk/LVM.

How is it possible to resize the disk?
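(For context, a hedged sketch of the host-side steps that are usually needed before oVirt/LVM can see a grown LUN; device and map names are placeholders, and this is not an authoritative answer:)

echo 1 > /sys/block/sdX/device/rescan   # rescan each SCSI path of the LUN
multipathd resize map mpathN            # refresh the multipath map to the new size
pvresize /dev/mapper/mpathN             # let the storage-domain VG see the extra space
# The HostedEngine disk itself would then be extended from the engine side
# (e.g. the disk edit dialog), not with lvextend on the host.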

All the best!
-- 
Att,
Jorge Visentini
+55 55 98432-9868
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FA4KDICVP34KVHQTY2QXSQOP6GUVHDAZ/


[ovirt-users] Re: Error during deployment of ovirt-engine

2022-09-15 Thread Yedidyah Bar David
On Thu, Sep 15, 2022 at 10:46 PM Jonas  wrote:
>
> Ok, thanks for the info. Do you have any further information?

Not sure what you mean. How to deploy HE using the CLI? Here:

https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/index.html
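(A minimal sketch of the CLI entry point, assuming the host already has the setup package; the guide above covers the full flow:)

dnf install -y ovirt-hosted-engine-setup   # on the deployment host
hosted-engine --deploy                     # interactive CLI deployment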

Best regards,

>
> On 9/15/22 09:11, Yedidyah Bar David wrote:
> > On Wed, Sep 14, 2022 at 11:31 PM Jonas  wrote:
> >> Ok even after resetting the password through SSH it is not accepted on the 
> >> web page.
> >>
> >> [root@ovirt-engine-test ~]# ovirt-aaa-jdbc-tool user password-reset admin 
> >> --password-valid-to="-09-14 20:07:39Z" --password="interactive:" 
> >> --force
> >> Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
> >> Password:
> >> Reenter password:
> >> updating user admin...
> >> user updated successfully
> >>
> >> On 9/14/22 21:40, Jonas wrote:
> >>
> >> Hello all
> >>
> >> I'm trying to deploy an oVirt Engine through the cockpit interface. 
> >> Unfortunately the deployment fails with the following error:
> > Sorry, but the cockpit hosted-engine deployment is broken. Please use
> > the CLI. Thanks.
> >
> > Best regards,
>


-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VT377NH2YGJJYKQBS66ZMOLZ26WH6GHW/


[ovirt-users] Re: Error during deployment of ovirt-engine

2022-09-15 Thread Jonas

Ok, thanks for the info. Do you have any further information?

On 9/15/22 09:11, Yedidyah Bar David wrote:

On Wed, Sep 14, 2022 at 11:31 PM Jonas  wrote:

Ok even after resetting the password through SSH it is not accepted on the web 
page.

[root@ovirt-engine-test ~]# ovirt-aaa-jdbc-tool user password-reset admin 
--password-valid-to="-09-14 20:07:39Z" --password="interactive:" --force
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Password:
Reenter password:
updating user admin...
user updated successfully
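(A hedged side note: if the reset password is still rejected on the web page, the account may simply be locked after the earlier failed attempts; ovirt-aaa-jdbc-tool can show and clear that:)

ovirt-aaa-jdbc-tool user show admin     # inspect lock/expiry attributes
ovirt-aaa-jdbc-tool user unlock admin   # clear a lock caused by failed logins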

On 9/14/22 21:40, Jonas wrote:

Hello all

I'm trying to deploy an oVirt Engine through the cockpit interface. 
Unfortunately the deployment fails with the following error:

Sorry, but the cockpit hosted-engine deployment is broken. Please use
the CLI. Thanks.

Best regards,

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NXQ4MRPKCWQIEVJYNA2BRBD5TCU6BDRH/


[ovirt-users] Re: Error during deployment of ovirt-engine

2022-09-15 Thread Jonas

Sure (it's running on the latest release of oVirt Node):
cockpit: 273
cockpit-bridge: 273
cockpit-ovirt-dashboard: 0.16.2
cockpit-storaged: 273
cockpit-system: 273
cockpit-ws: 273

main.yml:

---
# Default vars
# Do not change these variables
# Changes in this section are NOT supported

he_cmd_lang:
  LANGUAGE: en_US.UTF-8
  LANG: en_US.UTF-8
  LC_MESSAGES: en_US.UTF-8
  LC_ALL: en_US.UTF-8

he_vm_name: HostedEngine
he_data_center: Default
he_cluster: Default
he_local_vm_dir_path: /var/tmp
he_local_vm_dir_prefix: localvm
he_appliance_ova: ''
he_root_ssh_pubkey: ''
he_root_ssh_access: 'yes'
he_apply_openscap_profile: false
he_openscap_profile_name: stig
he_enable_fips: false
he_cdrom: ''
he_console_type: vnc
he_video_device: vga
he_graphic_device: vnc
he_emulated_machine: pc
he_minimal_mem_size_MB: 4096
he_minimal_disk_size_GB: 50
he_mgmt_network: ovirtmgmt
he_storage_domain_name: hosted_storage
he_ansible_host_name: localhost
he_ipv4_subnet_prefix: "192.168.222"
he_ipv6_subnet_prefix: fd00:1234:5678:900
he_webui_forward_port: 6900  # by default already open for VM console
he_reserved_memory_MB: 512
he_avail_memory_grace_MB: 200

engine_psql: /usr/share/ovirt-engine/dbscripts/engine-psql.sh

he_host_ip: null
he_host_name: null
he_host_address: null
he_cloud_init_host_name: null
he_cloud_init_domain_name: null

he_smtp_port: 25
he_smtp_server: localhost
he_dest_email: root@localhost
he_source_email: root@localhost

he_force_ip4: false
he_force_ip6: false

he_pause_before_engine_setup: false
he_pause_host: false
he_pause_after_failed_add_host: true
he_pause_after_failed_restore: true
he_debug_mode: false

## Mandatory variables:

he_bridge_if: null
he_fqdn: null
he_mem_size_MB: max
he_vcpus: max
he_disk_size_GB: 61

he_enable_libgfapi: false
he_enable_hc_gluster_service: false
he_vm_mac_addr: null
he_remove_appliance_rpm: true
he_pki_renew_on_restore: false
he_enable_keycloak: true

## Storage domain vars:
he_domain_type: null  # can be: nfs | iscsi | glusterfs | fc
he_storage_domain_addr: null

## NFS vars:
## Defaults are null, user should specify if NFS is chosen
he_mount_options: ''
he_storage_domain_path: null
he_nfs_version: auto  # can be: auto, v4 or v3
he_storage_if: null

## ISCSI vars:
## Defaults are null, user should specify if ISCSI is chosen
he_iscsi_username: null
he_iscsi_password: null
he_iscsi_discover_username: null
he_iscsi_discover_password: null
he_iscsi_target: null
he_lun_id: null
he_iscsi_portal_port: null
he_iscsi_portal_addr: null
he_iscsi_tpgt: null
he_discard: false

# Define if using STATIC ip configuration
he_vm_ip_addr: null
he_vm_ip_prefix: null
he_dns_addr: null  # up to 3 DNS servers IPs can be added
he_vm_etc_hosts: false  # user can add lines to /etc/hosts on the engine VM
he_gateway: null
he_network_test: 'dns'  # can be: 'dns', 'ping', 'tcp' or 'none'
he_tcp_t_address: null
he_tcp_t_port: null

# ovirt-hosted-engine-setup variables
he_just_collect_network_interfaces: false
he_libvirt_authfile: '/etc/ovirt-hosted-engine/virsh_auth.conf'
he_offline_deployment: false
he_additional_package_list: []

# *** Do Not Use On Production Environment ***
# ** Used for testing ONLY ***
he_requirements_check_enabled: true
he_memory_requirements_check_enabled: true
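(For reference, a hedged sketch of how these defaults are normally overridden: hosted-engine --deploy asks for the mandatory he_* values interactively, and a previously generated answer file can be replayed; the paths and values below are only examples:)

hosted-engine --deploy --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-latest.conf
# or, when driving the ovirt.ovirt.hosted_engine_setup role directly with ansible
# (hypothetical playbook name, illustrative values):
ansible-playbook -i localhost, he_deploy.yml -e he_fqdn=engine.example.com -e he_bridge_if=eth0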



On 9/15/22 09:18, Ritesh Chikatwar wrote:

Hey Jonas,


What is the cockpit version you are using? And also can you share this 
file with me 
(/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/defaults/main.yml)?


On Thu, Sep 15, 2022 at 12:42 PM Yedidyah Bar David  
wrote:


On Wed, Sep 14, 2022 at 11:31 PM Jonas  wrote:
>
> Ok even after resetting the password through SSH it is not
accepted on the web page.
>
> [root@ovirt-engine-test ~]# ovirt-aaa-jdbc-tool user
password-reset admin --password-valid-to="-09-14 20:07:39Z"
--password="interactive:" --force
> Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
> Password:
> Reenter password:
> updating user admin...
> user updated successfully
>
> On 9/14/22 21:40, Jonas wrote:
>
> Hello all
>
> I'm trying to deploy an oVirt Engine through the cockpit
interface. Unfortunately the deployment fails with the following
error:

Sorry, but the cockpit hosted-engine deployment is broken. Please use
the CLI. Thanks.

Best regards,
-- 
Didi

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/MKXPPQJEFHKRJXFM56IULJ37K7JYSCWX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https:

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Strahil Nikolov via Users
Set it back to the original value.
The option picks the local brick for reading instead of picking the fastest one 
(which could be either a remote or a local one) which could help with bandwidth 
issues.

Can you provide details about the bricks, like HW RAID/JBOD, RAID type (0, 5, 6, 10), stripe size, stripe width, filesystem (I expect XFS, but it's nice to know), etc.?
Also share the gluster client log from the node where the backup proxy is. 
Should be something like: 
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-:_gv1.log
Best Regards,
Strahil Nikolov
 
  On Thu, Sep 15, 2022 at 17:01, Diego Ercolani wrote:  
During this time (Hosted-Engine hung), this appears on the host where it's supposed to have the Hosted-Engine running:
2022-09-15 13:59:27,762+ WARN  (Thread-10) [virt.vm] 
(vmId='8486ed73-df34-4c58-bfdc-7025dec63b7f') Shutdown by QEMU Guest Agent 
failed (agent probably inactive) (vm:5490)
2022-09-15 13:59:27,762+ WARN  (Thread-10) [virt.vm] 
(vmId='8486ed73-df34-4c58-bfdc-7025dec63b7f') Shutting down with guest agent 
FAILED (vmpowerdown:115)
2022-09-15 13:59:28,780+ ERROR (qgapoller/1) [virt.periodic.Operation] 
> 
operation failed (periodic:204)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 202, in 
__call__
    self._func()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 
493, in _poller
    vm_id, self._qga_call_get_vcpus(vm_obj))
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 
814, in _qga_call_get_vcpus
    if 'online' in vcpus:
TypeError: argument of type 'NoneType' is not iterable
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZMZ3V5E4ZFNWPW3R74ZXYFZA5RR3BV7R/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RANBGQMMFTPMRL4PIMFPGUBJPKGQ7CZ2/


[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Diego Ercolani
During this time (Hosted-Engine hung), this appears on the host where it's supposed to have the Hosted-Engine running:
2022-09-15 13:59:27,762+ WARN  (Thread-10) [virt.vm] 
(vmId='8486ed73-df34-4c58-bfdc-7025dec63b7f') Shutdown by QEMU Guest Agent 
failed (agent probably inactive) (vm:5490)
2022-09-15 13:59:27,762+ WARN  (Thread-10) [virt.vm] 
(vmId='8486ed73-df34-4c58-bfdc-7025dec63b7f') Shutting down with guest agent 
FAILED (vmpowerdown:115)
2022-09-15 13:59:28,780+ ERROR (qgapoller/1) [virt.periodic.Operation] 
> 
operation failed (periodic:204)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/periodic.py", line 202, in 
__call__
self._func()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 
493, in _poller
vm_id, self._qga_call_get_vcpus(vm_obj))
  File "/usr/lib/python3.6/site-packages/vdsm/virt/qemuguestagent.py", line 
814, in _qga_call_get_vcpus
if 'online' in vcpus:
TypeError: argument of type 'NoneType' is not iterable
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZMZ3V5E4ZFNWPW3R74ZXYFZA5RR3BV7R/


[ovirt-users] Re: Certificate doesn't contain valid subject alternative name

2022-09-15 Thread Andrei Verovski


> On 15 Sep 2022, at 15:54, Strahil Nikolov  wrote:
> 
> Can you live migrate the VM?

Unfortunately not; I have disabled this feature on all my VMs.

If this error is not critical, I’ll continue to use this node as is.

> 
> Best Regards, Strahil Nikolov 
> 
> On Thu, Sep 15, 2022 at 12:17, Andrei Verovski
>  wrote:
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XNFRWGBB4Y6ECIPSRRFRMGNAQBZDUISQ/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2IRROTPCE3TME4X4D7B7LVTRWXKJNNIB/


[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Diego Ercolani
The current set is:
[root@ovirt-node2 ~]# gluster volume get glen cluster.choose-local| awk 
'/choose-local/ {print $2}'
off
[root@ovirt-node2 ~]# gluster volume get gv0 cluster.choose-local| awk 
'/choose-local/ {print $2}'
off
[root@ovirt-node2 ~]# gluster volume get gv1 cluster.choose-local| awk 
'/choose-local/ {print $2}'
off

It is stated in the "virt" group:
/var/lib/glusterd/groups/virt:cluster.choose-local=off

I set cluster.choose-local to true on every gluster volume and started migrating the Hosted Engine around... a bunch of VMs froze and after a while the Hosted-Engine hung as well.

To complete the picture of the environment, here is the full option set for glen (the Hosted-Engine volume) and for gv0 and gv1 (the volumes used by VMs):

[root@ovirt-node3 ~]# gluster volume info gv1
Volume Name: gv1
Type: Replicate
Volume ID: 863221f4-e11c-4589-95e9-aa3948e177f5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt-node2.ovirt:/brickgv1/gv1
Brick2: ovirt-node3.ovirt:/brickgv1/gv1
Brick3: ovirt-node4.ovirt:/dati/gv1 (arbiter)
Options Reconfigured:
storage.build-pgfid: off
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: true
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: on
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ARHV3AX7I7NZ5LYMZR7FHBXMENHSVVYN/


[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Strahil Nikolov via Users
Can you test the backup after setting:
status=$(gluster volume get VOLNAME cluster.choose-local | awk '/choose-local/ {print $2}')
gluster volume set VOLNAME cluster.choose-local true
And after the test:
gluster volume set VOLNAME cluster.choose-local $status
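Put together, a sketch of that save/set/restore flow for the three volumes named in this thread (glen, gv0, gv1) could look like:

for vol in glen gv0 gv1; do
  status=$(gluster volume get "$vol" cluster.choose-local | awk '/choose-local/ {print $2}')
  gluster volume set "$vol" cluster.choose-local true
  echo "$vol: choose-local was $status"   # keep this value to set it back after the test
done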
Best Regards,
Strahil Nikolov
 
 
  On Thu, Sep 15, 2022 at 12:26, Diego Ercolani wrote:  
Sorry, I see that the editor strips away all the leading spaces that indent the timestamps.
I retried the test, hoping to find the same error, and I found it, on node3. I changed the code of the read routine:
cd /rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1; while sleep 0.1 ; do 
date +'Timestamp:%s.%N'; cat testfile  ; done

Also I have to point out that in my gluster configuration node2 and node3 are replicating, while node4 is the arbiter.

I find this:
node2:
1663233449.088250919
1663233449.192357508
1663233449.296979848
1663233449.401279036
1663233449.504945285
1663233449.609107728
1663233449.713468581
1663233449.817435890
1663233449.922132348
1663233450.030449768
1663233450.134975317
1663233450.239171022
1663233450.342905278
1663233450.447466303
1663233450.551867180
1663233450.658387123
1663233450.762761972
1663233450.868063254
1663233450.973718716
1663233451.077074998
1663233451.181540916
1663233451.286831549
1663233451.393060700
1663233451.500488204
1663233451.606233103
1663233451.711308978
1663233451.816455012
1663233451.922142384
1663233452.028786138
1663233452.134080858
1663233452.239052098
1663233452.343540758
1663233452.449015706
1663233452.553832377
1663233452.658255495
1663233452.762774092
1663233452.866525770
1663233452.970784862
1663233453.075297458
1663233453.178379039
1663233453.281728609
1663233453.385722608
1663233453.489965321
1663233453.593885612
1663233453.698436388
1663233453.802415640
1663233453.906987275
1663233454.010658544
1663233454.114877122
1663233454.218459344
1663233454.322761948
1663233454.428025821
1663233454.533464752
1663233454.637652754
1663233454.741783087
1663233454.845600527
1663233454.950286885
1663233455.055143240
1663233455.161169524
1663233455.265582394
1663233455.369963173
1663233455.475453048
1663233455.580044209
1663233455.684503325
1663233455.788750947
1663233455.894135415
1663233455.998738750


node3:
Timestamp:1663233450.000172185
1663233449.296979848
Timestamp:1663233450.101871259
1663233449.296979848
Timestamp:1663233450.204006554
1663233449.296979848
Timestamp:1663233450.306014420
1663233449.296979848
Timestamp:1663233450.407890669
1663233450.342905278
Timestamp:1663233450.511435794
1663233450.342905278
Timestamp:1663233450.613144044
1663233450.342905278
Timestamp:1663233450.714936282
1663233450.342905278
Timestamp:1663233450.816689957
1663233450.342905278
Timestamp:1663233450.919563686
1663233450.342905278
Timestamp:1663233451.021558628
1663233450.342905278
Timestamp:1663233451.123617850
1663233450.342905278
Timestamp:1663233451.225769366
1663233450.342905278
Timestamp:1663233451.327726226
1663233450.342905278
Timestamp:1663233451.429934369
1663233451.393060700
Timestamp:1663233451.532945857
1663233451.393060700
Timestamp:1663233451.634935468
1663233451.393060700
Timestamp:1663233451.737058041
1663233451.393060700
Timestamp:1663233451.839167797
1663233451.393060700
Timestamp:1663233451.941486148
1663233451.393060700
Timestamp:1663233452.043288336
1663233451.393060700
Timestamp:1663233452.145090644
1663233451.393060700
Timestamp:1663233452.246825425
1663233451.393060700
Timestamp:1663233452.348501234
1663233451.393060700
Timestamp:1663233452.450351853
Timestamp:1663233452.553106458
Timestamp:1663233452.655222156
Timestamp:1663233452.757315704
Timestamp:1663233452.859298562
Timestamp:1663233452.961655817
Timestamp:1663233453.063383043
Timestamp:1663233453.165180993
Timestamp:1663233453.266883792
Timestamp:1663233453.368890215
Timestamp:1663233453.470586924
1663233453.385722608
Timestamp:1663233453.573171648
1663233453.385722608
Timestamp:1663233453.675160288
1663233453.385722608
Timestamp:1663233453.777281257
1663233453.385722608
Timestamp:1663233453.879306084
1663233453.385722608
Timestamp:1663233453.981588858
1663233453.385722608
Timestamp:1663233454.083371309
1663233453.385722608
Timestamp:1663233454.185268095
1663233453.385722608
Timestamp:1663233454.287256013
1663233453.385722608
Timestamp:1663233454.389068540
1663233453.385722608
Timestamp:1663233454.490809573
1663233454.428025821
Timestamp:1663233454.593597380
1663233454.428025821
Timestamp:1663233454.695329646
1663233454.428025821
Timestamp:1663233454.797029330
1663233454.428025821
Timestamp:1663233454.899000216
1663233454.428025821

node4:
Timestam:1663233450.043398632
1663233449.817435890
Timestam:1663233450.144889219
1663233449.817435890
Timestam:1663233450.246423969
1663233449.817435890
Timestam:1663233450.347730771
1663233449.817435890
Timestam:1663233450.449109919
1663233449.817435890
Timestam:1663233450.550659616
1663233449.817435890
Timestam:1663233450.652173237
1663233449.817435890
Timestam:1663233450.753610724
1663233449.817435890
Timestam:1663233450.855978621

[ovirt-users] Re: Certificate doesn't contain valid subject alternative name

2022-09-15 Thread Strahil Nikolov via Users
Can you live migrate the VM?
Best Regards, Strahil Nikolov 
 
 
  On Thu, Sep 15, 2022 at 12:17, Andrei Verovski wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XNFRWGBB4Y6ECIPSRRFRMGNAQBZDUISQ/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J53X4ZYPJTPVBPVZO65RXEDZTQSK6DKZ/


[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Diego Ercolani
Sorry, I see that the editor strips away all the leading spaces that indent the timestamps.
I retried the test, hoping to find the same error, and I found it, on node3. I changed the code of the read routine:
cd /rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1; while sleep 0.1 ; do 
date +'Timestamp:%s.%N'; cat testfile  ; done

Also I have to point out that in my gluster configuration node2 and node3 are replicating, while node4 is the arbiter.

I find this:
node2:
1663233449.088250919
1663233449.192357508
1663233449.296979848
1663233449.401279036
1663233449.504945285
1663233449.609107728
1663233449.713468581
1663233449.817435890
1663233449.922132348
1663233450.030449768
1663233450.134975317
1663233450.239171022
1663233450.342905278
1663233450.447466303
1663233450.551867180
1663233450.658387123
1663233450.762761972
1663233450.868063254
1663233450.973718716
1663233451.077074998
1663233451.181540916
1663233451.286831549
1663233451.393060700
1663233451.500488204
1663233451.606233103
1663233451.711308978
1663233451.816455012
1663233451.922142384
1663233452.028786138
1663233452.134080858
1663233452.239052098
1663233452.343540758
1663233452.449015706
1663233452.553832377
1663233452.658255495
1663233452.762774092
1663233452.866525770
1663233452.970784862
1663233453.075297458
1663233453.178379039
1663233453.281728609
1663233453.385722608
1663233453.489965321
1663233453.593885612
1663233453.698436388
1663233453.802415640
1663233453.906987275
1663233454.010658544
1663233454.114877122
1663233454.218459344
1663233454.322761948
1663233454.428025821
1663233454.533464752
1663233454.637652754
1663233454.741783087
1663233454.845600527
1663233454.950286885
1663233455.055143240
1663233455.161169524
1663233455.265582394
1663233455.369963173
1663233455.475453048
1663233455.580044209
1663233455.684503325
1663233455.788750947
1663233455.894135415
1663233455.998738750


node3:
Timestamp:1663233450.000172185
1663233449.296979848
Timestamp:1663233450.101871259
1663233449.296979848
Timestamp:1663233450.204006554
1663233449.296979848
Timestamp:1663233450.306014420
1663233449.296979848
Timestamp:1663233450.407890669
1663233450.342905278
Timestamp:1663233450.511435794
1663233450.342905278
Timestamp:1663233450.613144044
1663233450.342905278
Timestamp:1663233450.714936282
1663233450.342905278
Timestamp:1663233450.816689957
1663233450.342905278
Timestamp:1663233450.919563686
1663233450.342905278
Timestamp:1663233451.021558628
1663233450.342905278
Timestamp:1663233451.123617850
1663233450.342905278
Timestamp:1663233451.225769366
1663233450.342905278
Timestamp:1663233451.327726226
1663233450.342905278
Timestamp:1663233451.429934369
1663233451.393060700
Timestamp:1663233451.532945857
1663233451.393060700
Timestamp:1663233451.634935468
1663233451.393060700
Timestamp:1663233451.737058041
1663233451.393060700
Timestamp:1663233451.839167797
1663233451.393060700
Timestamp:1663233451.941486148
1663233451.393060700
Timestamp:1663233452.043288336
1663233451.393060700
Timestamp:1663233452.145090644
1663233451.393060700
Timestamp:1663233452.246825425
1663233451.393060700
Timestamp:1663233452.348501234
1663233451.393060700
Timestamp:1663233452.450351853
Timestamp:1663233452.553106458
Timestamp:1663233452.655222156
Timestamp:1663233452.757315704
Timestamp:1663233452.859298562
Timestamp:1663233452.961655817
Timestamp:1663233453.063383043
Timestamp:1663233453.165180993
Timestamp:1663233453.266883792
Timestamp:1663233453.368890215
Timestamp:1663233453.470586924
1663233453.385722608
Timestamp:1663233453.573171648
1663233453.385722608
Timestamp:1663233453.675160288
1663233453.385722608
Timestamp:1663233453.777281257
1663233453.385722608
Timestamp:1663233453.879306084
1663233453.385722608
Timestamp:1663233453.981588858
1663233453.385722608
Timestamp:1663233454.083371309
1663233453.385722608
Timestamp:1663233454.185268095
1663233453.385722608
Timestamp:1663233454.287256013
1663233453.385722608
Timestamp:1663233454.389068540
1663233453.385722608
Timestamp:1663233454.490809573
1663233454.428025821
Timestamp:1663233454.593597380
1663233454.428025821
Timestamp:1663233454.695329646
1663233454.428025821
Timestamp:1663233454.797029330
1663233454.428025821
Timestamp:1663233454.899000216
1663233454.428025821

node4:
Timestam:1663233450.043398632
1663233449.817435890
Timestam:1663233450.144889219
1663233449.817435890
Timestam:1663233450.246423969
1663233449.817435890
Timestam:1663233450.347730771
1663233449.817435890
Timestam:1663233450.449109919
1663233449.817435890
Timestam:1663233450.550659616
1663233449.817435890
Timestam:1663233450.652173237
1663233449.817435890
Timestam:1663233450.753610724
1663233449.817435890
Timestam:1663233450.855978621
1663233450.762761972
Timestam:1663233450.958988505
1663233450.762761972
Timestam:1663233451.060495133
1663233450.762761972
Timestam:1663233451.162022459
1663233450.762761972
Timestam:1663233451.263371279
1663233450.762761972
Timestam:1663233451.364879118
1663233450.762761972
Timestam:1663233451.466311416
1663233450.762761972
T

[ovirt-users] Re: Certificate doesn't contain valid subject alternative name

2022-09-15 Thread Andrei Verovski
Hi,

I unexpectedly got the same error on oVirt 4.5.2.4.
I can't stop the node right now to enroll new certificates; it's running essential VMs.
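(A hedged way to check whether the host certificate really lacks a SAN, without stopping anything; the path below is the usual VDSM certificate location and may differ on your setup:)

openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -text | grep -A1 'Subject Alternative Name'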


> On 1 Sep 2022, at 12:02, Ayansh Rocks  wrote:
> 
> Any one faced this issue or any solution from ovirt dev ?
> 
> On Mon, Aug 22, 2022 at 11:16 PM Ayansh Rocks wrote:
> Hi All,
> 
> I am getting the below alert on my single-node oVirt. Can anyone tell me more
> about it and how I can fix it? I am using oVirt 4.3.8.2.
> 
> Certificate of host host-name.example.com  is 
> invalid. The certificate doesn't contain valid subject alternative name, 
> please enroll new certificate for the host.
> 
> Thank you
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EAMMOIAQVPNZBES72TCCPTG6SAFKHVT5/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XNFRWGBB4Y6ECIPSRRFRMGNAQBZDUISQ/


[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Diego Ercolani
Thank you for the analysis:

The version is the last distributed in the ovirt@centos8 distribution:
[root@ovirt-node2 ~]# rpm -qa | grep '\(glusterfs-server\|ovirt-node\)'
ovirt-node-ng-image-update-placeholder-4.5.2-1.el8.noarch
glusterfs-server-10.2-1.el8s.x86_64
ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
python3-ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
ovirt-node-ng-image-update-4.5.2-1.el8.noarch

[root@ovirt-node3 ~]# rpm -qa | grep '\(glusterfs-server\|ovirt-node\)'
ovirt-node-ng-image-update-placeholder-4.5.2-1.el8.noarch
glusterfs-server-10.2-1.el8s.x86_64
ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
python3-ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
ovirt-node-ng-image-update-4.5.2-1.el8.noarch

[root@ovirt-node4 ~]# rpm -qa | grep '\(glusterfs-server\|ovirt-node\)'
ovirt-node-ng-image-update-placeholder-4.5.2-1.el8.noarch
glusterfs-server-10.2-1.el8s.x86_64
ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
python3-ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
ovirt-node-ng-image-update-4.5.2-1.el8.noarch

During backups (or whenever there is I/O, even not too intensive judging by the SSD LEDs), the only thing I noticed is that sometimes there is a sort of lag:
I issue "gluster volume heal glen|gv0|gv1 info" and the answer takes 4-5 seconds to come back, even though it reports 0 missing objects... the nodes are always connected, e.g.:
Brick ovirt-node2.ovirt:/brickhe/_glen
Status: Connected
Number of entries: 0

Brick ovirt-node3.ovirt:/brickhe/glen
Status: Connected
Number of entries: 0

Brick ovirt-node4.ovirt:/dati/_glen
Status: Connected
Number of entries: 0
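(To put a number on that lag, a simple sketch using the same command as above:)

for vol in glen gv0 gv1; do
  echo "== $vol =="
  time gluster volume heal "$vol" info > /dev/null   # the 'real' time shows the 4-5 second pause
done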

For hte "rate limit" I didn't work on the QOS, but the destination is an NFS 
sata raid5 NAS publisced via 1Gb link so I think I have a 20MB/s "cap" by 
architecture, the gluster bricks are all built by SSD SATA drives, I recorded a 
troughput of 200MB/s.
I also tried to monitor performace via iotop command but I didn't recorded a 
"band problem" and even monitored network via iftop recording no band 
saturation and no errors.

Searching in the gluster mailing list
(https://lists.gluster.org/pipermail/gluster-users/2022-September/040063.html)
I tried the same test, but with writes and reads every 1/10 of a second:
[root@ovirt-node2 ~]# su - vdsm -s /bin/bash
Last login: Wed Sep 14 15:33:45 UTC 2022 on pts/1
nodectl must be run as root!
nodectl must be run as root!
cd /rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1; while sleep 0.1; do 
date +'%s.%N' | tee testfile ; done

[root@ovirt-node3 ~]# su - vdsm -s /bin/bash
nodectl must be run as root!
nodectl must be run as root!
[vdsm@ovirt-node3 ~]$ cd 
/rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1; while sleep 0.1 ; do 
date +' %s.%N'; cat testfile  ; done

[root@ovirt-node4 ~]# su - vdsm -s /bin/bash
Last login: Wed Aug 24 16:52:55 UTC 2022
nodectl must be run as root!
nodectl must be run as root!
[vdsm@ovirt-node4 ~]$ cd 
/rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1; while sleep 0.1 ; do 
date +' %s.%N'; cat testfile  ; done

The result is that on the nodes reading from glusterfs I record an update only about once per second, more or less.
To report the test I selected the timestamps for node2 (the write node) between 1663228352 and 1663228356, and for node3 and node4 between 1663228353 and 1663228356:

node2:
1663228352.589998302
1663228352.695887198
1663228352.801681699
1663228352.907548634
1663228353.011931276
1663228353.115904115
1663228353.222383590
1663228353.329941123
1663228353.436480791
1663228353.540536995
1663228353.644858473
1663228353.749470221
1663228353.853969491
1663228353.958703186
1663228354.062732971
1663228354.166616934
1663228354.270398507
1663228354.373989214
1663228354.477149100
1663228354.581862187
1663228354.686177524
1663228354.790362507
1663228354.894673446
1663228354.999136257
1663228355.102889616
1663228355.207043913
1663228355.312522545
1663228355.416667384
1663228355.520897473
1663228355.624582255
1663228355.728590069
1663228355.832979634
1663228355.937309737
1663228356.042289521
1663228356.146565174
1663228356.250773672
1663228356.356361818
1663228356.460048755
1663228356.565054968
1663228356.669126850
1663228356.773807899
1663228356.878011739
1663228356.983842597

node3:
 1663228353.027991911
1663228352.064562785
 1663228353.129696675
1663228353.115904115
 1663228353.232351572
1663228353.115904115
 1663228353.334188748
1663228353.115904115
 1663228353.436208688
1663228353.115904115
 1663228353.538268493
1663228353.115904115
 1663228353.641266519
1663228353.115904115
 1663228353.743094997
1663228353.115904115
 1663228353.845244131
1663228353.115904115
 1663228353.947049766
1663228353.115904115
 1663228354.048876741
1663228353.115904115
 1663228354.150979017
1663228354.062732971
 1663228354.254198339
1663228354.062732971
 1663228354.356197640
1663228354.270398507
 1663228354.459541685
1663228354.270398507
 1663228354.561548541
1663228354.270398507
 1663228354.664280563
1663228354.270398507
 1663228354.766557007
1663228354.270398507
 1663228354.8

[ovirt-users] Re: Error during deployment of ovirt-engine

2022-09-15 Thread Ritesh Chikatwar
Hey Jonas,


What is the cockpit version you are using? And also can you share this file
with me (
/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/defaults/main.yml
)?

On Thu, Sep 15, 2022 at 12:42 PM Yedidyah Bar David  wrote:

> On Wed, Sep 14, 2022 at 11:31 PM Jonas  wrote:
> >
> > Ok even after resetting the password through SSH it is not accepted on
> the web page.
> >
> > [root@ovirt-engine-test ~]# ovirt-aaa-jdbc-tool user password-reset
> admin --password-valid-to="-09-14 20:07:39Z" --password="interactive:"
> --force
> > Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
> > Password:
> > Reenter password:
> > updating user admin...
> > user updated successfully
> >
> > On 9/14/22 21:40, Jonas wrote:
> >
> > Hello all
> >
> > I'm trying to deploy an oVirt Engine through the cockpit interface.
> Unfortunately the deployment fails with the following error:
>
> Sorry, but the cockpit hosted-engine deployment is broken. Please use
> the CLI. Thanks.
>
> Best regards,
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MKXPPQJEFHKRJXFM56IULJ37K7JYSCWX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4LGDSR4LC5HTBQJLZVXCP3RLWJDJIEJJ/


[ovirt-users] Re: Error during deployment of ovirt-engine

2022-09-15 Thread Yedidyah Bar David
On Wed, Sep 14, 2022 at 11:31 PM Jonas  wrote:
>
> Ok even after resetting the password through SSH it is not accepted on the 
> web page.
>
> [root@ovirt-engine-test ~]# ovirt-aaa-jdbc-tool user password-reset admin 
> --password-valid-to="-09-14 20:07:39Z" --password="interactive:" --force
> Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
> Password:
> Reenter password:
> updating user admin...
> user updated successfully
>
> On 9/14/22 21:40, Jonas wrote:
>
> Hello all
>
> I'm trying to deploy an oVirt Engine through the cockpit interface. 
> Unfortunately the deployment fails with the following error:

Sorry, but the cockpit hosted-engine deployment is broken. Please use
the CLI. Thanks.

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MKXPPQJEFHKRJXFM56IULJ37K7JYSCWX/