[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
New event: Mar 28 14:37:32 ovirt-node3.ovirt vdsm[4288]: WARN executor state: count=5 workers={, , , , at 0x7fcdc0010898> timeout=7.5, duration=7.50 at 0x7fcdc0010208> discarded task#=189 at 0x7fcdc0010390>} Mar 28 14:37:32 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:32 829 [7438]: s4

[ovirt-users] Re: Failing "change Master storage domain" from gluster to iscsi

2023-03-28 Thread Diego Ercolani
It worked. I halted a node of the gluster cluster (the one that seemed problematic from the gluster point of view) and the change of the master storage domain worked ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to

[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
It's difficult to answer, as the engine normally "freezes" or is taken down during these events... I will try to get them next time

[ovirt-users] clock skew in hosted engine and VMs due to slow IO storage

2023-03-28 Thread Diego Ercolani
I don't know why (but I suppose it is related to storage speed) the virtual machines tend to show a clock skew ranging from some days to a century forward (2177). I see in the journal of the engine: Mar 28 13:19:40 ovirt-engine.ovirt NetworkManager[1158]: [1680009580.2045] dhcp4 (eth0): state

[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
No, now it seems "stable"; awaiting the next event

[ovirt-users] Failing "change Master storage domain" from gluster to iscsi

2023-03-28 Thread Diego Ercolani
In the current release of oVirt (4.5.4) I'm experiencing a failure changing the master storage domain from a gluster volume to anywhere else. The GUI reports a "general" error. Watching the engine log: 2023-03-28 11:51:16,601Z WARN

[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
I record entries like this in the journal of every node: Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 [4105511]: s9 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids Mar 28

[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
The scheduling policy was "Suspend Workload if needed" with parallel migration disabled. The problem is that the Engine (hosted on an external NFS domain implemented by a Linux box with no other VM on it) simply disappears. I have a single 10Gbps Intel ethernet link that I use to distribute

[ovirt-users] Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
Hello, in my installation I have to use poor storage... the oVirt installation doesn't handle such a case and begins to "balance" and move VMs around... taking too many snapshots and stressing the already poor performance, the whole cluster messes up. Why don't the VMs go into "Pause" state; why does the cluster prefer

[ovirt-users] watchdog: BUG: soft lockup - CPU#3 stuck for XXXs! mitigations?

2023-03-14 Thread Diego Ercolani
Hello, I noticed that when you have poor "storage" performance, all the VMs are affected by entries like the one in the subject. Searching around, there is a case at Red Hat: https://access.redhat.com/solutions/5427 suggesting how to address the issue (if it's not possible to have rocket

[ovirt-users] Re: Self Hosted Engine in unaligned state: node are

2023-03-13 Thread Diego Ercolani
Finally, it seems the problem was in the external NFS server: rpc.gssd failed and the NFS service became unresponsive... so the hosted-engine configuration domain wasn't reachable

[ovirt-users] Self Hosted Engine in unaligned state: node are

2023-03-13 Thread Diego Ercolani
Hello, ovirt-release-host-node-4.5.4-1.el8.x86_64. Today I found my cluster in an inconsistent state. I have three nodes: ovirt-node2, ovirt-node3, ovirt-node4, with the self-hosted engine deployed using external NFS storage. My first attempt was to launch hosted-engine --vm-status on the three nodes and I

[ovirt-users] Re: Regenerate DWH ovirt_engine_history

2023-01-24 Thread Diego Ercolani
It happened again; there must be some glitch

[ovirt-users] Re: Regenerate DWH ovirt_engine_history

2023-01-20 Thread Diego Ercolani
Hello, I reinstalled the hosted-engine (see https://lists.ovirt.org/archives/list/users@ovirt.org/thread/QOJYO43SVOMCX6NHDP2N6PF3EIXDTRLP/). I found a "glitch" in Grafana: somehow, navigating the (brand new) Grafana environment put the ovirt_history_db in an inconsistent state. The symptoms

[ovirt-users] Re: 4.5.4 Hosted-Engine: change hosted-engine storage

2023-01-19 Thread Diego Ercolani
I finished the process, but I think there should be a global revision of the architecture: too intricate and definitely not consumer-ready. This is what I did: -1. in my environment I have a pacemaker cluster between nodes (to work around gluster I also tried to implement an HA NFS, and one node

[ovirt-users] Re: 4.5.4 Hosted-Engine: change hosted-engine storage

2023-01-18 Thread Diego Ercolani
Thank you very much. I think the process is very overcomplicated. I successfully set up the engine by installing a fresh engine and then restoring the backup... but then, when I tried to register the new storage... everything went wrong. There should be some shortcuts that permit overcoming

[ovirt-users] Re: 4.5.4 Hosted-Engine: change hosted-engine storage

2023-01-17 Thread Diego Ercolani
Thank you, I'm currently trying to accomplish what you reported. But I'm currently stuck: I launched this: hosted-engine --deploy --4 --restore-from-file=/root/deploy_hosted_engine_230117/230117-scopeall-backup.tar.gz

[ovirt-users] 4.5.4 Hosted-Engine: change hosted-engine storage

2023-01-17 Thread Diego Ercolani
ovirt-engine-appliance-4.5-20221206133948 Hello, I have some trouble with my Gluster instance where the hosted-engine lives. I would like to copy data from that hosted engine and move it to another hosted-engine storage (I will try NFS). I think the main method is to put ovirt in

[ovirt-users] Re: Regenerate DWH ovirt_engine_history

2023-01-02 Thread Diego Ercolani
Finally it worked. After the steps previously described: 1. put the cluster in global maintenance 2. stop ovirt-engine and ovirt-engine-dwhd 3. in the table dwh_history_timekeeping @enginedb I changed the dwhUuid 4. launched engine-setup; engine-setup asked to disconnect a "phantom" DWH (I
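The four steps above can be sketched as a shell runbook. The table and service names come from the thread; the generated UUID and the `engine` database name are assumptions. Only the SQL construction is executed here; the cluster commands are shown as comments because they only make sense on the engine VM:

```shell
# Build the SQL for step 3; a fresh UUID is an assumed placeholder value.
NEW_UUID=$(cat /proc/sys/kernel/random/uuid)
SQL="UPDATE dwh_history_timekeeping SET var_value='${NEW_UUID}' WHERE var_name='dwhUuid';"
echo "step 3 SQL: $SQL"

# Steps 1, 2 and 4, to be run on the engine VM (shown, not executed here):
#   hosted-engine --set-maintenance --mode=global
#   systemctl stop ovirt-engine ovirt-engine-dwhd
#   sudo -u postgres psql engine -c "$SQL"
#   engine-setup
```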

[ovirt-users] Re: Regenerate DWH ovirt_engine_history

2023-01-02 Thread Diego Ercolani
I found the reference in that file: https://github.com/oVirt/ovirt-dwh/blob/master/docs/Notes-about-single-dwhd. Note that I verified the contents of the dwh_history_timekeeping table @engine db, and the dwhUuid is consistent with the one in the 10-setup-uuid.conf file. While

[ovirt-users] Re: Regenerate DWH ovirt_engine_history

2023-01-02 Thread Diego Ercolani
All the files seem to be correctly initialized. The only doubt is about the last directory you pointed to: /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/ there is a file: [root@ovirt-engine ovirt-engine-dwhd.conf.d]# ls -ltr total 28 -rw-r--r--. 1 root root 223 Oct 5 09:17 README -rw---. 1

[ovirt-users] Re: Regenerate DWH ovirt_engine_history

2023-01-02 Thread Diego Ercolani
Thank you for the info. > It's not the engine that is writing there, it's dwhd. The engine only > reads. Did you check /var/log/ovirt-engine-dwh/ ? What confuses me are these lines in /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log: lastErrorSent|2011-07-03 12:46:47.00 etlVersion|4.5.7

[ovirt-users] Re: Regenerate DWH ovirt_engine_history

2023-01-01 Thread Diego Ercolani
Sorry, some environment info: - ovirt hosted-engine (self hosted) - [root@ovirt-engine ~]# rpm -qa | grep engine ovirt-engine-setup-plugin-ovirt-engine-4.5.4-1.el8.noarch ovirt-engine-extension-aaa-ldap-1.4.6-1.el8.noarch ovirt-engine-backend-4.5.4-1.el8.noarch

[ovirt-users] Regenerate DWH ovirt_engine_history

2023-01-01 Thread Diego Ercolani
Hello to all and happy new year. My question is "simple": I need to "reset" the ovirt_engine_history database. I tried to use: engine-setup --reconfigure-optional-components after removing: - ovirt_engine_history - setting OVESETUP_DWH_CORE/enable=bool:True to "False"

[ovirt-users] Re: ovirt 4.5.3.2 - remove snapshot and powerdown VM leave the VM disk in an inconsistent state

2022-11-16 Thread Diego Ercolani
I saw you're involved in the resolution; if you need some information, or if I should raise logging to verbose, please just ask. Thank you

[ovirt-users] ovirt 4.5.3.2 - remove snapshot and powerdown VM leave the VM disk in an inconsistent state

2022-11-14 Thread Diego Ercolani
I think I encountered another bug in the engine: I needed to remove a snapshot, and while I was removing it the guest went down. What happened is that the snapshot removal failed and left it "inconsistent". I think there is an issue to address. Here are the relevant logs (the engine.log (see

[ovirt-users] Re: Storage domain deletion not possible because not possible to detach

2022-09-28 Thread Diego Ercolani
I tried to do what was written in the list. This is what I did: [root@ovirt-node3 ~]# hosted-engine --set-maintenance --mode=global [root@ovirt-engine ~]# systemctl list-units | grep ovirt ovirt-engine-dwhd.service loaded active running

[ovirt-users] Storage domain deletion not possible because not possible to detach

2022-09-26 Thread Diego Ercolani
(I'm always talking about the latest ovirt version, currently engine: 4.5.2.5-1.el8, node: ovirt-node-ng-4.5.2-0.20220810.0.) I tried to add an iSCSI storage domain, adding a new VLAN network where the storage is attached; this completely messed up the storage engine. I would like

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-22 Thread Diego Ercolani
I did it following the {read,write}-perf example reported in paragraphs 12.6 and 12.7 of https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-running_the_volume_top_command; here are the results: https://cloud.ssis.sm/index.php/s/9bncnNSopnFReRS

[ovirt-users] iSCSI domain not seen by a node if the topology differ from the topology of the other nodes?

2022-09-21 Thread Diego Ercolani
Hello, I think I've found another issue: I have three nodes under heavy test and, after having problems with gluster, I configured them to use iSCSI (without multipath for now); so I configured via the GUI a new iSCSI data domain using a single target under a single VLAN. I suspect

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-21 Thread Diego Ercolani
I tried to measure IOs using gluster volume top, but its results seem very cryptic to me (I'd need a deep analysis and don't have the time now). Thank you very much for your analysis; if I understood correctly, the problem is that the consumer SSD cache is too weak to help under even a small number (~15), not

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-17 Thread Diego Ercolani
Parameter cluster.choose-local is set to off. I confirm the filesystems of the bricks are all XFS, as required. I started the farm only as a test bench of an oVirt implementation, so I used 3 hosts based on Ryzen 5 desktop processors equipped with 4 DDR modules (4x 32GB) and 1 disk

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Diego Ercolani
During this time (Hosted-Engine hung), this appears on the host that is supposed to have the Hosted-Engine running: 2022-09-15 13:59:27,762+ WARN (Thread-10) [virt.vm] (vmId='8486ed73-df34-4c58-bfdc-7025dec63b7f') Shutdown by QEMU Guest Agent failed (agent probably inactive) (vm:5490)

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Diego Ercolani
The current set is: [root@ovirt-node2 ~]# gluster volume get glen cluster.choose-local| awk '/choose-local/ {print $2}' off [root@ovirt-node2 ~]# gluster volume get gv0 cluster.choose-local| awk '/choose-local/ {print $2}' off [root@ovirt-node2 ~]# gluster volume get gv1 cluster.choose-local|
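The per-volume queries quoted above can be collapsed into one loop. `gluster` is only present on a node; the parsing helper is plain awk, and its name is made up here for the sketch:

```shell
# Extract just the value column from `gluster volume get <vol> cluster.choose-local`.
get_choose_local() {
  awk '/cluster.choose-local/ {print $2}'
}

# On a gluster node, report the setting for every volume in one pass.
if command -v gluster >/dev/null; then
  for vol in $(gluster volume list); do
    printf '%s: %s\n' "$vol" \
      "$(gluster volume get "$vol" cluster.choose-local | get_choose_local)"
  done
fi
```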

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Diego Ercolani
Sorry, I see that the editor stripped all the leading spaces that indent the timestamps. I retried the test, hoping to hit the same error, and I found it, on node3. I changed the code of the read routine: cd /rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1; while sleep 0.1 ; do date
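The truncated read routine above appears to be a latency probe over the mount. A hedged reconstruction follows; the directory, iteration count, and output format are assumptions (the original looped forever with `sleep 0.1` against the glusterSD mount):

```shell
# Probe read latency on a directory (the thread used the glusterSD mount point).
DIR=${DIR:-/tmp}
probe="$DIR/.latency-probe"
dd if=/dev/zero of="$probe" bs=4k count=1 2>/dev/null

out=$(
  for i in 1 2 3; do                  # the original was an endless `while sleep 0.1` loop
    t0=$(date +%s%N)
    dd if="$probe" of=/dev/null bs=4k count=1 2>/dev/null
    t1=$(date +%s%N)
    echo "read $i: $(( (t1 - t0) / 1000000 )) ms"
  done
)
echo "$out"
rm -f "$probe"
```

On a healthy local disk each read takes well under a millisecond; on the degraded gluster mount from the thread, individual reads can stall for seconds.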

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-15 Thread Diego Ercolani
Thank you for the analysis. The version is the latest distributed in the ovirt@centos8 distribution: [root@ovirt-node2 ~]# rpm -qa | grep '\(glusterfs-server\|ovirt-node\)' ovirt-node-ng-image-update-placeholder-4.5.2-1.el8.noarch glusterfs-server-10.2-1.el8s.x86_64

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-12 Thread Diego Ercolani
Hello. I did a full backup using Veeam but I recorded many errors in the gluster log. This is the log (https://cloud.ssis.sm/index.php/s/KRimf5MLXK3Ds3d). The log is from the same node where the veeam-proxy and the backed-up VMs reside. Both are running on the gv1 storage domain. Note that the hours

[ovirt-users] vdsm.log full of "error" because node don't run the "engine"

2022-09-12 Thread Diego Ercolani
Hello, I don't know if it's normal, but on all the nodes of the cluster (except the one that runs the engine) I have something like: 2022-09-12 15:41:54,563+ INFO (jsonrpc/0) [api.virt] START getStats() from=::1,57578, vmId=8486ed73-df34-4c58-bfdc-7025dec63b7f (api:48) 2022-09-12

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-11 Thread Diego Ercolani
Yes, it seems so, but I cannot record any "error" on the interface: I have 0 TX errors and 0 RX errors. All three nodes are connected through a single switch. I set the MTU to 9000 to help gluster transfers, but I cannot record any error. In /var/log/vdsm/vdsm.log I periodically log, on all

[ovirt-users] Re: hosted-engine -vm-status show a ghost node that is not anymore in the cluster: how to remove?

2022-09-08 Thread Diego Ercolani
Just to follow up: I had to run the same command on all the nodes, because on the nodes where I didn't run it the entry continued to appear. This should be resolved now. Thank you

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-06 Thread Diego Ercolani
I really don't understand. I was monitoring the vdsm.log of one node (node2) and I saw a complaint: 2022-09-06 14:08:27,105+ ERROR (check/loop) [storage.monitor] Error checking path /rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1/45b4f14c-8323-482f-90ab-99d8fd610018/dom_md/metadata

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-05 Thread Diego Ercolani
I don't have disk problems, as I enabled smartd and perform a periodic test (smartctl -t long), but in sanlock I have some problems, and the gluster glheal logs aren't clean either. The last event I recorded is today at 00:28 (22/09/4 22:28 GMTZ); this is the time when node3 sent mail:

[ovirt-users] Re: hosted-engine -vm-status show a ghost node that is not anymore in the cluster: how to remove?

2022-09-05 Thread Diego Ercolani
Perfect! It works. Thank you Sandro. The help text discourages its use: [root@ovirt-node3 ~]# hosted-engine --clean-metadata --help Usage: /usr/sbin/hosted-engine --clean_metadata [--force-cleanup] [--host-id=] Remove host's metadata from the global status database. Available only in

[ovirt-users] Re: hosted-engine -vm-status show a ghost node that is not anymore in the cluster: how to remove?

2022-09-02 Thread Diego Ercolani
Can anyone give a hint, please?

[ovirt-users] Re: how kill backup operation

2022-09-01 Thread Diego Ercolani
Thank you for the support; hoping to help improve the resilience of the implementation.

[ovirt-users] Re: VMs hang periodically: gluster problem?

2022-09-01 Thread Diego Ercolani
Versions are the latest: ovirt-host-4.5.0-3.el8.x86_64 ovirt-engine-4.5.2.4-1.el8.noarch glusterfs-server-10.2-1.el8s.x86_64

[ovirt-users] VMs hang periodically: gluster problem?

2022-09-01 Thread Diego Ercolani
Hello, I have a cluster made of 3 nodes in a "self-hosted-engine" topology. I implemented the storage with gluster in a 2-replica + arbiter topology. I have two gluster volumes: glen - the volume used by the hosted-engine VM; gv0 - the volume used by VMs. The physical disks are 4TB

[ovirt-users] Re: Moving SelfHosted Engine

2022-08-31 Thread Diego Ercolani
So you need to migrate the self-hosted engine between clusters. I think the only way is to back up the hosted-engine configuration and restore it in the other cluster, via a new hosted-engine deploy there and a restore of the backup. But I'm very interested in how to manipulate hosted

[ovirt-users] hosted-engine -vm-status show a ghost node that is not anymore in the cluster: how to remove?

2022-08-31 Thread Diego Ercolani
engine 4.5.2.4 The issue is that in my cluster when I use the: [root@ovirt-node3 ~]# hosted-engine --vm-status --== Host ovirt-node3.ovirt (id: 1) status ==-- Host ID: 1 Host timestamp : 1633143 Score : 3400 Engine

[ovirt-users] Re: how kill backup operation

2022-08-31 Thread Diego Ercolani
This is the bug report I filed: https://bugzilla.redhat.com/show_bug.cgi?id=2123008

[ovirt-users] Re: how kill backup operation

2022-08-31 Thread Diego Ercolani
There is only one daemon.log per directory. Here is the archive of the daemon.log: https://cloud.ssis.sm/index.php/s/y6XxgH7CcrL5AC3 I will create the bug report referring to this thread. Thank you

[ovirt-users] Re: how kill backup operation

2022-08-31 Thread Diego Ercolani
I'll also add that I upgraded the engine on 2022-08-22, so I have had the latest "stable" since then: [root@ovirt-engine dbutils]# rpm -qi ovirt-engine-4.5.2.4-1.el8.noarch Name: ovirt-engine Version : 4.5.2.4 Release : 1.el8 Architecture: noarch Install Date: Mon Aug 22 08:17:41 2022

[ovirt-users] Re: how kill backup operation

2022-08-31 Thread Diego Ercolani
One process that I killed was: [root@ovirt-node4 ~]# ps axuww | grep qemu-nbd vdsm 588156 0.0 0.0 308192 39840 ?Ssl Aug26 0:12 /usr/bin/qemu-nbd --socket /run/vdsm/nbd/c7653559-508b-4e4a-a591-32dec3e5a29d.sock --persistent --shared=8 --export-name= --cache=none --aio=native

[ovirt-users] Re: how kill backup operation

2022-08-31 Thread Diego Ercolani
Thanks Arik, we tried your solution but with no success. We also gathered other info and combined it into this solution: we deleted from the DB the rows in vm_backups and vm_disk_map related to the hung backup. Then we tried to delete the locked snapshot; after the DB row

[ovirt-users] Re: how kill backup operation

2022-08-29 Thread Diego Ercolani
Thank you for your support; I'm conscious of the difficulty of keeping everything in line. I'm currently trying to find the correct workflow for making backups (using CBR) of VMs. I tried both vProtect (with the current technology preview) and Veeam (community, using the RHV plugin), and I'm currently experiencing

[ovirt-users] how kill backup operation

2022-08-26 Thread Diego Ercolani
Hello, I saw there are other threads asking how to delete disk snapshots left by backup operations. We definitely need a tool to kill pending backup operations and locked snapshots. I think this is very frustrating: ovirt is a good piece of software but it's very immature in a dirty asynchronous

[ovirt-users] Re: oVirt 4.5.2 is now generally available

2022-08-13 Thread Diego Ercolani
Here is my bug report: https://bugzilla.redhat.com/show_bug.cgi?id=2117917

[ovirt-users] Re: oVirt 4.5.2 is now generally available

2022-08-12 Thread Diego Ercolani
Hello, * the update missed the kvdo module, as kmod-kvdo-6.2.6.14-84.el8.x86_64 is not compatible with the current kernel kernel-4.18.0-408.el8.x86_64

[ovirt-users] Re: disk pending in "finalizing" state

2022-08-02 Thread Diego Ercolani
Today Veeam seems to have unlocked the snapshot, and finally I can remove the "autogenerated" snapshots. Anyway, I just extracted the last day's logs from the engine, so you can analyze them if you want: https://cloud.ssis.sm/index.php/s/RT9EHBys5eZ87oo

[ovirt-users] Re: disk pending in "finalizing" state

2022-08-01 Thread Diego Ercolani
Unfortunately no; Veeam said it failed finalizing the backup

[ovirt-users] Re: disk pending in "finalizing" state

2022-08-01 Thread Diego Ercolani
Anyway, I removed the image from the image_transfer table; in the GUI the state changed to "OK", but when I try to remove the snapshots Veeam left, it says the engine cannot remove them during backup: 2022-08-01 14:07:26,181Z INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]

[ovirt-users] Re: disk pending in "finalizing" state

2022-08-01 Thread Diego Ercolani
On Monday, 1 August 2022 15:55:56 CEST, Benny Zlotnik wrote: > So it looks like the transfer failed, but it was later finalized, moving > it from FINISHED_FAILURE to FINALIZING_SUCCESS; we have a bug to > prevent clients from changing the transfer's status like this, a fix > should land in

[ovirt-users] Re: disk pending in "finalizing" state

2022-08-01 Thread Diego Ercolani
Expanded display is on. -[ RECORD 1 ]-+ command_id| eecdc5fc-4b7a-44f4-afed-9abb0cd12534 command_type | 1024 phase | 7 last_updated | 2022-07-31 10:49:06.791+00 message

[ovirt-users] disk pending in "finalizing" state

2022-08-01 Thread Diego Ercolani
ovirt-engine-4.5.1.3-1.el8.noarch Hello, I have a situation where a disk is stuck in the "finalizing" state, caused by trying to back it up via Veeam. The backup process was interrupted, and I have cleared the job states with the dbutils scripts (/usr/share/ovirt-engine/setup/dbutils/task_cleaner.sh), even if

[ovirt-users] Re: oVirt 4.5.1 is now generally available

2022-07-01 Thread Diego Ercolani
/usr/share/man/man8/vdoformat.8.gz /usr/share/man/man8/vdoprepareforlvm.8.gz /usr/share/man/man8/vdosetuuid.8.gz /usr/share/man/man8/vdostats.8.gz -- Ing. Diego Ercolani S.S.I.S. s.p.a. T. 0549-875910

[ovirt-users] Re: oVirt 4.5.1 is now generally available

2022-07-01 Thread Diego Ercolani
n dropped in el9; also the kvdo tools aren't present, so I rolled back to el8

[ovirt-users] Re: gluster heal success but a directory doesn't heal

2022-06-29 Thread Diego Ercolani
Cross-posted here: https://lists.gluster.org/pipermail/gluster-users/2022-June/039957.html

[ovirt-users] Re: oVirt 4.5.1 is now generally available

2022-06-29 Thread Diego Ercolani
As there are CentOS 8 and CentOS 9 builds of 4.5.1, is it possible to mix CentOS 8 and CentOS 9 in a single cluster?

[ovirt-users] Re: gluster heal success but a directory doesn't heal

2022-06-28 Thread Diego Ercolani
I've done something but the problem remains: [root@ovirt-node2 ~]# gluster volume heal glen info Brick ovirt-node2.ovirt:/brickhe/glen /3577c21e-f757-4405-97d1-0f827c9b4e22/master/tasks Status: Connected Number of entries: 1 Brick ovirt-node3.ovirt:/brickhe/glen
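Heal output like the above can be condensed to one line per brick. A small awk sketch (the helper name is made up; the parse matches the `Brick …` / `Number of entries: N` layout shown in the thread, and the volume name `glen` comes from there):

```shell
# Summarize unhealed entry counts per brick from `gluster volume heal <vol> info`.
heal_summary() {
  awk -F': ' '/^Brick /{brick=$0} /^Number of entries/{print brick " -> " $2}'
}
# On a node:  gluster volume heal glen info | heal_summary
```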

[ovirt-users] Re: gluster heal success but a directory doesn't heal

2022-06-24 Thread Diego Ercolani
Can anyone point me to somewhere I can read some "in depth" troubleshooting material for GlusterFS? I cannot find a "quick" manual

[ovirt-users] Re: Gluster Volume cannot be activated Ovirt 4.5 Centos 8 Stream

2022-06-21 Thread Diego Ercolani
See this if it's your case: https://lists.ovirt.org/archives/list/users@ovirt.org/message/RL73Z7MEKENSEON5F7PKQL5KJYAWO3LS/

[ovirt-users] Re: failed to mount hosted engine gluster storage - how to debug?

2022-06-21 Thread Diego Ercolani
Same in 4.5.0.3; the workaround seems to work even in this version

[ovirt-users] Re: gluster heal success but a directory doesn't heal

2022-06-20 Thread Diego Ercolani
My environment is ovirt-host-4.5.0-3.el8.x86_64 and glusterfs-server-10.2-1.el8s.x86_64

[ovirt-users] gluster heal success but a directory doesn't heal

2022-06-20 Thread Diego Ercolani
Hello list, I have a problem derived from some hangs in ovirt during upgrade procedures. I have a gluster-based self-hosted engine deployment with "glen" as the gluster-based hosted-engine volume. This is the situation I'm facing: [root@ovirt-node3 master]# gluster volume heal glen info Brick

[ovirt-users] Re: failed to mount hosted engine gluster storage - how to debug?

2022-05-16 Thread Diego Ercolani
Same problem after upgrading to 4.5.0.2; I've reapplied the patch to make it work again, so please tell upstream: yum --enablerepo=baseos install patch cd / patch -p0

[ovirt-users] Re: grafana @ ovirt 4.5.0.6 "origin not allowed"

2022-05-12 Thread diego . ercolani
In ovirt-engine-4.5.0.8 I had to reapply the workaround, so the problem seems to still be there

[ovirt-users] Re: grafana @ ovirt 4.5.0.6 "origin not allowed"

2022-05-10 Thread diego . ercolani
Yes, that worked. I saw this solution, but it referred to an old ovirt-engine version; is this a regression?

[ovirt-users] Re: grafana @ ovirt 4.5.0.6 "origin not allowed"

2022-05-10 Thread diego . ercolani
I tried, but obtained the same message. Keep in mind that I cannot see any data from grafana: whichever dashboard I try to visualize, I get plenty of popups saying: "Error updating options: origin not allowed"

[ovirt-users] grafana @ ovirt 4.5.0.6 "origin not allowed"

2022-05-10 Thread diego . ercolani
Hello, in my current new installation (ovirt-engine-4.5.0.6-1.el8.noarch) I have a problem setting up the Grafana monitoring portal. I suspect there is some problem with the connection to the oVirt DWH DB. I tested the credentials stored in

[ovirt-users] Re: Self-hosted engine 4.5.0 deployment fails

2022-05-04 Thread diego . ercolani
Please follow the instructions here: https://lists.ovirt.org/archives/list/users@ovirt.org/thread/N6MNSV4ZK26V5NVPBFAMHQPAQAWUR2OE/

[ovirt-users] Re: hosted engine ovirt-engine-appliance-4.5-20220419162115.1.el8.x86_64 doesn't deploy

2022-04-30 Thread diego . ercolani
Yes. The problem is that hosted-engine --deploy --4 --ansible-extra-vars=he_pause_host=true pauses the deploy far after the problem, so you have to intercept the engine start. Then, as I have an installation behind a proxy: echo "proxy=http://192.168.9.149:3128" >> /etc/yum.conf then waiting for the

[ovirt-users] hosted engine ovirt-engine-appliance-4.5-20220419162115.1.el8.x86_64 doesn't deploy

2022-04-30 Thread diego . ercolani
I have a full installation of ovirt hosted-engine, but it always stops, reporting: [ ERROR ] fatal: [localhost -> 192.168.222.15]: FAILED! => {"attempts": 30, "changed": false, "connection": "close", "content_encoding": "identity", "content_length": "86", "content_type": "text/html; charset=UTF-8",

[ovirt-users] Re: failed to mount hosted engine gluster storage - how to debug?

2022-04-25 Thread diego . ercolani
Yes, it seems it worked; thank you very much

[ovirt-users] Re: AMD Ryzen 5600G unsupported?

2022-04-25 Thread diego . ercolani
Please remember to enable AMD-V support in the BIOS; every time you upgrade the BIOS, it is normally reset to "disabled". You can spot this issue when something like this appears in the logs: Apr 20 11:45:11 ovirt-node3 journal[2011]: Unable to open /dev/kvm: No such file or directory To resolve, remember
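A quick way to confirm the setting actually took effect after a BIOS update (a generic Linux check, not oVirt-specific):

```shell
# /dev/kvm only appears when the kvm modules load, which requires AMD-V/VT-x.
status=$([ -e /dev/kvm ] && echo available || echo missing)
echo "kvm: $status"

# The CPU flag itself (svm on AMD, vmx on Intel); a count of 0 means the
# flag is hidden, typically because virtualization is disabled in firmware.
flags=$(grep -cE 'svm|vmx' /proc/cpuinfo || true)
echo "virt flag lines: $flags"
```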

[ovirt-users] Re: failed to mount hosted engine gluster storage - how to debug?

2022-04-25 Thread diego . ercolani
I saw your report, in fact; they suggested downgrading jdbc. For completeness, I also found an error report in vdsm.log while issuing "hosted-engine --connect-storage", corresponding to what you are noticing. I report the log excerpt here in case it's useful. By the way, why is vdsm searching for

[ovirt-users] Re: AMD Ryzen 5600G unsupported?

2022-04-25 Thread diego . ercolani
On my system it is correctly recognized: Manufacturer: Gigabyte Technology Co., Ltd. Family: B550 MB Product Name: B550 AORUS ELITE V2 Version: Default string UUID: 03C00218-044D-05AE-3406-0D0700080009 Serial Number: Default string CPU Model Name: AMD Ryzen 7 5700G with Radeon Graphics CPU type:

[ovirt-users] failed to mount hosted engine gluster storage - how to debug?

2022-04-25 Thread diego . ercolani
Hello, I have an issue probably related to my particular implementation, but I think some checks are missing. Here is the story: I have a cluster of two nodes on 4.4.10.3 with an upgraded kernel, as the CPU (Ryzen 5) suffers from an incompatibility issue with the kernel provided by the 4.4.10.x series.

[ovirt-users] Lack of attribute "decode" in v2v module

2021-12-30 Thread Diego Ercolani
: 'description': job.description.decode('utf-8'), [root@ovirt-node2 ~]# rpm -qf /usr/lib/python3.6/site-packages/vdsm/v2v.py