[ovirt-users] Re: ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[XML error]". HTTP response code is 400.

2022-05-06 Thread Jonas Liechti
I had the same problem and found the solution here: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6IKZ45B2TUCQB6WXZ3B4AFVU2RXZXJQQ/

Find cli.py on the host and comment out the line with `value['stripeCount'] = el.find('stripeCount').text`.
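The likely root cause (an assumption, not stated in the thread) is that newer GlusterFS releases no longer emit a `<stripeCount>` element in the volume-info XML, so `el.find('stripeCount')` returns `None` and the `.text` access raises. A minimal stdlib reproduction of the failing pattern and a defensive alternative (the XML snippet and variable names are illustrative):

```python
import xml.etree.ElementTree as ET

# Volume info as newer Gluster versions emit it: no <stripeCount> element.
xml_without_stripe = "<volume><name>engine</name><replicaCount>3</replicaCount></volume>"
el = ET.fromstring(xml_without_stripe)

# The failing pattern: find() returns None, so .text raises AttributeError.
try:
    value = el.find('stripeCount').text
except AttributeError:
    value = None

# Defensive variant: findtext() returns a default instead of raising.
stripe = el.findtext('stripeCount')   # None when the element is absent
replica = el.findtext('replicaCount')
print(stripe, replica)
```

This is why commenting out the `stripeCount` line works as a workaround: the element simply no longer exists in the parsed output.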
On 06.05.2022 19:01, yp...@163.com wrote:
>
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set 
> of steps] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the storage 
> interface to be up] 
> [ INFO ] skipping: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check local VM dir stat] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce local VM dir 
> existence] 
> [ INFO ] skipping: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using 
> username/password credentials] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch host facts] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster ID] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster name] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster version] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce cluster major 
> version] 
> [ INFO ] skipping: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce cluster minor 
> version] 
> [ INFO ] skipping: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set storage_format] 
> [ INFO ] ok: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain] 
> [ INFO ] skipping: [localhost] 
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage 
> domain] 
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail 
> is "[XML error]". HTTP response code is 400. 
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
> reason is \"Operation Failed\". Fault detail is \"[XML error]\". HTTP 
> response code is 400."}
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EVFXS5G25WHPAE7M5YNXH7BIYANHH3LM/


[ovirt-users] Re: Ovirt engine Isuue

2022-05-05 Thread Jonas Liechti
Try downgrading postgresql-jdbc 
(https://lists.ovirt.org/archives/list/users@ovirt.org/message/N6MNSV4ZK26V5NVPBFAMHQPAQAWUR2OE/)

dnf downgrade postgresql-jdbc
systemctl restart ovirt-engine

On 05.05.2022 08:18, sachendra.shu...@yagnaiq.com wrote:
>
> We have installed the oVirt engine on CentOS 8 by following the commands below:
>
> nano /etc/hosts 
> your-server-ip centos.example.com 
> dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm 
> dnf module enable javapackages-tools -y 
> dnf module enable pki-deps -y 
> dnf module enable postgresql:12 -y 
> dnf update -y 
> dnf install ovirt-engine -y 
> engine-setup (when we require the latest version) 
> setsebool -P httpd_can_network_connect 1 
> firewall-cmd --permanent --zone public --add-port 80/tcp 
> firewall-cmd --permanent --zone public --add-port 443/tcp 
> firewall-cmd --reload 
>
>
> But we are unable to connect via a browser; it shows the error below. 
>
> 500 - Internal Server Error
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FA6BLIAIYY7GOXJLLYNFSZB73JB7EKCP/


[ovirt-users] Hosted Engine Deployment timeout waiting for VM

2022-04-18 Thread Jonas Liechti

Hello users,

I am currently trying to deploy the self-hosted engine via the web 
interface, but it seems stuck at the task "Wait for the local VM" 
(https://github.com/oVirt/ovirt-ansible-collection/blob/master/roles/hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml). 
I am unsure where to look for more information, as I haven't worked 
much with Ansible before. Do you have any ideas on how to debug this?
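Not an answer from the thread, but a generic starting point: hosted-engine setup writes detailed Ansible logs under /var/log/ovirt-hosted-engine-setup/. A small stdlib sketch that pulls failure lines out of such a log (the log path and line format here are assumptions; the sample is synthetic):

```python
import re

def failed_lines(log_text: str):
    """Return log lines that look like Ansible failures or errors."""
    pattern = re.compile(r'(ERROR|FAILED!|fatal:)')
    return [line for line in log_text.splitlines() if pattern.search(line)]

# Inline sample; on a real host, read the newest file from
# /var/log/ovirt-hosted-engine-setup/ instead.
sample = """\
2022-04-18 INFO TASK [ovirt.ovirt.hosted_engine_setup : Wait for the local VM]
2022-04-18 ERROR fatal: [localhost]: FAILED! => {"msg": "timed out"}
2022-04-18 INFO ok: [localhost]
"""
for line in failed_lines(sample):
    print(line)
```

Running `hosted-engine --deploy` from a shell (instead of the web UI) also tends to surface errors more directly.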



The temporary IP is added to /etc/hosts, and I can also log in to the VM 
via SSH:


[root@server-005 ~]# cat /etc/hosts
192.168.1.97 ovirt-engine-test.admin.int.rabe.ch # temporary entry added by hosted-engine-setup for the bootstrap VM
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.128.16.5 server-005.admin.int.rabe.ch
10.128.16.6 server-006.admin.int.rabe.ch
10.128.16.7 server-007.admin.int.rabe.ch
#10.128.32.2 ovirt-engine-test.admin.int.rabe.ch
10.132.16.5 server-005.storage.int.rabe.ch
10.132.16.6 server-006.storage.int.rabe.ch
10.132.16.7 server-007.storage.int.rabe.ch
[root@server-005 ~]# ssh ovirt-engine-test.admin.int.rabe.ch
r...@ovirt-engine-test.admin.int.rabe.ch's password:
Web console: https://ovirt-engine-test.admin.int.rabe.ch:9090/ or 
https://192.168.1.97:9090/


Last login: Mon Apr 18 11:33:53 2022 from 192.168.1.1
[root@ovirt-engine-test ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:16:3e:58:7a:a3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.97/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
       valid_lft 2313sec preferred_lft 2313sec
    inet6 fe80::216:3eff:fe58:7aa3/64 scope link
       valid_lft forever preferred_lft forever


Thank you for any tips for debugging.
Jonas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWZJN4AZFS3IMKNBQ4BTB5JUOUL4BWAT/


[ovirt-users] Re: mdadm vs. JBOD

2022-03-14 Thread Jonas Liechti

Thank you for the confirmation, Strahil!

As our current environment is more or less the same (except for the 
hardware RAID, which is not possible with NVMe disks), we planned to use 
Gluster. I guess we will proceed as originally planned, since we are 
satisfied with the performance.


On 3/11/22 07:08, Strahil Nikolov via Users wrote:
Red Hat Gluster Storage is discontinued, but upstream Gluster is 
pretty active, and as Sandro Bonazzola (RH) confirmed, there are no 
plans to remove support for Gluster.
I think it's still a good choice, especially if you don't have a SAN or 
highly available NFS.


Also, storage migration is transparent to the VMs, so you can add a SAN 
at a later stage and move all VMs from Gluster to the SAN without 
disruption*.


Keep in mind that Gluster is tier-2 storage; if you really need a lot 
of IOPS, Ceph might be more suitable.



Best Regards,
Strahil Nikolov

*: Note that this is valid when the FUSE client is used. Other oVirt 
users report a huge performance increase with the libgfapi interface, 
but it has drawbacks: storage migration can only happen when you 
switch off libgfapi, power off the VM (on a scheduled basis), power it 
on, live-migrate it to the other storage type, and re-enable libgfapi 
for the rest of the VMs.




Thanks, Strahil Nikolov, for the valuable input! I was off
for a few weeks, so I would like to apologize if I'm potentially
reviving a zombie thread.

I am a bit confused about where to go with this environment after
the discontinuation of the hyperconverged setup. What alternative
options are there for us? Or do you think going the Gluster way
would still be advisable, even though it seems to be getting phased
out over time?

Thanks for any input on this!

Best regards,
Jonas

On 1/22/22 14:31, Strahil Nikolov via Users wrote:

Using the wizard utilizes the Gluster Ansible roles.
I would highly recommend using it unless you know what you are
doing (for example, storage alignment when using hardware RAID).

Keep in mind that the DHT xlator (the logic in distributed
volumes) is shard-aware, so your shards are spread between
subvolumes and additional performance can be gained. So using
replicated-distributed volumes has its benefits.

If you decide to avoid the software RAID, use only replica 3
volumes, as with SSDs/NVMes the failures are usually not physical
but logical (maximum writes reached -> predictive failure ->
total failure).

Also, consider mounting via noatime/relatime and
context="system_u:object_r:glusterd_brick_t:s0" for your gluster
bricks.
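As an illustration of that mount advice, an /etc/fstab entry might look like the following (the device path, mount point, and filesystem are placeholders, not from the thread):

```
# Gluster brick: skip atime updates and pin the SELinux context at mount time
/dev/mapper/gluster_vg-brick1  /gluster_bricks/brick1  xfs  noatime,context="system_u:object_r:glusterd_brick_t:s0"  0 0
```

Setting the context as a mount option avoids per-file relabeling of the brick.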

Best Regards,
Strahil Nikolov

On Fri, Jan 21, 2022 at 11:00, Gilboa Davara
  wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JFFS2T25TIIHFEQMS2Y3BU4DARSIDE3U/


[ovirt-users] Re: Backup process

2022-02-14 Thread Jonas Liechti
Would you mind sharing a link to this script? I would be interested in how it 
works.

On 14.02.2022 19:45, marcel d'heureuse wrote:
>
> Moin,
>
> We have 12 servers in our environment, managed by one self-hosted engine. 
> It is oVirt 4.3.9; we are frozen on that version.
>
> How do you make your backups? We use a GitHub script which generates a 
> snapshot via the API and mounts that disk into the VM where the backup 
> script is running. The backup script exports this additional hard disk to 
> a storage, then disconnects the disk and removes the snapshot.
>
> This works fine on Linux VMs with medium load. If we try it with a 
> Windows 10 VM, a Windows Server VM, or a Linux VM with high load or a 
> very big hard drive, it does not work: the disk can't be exported and the 
> snapshot will not be deleted. 
>
> I have found Vinchin and Bareos, but I have not started to evaluate them 
> yet; first I want to gather some more possible options. 
>
> Thanks
>
> Marcel
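The snapshot → attach → export → cleanup flow described above can be sketched as follows. The function parameters are stand-ins for the real API calls (this is not the oVirt SDK); the point of the sketch is that cleanup belongs in a `finally` block, so a failed export doesn't leave a stale snapshot behind, which matches the failure mode described:

```python
def backup_vm(vm, create_snapshot, attach_disk, export_disk,
              detach_disk, remove_snapshot):
    """Snapshot-based backup: cleanup steps run even when the export fails."""
    snap = create_snapshot(vm)   # 1. snapshot the VM via the API
    disk = attach_disk(snap)     # 2. attach the snapshot disk to the backup VM
    try:
        export_disk(disk)        # 3. copy the disk contents to backup storage
    finally:
        detach_disk(disk)        # 4. always disconnect the disk ...
        remove_snapshot(snap)    # 5. ... and delete the snapshot

# Tiny stub run: the export fails, but the snapshot is still removed.
events = []

def failing_export(disk):
    raise RuntimeError("export timed out")

try:
    backup_vm(
        "vm1",
        create_snapshot=lambda vm: events.append("snap") or "snap1",
        attach_disk=lambda s: events.append("attach") or "disk1",
        export_disk=failing_export,
        detach_disk=lambda d: events.append("detach"),
        remove_snapshot=lambda s: events.append("remove"),
    )
except RuntimeError:
    pass
print(events)
```

In the real script, whether "remove snapshot" can succeed after a hung export still depends on the engine releasing the disk, which is where large or busy VMs apparently get stuck.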
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IPDKXQ747ZNTZPP4WYQCK3FV3TPNKTAQ/