[ovirt-users] Gluster with two ovirt nodes

2018-12-12 Thread Stefan Wolf
Hello, I'd like to set up GlusterFS with two oVirt nodes and one more "normal" node. Is this possible? I've set up GlusterFS on the CLI on two oVirt nodes and a third network-storage node. GlusterFS is up and running. But now I'd like to get something like a VIP, with CTDB for example. Is there any possibility to set
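The usual alternative to a CTDB-managed VIP in this situation is the GlusterFS client's own failover: the fuse mount accepts a backup-volfile-servers option listing the other peers, so the mount survives the primary volfile server going down. A sketch of such a mount entry (host names and mount point are illustrative, not taken from the thread):

```
# /etc/fstab — illustrative GlusterFS mount with client-side failover
node1:/engine  /mnt/engine  glusterfs  defaults,backup-volfile-servers=node2:node3  0 0
```

With this in place the client reconnects through the backup servers, so no floating IP is needed for the mount itself.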

[ovirt-users] Hyperconvergend Setup stuck

2018-12-18 Thread Stefan Wolf
Hello, I'd like to set up a hyperconverged deployment. I have 3 hosts, each freshly installed. kvm320 has one additional 1 TB SATA hard drive, and kvm360 and kvm380 each have two additional SAS hard drives of 300 GB and 600 GB. #gdeploy configuration generated by cockpit-gluster plugin [hosts]
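For reference, the [hosts] section that the cockpit-gluster plugin writes simply lists the peers one per line; a sketch assuming the FQDNs used elsewhere in these threads:

```
#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
kvm320.durchhalten.intern
kvm360.durchhalten.intern
kvm380.durchhalten.intern
```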

[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Stefan Wolf
Yes, I think this too, but as you can see at the top: >[root@kvm380 ~]# gluster volume info >... > performance.strict-o-direct: on ... it was already set. I did a one-cluster setup with oVirt and used this result: Volume Name: engine Type: Distribute Volume ID: a40e848b-a8f1-4990-9d32-133b46db6f1d

[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Stefan Wolf
Here is what I found in the logs of the hosts: 2018-12-20 12:34:04,824+0100 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=09235382-a5b5-48da-853d-f94cae092684 (api:46) 2018-12-20 12:34:04,825+0100 INFO (periodic/0) [vdsm.api] FINISH repoStats

[ovirt-users] Active Storage Domains as Problematic

2018-12-20 Thread Stefan Wolf
Hello, I've set up a test lab with 3 nodes installed with CentOS 7 and configured GlusterFS manually. GlusterFS is up and running: [root@kvm380 ~]# gluster peer status Number of Peers: 2 Hostname: kvm320.durchhalten.intern Uuid: dac066db-55f7-4770-900d-4830c740ffbf State: Peer in

[ovirt-users] Re: Hyperconvergend Setup stuck

2018-12-20 Thread Stefan Wolf
It is gdeploy 2.0.2 rpm -qa |grep gdeploy gdeploy-2.0.8-1.el7.noarch ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct:

[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Stefan Wolf
I've mounted it during the hosted-engine --deploy process. I selected glusterfs and entered server:/engine; I didn't enter any mount options. Yes, it is enabled for both. I didn't get errors for the second one, but maybe it doesn't check after the first failure.

[ovirt-users] Install additional software on ovirt node

2018-12-11 Thread Stefan Wolf
Hello, if I install additional software on an oVirt node, will it be removed after updating the node, or will I run into trouble? Thx, shb

[ovirt-users] Re: Too many hosts in this cluster

2018-12-10 Thread Stefan Wolf
undeploy before removing? > > Luca > > On Sun, Dec 9, 2018 at 12:19 PM Stefan Wolf wrote: > >> Hello, >> >> I had 3 hosts with oVirt running; with one of these 3 hosts I had problems >> during boot up. >> >> I decided to remove the

[ovirt-users] Network Setup after ovirt node Install and before add node to cluster

2018-12-06 Thread Stefan Wolf
Hello, I've downloaded and installed oVirt Node 4.2.7.1. During the installation I've set up a static IP (I've also tried DHCP; both work). The installation passes, and after the reboot oVirt Node starts without any problems. BUT the network is not configured. I know

[ovirt-users] Too many hosts in this cluster

2018-12-09 Thread Stefan Wolf
Hello, I had 3 hosts with oVirt running; with one of these 3 hosts I had problems during boot up. I decided to remove the host from the cluster. Now I have two hosts. But if I take a look at Hosted Engine in Cockpit, I see all three hosts. Why is kvm380 not removed? How can I remove it?

[ovirt-users] Try to add Host to cluster: Command returned failure code 1 during SSH session

2018-12-09 Thread Stefan Wolf
Hello, I'm trying to add a newly installed oVirt node and get this error message while adding it to the cluster: Host kvm380 installation failed. Command returned failure code 1 during SSH session 'root@kvm380.durchhalten.intern'. Maybe someone can help me. Thx. Here is the logfile: 2018-12-09

[ovirt-users] hosted engine does not start

2019-04-15 Thread Stefan Wolf
Hello all, after a power loss the hosted engine won't start up anymore. I've the current oVirt installed. Storage is GlusterFS, and it is up and running. It is trying to start the hosted engine, but it does not work, and I can't see where the problem is. [root@kvm320 ~]# hosted-engine

[ovirt-users] Re: Cannot start VM 2 hosts

2019-12-17 Thread Stefan Wolf
Problem found: on these 2 hosts the firewall was disabled. Bye, shb

[ovirt-users] Cannot start VM 2 hosts

2019-12-17 Thread Stefan Wolf
I've got 4 hosts. After changing the hard drive on every host and applying normal updates, I am no longer able to start VMs on 2 of these 4 hosts. I am also not able to migrate a running VM to these hosts. This is the error log for start-up: 2019-12-17 16:01:28,326+01 INFO

[ovirt-users] ovirt-ha-agent not running

2019-12-07 Thread Stefan Wolf
Hello, for some days ovirt-ha-agent has not been running anymore. I've 4 oVirt hosts, and the agent is running on only one of them. Maybe it came from an update, because I lost one agent after another. I've done a complete fresh install of the host with the latest oVirt Node. I've got on three hosts

[ovirt-users] Re: ovirt-ha-agent not running

2019-12-07 Thread Stefan Wolf
And here is the broker.log: MainThread::INFO::2019-12-07 15:20:03,563::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 2.3.6 started MainThread::INFO::2019-12-07

[ovirt-users] Re: ovirt-ha-agent not running

2019-12-07 Thread Stefan Wolf
You are right, thank you. Now it is up and running again: [root@kvm380 ~]# ls /rhev/data-center/mnt/glusterSD/kvm380.durchhalten.intern\:_engine/36663740-576a-4498-b28e-0a402628c6a7/ha_agent/ -lha total 0 drwxr-xr-x. 2 vdsm kvm 67 Jan 1 1970 . drwxr-xr-x. 6 vdsm kvm 64 Dec 7 12:08 ..

[ovirt-users] Re: ovirt-ha-agent not running

2019-12-07 Thread Stefan Wolf
The content is: [root@kvm380 ~]# ls /rhev/data-center/mnt/glusterSD/kvm380.durchhalten.intern:_engine/36663740-576a-4498-b28e-0a402628c6a7/ha_agent/ -lha ls: cannot access

[ovirt-users] Increase memory of hosted engine

2019-12-08 Thread Stefan Wolf
Hello, I've decreased the memory of the hosted engine, and now I am not able to increase the memory permanently. Right now the memory is 4096 MB, max memory is 7936 MB, and guaranteed memory is 7936 MB. I can increase the memory up to 7936 MB in the manager; it changes immediately. I can not increase to

[ovirt-users] Re: Increase memory of hosted engine

2019-12-08 Thread Stefan Wolf
Thank you for your reply; I am happy if it works. 1. check 2. check 3.1 check -> I can't see where this config file is used later, or is it just a backup? 4.1 check. I changed to 16384 16384 and 15625000 15625000 15625000. After write, quit, and open, these values have changed
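A unit check may help here: assuming the 15625000 values are KiB (the unit libvirt-style configs use for memory), they correspond to 16 GB in decimal gigabytes, which is a different quantity from 16384 MiB, so the numbers looking unrelated after reopening the file does not by itself mean the edit was lost. A hedged sketch of the arithmetic:

```python
# Unit sanity check (assumption: the edited config stores memory in KiB, 1 KiB = 1024 bytes)
def gb_decimal_to_kib(gb: int) -> int:
    """Convert decimal gigabytes to kibibytes."""
    return gb * 10**9 // 1024

def mib_to_kib(mib: int) -> int:
    """Convert mebibytes to kibibytes."""
    return mib * 1024

print(gb_decimal_to_kib(16))  # 15625000 -- the value quoted in the thread
print(mib_to_kib(16384))      # 16777216 -- 16384 MiB is a different quantity
```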

[ovirt-users] Re: Gluster: a lot of entries in heal pending

2020-01-20 Thread Stefan Wolf
Hi Strahil, yes, it is a replica 4 set. I've tried to stop and start every gluster server, and I've rebooted every server. Or should I remove the brick and add it again? Bye, Stefan

[ovirt-users] Memory problem

2020-01-22 Thread Stefan Wolf
Hi to all, I've a memory problem. I got this error: Used memory of host kvm380.durchhalten.intern in cluster Default [96%] exceeded defined threshold [95%]. After reviewing the server with the top command, I found ovn-controller with heavy memory usage: 45055 root 10 -10 46,5g 45,4g 2400
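One stumbling block when scripting around this output is the locale: top on this host prints decimal commas, so the RES field reads "45,4g". A small sketch for parsing such a field (assuming, as top documents, that the "g" suffix means GiB; the helper name is made up):

```python
# Parse a locale-formatted top(1) memory field like "45,4g" into GiB.
def parse_top_mem(field: str) -> float:
    """Convert a top memory field with comma decimal and GiB suffix to a float."""
    return float(field.rstrip('gG').replace(',', '.'))

print(parse_top_mem('45,4g'))  # 45.4 -- ovn-controller's resident memory in GiB
print(parse_top_mem('46,5g'))  # 46.5 -- its virtual size from the same line
```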

[ovirt-users] Re: Gluster: a lot of entries in heal pending

2020-01-20 Thread Stefan Wolf
Yes, I've already tried a full heal a week ago. How do I perform a manual heal? I only have this gfid: Status: Connected Number of entries: 868 I've tried to heal it with: [root@kvm10 ~]# gluster volume heal data split-brain latest-mtime gfid:c2b47c5c-89b6-49ac-bf10-1733dd8f0902

[ovirt-users] Gluster: a lot of entries in heal pending

2020-01-20 Thread Stefan Wolf
Hello to all, I've a problem with gluster: [root@kvm10 ~]# gluster volume heal data info summary Brick kvm10:/gluster_bricks/data Status: Connected Total Number of entries: 868 Number of entries in heal pending: 868 Number of entries in split-brain: 0 Number of entries possibly healing: 0 Brick
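The summary output above can be tallied mechanically; a minimal sketch that sums the "heal pending" counters across bricks, fed with the quoted sample rather than a live gluster command:

```python
# Sample taken from the quoted `gluster volume heal data info summary` output.
summary = """\
Brick kvm10:/gluster_bricks/data
Status: Connected
Total Number of entries: 868
Number of entries in heal pending: 868
Number of entries in split-brain: 0
Number of entries possibly healing: 0
"""

def pending_entries(text: str) -> int:
    """Sum the 'heal pending' counters over all bricks in the summary text."""
    return sum(
        int(line.rsplit(':', 1)[1])
        for line in text.splitlines()
        if 'heal pending' in line
    )

print(pending_entries(summary))  # 868
```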

[ovirt-users] Re: Gluster: a lot of entries in heal pending

2020-01-21 Thread Stefan Wolf
Hello, >I hope you plan to add another brick or arbiter, as you are now prone to >split-brain and other issues. Yes, I will add another one, but I think this is not a problem. I've set cluster.server-quorum-ratio to 51% to avoid the split-brain problem. Of course I know I just have failure

[ovirt-users] Node not starting | blk_cloned_rq_check_limits: over max size limit

2019-12-31 Thread Stefan Wolf
Hi all, I've 4 nodes running with current oVirt. I've a problem on only one host, even after a fresh installation. I've installed the latest image, then added the node to the cluster. Everything is working well. After this I configure the network. BUT, after a restart the host does not come up

[ovirt-users] Host after update NonResponsive

2020-03-13 Thread Stefan Wolf
Hello to all, I've done a normal host update via the web frontend. After the reboot the host is NonResponsive. It is an HCI setup with GlusterFS, but mount shows on this host that only engine is mounted. The data volume is not mounted. [root@kvm320 ~]# mount|grep _engine

[ovirt-users] Re: How to Backup a VM

2020-08-30 Thread Stefan Wolf
Hello, >https://github.com/silverorange/ovirt_ansible_backup I am also still using 4.3. In my opinion this is by far the best and easiest solution for disaster recovery. No need to install an appliance, and if there is a need to recover, you can import the OVA into every hypervisor - no

[ovirt-users] Re: How to Backup a VM

2020-08-30 Thread Stefan Wolf
Yes, you are right, I've already found it. But this was not really my problem. It comes from the HostedEngine: a long time ago I decreased its memory, and it seems that this was the problem. Now it seems to be working pretty well.

[ovirt-users] Re: How to Backup a VM

2020-08-31 Thread Stefan Wolf
I think I found the problem. It is case sensitivity: the export itself is NOT case sensitive, but the step "wait for export" is. I've changed it, and now it seems to be working.
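The pitfall described here can be illustrated with a toy check: matching the engine's export event against the VM name only works reliably when both sides are lower-cased first. The event text is taken from the later message in this thread; the helper name is made up for illustration:

```python
# Event description as the engine reports it (VM name as typed, mixed case).
event = 'Exporting VM VMName as an OVA to /home/backup/in_progress/VMName.ova on Host kvm360'

def export_mentions(event_text: str, vm_name: str) -> bool:
    """Case-insensitive check that the export event refers to the given VM."""
    return vm_name.lower() in event_text.lower()

print(export_mentions(event, 'vmname'))  # True -- case-insensitive match succeeds
print('vmname' in event)                 # False -- a naive exact match misses it
```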

[ovirt-users] How to Backup a VM

2020-08-29 Thread Stefan Wolf
Hello to all, I'm trying to back up a normal VM, but it seems that I don't really understand the concept. At first I found the possibility to back up with the API: https://www.ovirt.org/documentation/administration_guide/#Setting_a_storage_domain_to_be_a_backup_domain_backup_domain. Create a snapshot

[ovirt-users] Re: How to Backup a VM

2020-08-30 Thread Stefan Wolf
OK, I've run the backup three times. I still have two machines where it still fails on TASK [Wait for export]. I think the problem is not the timeout; in oVirt Engine the export has already finished: "Exporting VM VMName as an OVA to /home/backup/in_progress/VMName.ova on Host kvm360" But

[ovirt-users] Re: oVirt 4.4.1 HCI single server deployment failed nested-kvm

2020-07-30 Thread Stefan Wolf
Hello, I've the same problem. I've already set up GlusterFS and would like to deploy a self-hosted engine. I've already deployed an oVirt self-hosted engine before with no problems, but here I get the same error with hosted-engine --deploy and in the web frontend. [ INFO ] TASK [ovirt.hosted_engine_setup

[ovirt-users] Can not connect to gluster storage

2020-11-27 Thread Stefan Wolf
Hello, I've a host that can not connect to gluster storage. It has worked since I've set up the environment, and today it stopped working. These are the error messages in the web UI: The error message for connection kvm380.durchhalten.intern:/data returned by VDSM was: Failed to fetch Gluster Volume