[ovirt-users] Re: static IP assignment to VM

2021-01-23 Thread Strahil Nikolov via Users
I guess the name defined on the VM's NIC can be used for that purpose.
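One way to check that in-guest name in advance (a sketch, assuming the VM comes from a template you can boot and inspect once) is to list the interfaces inside a guest built from the same template; whatever it reports (eth0, ens3, ...) is what goes into the "In-guest Network Interface Name" field:

ip -o link show | awk -F': ' '{print $2}'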
Best Regards,
Strahil Nikolov
and in order to assign an IP using cloud-init, the "In-guest Network Interface Name" 
field should be filled in, but how can I know that name in advance?  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZMRIAKCQTZOGP7HQXJ4BA5VCAH4ISRQN/


[ovirt-users] Re: CentOS 8 is dead

2021-01-23 Thread Strahil Nikolov via Users
On 23.01.2021 (Sat) at 14:41, Florian Schmid via Users wrote:
> Hi Strahil,
> 
> thank you very much for the information.
> 
> Now the question is, will oVirt stay 100 % compatible with RHEL?
It should, but it might have issues like we got with oVirt 4.4
(cluster compatibility 4.5) and CentOS 8.3.

> As I understand it, oVirt will be developed for CentOS Stream
> and will be tested against it.
> RHEL doesn't have the same application versions as CentOS Stream,
> because Stream is newer and a way ahead of RHEL, and so oVirt will be too.
I would rather pick RHEL 8 over Stream. Stream has so many troubles right
now and I doubt that it will be as stable as I wish. I just want to 
update, reboot and forget about the nodes.

> I think we will then have the same problems that oVirt and CentOS
> had, where RHEL 8.3 was already released and CentOS 8.3 was not. Now it is
> vice versa: Stream comes first and RHEL later.
> BR Florian
Most probably. But I think upgrading from RHEL 8 to CentOS Stream 8
will be easy in case something goes bad.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2DXLM6Z4JURLN7SWNUGSNS52CQ6G4WJZ/


[ovirt-users] Re: CentOS 8 is dead

2021-01-23 Thread Strahil Nikolov via Users
For anyone interested ,

RH are extending the developer subscription for production use of up to 16 
systems [1].
For me, it's completely enough to run my oVirt nodes on EL 8.


[1] https://www.redhat.com/en/blog/new-year-new-red-hat-enterprise-linux-programs-easier-ways-access-rhel

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S5SG7IXZYSCA6MVFV6MIPJZFVDAFQ3EU/


[ovirt-users] Re: Ovirt reporting dashboard not working

2021-01-23 Thread Strahil Nikolov via Users
Try using 'source /opt/rh/rh-postgresql95/enable' instead.
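That is, the same sequence as quoted below, just pointed at the 9.5 collection (a sketch, assuming the rh-postgresql95 packages from your list):

su - postgres
source /opt/rh/rh-postgresql95/enable
psql engine
engine=# select * from dwh_history_timekeeping ;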
Best Regards,
Strahil Nikolov
On 22.01.2021 (Fri) at 19:53, José Ferradeira via Users wrote:
> The postgres is older than 10:
> postgresql-jdbc-9.2.1002-6.el7_5.noarch
> postgresql-libs-9.2.23-3.el7_4.x86_64
> rh-postgresql95-postgresql-server-9.5.9-4.el7.x86_64
> rh-postgresql95-runtime-2.2-2.el7.x86_64
> rh-postgresql95-postgresql-9.5.9-4.el7.x86_64
> rh-postgresql95-postgresql-contrib-9.5.9-4.el7.x86_64
> rh-postgresql95-postgresql-libs-9.5.9-4.el7.x86_64
> collectd-postgresql-5.8.0-3.el7.x86_64
> 
> Should I use this instead:
> su - postgres
> scl enable rh-postgresql95 bash
> psql
> Thanks
> 
> José 
> From: supo...@logicworks.pt
> To: "Strahil Nikolov" 
> Cc: "users" 
> Sent: Friday, 22 January 2021 16:39:04
> Subject: Re: [ovirt-users] Ovirt reporting dashboard not working
> 
> Hello,
> 
> # su - postgres
> -bash-4.2$ source /opt/rh/rh-postgresql10/enable
> -bash: /opt/rh/rh-postgresql10/enable: No such file or directory
> -bash-4.2$ psql engine
> -bash: psql: command not found
> -bash-4.2$
> 
> 
> What am I doing wrong?
> 
> 
> 
> Thanks
> 
> 
> 
> José
> 
> 
> De: "Strahil Nikolov" 
> Para: "users" , supo...@logicworks.pt
> Enviadas: Segunda-feira, 18 De Janeiro de 2021 19:16:17
> Assunto: Re: [ovirt-users] Ovirt reporting dashboard not working
> 
> Most probably the dwh is far in the future.
> 
> The following is not the correct procedure , but it works:
> 
> ssh root@engine
> su - postgres
> source /opt/rh/rh-postgresql10/enable
> psql engine
> 
> engine=# select * from dwh_history_timekeeping ;
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, 18 January 2021, 19:22:51 GMT+2, José Ferradeira
> via Users  wrote: 
> 
> 
> 
> 
> 
> Hello,
> 
> Had a problem with the engine server, the clock changed to 2026 and
> now I don't have any report on the dashboard.
> The version is 4.2.3.8-1.el7
> 
> Any idea?
> 
> Thanks
> 
> ___Users mailing list -- 
> users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6YETINUGJEPAZSFTK35XVZ3SM5GH2OPU/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EQ4XYEALJTHX6CGUINJLHYDFVX6RKQZ4/


[ovirt-users] Re: Ovirt reporting dashboard not working

2021-01-23 Thread Strahil Nikolov via Users
It worked for me, but my HE is 4.3.10.
Best Regards,
Strahil Nikolov
On 22.01.2021 (Fri) at 16:39, José Ferradeira via Users wrote:
> Hello,
> 
> # su - postgres
> -bash-4.2$ source /opt/rh/rh-postgresql10/enable
> -bash: /opt/rh/rh-postgresql10/enable: No such file or directory
> -bash-4.2$ psql engine
> -bash: psql: command not found
> -bash-4.2$
> 
> 
> What am I doing wrong?
> 
> 
> 
> Thanks
> 
> 
> 
> José
> 
> 
> De: "Strahil Nikolov" 
> Para: "users" , supo...@logicworks.pt
> Enviadas: Segunda-feira, 18 De Janeiro de 2021 19:16:17
> Assunto: Re: [ovirt-users] Ovirt reporting dashboard not working
> 
> Most probably the dwh is far in the future.
> 
> The following is not the correct procedure , but it works:
> 
> ssh root@engine
> su - postgres
> source /opt/rh/rh-postgresql10/enable
> psql engine
> 
> engine=# select * from dwh_history_timekeeping ;
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, 18 January 2021, 19:22:51 GMT+2, José Ferradeira
> via Users  wrote: 
> 
> 
> 
> 
> 
> Hello,
> 
> Had a problem with the engine server, the clock changed to 2026 and
> now I don't have any report on the dashboard.
> The version is 4.2.3.8-1.el7
> 
> Any idea?
> 
> Thanks
> 
> ___Users mailing list -- 
> users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XFLUOHFPYPYRID2EMMTZFE6KIYIVWB4M/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4QK5EA22DGJRRSPWVUEIBCTNPLMCYGK/


[ovirt-users] Re: For data integrity make sure that the server is configured with Quorum (both client and server Quorum)

2021-01-23 Thread Strahil Nikolov via Users
> Then I enable the quorum on the server side:
>  
> [root@gluster1 ~]# gluster volume set all cluster.server-quorum-ratio 
> 51%
> volume set: success
> [root@gluster1 ~]#
> [root@gluster1 ~]# gluster volume set volume1 cluster.server-quorum-
> type server
> volume set: success
> [root@gluster1 ~]#
> 
You don't need to do it manually; there is an option in the Admin portal
-> Storage -> Volumes -> Select Volume -> "three dots in upper right"
-> Optimize for Virt Store.
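If you prefer the CLI, the rough equivalent is to apply the gluster "virt" option group, which sets the quorum options (among others) in one go; a sketch using the volume name from your example:

gluster volume set volume1 group virt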

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7XNLI57LE2Z3K5XP2WTVJKIUUDCR2U67/


[ovirt-users] Re: VMware Fence Agent

2021-01-23 Thread Strahil Nikolov via Users
I think it's easier to get the VMware CA certificate and import it on
all hosts + engine and trust it. By default you should put it at
/etc/pki/ca-trust/source/anchors/ and then use "update-ca-trust" to
make all certs signed by the VMware vCenter's CA trusted.
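A minimal sketch, assuming the vCenter CA certificate was saved locally as vmware-ca.crt:

# repeat on every host and on the engine
cp vmware-ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust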
Best Regards,
Strahil Nikolov

On 21.01.2021 (Thu) at 06:44, Robert Tongue wrote:
> Greetings all, I am new to oVirt, and have a proof of concept setup
> with a 3-node oVirt cluster nested inside of VMware vCenter to learn
> it, so that I can efficiently migrate it back out to the physical
> nodes to replace vCenter. I have gotten all the way
> to a working cluster setup, with the exception of fencing. I used
> engine-config to pull in the vmware_soap fence agent, and enabled all
> the options, however there is one small thing I cannot figure out.
> The connection uses a self-signed certificate on the
> vCenter side, and I cannot figure out the proper combination of
> engine-config -s commands to get the script to be called with the
> "ssl-insecure" option, which does not take a value. It just needs the
> option passed. Is there anyone out there in the ether
> that can help me out? I can provide any information you request.
> Thanks in advance.
> 
> The fence agent script is called with the following syntax in my
> tests, and returned the proper status:
> 
> [root@cluster2-vm ~]# /usr/sbin/fence_vmware_soap -o status -a
> vcenter.address --username="administrator@vsphere.local" --password="
> 0bfusc@t3d" --ssl-insecure -n cluster1-vm
> 
> Status: ON
> 
> -phunyguy
> 
> ___Users mailing list -- 
> users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3PTMUPHR3ZOSQL3SEMTJPAWOAFL5ZUY2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XZRCRRKYO5IM3P4LYEORJRAKIVIAYUYP/


[ovirt-users] Re: Guest don't start : Cannot access backing file

2021-01-18 Thread Strahil Nikolov via Users
Hm... this sounds bad. If it was deleted by oVirt, it would ask you whether to 
remove the disk or not and would wipe the VM configuration.

Most probably you got data corruption there. Are you using TrueNAS?


Best Regards,
Strahil Nikolov






On Tuesday, 19 January 2021, 00:06:15 GMT+2, Lionel Caignec 
 wrote: 





Hi,

I've a big problem: I just shut down (powered off completely) a guest to do a 
cold restart. At startup the guest says: "Cannot access backing file 
'/rhev/data-center/mnt/blockSD/69348aea-7f55-41be-ae4e-febd86c33855/images/8224b2b0-39ba-44ef-ae41-18fe726f26ca/ca141675-c6f5-4b03-98b0-0312254f91e8'"
When I look from a shell on the hypervisor, the device file is blinking red...

I tried changing the SPM, looking for the device on all hosts, copying the disk, etc... no way to 
get my disk back online. It seems oVirt completely lost (deleted?) the block 
device.

Is there a way to manually dump (dd) the device on the command line in order to 
import it back into oVirt?

My environment 
Storage : SAN (managed by ovirt) 
Ovirt-engine 4.4.3.12-1.el8
Host centos 8.2
Vdsm 4.40.26

Thanks for the help; I'm stuck and it's really urgent.

--
Lionel Caignec 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VKTXX3DP5T7AXIKJJC4ZUW65N5JVXFID/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CSD3EXJSZ5GKAYQR7UYQ72GJEI7L2T3X/


[ovirt-users] Re: Ovirt reporting dashboard not working

2021-01-18 Thread Strahil Nikolov via Users
Most probably the dwh timestamp is far in the future.

The following is not the correct procedure, but it works:

ssh root@engine
su - postgres
source /opt/rh/rh-postgresql10/enable
psql engine

engine=# select * from dwh_history_timekeeping ;
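If the lastSync row there is indeed in the future, a possible (equally unofficial) follow-up, once the clock is corrected, is to reset it. The row/column names below are assumptions based on common reports, so check the select output first:

engine=# update dwh_history_timekeeping set var_datetime = now() where var_name = 'lastSync';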

Best Regards,
Strahil Nikolov






On Monday, 18 January 2021, 19:22:51 GMT+2, José Ferradeira via Users 
 wrote: 





Hello,

Had a problem with the engine server, the clock changed to 2026 and now I don't 
have any report on the dashboard.
The version is 4.2.3.8-1.el7

Any idea?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KPW5FFKG3AI6EINW4G74IKTYB2E4A5DT/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QIRHU7XXB4LUCKJ53A4TDUGLUGAICIHT/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-18 Thread Strahil Nikolov via Users
I think that it's complaining for the firewall. Try to restore with running 
firewalld.

Best Regards,
Strahil Nikolov






On Monday, 18 January 2021, 17:52:04 GMT+2, penguin pages 
 wrote: 







Following this document to redeploy the engine...

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/cleaning_up_a_failed_self-hosted_engine_deployment

### From Host which had listed engine as in its inventory ### 
[root@medusa ~]# /usr/sbin/ovirt-hosted-engine-cleanup
This will de-configure the host to run ovirt-hosted-engine-setup from scratch.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Destroy hosted-engine VM ===-
error: failed to get domain 'HostedEngine'

  -=== Stop HA services ===-
  -=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
  -=== Disconnecting the hosted-engine storage domain ===-
  -=== De-configure VDSM networks ===-
ovirtmgmt
A previously configured management bridge has been found on the system, this 
will try to de-configure it. Under certain circumstances you can loose network 
connection.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Stop other services ===-
Warning: Stopping libvirtd.service, but it can still be activated by:
  libvirtd.socket
  libvirtd-ro.socket
  libvirtd-admin.socket
  -=== De-configure external daemons ===-
Removing database file /var/lib/vdsm/storage/managedvolume.db
  -=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
- removing /etc/ovirt-hosted-engine/hosted-engine.conf
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-key.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
- removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-key.pem
- removing /etc/pki/CA/cacert.pem
- removing /etc/pki/libvirt/clientcert.pem
- removing /etc/pki/libvirt/private/clientkey.pem
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
? /var/tmp/localvm* already missing
  -=== Removing IP Rules ===-
[root@medusa ~]# 
[root@medusa ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
          During customization use CTRL-D to abort.
          Continuing will configure this host for serving as hypervisor and 
will create a local VM with a running engine.
          The locally running engine will be used to configure a new storage 
domain and create a VM there.


1) Error about firewall
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 
'firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'' failed. The error was: error while evaluating conditional 
(firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'): 'dict object' has no attribute 'SubState'\n\nThe error appears to be 
in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml':
 line 8, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n    register: 
firewalld_s\n  - name: Enforce firewalld status\n    ^ here\n"}

###  Hmm.. that is dumb.. it's disabled to avoid issues
[root@medusa ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
  Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor 
preset: enabled)
  Active: inactive (dead)
    Docs: man:firewalld(1)


2) Error about ssh to host ovirte01.penguinpages.local 
[ ERROR ] fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed 
to connect to the host via ssh: ssh: connect to host 
ovirte01.penguinpages.local port 22: No route to host", "skip_reason": "Host 
localhost is unreachable", "unreachable": true}

###.. Hmm.. well.. no kidding.. it is supposed to deploy the engine so the IP should 
be offline till it does it.  And as the VMs that run DNS are down.. I am using a hosts 
file to ignite the environment.  Not sure what it expects 
[root@medusa ~]# cat /etc/hosts |grep ovir
172.16.100.31 ovirte01.penguinpages.local ovirte01



Did not go well. 

Attached is deployment details as well as logs. 

Maybe someone can point out what I am doing wrong.  Last time I did this I did 
the HCI 

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Strahil Nikolov via Users
Most probably it will be easier if you stick with a full-blown distro.

@Sandro Bonazzola can help with the CEPH status.

Best Regards,
Strahil Nikolov






On Monday, 18 January 2021, 11:44:32 GMT+2, Shantur Rathore 
 wrote: 





Thanks Strahil for your reply.

Sorry just to confirm,

1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph changes?

Thanks,
Shantur

On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
wrote:
> On 17.01.2021 (Sun) at 15:51, Shantur Rathore wrote:
>> Hi Strahil,
>> 
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>> 
>> The reason why Ceph appeals me over Gluster because of the following reasons.
>> 
>> 1. I have more experience with Ceph than Gluster.
> That is a good reason to pick CEPH.
>> 2. I heard in Managed Block Storage presentation that it leverages storage 
>> software to offload storage related tasks. 
>> 3. Adding Gluster storage limits to 3 hosts at a time.
> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
> as many as you wish as a compute node (won't be part of Gluster) and later 
> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>> such limitation if I go via Ceph.
> Actually , it's about Red Hat support for RHHI and not for Gluster + oVirt. 
> As both oVirt and Gluster ,that are used, are upstream projects, support is 
> on best effort from the community.
>> In my initial testing I was able to enable Centos repositories in Node Ng 
>> but if I remember correctly, there were some librbd versions present in Node 
>> Ng which clashed with the version I was trying to install.
>> Does Ceph hyperconverge still make sense?
> Yes it is. You got the knowledge to run the CEPH part, yet consider talking 
> with some of the devs on the list - as there were some changes recently in 
> oVirt's support for CEPH.
> 
>> Regards
>> Shantur
>> 
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
>> wrote:
>>> Hi Shantur,
>>> 
>>> the main question is how many nodes you have.
>>> Ceph integration is still in development/experimental and it should be wise 
>>> to consider Gluster also. It has a great integration and it's quite easy to 
>>> work with).
>>> 
>>> 
>>> There are users reporting using CEPH with their oVirt , but I can't tell 
>>> how good it is.
>>> I doubt that oVirt nodes come with CEPH components , so you most probably 
>>> will need to use a full-blown distro. In general, using extra software on 
>>> oVirt nodes is quite hard .
>>> 
>>> With such setup, you will need much more nodes than a Gluster setup due to 
>>> CEPH's requirements.
>>> 
>>> Best Regards,
>>> Strahil Nikolov
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore 
>>>  wrote: 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Hi all,
>>> 
>>> I am planning my new oVirt cluster on Apple hosts. These hosts can only 
>>> have one disk which I plan to partition and use for hyper converged setup. 
>>> As this is my first oVirt cluster I need help in understanding few bits.
>>> 
>>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only 
>>> Centos?
>>> 3. Can I install cinderlib on oVirt Node Next hosts?
>>> 4. Are there any pit falls in such a setup?
>>> 
>>> 
>>> Thanks for your help
>>> 
>>> Regards,
>>> Shantur
>>> 
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community

[ovirt-users] Re: VG issue / Non Operational

2021-01-18 Thread Strahil Nikolov via Users
Are you sure that oVirt doesn't still use it (storage domains)?

Best Regards,
Strahil Nikolov






On Monday, 18 January 2021, 09:11:18 GMT+2, Christian Reiss 
 wrote: 





Update:

I found out that 4a62cdb4-b314-4c7f-804e-8e7275518a7f is an iSCSI target 
outside of gluster. It is a test that we do not need anymore but we can't 
remove. According to

[root@node03 ~]# iscsiadm -m session
tcp: [1] 10.100.200.20:3260,1 iqn.2005-10.org.freenas.ctl:ovirt-data 
(non-flash)

it's attached, but something is still missing...


On 17/01/2021 11:45, Strahil Nikolov via Users wrote:
> What is the output of 'lsblk -t' on all nodes ?
> 
> Best Regards,
> Strahil Nikolov
> 
> On 17.01.2021 (Sun) at 11:19 +0100, Christian Reiss wrote:
>> Hey folks,
>>
>> quick (I hope) question: On my 3-node cluster I am swapping out all
>> the
>> SSDs with fewer but higher capacity ones. So I took one node down
>> (maintenance, stop), then removed all SSDs, set up a new RAID, set
>> up
>> lvm and gluster, let it resync. Gluster health status shows no
>> unsynced
>> entries.
>>
>> Uppon going from maintenance to online from ovirt management It goes
>> into non-operational status, vdsm log on the node shows:
>>
>> 2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] START
>> getAllVmStats() from=::1,48580 (api:48)
>> 2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] FINISH
>> getAllVmStats return={'status': {'message': 'Done', 'code': 0},
>> 'statsList': (suppressed)} from=::1,48580 (api:54)
>> 2021-01-17 11:13:29,052+0100 INFO  (jsonrpc/6)
>> [jsonrpc.JsonRpcServer]
>> RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
>> 2021-01-17 11:13:30,420+0100 WARN  (monitor/4a62cdb) [storage.LVM]
>> Reloading VGs failed (vgs=[u'4a62cdb4-b314-4c7f-804e-8e7275518a7f']
>> rc=5
>> out=[] err=['  Volume group "4a62cdb4-b314-4c7f-804e-8e7275518a7f"
>> not
>> found', '  Cannot process volume group
>> 4a62cdb4-b314-4c7f-804e-8e7275518a7f']) (lvm:470)
>> 2021-01-17 11:13:30,424+0100 ERROR (monitor/4a62cdb)
>> [storage.Monitor]
>> Setting up monitor for 4a62cdb4-b314-4c7f-804e-8e7275518a7f failed
>> (monitor:330)
>> Traceback (most recent call last):
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line
>> 327, in _setupLoop
>>      self._setupMonitor()
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line
>> 349, in _setupMonitor
>>      self._produceDomain()
>>    File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159,
>> in
>> wrapper
>>      value = meth(self, *a, **kw)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line
>> 367, in _produceDomain
>>      self.domain = sdCache.produce(self.sdUUID)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 110, in produce
>>      domain.getRealDomain()
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 51,
>> in getRealDomain
>>      return self._cache._realProduce(self._sdUUID)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 134, in _realProduce
>>      domain = self._findDomain(sdUUID)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 151, in _findDomain
>>      return findMethod(sdUUID)
>>    File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
>> 176, in _findUnfetchedDomain
>>      raise se.StorageDomainDoesNotExist(sdUUID)
>>
>>
>> I assume due to the changed LVM UUID it fails, right? Can I someone
>> fix/change the UUID and get the node back up again? It does not seem
>> to
>> be a major issue, to be honest.
>>
>> I can see the gluster mount (what ovirt mounts when it onlines a
>> node)
>> already, and gluster is happy too.
>>
>> Any help is appreciated!
>>
>> -Chris.
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LS4OEXZOE6UA4CDQYXFKC3TZCCO42SU4/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-l

[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Strahil Nikolov via Users
On 17.01.2021 (Sun) at 15:51, Shantur Rathore wrote:
> Hi Strahil,
> Thanks for your reply, I have 16 nodes for now but more on the way.
> 
> The reason why Ceph appeals me over Gluster because of the following
> reasons.
> 
> 1. I have more experience with Ceph than Gluster.
That is a good reason to pick CEPH.
> 2. I heard in Managed Block Storage presentation that it leverages
> storage software to offload storage related tasks. 
> 3. Adding Gluster storage limits to 3 hosts at a time.
Only if you wish the nodes to be both Storage and Compute. Yet, you can
add as many as you wish as a compute node (won't be part of Gluster)
and later you can add them to the Gluster TSP (this requires 3 nodes at
a time).
> 4. I read that there is a limit of maximum 12 hosts in Gluster setup.
> No such limitation if I go via Ceph.
Actually, it's about Red Hat support for RHHI and not for Gluster +
oVirt. As both oVirt and Gluster, as used here, are upstream
projects, support is on a best-effort basis from the community.
> In my initial testing I was able to enable Centos repositories in
> Node Ng but if I remember correctly, there were some librbd versions
> present in Node Ng which clashed with the version I was trying to
> install.
> Does Ceph hyperconverge still make sense?
Yes it is. You got the knowledge to run the CEPH part, yet consider
talking with some of the devs on the list - as there were some changes
recently in oVirt's support for CEPH.
> Regards
> Shantur
> 
> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users <
> users@ovirt.org> wrote:
> > Hi Shantur,
> > 
> > the main question is how many nodes you have.
> > Ceph integration is still in development/experimental and it should
> > be wise to consider Gluster also. It has a great integration and
> > it's quite easy to work with.
> > 
> > There are users reporting using CEPH with their oVirt, but I can't
> > tell how good it is.
> > I doubt that oVirt nodes come with CEPH components, so you most
> > probably will need to use a full-blown distro. In general, using
> > extra software on oVirt nodes is quite hard.
> > 
> > With such setup, you will need much more nodes than a Gluster setup
> > due to CEPH's requirements.
> > 
> > Best Regards,
> > Strahil Nikolov
> > 
> > On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore <
> > shantur.rath...@gmail.com> wrote: 
> > 
> > Hi all,
> > 
> > I am planning my new oVirt cluster on Apple hosts. These hosts can
> > only have one disk which I plan to partition and use for hyper
> > converged setup. As this is my first oVirt cluster I need help in
> > understanding few bits.
> > 
> > 1. Is Hyper converged setup possible with Ceph using cinderlib?
> > 2. Can this hyper converged setup be on oVirt Node Next hosts or
> > only Centos?
> > 3. Can I install cinderlib on oVirt Node Next hosts?
> > 4. Are there any pit falls in such a setup?
> > 
> > Thanks for your help
> > 
> > Regards,
> > Shantur
> > 
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
> > 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBXGXZVXAIUDS2O675QAXZRTSULPD2S/


[ovirt-users] Re: VG issue / Non Operational

2021-01-17 Thread Strahil Nikolov via Users
What is the output of 'lsblk -t' on all nodes?

Best Regards,
Strahil Nikolov

On 17.01.2021 (Sun) at 11:19 +0100, Christian Reiss wrote:
> Hey folks,
> 
> quick (I hope) question: On my 3-node cluster I am swapping out all
> the 
> SSDs with fewer but higher capacity ones. So I took one node down 
> (maintenance, stop), then removed all SSDs, set up a new RAID, set
> up 
> lvm and gluster, let it resync. Gluster health status shows no
> unsynced 
> entries.
> 
> Upon going from maintenance to online from oVirt management it goes 
> into non-operational status; the vdsm log on the node shows:
> 
> 2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] START 
> getAllVmStats() from=::1,48580 (api:48)
> 2021-01-17 11:13:29,051+0100 INFO  (jsonrpc/6) [api.host] FINISH 
> getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 
> 'statsList': (suppressed)} from=::1,48580 (api:54)
> 2021-01-17 11:13:29,052+0100 INFO  (jsonrpc/6)
> [jsonrpc.JsonRpcServer] 
> RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
> 2021-01-17 11:13:30,420+0100 WARN  (monitor/4a62cdb) [storage.LVM] 
> Reloading VGs failed (vgs=[u'4a62cdb4-b314-4c7f-804e-8e7275518a7f']
> rc=5 
> out=[] err=['  Volume group "4a62cdb4-b314-4c7f-804e-8e7275518a7f"
> not 
> found', '  Cannot process volume group 
> 4a62cdb4-b314-4c7f-804e-8e7275518a7f']) (lvm:470)
> 2021-01-17 11:13:30,424+0100 ERROR (monitor/4a62cdb)
> [storage.Monitor] 
> Setting up monitor for 4a62cdb4-b314-4c7f-804e-8e7275518a7f failed 
> (monitor:330)
> Traceback (most recent call last):
>File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
> line 
> 327, in _setupLoop
>  self._setupMonitor()
>File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
> line 
> 349, in _setupMonitor
>  self._produceDomain()
>File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159,
> in 
> wrapper
>  value = meth(self, *a, **kw)
>File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
> line 
> 367, in _produceDomain
>  self.domain = sdCache.produce(self.sdUUID)
>File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 
> 110, in produce
>  domain.getRealDomain()
>File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line
> 51, 
> in getRealDomain
>  return self._cache._realProduce(self._sdUUID)
>File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 
> 134, in _realProduce
>  domain = self._findDomain(sdUUID)
>File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 
> 151, in _findDomain
>  return findMethod(sdUUID)
>File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 
> 176, in _findUnfetchedDomain
>  raise se.StorageDomainDoesNotExist(sdUUID)
> 
> 
> I assume it fails due to the changed LVM UUID, right? Can someone 
> fix/change the UUID and get the node back up again? It does not seem
> to be a major issue, to be honest.
> 
> I can see the gluster mount (what ovirt mounts when it onlines a
> node) 
> already, and gluster is happy too.
> 
> Any help is appreciated!
> 
> -Chris.
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LS4OEXZOE6UA4CDQYXFKC3TZCCO42SU4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3IYV67D7JBE3CK6SQ7SEUUNRVWU5QNYK/


[ovirt-users] Re: Problem with Ovirt

2021-01-17 Thread Strahil Nikolov via Users
Hi,

can you share what procedure/steps you have implemented and when the issue 
occurs?

Best Regards,
Strahil Nikolov






On Sunday, 17 January 2021, 10:40:57 GMT+2, Keith Forman via Users 
 wrote: 





Hi
Need help with setting up Ovirt.
engine-setup run successfully on CentOS 7, with PostgreSQL 12.

Now when running systemctl start ovirt-engine , it starts successfully, but in 
the frontend, it says 404 Not Found. 

In engine.log, I see the following 2 errors:
Error initializing: Unable to determine the correct call signature - no 
procedure/function/signature for 'getallfrommacpools'
and
Error in getting DB connection, database is inaccessible: Unable to determine 
the correct call signature - no procedure/function/signature for 
'checkdbconnection'

Any tips as to the way forward would be much appreciated
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4LUKTJ2NCYMKPXYCQHQQTEFVQVESAZVY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5HZZDZKGKAFYTNVNMIYA44SFZ3F3ATSZ/


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Strahil Nikolov via Users
Hi Shantur,

the main question is how many nodes you have.
Ceph integration is still in development/experimental, so it would be wise to 
consider Gluster also. It has a great integration and it's quite easy to work 
with.


There are users reporting using CEPH with their oVirt, but I can't tell how 
good it is.
I doubt that oVirt nodes come with CEPH components, so you most probably will 
need to use a full-blown distro. In general, using extra software on oVirt 
nodes is quite hard.

With such setup, you will need much more nodes than a Gluster setup due to 
CEPH's requirements.

Best Regards,
Strahil Nikolov






On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore 
 wrote: 





Hi all,

I am planning my new oVirt cluster on Apple hosts. These hosts can only have 
one disk which I plan to partition and use for hyper converged setup. As this 
is my first oVirt cluster I need help in understanding few bits.

1. Is Hyper converged setup possible with Ceph using cinderlib?
2. Can this hyper converged setup be on oVirt Node Next hosts or only Centos?
3. Can I install cinderlib on oVirt Node Next hosts?
4. Are there any pit falls in such a setup?


Thanks for your help

Regards,
Shantur

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/


[ovirt-users] Re: ovirt-ha-agent

2021-01-16 Thread Strahil Nikolov via Users
On 16.01.2021 (Sat) at 02:29, Ariez Ahito wrote:
> Last Dec I installed hosted-engine and it seemed to be working; we could
> migrate the engine to a different host. But we needed to reinstall
> everything because of additional gluster configuration.
> So we installed hosted-engine again. But as per checking, we cannot
> migrate the engine to other hosts, and the ovirt-ha-agent and ovirt-
> ha-broker services are inactive (dead). What are we missing?

First fix the 2 services before you can move it. Check /var/log/ovirt-
hosted-engine-ha/{agent,broker}.log for clues. Start with the broker
first.
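A quick sketch of checking them in that order (service and log names as above):

systemctl restart ovirt-ha-broker
systemctl restart ovirt-ha-agent
systemctl status ovirt-ha-broker ovirt-ha-agent
tail -f /var/log/ovirt-hosted-engine-ha/broker.log /var/log/ovirt-hosted-engine-ha/agent.log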

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7L4J2LFJKWNB3F5IKXNV4JJYG2MKEEAF/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-16 Thread Strahil Nikolov via Users

> [root@medusa qemu]# virsh define /tmp/ns01.xml
> Please enter your authentication name: admin
> Please enter your password:
> Domain ns01 defined from /tmp/ns01.xml
> 
> [root@medusa qemu]# virsh start /tmp/ns01.xml
> Please enter your authentication name: admin
> Please enter your password:
> error: failed to get domain '/tmp/ns01.xml'
> 
> [root@medusa qemu]#

When you define the file, you start the VM by name.

After defining run 'virsh list'.
Based on your xml you should use 'virsh start ns01'.

Notice: As you can see, my HostedEngine uses '/var/run/vdsm/' instead
of '/rhev/data-center/mnt/glusterSD/...', which is actually just a
symbolic link.


[disk element from the HostedEngine XML omitted; the tags were stripped by the list archive and only the serial 8ec7a465-151e-4ac3-92a7-965ecf854501 survives]

When you start the HE, it might complain about that path missing, so you have
to create it.

If it complains about network vdsm-ovirtmgmt missing, you can also
define it  via virsh:
# cat vdsm-ovirtmgmt.xml
<network>
  <name>vdsm-ovirtmgmt</name>
  <uuid>8ded486e-e681-4754-af4b-5737c2b05405</uuid>
  <forward mode='bridge'/>
  <bridge name='ovirtmgmt'/>
</network>

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SOMUSIHCIKJRQDIUEL5VUZHFJOXH4BV6/


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread Strahil Nikolov via Users

> 
> Questions:
> 1) I have two important VMs that have snapshots that I need to boot
> up.  Is there a means with an HCI configuration to manually start the
> VMs without the oVirt engine being up?
What worked for me was:
1) Start the VM via "virsh":
define a virsh alias:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-
engine/virsh_auth.conf'
Check the host's vdsm.log where the VM was last started - you will
find the VM's xml inside.
Copy the whole xml and use virsh to define the VM: "virsh define
myVM.xml && virsh start myVM"
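A rough sketch of pulling that xml out of the log (assuming the VM's last start is still in the current vdsm.log; myVM.xml is just an example file name):

grep -n '<domain ' /var/log/vdsm/vdsm.log      # locate the VM's xml
sed -n '/<domain /,/<\/domain>/p' /var/log/vdsm/vdsm.log > myVM.xml
virsh define myVM.xml && virsh start myVM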

2) vdsm-client most probably can start VMs even when the engine is down
> 2) Is there a means to debug what is going on with the engine failing
> to start, to repair it (I hate reloading as the only fix for systems)?
You can use "hosted-engine" to start the HostedEngine VM in paused mode
. Then you can connect over spice/vnc and then unpause the VM. Booting
the HostedEngine VM from DVD is a little bit harder. You will need to
get the HE's xml and edit it to point to the DVD. Once you got the
altered HE config , you can define and start.
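A minimal sketch of that sequence (verify the exact flags with 'hosted-engine --help' on your version):

hosted-engine --vm-start-paused
hosted-engine --add-console-password    # set a temporary console password
hosted-engine --console                 # or remote-viewer, then unpause:
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine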
> 3) Is there a means to re-deploy the HCI setup wizard, but use the
> "engine" volume and so retain the VMs and templates?
You are not expected to mix HostedEngine and other VMs on the same
storage domain (gluster volume).

Best Regards,
Strahil Nikolov
> 
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KPURBXOULJ7NPFS7LTTXQI3O5QRVHHY3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5ZKGRY63OZSEIQVSZAKTFX4EX4EJOI3/


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-14 Thread Strahil Nikolov via Users
Can you share both ovirt and gluster logs ?

Best Regards,
Strahil Nikolov






On Thursday, 14 January 2021, 20:18:03 GMT+2, Charles Lam 
 wrote: 





Thank you Strahil.  I have installed/updated:

dnf install --enablerepo="baseos" --enablerepo="appstream" 
--enablerepo="extras" --enablerepo="ha" --enablerepo="plus" 
centos-release-gluster8.noarch centos-release-storage-common.noarch

dnf upgrade --enablerepo="baseos" --enablerepo="appstream" 
--enablerepo="extras" --enablerepo="ha" --enablerepo="plus"

Cleaned and re-ran Ansible.  Still receiving the same (below).  As always, if 
you or anyone else has any ideas for troubleshooting - 

Gratefully,
Charles

TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **
task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine', 'brick': 
'/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": true, "cmd": ["gluster", "volume", "heal", "engine", 
"granular-entry-heal", "enable"], "delta": "0:00:10.100254", "end": "2021-01-14 
18:07:16.192067", "item": {"arbiter": 0, "brick": 
"/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "non-zero return 
code", "rc": 107, "start": "2021-01-14 18:07:06.091813", "stderr": "", 
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute 
the command again after bringing all bricks online and finishing any pending 
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be 
down. Please execute the command again after bringing all bricks online and 
finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick': 
'/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": true, "cmd": ["gluster", "volume", "heal", "data", 
"granular-entry-heal", "enable"], "delta": "0:00:10.103147", "end": "2021-01-14 
18:07:31.431419", "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", 
"volname": "data"}, "msg": "non-zero return code", "rc": 107, "start": 
"2021-01-14 18:07:21.328272", "stderr": "", "stderr_lines": [], "stdout": "One 
or more bricks could be down. Please execute the command again after bringing 
all bricks online and finishing any pending heals\nVolume heal failed.", 
"stdout_lines": ["One or more bricks could be down. Please execute the command 
again after bringing all bricks online and finishing any pending heals", 
"Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore', 'brick': 
'/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var": 
"item", "changed": true, "cmd": ["gluster", "volume", "heal", "vmstore", 
"granular-entry-heal", "enable"], "delta": "0:00:10.102582", "end": "2021-01-14 
18:07:46.612788", "item": {"arbiter": 0, "brick": 
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": "non-zero 
return code", "rc": 107, "start": "2021-01-14 18:07:36.510206", "stderr": "", 
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute 
the command again after bringing all bricks online and finishing any pending 
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be 
down. Please execute the command again after bringing all bricks online and 
finishing any pending heals", "Volume heal failed."]}

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDLFPRPYPAY3UH2R4PVFL5XG4IKOERYP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7TPWJ2ZWBNAFLHRRG62QSIZOYH6NWQJQ/


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-13 Thread Strahil Nikolov via Users
As those are brand new,

try to install the gluster v8 repo and update the nodes to 8.3  and
then rerun the deployment:

yum install centos-release-gluster8.noarch
yum update

Best Regards,
Strahil Nikolov

On 13.01.2021 (Wed) at 23:37, Charles Lam wrote:
> Dear Friends:
> 
> I am still stuck at
> 
> task path:
> /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volum
> es.yml:67
> "One or more bricks could be down. Please execute the command again
> after bringing all bricks online and finishing any pending heals",
> "Volume heal failed."
> 
> I refined /etc/lvm/lvm.conf to:
> 
> filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-F1kxJk-F1wV-QqOR-Tbb1-Pefh-
> 4vod-IVYaz6$|", "a|^/dev/nvme.n1|", "a|^/dev/dm-1.|", "r|.*|"]
> 
> and have also rebuilt the servers again.  The output of gluster
> volume status shows bricks up but no ports for self-heal daemon:
> 
> [root@fmov1n2 ~]# gluster volume status data
> Status of volume: data
> Gluster process TCP Port  RDMA
> Port  Online  Pid
> ---
> ---
> Brick host1.company.com:/gluster_bricks
> /data/data  49153 0  Y   
> 244103
> Brick host2.company.com:/gluster_bricks
> /data/data  49155 0  Y   
> 226082
> Brick host3.company.com:/gluster_bricks
> /data/data  49155 0  Y   
> 225948
> Self-heal Daemon on
> localhost   N/A   N/AY   224255
> Self-heal Daemon on
> host2.company.com   N/A   N/AY   233992
> Self-heal Daemon on
> host3.company.com   N/A   N/AY   224245
> 
> Task Status of Volume data
> ---
> ---
> There are no active volume tasks
> 
> The output of gluster volume heal  info shows connected to
> the local self-heal daemon but transport endpoint is not connected to
> the two remote daemons.  This is the same for all three hosts.
> 
> I have followed the solutions here: 
> https://access.redhat.com/solutions/5089741
> and also here: https://access.redhat.com/solutions/3237651
> 
> with no success.
> 
> I have changed to a different DNS/DHCP server and still have the same
> issues.  Could this somehow be related to the direct cabling for my
> storage/Gluster network (no switch)?  /etc/nsswitch.conf is set to
> files dns and pings all work, but dig does not for storage (I
> understand this is to be expected).
> 
> Again, as always, any pointers or wisdom is greatly appreciated.  I
> am out of ideas.
> 
> Thank you!
> Charles
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OE7EUSWMBTRINHCSBQAXCI6L25K6D2OY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LJSJ74C5RV5EZ62GZUWGVQAXODJHK2TA/


[ovirt-users] Re: Q: New oVirt Node - CentOS 8 or Stream ?

2021-01-13 Thread Strahil Nikolov via Users
On 13.01.2021 (Wed) at 17:50 +0200, Andrei Verovski wrote:
> Hi,
> 
> 
> I'm currently adding a new oVirt node to an existing 4.4 setup.
> Which underlying OS version would you recommend for long-term
> deployment - CentOS 8 or Stream?

Stream is not used by all RH teams, while CentOS 8 will be dead soon.
In both cases it's not nice. If you need to add the node now, use CentOS 8
and later convert it to Stream.
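The conversion itself is short; a sketch of the procedure documented by the CentOS project at the time (run it only when you are ready to move the node to Stream):

dnf swap centos-linux-repos centos-stream-repos
dnf distro-sync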

> I don’t use pre-built node ISO since I have a number of custom
> scripts running on node host OS.
> 
> Thanks in advance.
> Andrei

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5RXCJ6T4C3BJWWEEHD72YQOJZ2MJPPSY/


[ovirt-users] Re: VM console does not work with new cluster.

2021-01-13 Thread Strahil Nikolov via Users
bmmjx2pqw
> LrZXX8Q5NO2MRKOTs0Dtg16Q6z+a3cXLIffVJfhPGS3AkIh6nznNaDeH5gFZZbd\nr3DK
> E4xrpdw/7y6CgjmHe4vwGxOIyE+gElZ/lVtqznLMwohz7wgtgsDA36277mujNyMjMbrSF
> heu\n5WfbIa9VVSZWEkISVq6eswLOQ1IRaFyJsFN9AgMBAAGjga0wgaowHQYDVR0OBBYE
> FDYEqJOMqN8+\nQhCP7DAkqF3RZMFdMGgGA1UdIwRhMF+AFDYEqJOMqN8+QhCP7DAkqF3
> RZMFdoUOkQTA/MQswCQYD\nVQQGEwJVUzERMA8GA1UECgwIdGx0ZC5jb20xHTAbBgNVBA
> MMFG9vZW5nLnRsdGQuY29tLjE3NzMw\nggIQADAPBgNVHRMBAf8EBTADAQH/MA4GA1UdD
> wEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEA\nAKs0/yQWkoOkGcL0PjF9ijekdMmj
> rLZGyh5uLot7h9s/Y2+5l9n9IzEjjx9chi8xwt6MBsR6/nBT\n/skcciv2veM22HwNGjd
> rHvhfbZFnZsGe2TU60kGzKjlv1En/8Pgd2aWBcwTlr+SErBXkehNEJRj9\n1saycPgwS4
> pHS04c2+4JMhpe+hxgsO2+N/SYkP95Lf7ZQynVsN/SKx7X3cWybErCqoB7G7McqaHN\nV
> Ww+QNXo5islWUXqeDc3RcnW3kq0XUEzEtp6hoeRcLKO99QrAW31zqU/QY+EeZ6Fax1O/j
> rDafZn\npTs0KJFNgeVnUhKanB29ONy+tmnUmTAgPMaKKw==\n-END
> CERTIFICATE-\n
> 
> 
> the firewall list of the host ohost1.tltd.com(192.168.10.160) is:
> 
> [root@ohost1 ~]# firewall-cmd --list-all public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: bond0 ovirtmgmt
>   sources:
>   services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-
> vmconsole snmp ssh vdsm
>   ports: 22/tcp 6081/udp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>   rich rules:
> 
> 
> Please give me some advice, thanks.
> 
> -Original Message-
> From: users-boun...@ovirt.org  On Behalf Of
> Strahil Nikolov via Users
> Sent: Wednesday, January 13, 2021 3:15 AM
> To: matthew.st...@fujitsu.com; eev...@digitaldatatechs.com; 
> users@ovirt.org
> Subject: [ovirt-users] Re: VM console does not work with new cluster.
> 
> 
> > It’s just that once the VM has been moved to the new cluster, 
> > selecting console results in the same behavior, but that virt-
> > viewer 
> > starts and stops within a second.
> 
> In order to debug, you will need to compare the files provided when
> you press the "console" button from both clusters and identify the
> problem.
> 
> Have you compared the firewalld ports on 2 nodes (old and new
> cluster) ?
> 
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy
> Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3U5ZIELTUSPKT6KZ7UZWWFCDRNCF5YLN/
> 
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E2VHPU7BMV4FBQ5QTEFXT7GICGUOMXWQ/


[ovirt-users] Re: VM console does not work with new cluster.

2021-01-12 Thread Strahil Nikolov via Users

> It’s just that once the VM has been moved to the new cluster,
> selecting console results in the same behavior, but that virt-viewer
> starts and stops within a second.

In order to debug, you will need to compare the files provided when you
press the "console" button from both clusters and identify the problem.

Have you compared the firewalld ports on 2 nodes (old and new cluster)
?

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3U5ZIELTUSPKT6KZ7UZWWFCDRNCF5YLN/


[ovirt-users] Re: potential split-brain after upgrading Gluster version and rebooting one of three storage nodes

2021-01-12 Thread Strahil Nikolov via Users
В 12:13 + на 12.01.2021 (вт), user-5...@yandex.com написа:
> > newest file (there is a timestamp inside it) and then rsync that
> > file
> They are binary files and I can't seem to find a timestamp. The file
> consists of 2000 lines, where would I find this timestamp and what
> does it look like? Or do you mean the linux mtime?
I got confused by the name... I just checked on my cluster and it's
binary.
Just "stat" the file from the mount point in /rhev/data-
center/mnt/glusterSD/:_ mount point and it should get
healed.
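For example, something like this (server, volume and relative file path are placeholders):

  stat /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<path-to-file>

Reading the file through the FUSE mount is what triggers the heal.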

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L6M2DER4QFYVSL4DM32UA5BKHEE5RJ4N/


[ovirt-users] Re: v2v-conversion-host-wrapper binary missing, unable to perform host conversion task

2021-01-12 Thread Strahil Nikolov via Users
В 09:41 + на 12.01.2021 (вт), dhanaraj.ramesh--- via Users написа:
> v2v-conversion-host-wrapper

I think the package names are virt-v2v and virt-v2v-wrapper.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WZOSGZFJICJ4W6JLK7M3REM26HVHJNMX/


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-11 Thread Strahil Nikolov via Users
> I tried Gluster deployment after cleaning within the Cockpit web
> console, using the suggested ansible-playbook and fresh image with
> oVirt Node v4.4 ISO.  Ping from each host to the other two works for
> both mgmt and storage networks.  I am using DHCP for management
> network, hosts file for direct connect storage network.
> 
I've tested the command on a test Gluster 8.3 cluster and it passes.
Have you checked the gluster logs in '/var/log/glusterfs' ?
I know that there is LVM filtering on oVirt 4.4 enabled, so can you
take a look in the lvm conf :
grep -Ev "^$|#" /etc/lvm/lvm.conf  | grep filter
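For reference, an oVirt 4.4 host typically ends up with a line roughly like this (the PV UUID is a placeholder), and 'vdsm-tool config-lvm-filter' can generate/verify it:

  filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-<PV-UUID>$|", "r|.*|" ]
  # to check or apply the recommended filter:
  vdsm-tool config-lvm-filter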


VDO is using lvm.conf too, so it could cause strange issues. What
happens when the deployment fails and you rerun (ansible should be
idempotent) ?


Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3VUNWID2KFMFHOUTSMEE7OZZROQYSK3A/


[ovirt-users] Re: potential split-brain after upgrading Gluster version and rebooting one of three storage nodes

2021-01-11 Thread Strahil Nikolov via Users
В 14:48 -0400 на 11.01.2021 (пн), Jayme написа:
> Correct me if I'm wrong but according to the docs, there might be a
> more elegant way of doing something similar with gluster cli ex:
> gluster volume heal <VOLNAME> split-brain latest-mtime <FILE> --
> although I have never tried it myself.
True... yet rsync is far simpler for most users ;)
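For reference, the CLI form would be roughly this (volume and file names are placeholders, untested here as well):

  gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>
  gluster volume heal <VOLNAME> info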

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O67I3V7Z3YFG7KYWNAU3MB4CJT2FCORK/


[ovirt-users] Re: potential split-brain after upgrading Gluster version and rebooting one of three storage nodes

2021-01-11 Thread Strahil Nikolov via Users

> Is this a split brain situation and how can I solve this? I would be
> very grateful for any help.
I've seen it before. Just check on the nodes which brick contains the
newest file (there is a timestamp inside it) and then rsync that file
from the node with newest version to the rest.
If gluster keeps showing that the file is still needing heal - just
"cat" it from the FUSE client (the mountpoint in /rhev/).

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2GLJIZQLFUZFSIVAVFFNG4CJZHNY7HFP/


[ovirt-users] Re: VM console does not work with new cluster.

2021-01-10 Thread Strahil Nikolov via Users
This sounds like a firewall issue.

Best Regards,
Strahil Nikolov






В понеделник, 11 януари 2021 г., 01:05:47 Гринуич+2, Matthew Stier 
 написа: 





I've added several new hosts to my data center, and instead of adding them to 
my 'default' cluster, I created a new cluster ('two').

I can access the VM console (virt-viewer) when the VM is on the 'default' 
cluster, but once moved to the the new cluster, virt-viewer pops up and 
disappears in sub-second fashion.  Moving the VM back to 'default', restores 
access.

I'm running virt-viewer 9.0-256 on my Windows 10 Pro laptop. (for months)

I'm running Oracle Linux Virtualization Manager on Oracle Linux 7u9 hosts.

The new ('two') cluster hosts are connected to the same networks as the 
original systems ('default'). (They have the same assigned IP address.)

Firewalld is reporting the same ports and services enabled on 'default' and 
'two' hosts.  The VM's are not running firewalls.

Apparently I'm missing something.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AAKYLXLBV2FCL5TLYT32IY6IT4G43WLH/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XJZFU2ZLOYMFRALXP6D2ILYRNUBYVKTO/


[ovirt-users] Re: [ovirt-announce] [ANN] oVirt 4.4.4 is now generally available

2021-01-10 Thread Strahil Nikolov via Users
Hi Bernardo,

I think that when CentOS Stream 9 (and all EL 9 clones) come up - oVirt will 
switch , so I think it's worth trying the Stream (but no earlier than April).

Best Regards,
Strahil Nikolov






В неделя, 10 януари 2021 г., 09:32:01 Гринуич+2, Bernardo Juanicó 
 написа: 





Hello, considering we want to do a new oVirt hyper-converged deployment 
on CentOS for a production environment what should we do regarding the OS?
Considering Centos Stream is on tech preview and Centos 8 is near EOL, what is 
the correct path?
Install CentOS 8.3 and use it until EOL and then update the hosts to centos 
stream when it becomes stable?
install CentOS Stream even though it is on tech preview?

Regards!
Bernardo
PGP Key


On Mon, Dec 21, 2020 at 10:32 AM Sandro Bonazzola  wrote:
> 
> oVirt 4.4.4 is now generally available
> 
> The oVirt project is excited to announce the general availability of oVirt 
> 4.4.4 , as of December 21st, 2020.
> 
> This release unleashes an altogether more powerful and flexible open source 
> virtualization solution that encompasses hundreds of individual changes and a 
> wide range of enhancements across the engine, storage, network, user 
> interface, and analytics, as compared to oVirt 4.3.
> Important notes before you install / upgrade
> Please note that oVirt 4.4 only supports clusters and data centers with 
> compatibility version 4.2 and above. If clusters or data centers are running 
> with an older compatibility version, you need to upgrade them to at least 4.2 
> (4.3 is recommended).
> 
> Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7 are 
> no longer supported.
> For example, the megaraid_sas driver is removed. If you use Enterprise Linux 
> 8 hosts you can try to provide the necessary drivers for the deprecated 
> hardware using the DUD method (See the users’ mailing list thread on this at 
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXEFJNPHOXUH4HOOWRIRSB4/
>  )
> Documentation
> * If you want to try oVirt as quickly as possible, follow the 
> instructions on the Download page.
> * For complete installation, administration, and usage instructions, see 
> the oVirt Documentation.
> * For upgrading from a previous version, see the oVirt Upgrade Guide.
> * For a general overview of oVirt, see About oVirt.
> What’s new in oVirt 4.4.4 Release?
> This update is the fourth in a series of stabilization updates to the 4.4 
> series.
> 
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 8.3
> * CentOS Linux (or similar) 8.3
> * CentOS Stream (tech preview)
> 
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures 
> for:
> * Red Hat Enterprise Linux 8.3
> * CentOS Linux (or similar) 8.3
> * oVirt Node (based on CentOS Linux 8.3)
> * CentOS Stream (tech preview)
> 
> 
> oVirt Node and Appliance have been updated, including:
> * oVirt 4.4.4: https://www.ovirt.org/release/4.4.4/
> * Ansible 2.9.16: 
> https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v2.9.rst#v2-9-16
>  
> * CentOS Linux 8 (2011): 
> https://lists.centos.org/pipermail/centos-announce/2020-December/048207.html
> * Advanced Virtualization 8.3
> 
> 
> See the release notes [1] for installation instructions and a list of new 
> features and bugs fixed.
> 
> Notes:
> * oVirt Appliance is already available for CentOS Linux 8
> * oVirt Node NG is already available for CentOS Linux 8
> 
> Additional resources:
> * Read more about the oVirt 4.4.4 release highlights: 
> https://www.ovirt.org/release/4.4.4/ 
> * Get more oVirt project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog: 
> https://blogs.ovirt.org/
> 
> [1] https://www.ovirt.org/release/4.4.4/ 
> [2] https://resources.ovirt.org/pub/ovirt-4.4/iso/ 
> 
> -- 
> Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> 
> sbona...@redhat.com   
>   
> 
> 
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
> 
> ___
> Announce mailing list -- annou...@ovirt.org
> To unsubscribe send an email to announce-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/annou...@ovirt.org/message/KMYD2GAHZXWLE45SZWAMOXN4WYKV54MK/
> 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-08 Thread Strahil Nikolov via Users
What is the output of 'rpm -qa | grep vdo' ?
Most probably the ansible flow is not deploying kvdo, but it's necessary at a 
later stage. Try to overcome via 'yum search kvdo' and then 'yum install 
kmod-kvdo' (replace kmod-kvdo with the package name for EL8).

Also, I think that you can open a github issue at 
https://github.com/oVirt/ovirt-ansible-engine-setup/issues

Best Regards,
Strahil Nikolov 






В събота, 9 януари 2021 г., 01:24:56 Гринуич+2, Charles Lam 
 написа: 





Dear Strahil,

I have rebuilt everything fresh, switches, hosts, cabling - PHY-SEC shows 512 
for all nvme drives being used as bricks.  Name resolution via /etc/hosts for 
direct connect storage network works for all hosts to all hosts.  I am still 
blocked by the same 

"vdo: ERROR - Kernel module kvdo not installed\nvdo: ERROR - modprobe: FATAL: 
Module
kvdo not found in directory /lib/modules/4.18.0-240.1.1.el8_3.x86_64\n"

Any further suggestions are MOST appreciated.

Thank you and respectfully,

Charles
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4ZPHST3IYCIPYEXZO27QUOGSLQIRX6K/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZCBLZGNYBDOB5BQJR2I372LOEJYR4RGA/


[ovirt-users] Re: how to fix time-drift on el8 hosts?

2021-01-07 Thread Strahil Nikolov via Users
Are you sure that the problem is not in the HostedEngine ?

Sync your host and then verify the situation on the HE
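For example (the engine FQDN below is a placeholder):

  # on the host
  chronyc -a makestep
  chronyc tracking
  # then inside the HostedEngine VM
  ssh root@engine.example.com 'chronyc -a makestep; chronyc tracking'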

Best Regards,
Strahil Nikolov






В четвъртък, 7 януари 2021 г., 11:46:09 Гринуич+2, Nathanaël Blanchet 
 написа: 







Hello,

Engine complains about my fresh el8 ovirt 4.4.3 node host:


"Host hadriacus has time-drift of 3202 seconds while maximum configured value 
is 300 seconds."

I tried to fix it with these commands:

sudo chronyc -a makestep
chronyc sources
chronyc tracking

... but the host is still not synchronized.

How should I proceed?

-- 
Nathanaël Blanchet

Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5 
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P46GEHJ45PUZZHK2AY6A5ZH3AFBWUPCA/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXOFZICIIEQHBHB6D3LBB3NRNI6ESV6W/


[ovirt-users] Re: Spliting login pages

2021-01-06 Thread Strahil Nikolov via Users
Sharon, what about an nginx/apache listening on different virtual hosts
and redirecting (actually proxying) to the correct portal?
Do you think that it could work (the certs will not be trusted, but
they can take the exception)?
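As a rough, untested sketch of that idea (FQDNs and cert paths are placeholders; the engine's SSO redirects may still point browsers at the real engine FQDN):

  server {
      listen 443 ssl;
      server_name virtualpc-admin.example.com;
      # ssl_certificate / ssl_certificate_key go here
      location / {
          proxy_pass https://engine.example.com/;
          proxy_set_header Host engine.example.com;
      }
  }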
Best Regards,
Strahil Nikolov
В 17:24 +0200 на 06.01.2021 (ср), Sharon Gratch написа:
> 
> On Wed, Jan 6, 2021 at 11:15 AM  wrote:
> > Hi,
> > 
> > Yes, we can.
> > 
> > But we want to separate them by FQDN
> > 
> > for example;
> > 
> > http:/virtualpc.example.com --> VM Portal
> > 
> > http:/virtualpc-admin.example.com --> Admin Portal
> > 
> > http:/virtualpc-grafana.example.com --> Grafana Portal
> 
> AFAIK this is not supported by oVirt since all should use the engine
> FQDN for accessing the engine.
> You can probably use your own customized URL redirection or set a few
> FQDNs, but It won't solve the problem for separating and it might
> cause other issues, so I won't recommend it.
> 
> > 
> > and normal user must don't have permission access to admin or
> > grafana portal web page
> 
> Sure, so login will fail if a regular user won't have permissions to
> login to webadmin or grafana. This deals with user permissions
> management that should be set correctly. It's not related to FQDNs...
> > ___
> > 
> > Users mailing list -- users@ovirt.org
> > 
> > To unsubscribe send an email to users-le...@ovirt.org
> > 
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > 
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > 
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/2J7GQ6ZDXQMGZ3USBJM7HYP5W3JRJLV5/
> > 
> 
> ___Users mailing list -- 
> users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VS4QUWCNVXDL7J623EZPFEZUFO7IUMGQ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BO5LT7L4U5OC7O57TQFLW65LZLDWWXFE/


[ovirt-users] Re: Hosts are non responsive Ovirt 3.6

2021-01-06 Thread Strahil Nikolov via Users
Have you tried to put a host into maintenance, remove and then readd it ?

You can access all Red Hat solutions with their free developer subscription.



Best Regards,
Strahil Nikolov






В сряда, 6 януари 2021 г., 13:17:42 Гринуич+2, Gary Lloyd  
написа: 





  


Hi please could someone point me in the right direction to renew ssl 
certificates for vdsm to communicate with ovirt 3.6 ?

I’m aware that this version hasn’t been supported for some time, this is a 
legacy environment which we are working towards decommissioning.

 

There seems to be a fix article for RHEV but we don’t have a subscription to 
view this information:

How to update expired RHEV certificates when all RHEV hosts got 
'Non-responsive' - Red Hat Customer Portal

 

These are what the vdsm hosts are showing:

Reactor thread::ERROR::2021-01-06 
11:04:59,505::m2cutils::337::ProtocolDetector.SSLHandshakeDispatcher::(handle_read)
 Error during handshake: sslv3 alert certificate expired

 

I have rerun engine-setup but this only seems to have fixed one of the vdsm 
hosts and the others are non responsive.

The others are in different clusters and we have some important services still 
running on these. 

 

Thanks

  

  
  


Gary Lloyd 
  
IT Infrastructure Manager 
  
Information and Digital Services 
  
01782 733063 
  
Innovation Centre 2 | Keele University | ST5 5NH 
  


  

 



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HT6Z3WHPFTBIVKQYQZBZEXHSO24LPKVS/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ROKPGJAM2TFAYFQPA6SGJTHJ5KP73INJ/


[ovirt-users] Re: Ovirt and Vagrant

2021-01-05 Thread Strahil Nikolov via Users
В 10:41 -0400 на 05.01.2021 (вт), Gervais de Montbrun написа:
> Thanks for the feedback. Are you using ansible to launch the vm from the 
> template, or to provision the template once it is up?
I was cloning VMs from a template, but as I'm still on oVirt 4.3 - I
cannot use this approach with EL8 (only oVirt 4.4 can seal EL8
Templates). I'm now building VMs and creating snapshots, as I can
easily revert back any changes and start new stuff.

I think Ansible is the most popular and supported choice for managing oVirt. 
Yet, I like the idea for Terraform.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HFM6DESX7CHTCHG37PIHJBSLPT46XAOX/


[ovirt-users] Re: [ovirt-devel] CentOS Stream support

2021-01-05 Thread Strahil Nikolov via Users
В 16:09 + на 05.01.2021 (вт), lejeczek via Users написа:
> Hi guys,
> 
> Is it supported and safe to transition with >4.4 to CentOS 
> Stream, now when "Stream" is the only way to the future? Does anyone 
> know for certain?

Stream is not used by all Red Hat teams (yet) , thus it might be a
little bit unstable. I would recommend you to wait till the beginning of
Q2 2021 before you evaluate it.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OUABZTAB5VQJKFIE2UPC5QAMRG46APXZ/


[ovirt-users] Re: How to cancel a longtime running task on ovirt web interface ?

2021-01-05 Thread Strahil Nikolov via Users
It's about launching a VM , so just try to power off and it should cancel the 
launch.

Best Regards,
Strahil Nikolov






В вторник, 5 януари 2021 г., 06:52:24 Гринуич+2, tommy  
написа: 






Hi, everyone:
 
How to cancel a longtime running task on ovirt web interface ?
 
The task looks like it cannot be cancelled.
 

 
 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EX2W4JEKNPRA3IL3Q35R72YPUOWLKJ6T/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FSKGDNSMJSYJLKX2GWENVAJODQ46A7C5/


[ovirt-users] Re: Ovirt and Vagrant

2021-01-04 Thread Strahil Nikolov via Users
> I wonder what other folks are using or if someone has any suggestions
> to offer.

I'm using Ansible do deploy some stuff from templates.
I think that terraform is also used with oVirt, so you can give it a
try.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MUM7MMOZTU54HSGAEOME7PDW4FMA7QQW/


[ovirt-users] Re: Move HostedEngine to new data domain.

2021-01-04 Thread Strahil Nikolov via Users
В 17:16 + на 04.01.2021 (пн), Diggy Mc написа:
> Running oVirt 4.3.9 and would like to move the HostedEngine VM in its
> entirety (disks, templates, etc) from its current data domain to a
> new one.  How can this be done?
The most tested way is using backup & restore.
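Roughly, the flow looks like this (file paths are placeholders; test it before touching production):

  # on the engine VM
  engine-backup --mode=backup --scope=all --file=/root/engine.bak --log=/root/engine-backup.log
  # then, with a host in maintenance, redeploy/restore pointing at the new storage domain
  hosted-engine --deploy --restore-from-file=/root/engine.bak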

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/42W4IM6XCZDZUNONLUPWFMIOAT6B4FBH/


[ovirt-users]Re: Re: what these providers can do in ovirt envirment such as KVM or XEN ?

2021-01-04 Thread Strahil Nikolov via Users
Sadly my oriental language skills are close to "0".
Can you share the screenshot with English menu or to provide the steps which 
you have executed (menu after menu) to reach that state ?

Best Regards,
Strahil Nikolov






В понеделник, 4 януари 2021 г., 12:48:55 Гринуич+2, tommy  
написа: 






Now , there is a new question:
 
It raises error: invalid iso mirror path.
 
 

 
 
 
 
 
From: users-boun...@ovirt.org  On Behalf Of Arik Hadas
Sent: Friday, December 25, 2020 4:36 PM
To: tommy 
Cc: users 
Subject: [ovirt-users] Re: Re: what these providers can do in ovirt envirment 
such as KVM or XEN ?
 
 
 
On Thu, Dec 24, 2020 at 5:12 AM tommy  wrote:
> But there are some errors when I import vm from KVM to oVirt on the potal 
> like this: 
>  
>  
> Failed to import Vm 1.ovmm to Data Center Default, Cluster cluster1
>  
>  
> Dec 24 11:05:04 enghost1 python3[283881]: detected unhandled Python exception 
> in '/usr/libexec/vdsm/kvm2ovirt'
> Dec 24 11:05:04 enghost1 systemd[1]: Started Session c966 of user root.
> Dec 24 11:05:04 enghost1 abrt-server[292991]: Deleting problem directory 
> Python3-2020-12-24-11:05:04-283881 (dup of Python3-2020-12-23-23:21:17-11693)
> Dec 24 11:05:04 enghost1 systemd[1]: Started Session c967 of user root.
> Dec 24 11:05:05 enghost1 vdsm[2112]: ERROR Job 
> '1c84b9de-b154-4f40-9ba7-9fe9109acfe4' failed#012Traceback (most recent call 
> last):#012  File "/usr/lib/python3.6/site-packages/vdsm/v2v.py", line 861, in 
> _run#012    self._import()#012  File 
> "/usr/lib/python3.6/site-packages/vdsm/v2v.py", line 886, in _import#012    
> self._proc.returncode))#012vdsm.v2v.V2VProcessError: Job 
> '1c84b9de-b154-4f40-9ba7-9fe9109acfe4' process failed exit-code: 1
> Dec 24 11:05:05 enghost1 dbus-daemon[958]: [system] Activating service 
> name='org.freedesktop.problems' requested by ':1.11820' (uid=0 pid=293006 
> comm="/usr/libexec/platform-python /usr/bin/abrt-action-" 
> label="system_u:system_r:abrt_t:s0-s0:c0.c1023") (using servicehelper)
> Dec 24 11:05:05 enghost1 dbus-daemon[293014]: [system] Failed to reset fd 
> limit before activating service: org.freedesktop.DBus.Error.AccessDenied: 
> Failed to restore old fd limit: Operation not permitted
>  
 
It seems the execution of virt-v2v failed.
You can check if vdsm.log reveals more information on the cause or invoke 
virt-v2v manually on this host (optionally with debug logs enabled) to see what 
it fails on
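Something along these lines should reproduce it by hand (host, VM name and output dir are placeholders; -v -x enable verbose/trace output):

  virt-v2v -v -x \
    -ic 'qemu+ssh://root@kvmhost.example.com/system' myvm \
    -o local -os /var/tmp/v2v-test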
 
>  
>  
>  
>  
>  
>  
> From: users-boun...@ovirt.org on behalf of tommy
> Sent: 23 December 2020 23:00
> To: 'Arik Hadas' 
> Cc: 'users' 
> Subject: [ovirt-users] Re: what these providers can do in ovirt envirment such as 
> KVM or XEN ?
>  
> You are right!
>  
> Thanks.
>  
> 
>  
>  
>  
> From: Arik Hadas  
> Sent: 23 December 2020 22:41
> To: tommy 
> Cc: users 
> Subject: Re: [ovirt-users] what these providers can do in ovirt envirment such as 
> KVM or XEN ?
>  
>  
>  
> On Wed, Dec 23, 2020 at 2:31 PM tommy  wrote:
>> I have add the KVM server on the Potal, like the picture,but nothing 
>> changed, where to find the KVM vms on the potal?
>  
> Compute -> Virtual Machines -> Import -> set Source to KVM (via Libvirt) -> 
> set External provider to your provider -> Load
>  
>>  
>>  
>>  
>>  
>> From: Arik Hadas  
>> Sent: 23 December 2020 19:42
>> To: tommy 
>> Cc: users 
>> Subject: Re: [ovirt-users] what these providers can do in ovirt envirment such as 
>> KVM or XEN ?
>>  
>>  
>>  
>> On Mon, Dec 21, 2020 at 11:30 AM tommy  wrote:
>>> If I add such providers like KVM and XEN, what can I do using oVirt?
>>>  
>>> 
>>>  
>>>  
>>> To manage XEN or KVM usning oVirt?
>>  
>> You can then import VMs from those environments to your data center that is 
>> managed by oVirt.
>>  
>>>  
>>> ___Users mailing list -- 
>>> users@ovirt.orgTo unsubscribe send an email to users-leave@ovirt.orgPrivacy 
>>> Statement: https://www.ovirt.org/privacy-policy.htmloVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JRF32YJP5RBUHCRFW7URQ3LTLIJ6XTKL/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/44SBXMOOUDGEG4TTQ4BH7YCNO5CBAT5C/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NQ5F55AVGVFUIPXMVCON2NSHF4DZ6IJF/


[ovirt-users] RHSC upstream version

2021-01-03 Thread Strahil Nikolov via Users
Hi All,

I noticed that Red Hat Gluster Storage Console  is based on oVirt's web 
interface.
Does anyone know how to deploy it on CentOS 8 for managing Gluster v8.X ?

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/POF7O5HURCZIVQ5JFWRQVWNXSDQN4CFU/


[ovirt-users] Re: upload_disk.py - CLI Upload Disk

2021-01-01 Thread Strahil Nikolov via Users
Hi Ilan,

Do you know how to use the config for upload_disk.py (or the --engine-url, 
--username, --password-file or --cafile params) on 4.3.10, as I had to directly 
edit the script ?



Best Regards,
Strahil Nikolov






В четвъртък, 31 декември 2020 г., 00:07:06 Гринуич+2, Jorge Visentini 
 написа: 





Hi Nikolov.

I uploaded to 4.4.4

Em qua., 30 de dez. de 2020 às 14:14, Strahil Nikolov  
escreveu:
> Are you uploading to 4.4 or to the old 4.3 ?
> I'm asking as there should be an enhancement that makes a checksum on the 
> uploads in order to verify that the upload was successfull.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В сряда, 30 декември 2020 г., 18:37:52 Гринуич+2, Jorge Visentini 
>  написа: 
> 
> 
> 
> 
> 
> Hi Ilan.
> 
> Ohh sorry I didn't understand.
> 
> I created the file and executed the script as you said and it worked.
> 
> python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py -c 
> engine disk01-SO.qcow2 --disk-format qcow2 --disk-sparse --sd-name ISOs
> [   0.0 ] Checking image...
> [   0.4 ] Image format: qcow2
> [   0.4 ] Disk format: cow
> [   0.4 ] Disk content type: data
> [   0.4 ] Disk provisioned size: 32213303296
> [   0.4 ] Disk initial size: 12867010560
> [   0.4 ] Disk name: disk01-SO.qcow2
> [   0.4 ] Disk backup: False
> [   0.4 ] Connecting...
> [   0.4 ] Creating disk...
> [  19.4 ] Disk ID: 9328e954-9307-420f-b5d7-7b81071f88a5
> [  19.4 ] Creating image transfer...
> [  22.5 ] Transfer ID: 16236f8e-79af-4159-83f7-8331a2f25919
> [  22.5 ] Transfer host name: kcmi1kvm08.kosmo.cloud
> [  22.5 ] Uploading image...
> [ 100.00% ] 30.00 GiB, 197.19 seconds, 155.79 MiB/s
> [ 219.7 ] Finalizing image transfer...
> [ 227.0 ] Upload completed successfully
> 
> Thank you for the explanation!
> 
> Em qua., 30 de dez. de 2020 às 00:54, Ilan Zuckerman  
> escreveu:
>> Hi Jorge,
>> 
>> Lately DEV started to implement the capability of passing the basic 
>> arguments for the SDK example scripts with a help of configuration file.
>> I assume that not all of the example scripts support this new feature yet.
>> 
>> Until now you had to run the examples with:
>> 
>>     --engine-url https://engine1
>>     --username admin@internal
>>     --password-file /path/to/engine1-password
>>     --cafile /path/to/engine1.pem
>> 
>> This is very painful when running the example a lot.
>> Now you can run them with:
>> 
>>     -c engine1
>> 
>> Assuming that you have a configuration file with all the
>> details at:
>> 
>>     ~/.config/ovirt.conf
>> 
>> Here is an example for you of how this file should look like:
>> 
>> [engine1]
>> engine_url = https://
>> username = admin@internal
>> password = xx
>> cafile = /path/to/engine1.pem
>> 
>> On Wed, Dec 30, 2020 at 4:18 AM Jorge Visentini  
>> wrote:
>>> Hi All.
>>> 
>>> I'm using version 4.4.4 (latest stable version - 
>>> ovirt-node-ng-installer-4.4.4-2020122111.el8.iso)
>>> 
>>> I tried using the upload_disk.py script, but I don't think I knew how to 
>>> use it.
>>> 
>>> When I try to use it, these errors occur:
>>> 
>>> python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py 
>>> disk01-SO.qcow2 --disk-format qcow2 --sd-name ISOs
>>> usage: upload_disk.py [-h] -c CONFIG [--debug] [--logfile LOGFILE]
>>>                       [--disk-format {raw,qcow2}] [--disk-sparse]
>>>                       [--enable-backup] --sd-name SD_NAME [--use-proxy]
>>>                       [--max-workers MAX_WORKERS] [--buffer-size 
>>> BUFFER_SIZE]
>>>                       filename
>>> upload_disk.py: error: the following arguments are required: -c/--config
>>> 
>>> 
>>> Using the upload_disk.py help:
>>> 
>>> python3 upload_disk.py --help
>>> -c CONFIG, --config CONFIG
>>>                         Use engine connection details from [CONFIG] section 
>>> in
>>>                         ~/.config/ovirt.conf.
>>> 
>>> This CONFIG, is the API access configuration for authentication? Because 
>>> analyzing the script I did not find this information.
>>> 
>>> Does this new version work differently or am I doing something wrong?  
>>> 
>>> In the sdk_4.3 version of upload_disk.py I had to change the script to add 
>>> the access information, but it worked.
>>> 
>>> [root@engineteste01 ~]# python3 upload_disk.py disk01-SO.qcow2Checking 
>>> image...Disk format: qcow2Disk content type: dataConnecting...Creating 
>>> disk...Creating transfer session...Uploading image...Uploaded 
>>> 20.42%Uploaded 45.07%Uploaded 68.89%Uploaded 94.45%Uploaded 2.99g in 42.17 
>>> seconds (72.61m/s)Finalizing transfer session...Upload completed 
>>> successfully[root@engineteste01 ~]#
>>> 
>>> Thank you all!!
>>> 
>>> -- 
>>> Att,
>>> Jorge Visentini
>>> +55 55 98432-9868
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: 
>>> 

[ovirt-users] Re: Windows 7 vm lost network connection under heavy network load

2020-12-30 Thread Strahil Nikolov via Users
Are you using E1000 on the VMs or on the Host ?
If it's the second, you should change the hardware.

I have never used e1000 for VMs as it is an old tech. Better to install the 
virtio drivers and then use the virtio type of NIC.
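To double-check what a VM actually got, something like this on the host should do (the VM name is a placeholder):

  virsh -r dumpxml Win7-test | grep -A3 '<interface'
  # look for <model type='virtio'/> rather than <model type='e1000'/>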

Best Regards,
Strahil Nikolov






В четвъртък, 31 декември 2020 г., 04:06:50 Гринуич+2, Joey Ma 
 написа: 





Hi folks,

Happy holidays.

I'm having an urgent problem :smile: .

I've installed oVirt 4.4.2 on CentOS 8.2 and then created several Windows 7 vms 
for stress testing. I found that heavy network load would leave the e1000 
net card NOT able to receive packets; it seemed totally blocked. In the 
meantime, packet sending was fine. 

Only re-enabling the net card can restore the network. Has anyone also had this 
problem? Looking forward to your insights. Much appreciated.


Best regards,
Joey

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FALLTVEFQ2FDWJXMVD3CKLHHB2VV6FRA/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAUKCNGSNV3ZBOXLJYGMVWF7ASESD4IQ/


[ovirt-users] Re: Breaking up a oVirt cluster on storage domain boundary.

2020-12-30 Thread Strahil Nikolov via Users
> Can I migrate storage domains, and thus all the VMs within that
> storage domain?
> 
>  
> 
> Or will I need to build new cluster, with new storage domains, and
> migrate the VMs?
> 
> 
Actually you can create a new cluster and ensure that the Storage
domains are accessible by that new cluster.
Then to migrate, you just need to power off the VM, Edit -> change
cluster, network, etc and power it up.
It will start on the hosts in the new cluster and then you just need to
verify that the application is working properly.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CFPYY6AKXTYKN4RMQ6MY532WXKQ6MC4Z/


[ovirt-users] Re: Best Practice? Affinity Rules Enforcement Manager or High Availability?

2020-12-30 Thread Strahil Nikolov via Users

> What is the best solution for making your VMs able to automatically
> boot up on another working host when something goes wrong (gluster
> problem, non responsive host etc)? Would you enable the Affinity
> Manager and enforce some policies or would you set the VMs you want
> as Highly Available?

High Availability and Host fencing are the 2 parts that you need to
ensure that the VM will be restarted after a failure.
If you storage domain goes bad, the VMs will be paused and
theoretically they will be resumed automatically when the storage
domain is back.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E7BSYJSU57OIUFS4R3NIBBGOSYAABAZ2/


[ovirt-users] Re: upload_disk.py - CLI Upload Disk

2020-12-30 Thread Strahil Nikolov via Users
Are you uploading to 4.4 or to the old 4.3 ?
I'm asking as there should be an enhancement that makes a checksum on the 
uploads in order to verify that the upload was successful.

Best Regards,
Strahil Nikolov






В сряда, 30 декември 2020 г., 18:37:52 Гринуич+2, Jorge Visentini 
 написа: 





Hi Ilan.

Ohh sorry I didn't understand.

I created the file and executed the script as you said and it worked.

python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py -c 
engine disk01-SO.qcow2 --disk-format qcow2 --disk-sparse --sd-name ISOs
[   0.0 ] Checking image...
[   0.4 ] Image format: qcow2
[   0.4 ] Disk format: cow
[   0.4 ] Disk content type: data
[   0.4 ] Disk provisioned size: 32213303296
[   0.4 ] Disk initial size: 12867010560
[   0.4 ] Disk name: disk01-SO.qcow2
[   0.4 ] Disk backup: False
[   0.4 ] Connecting...
[   0.4 ] Creating disk...
[  19.4 ] Disk ID: 9328e954-9307-420f-b5d7-7b81071f88a5
[  19.4 ] Creating image transfer...
[  22.5 ] Transfer ID: 16236f8e-79af-4159-83f7-8331a2f25919
[  22.5 ] Transfer host name: kcmi1kvm08.kosmo.cloud
[  22.5 ] Uploading image...
[ 100.00% ] 30.00 GiB, 197.19 seconds, 155.79 MiB/s
[ 219.7 ] Finalizing image transfer...
[ 227.0 ] Upload completed successfully

Thank you for the explanation!

Em qua., 30 de dez. de 2020 às 00:54, Ilan Zuckerman  
escreveu:
> Hi Jorge,
> 
> Lately DEV started to implement the capability of passing the basic arguments 
> for the SDK example scripts with a help of configuration file.
> I assume that not all of the example scripts support this new feature yet.
> 
> Until now you had to run the examples with:
> 
>     --engine-url https://engine1
>     --username admin@internal
>     --password-file /path/to/engine1-password
>     --cafile /path/to/engine1.pem
> 
> This is very painful when running the example a lot.
> Now you can run them with:
> 
>     -c engine1
> 
> Assuming that you have a configuration file with all the
> details at:
> 
>     ~/.config/ovirt.conf
> 
> Here is an example for you of how this file should look like:
> 
> [engine1]
> engine_url = https://
> username = admin@internal
> password = xx
> cafile = /path/to/engine1.pem
> 
> On Wed, Dec 30, 2020 at 4:18 AM Jorge Visentini  
> wrote:
>> Hi All.
>> 
>> I'm using version 4.4.4 (latest stable version - 
>> ovirt-node-ng-installer-4.4.4-2020122111.el8.iso)
>> 
>> I tried using the upload_disk.py script, but I don't think I knew how to use 
>> it.
>> 
>> When I try to use it, these errors occur:
>> 
>> python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py 
>> disk01-SO.qcow2 --disk-format qcow2 --sd-name ISOs
>> usage: upload_disk.py [-h] -c CONFIG [--debug] [--logfile LOGFILE]
>>                       [--disk-format {raw,qcow2}] [--disk-sparse]
>>                       [--enable-backup] --sd-name SD_NAME [--use-proxy]
>>                       [--max-workers MAX_WORKERS] [--buffer-size BUFFER_SIZE]
>>                       filename
>> upload_disk.py: error: the following arguments are required: -c/--config
>> 
>> 
>> Using the upload_disk.py help:
>> 
>> python3 upload_disk.py --help
>> -c CONFIG, --config CONFIG
>>                         Use engine connection details from [CONFIG] section 
>> in
>>                         ~/.config/ovirt.conf.
>> 
>> This CONFIG, is the API access configuration for authentication? Because 
>> analyzing the script I did not find this information.
>> 
>> Does this new version work differently or am I doing something wrong?  
>> 
>> In the sdk_4.3 version of upload_disk.py I had to change the script to add 
>> the access information, but it worked.
>> 
>> [root@engineteste01 ~]# python3 upload_disk.py disk01-SO.qcow2Checking 
>> image...Disk format: qcow2Disk content type: dataConnecting...Creating 
>> disk...Creating transfer session...Uploading image...Uploaded 20.42%Uploaded 
>> 45.07%Uploaded 68.89%Uploaded 94.45%Uploaded 2.99g in 42.17 seconds 
>> (72.61m/s)Finalizing transfer session...Upload completed 
>> successfully[root@engineteste01 ~]#
>> 
>> Thank you all!!
>> 
>> -- 
>> Att,
>> Jorge Visentini
>> +55 55 98432-9868
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PYNCYYFNT42NEHBXD57V3MMMUP3LZRZA/
>> 
> 
> 
> -- 
> Thanks
> -- 
> With Respect,
> 
> Ilan Zuckerman, ISTQB, ANSIBLE SPECIALIST, RHCE
> RHV-M
> STORAGE AUTOMATION QE
> Red Hat EMEA
> 
> Israel
> izuck...@redhat.com    T: +972-9-7692000    
> M: 054-9956728     IM: izuckerm
>  
> 
> 
> 


-- 
Att,
Jorge Visentini
+55 55 98432-9868
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy 

[ovirt-users] Re: Cannot upgrade cluster to v4.5 (All hosts are CentOS 8.3.2011)

2020-12-30 Thread Strahil Nikolov via Users
Maybe there is a missing package that is preventing that.
Let's see what the devs will find out next year (thankfully you won't have to 
wait much).

Best Regards,
Strahil Nikolov






В сряда, 30 декември 2020 г., 16:30:37 Гринуич+2, Gilboa Davara 
 написа: 





Short update.
1. Ran ovirt-hosted-engine-cleanup on all 3 nodes.
2. Installed hosted engine on the first engine via hosted-engine --deploy (w/ 
gluster storage).
3. Added the two remaining hosts using host -> new -> w/ hosted engine -> 
deploy.
4. All machines are up and running and I can see theme in hosted-engine 
--vm-status and they all have a valid score (3400).
5. But... Tried upgrading the cluster to 4.5, no errors, but cluster is still 
in 4.4 :/

- Gilboa

On Mon, Dec 28, 2020 at 4:29 PM Gilboa Davara  wrote:
> 
> 
> On Fri, Dec 25, 2020 at 7:41 PM Strahil Nikolov  wrote:
>> Any hints in vdsm logs on the affected host or on the broker.log/agent.log ?
>> 
>> Happy Hollidays to everyone!
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
> 
> Hello,
> 
> Sorry for the late reply. 
> One of the nodes suffered a major failure and I'm waiting for the tech 
> support to ship replacement parts.
> Once it's up and running, I'll send the logs.
> 
> Happy holidays!
> - Gilboa
> 
> 
>  
>> 
>> В 14:33 +0200 на 25.12.2020 (пт), Gilboa Davara написа:
>>> Hello,
>>> 
>>> Reinstall w/ redeploy produced the same results.
>>> 
>>> - Gilboa
>>> 
>>> On Thu, Dec 24, 2020 at 8:07 PM Strahil Nikolov  
>>> wrote:
> 
> I "forked" this email into a new subject ("Manually deploying a 
> hyperconverged setup with an existing gluster bricks")
> But nevertheless, no idea if its intended or not, but it seems that 
> adding a host via "host->computer->create" doesn't create the necessary 
> /etc/ovirt-hosted-engine/hosted-engine.conf configuration on the new 
> hosts, preventing ovirt-ha-* services from starting.
> 
> 
 
 So , set the host into maintenance and 
 then select installation -> reinstall -> Hosted Engine -> Deploy
 
 Best Regards,
 Strahil Nikolov
 
>>> ___Users mailing list -- 
>>> users@ovirt.orgTo unsubscribe send an email to users-leave@ovirt.orgPrivacy 
>>> Statement: https://www.ovirt.org/privacy-policy.htmloVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHBX2LRHIPHRJFAPEX334PRUUYXRCCDB/
>> 
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HR422FBFGLSXTHMT3STFVC6UNRXUTNMM/


[ovirt-users] Re: The he_fqdn proposed for the engine VM resolves on this host - error?

2020-12-28 Thread Strahil Nikolov via Users
There is some issue with the DNS. Check that the A/AAAA and PTR records are correct 
for the Hosted Engine.
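For example (names and IP are placeholders):

  dig +short A engine.example.com
  dig +short AAAA engine.example.com
  dig +short -x 192.0.2.10
  # the forward and reverse answers should point at each other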


Best Regards,
Strahil Nikolov






В понеделник, 28 декември 2020 г., 22:15:14 Гринуич+2, lejeczek via Users 
 написа: 





hi chaps,

a newcomer here. I use cockpit to deploy hosted engine and I 
get this error/warning message:

"The he_fqdn proposed for the engine VM resolves on this host"

I should mention that if I remove the IP to which FQDN 
resolves off that iface(plain eth no vlans) then I get this:

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, 
"msg": "The selected network interface is not valid"}

All these errors seem bit too cryptic to me.
Could you shed bit light on what is oVirt saying exactly and 
why it's not happy that way?

many thanks, L.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTAYI3GYI2ANRKWMME62BJ4DY2XEGFJM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7WHOTFHECMDQXDDHY3VPEXSCNJNMEVK/


[ovirt-users] Re: New to oVirt - HE and Metric Store Question

2020-12-28 Thread Strahil Nikolov via Users
I sent it unfinished.
Another one from Red Hat's Self-Hosted Engine Recommendations:

A storage domain dedicated to the Manager virtual machine is created during 
self-hosted engine deployment. Do not use this storage domain for any other 
virtual machines.

If it's a fresh deployment , it's easier to create a new LUN/NFS share/Gluster 
Volume and use that.
The following RH solution describes how to move HostedEngine to another storage 
domain: https://access.redhat.com/solutions/2998291

In summary: Migrate the VMs to another storage domain, set 1 host in 
maintenance , create a backup and then restore that backup using the new 
storage domain and the node that was in maintenance...

As I have never restored my oVirt Manager , I can't provide more help.


Best Regards,
Strahil Nikolov



В понеделник, 28 декември 2020 г., 19:40:15 Гринуич+2, Nur Imam Febrianto 
 написа: 








 

What kind of situation is that ? If so, how can I migrate my hosted engine into 
another storage domain ?

 

Regards,

 

Nur Imam Febrianto

 


From: Strahil Nikolov
Sent: 29 December 2020 0:31
To: oVirt Users;  Nur Imam Febrianto
Subject: Re: [ovirt-users] New to oVirt - HE and Metric Store Question


 

     
>    1. Right now we are using one SAN with 4 LUN (each mapped into 1 >specific 
>volume) and configure the storage domain for each LUn (1 LUN = 1 >Storage 
>Domain). Is this configuration are good ? One more, about Hosted >Engine, when 
>we setup the cluster, it provision one storage domain, but >the storage domain 
>is not exlusively used by Hosted Engine, we use it too >for other VM. Are this 
>OK or it have a side impact ?

Avoid using HostedEngine's storage domain for other VMs. You might get into 
a situation that you want to avoid.

Best Regards,
Strahil Nikolov

 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WRTO7ATVQG4YRQ4OMW3Q6LJQMRGTS6XT/


[ovirt-users] Re: New to oVirt - HE and Metric Store Question

2020-12-28 Thread Strahil Nikolov via Users
I can't recall the exact issue that was reported in the mailing list, but I 
remember that the user had to power off the engine and the VMs... the Devs can 
clearly indicate the risks of running the HostedEngine with other VMs on the 
same storage domain.

Based on Red Hat's RHV documentation the following warning is clearly 
indicating some of the reasons:

Creating additional data storage domains in the same data center as the 
self-hosted engine storage domain is highly recommended. If you deploy the 
self-hosted engine in a data center with only one active data storage domain, 
and that storage domain is corrupted, you will not be able to add new storage 
domains or remove the corrupted storage domain; you will have to redeploy the 
self-hosted engine.

Another one from Red Hat's Self-Hosted Engine Recommendations:







В понеделник, 28 декември 2020 г., 19:40:15 Гринуич+2, Nur Imam Febrianto 
 написа: 








 

What kind of situation is that ? If so, how can I migrate my hosted engine into 
another storage domain ?

 

Regards,

 

Nur Imam Febrianto

 


From: Strahil Nikolov
Sent: 29 December 2020 0:31
To: oVirt Users;  Nur Imam Febrianto
Subject: Re: [ovirt-users] New to oVirt - HE and Metric Store Question


 

     
>    1. Right now we are using one SAN with 4 LUN (each mapped into 1 >specific 
>volume) and configure the storage domain for each LUn (1 LUN = 1 >Storage 
>Domain). Is this configuration are good ? One more, about Hosted >Engine, when 
>we setup the cluster, it provision one storage domain, but >the storage domain 
>is not exlusively used by Hosted Engine, we use it too >for other VM. Are this 
>OK or it have a side impact ?

Avoid using HostedEngine's storage domain for other VMs. You might get into 
a situation that you want to avoid.

Best Regards,
Strahil Nikolov

 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GNCZUW75PSUI6XWXJ542DC3L25M46VQE/


[ovirt-users] Re: New to oVirt - HE and Metric Store Question

2020-12-28 Thread Strahil Nikolov via Users
     
>    1. Right now we are using one SAN with 4 LUN (each mapped into 1 >specific 
>volume) and configure the storage domain for each LUn (1 LUN = 1 >Storage 
>Domain). Is this configuration are good ? One more, about Hosted >Engine, when 
>we setup the cluster, it provision one storage domain, but >the storage domain 
>is not exlusively used by Hosted Engine, we use it too >for other VM. Are this 
>OK or it have a side impact ?

Avoid using HostedEngine's storage domain for other VMs. You might get into 
a situation that you want to avoid.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BH7UZKP4PSCNCWRVSQEZPKEIX5P5HOSR/


[ovirt-users] Re: Moving VMs from 4.3.9 to 4.4.4

2020-12-28 Thread Strahil Nikolov via Users
I'm not sure if the templates are automatically transferred , but it's worth 
checking before detaching the storage.

Best Regards,
Strahil Nikolov






В понеделник, 28 декември 2020 г., 18:53:27 Гринуич+2, Diggy Mc 
 написа: 





Templates?  Aren't the VM's templates automatically copied to the export domain 
when the VM is copied to the export domain as part of the Export operation?  
Maybe I'm confused about how the export/import operations work and the purpose 
of the export domain.

And noted, I will be sure to only have one environment at a time attached to 
the export domain.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IZ6OEZ7NODHNFAU77YHQS7FLAT5F3JY6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I363KPJPQUKYJ6LJZKPHTM4CAQEVOXQD/


[ovirt-users] Re: Shrink iSCSI Domain

2020-12-28 Thread Strahil Nikolov via Users
Vinícius,

does your storage provide deduplication ? If yes, then you can provide a new 
thin-provisioned LUN and migrate the data from the old LUN to the new one.

Best Regards,
Strahil Nikolov






В понеделник, 28 декември 2020 г., 18:27:38 Гринуич+2, Vinícius Ferrão via 
Users  написа: 





Hi Shani, thank you! 



It’s only one LUN :(




So it may be a best practice to split an SD in multiple LUNs?




Thank you.



>  
> On 28 Dec 2020, at 09:08, Shani Leviim  wrote:
> 
> 
>  
>  
>  Hi,
> 
>  You can reduce LUNs from an iSCSI storage domain once it's in maintenance. 
>[1]
> 
> 
>  On the UI, after putting the storage domain in maintenance > Manage Domain > 
>select the LUNs to be removed from the storage domain.
> 
>  
> 
> 
>  Note that reducing LUNs is applicable in case the storage domain has more 
>than 1 LUN.
> 
>  (Otherwise, removing the single LUN means removing the whole storage domain).
> 
> 
>  
> 
> 
>  [1]  
>https://www.ovirt.org/develop/release-management/features/storage/reduce-luns-from-sd.html
> 
>  
> 
> 
>  
>  
>  
>  
>  
> Regards,
> Shani Leviim
> 
> 
> 
> 
> 
> 
> 
> 
>  
> On Sun, Dec 27, 2020 at 8:16 PM Vinícius Ferrão via Users  
> wrote:
> 
> 
>>  Hello,
>> 
>> Is there any way to reduce the size of an iSCSI Storage Domain? I can’t seem 
>> to figure this out myself. It’s probably unsupported, and the path would be to 
>> create a new iSCSI Storage Domain with the reduced size, move the virtual 
>> disks there, and then delete the old one.
>> 
>> But I would like to confirm if this is the only way to do this…
>> 
>> In the past I had a requirement, so I’ve created the VM Domains with 10TB, 
>> now it’s just too much, and I need to use the space on the storage for other 
>> activities.
>> 
>> Thanks all and happy new year.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to  users-le...@ovirt.org
>> Privacy Statement:  https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:  
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:  
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4B26ZBZUMRXZ6MLJ6YQTK26SZNZOYQLF/
>> 
> 
> 
> 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OWQ2WQZ35U3XEU67MWKPB7CJK7YMNTTG/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6L5XYJNZBSZIORFYBLLRDYOE7A5XMJTT/


[ovirt-users] Re: Moving VMs from 4.3.9 to 4.4.4

2020-12-28 Thread Strahil Nikolov via Users
Keep the export domain attached only to 1 environment at a time... it's much
safer. Usually each engine updates some metafiles on the storage domain,
and when both try to do it ... you get a bad situation there.

Attach it to 4.3, move the VMs, power them off, detach the storage domain - so you
can attach it to 4.4 and migrate those VMs. And don't forget the templates the
VMs were created from.

Best Regards,
Strahil Nikolov






В понеделник, 28 декември 2020 г., 13:10:01 Гринуич+2, Diggy Mc 
 написа: 





Is it safe to attach my new 4.4 environment to an export domain at the same 
time my 4.3 environment is attached to it?  I need to start moving VMs from my 
current 4.3 to my new 4.4 environment.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E4FQFIPVKJUGK56BROXZGCLHHVRXSMZM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VFKJPMGV6WHQTQ7OPYCKX3CU3VGLZOTE/


[ovirt-users] Re: CentOS 8 is dead

2020-12-25 Thread Strahil Nikolov via Users
> I do not expect particular issues; for OL we’re also working on the
> pure OL7 to OL8 upgrade process.
> BTW, I tested more times the moving from CentOS 7 to OL7 / oVirt to
> OLVM on 4.3 release.
> 
> > I saw 
> > https://blogs.oracle.com/virtualization/getting-started-with-oracle-linux-virtualization-manager
> >  , but it's a little bit outdated and is about OLVM 4.2 + OEL 7 .
> 
> We’re now on OLVM 4.3.6+ (
> https://blogs.oracle.com/virtualization/announcing-oracle-linux-virtualization-manager-43
> ) and working on latest maintenance release (on 4.3.10+).
> The plan is to then work on OLVM 4.4 with OL8.

Simon, 

How can I track the progress? It seems that I need to migrate in 2
steps:
1) CentOS 7/oVirt 4.3.10 to OEL7 + OLVM 4.3.10
2) At a later stage to OEL8 + OLVM 4.4+

I know that there is a CentOS-to-OEL script for the migration (see the sketch
below), but as oVirt (and most probably OLVM) requires some external repos, it
will take some preparation.
Do you think that it's more reasonable to just reinstall one of the
nodes to OEL 8 and then deploy the HE from backup (once 4.4 is
available)?
Then migrating the rest should be easier.
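
For reference, a very rough sketch of the in-place CentOS 7 to Oracle Linux 7
switch, assuming Oracle's centos2ol.sh conversion script (the oVirt/OLVM repos
still have to be re-added by hand afterwards):

# obtain centos2ol.sh from Oracle's "centos2ol" repository on GitHub (location assumed)
sudo bash centos2ol.sh       # switches the yum repos and signing keys from CentOS to Oracle Linux
sudo yum distro-sync -y      # pull in the Oracle Linux builds of the installed packages
sudo reboot
# afterwards, re-enable the oVirt/OLVM release package and verify that vdsmd and glusterd start cleanly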

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RF6XNFVKOXLPWX5S3VYXBH6DOORYEXRA/


[ovirt-users] Re: CentOS 8 is dead

2020-12-25 Thread Strahil Nikolov via Users
В 22:32 +0100 на 25.12.2020 (пт), marcel d'heureuse написа:
> I think Oracle is not a solution. If they do the same with Oracle
> Linux as they did with Java one year ago, so that you can't use it in
> companies, it will be a waste of time to move to Oracle. 
> 
> or am I wrong?
> 
> br
> marcel
I admit that Oracle's reputation is not the best ... but I guess if they do
take such an approach, we can always migrate to Springdale/Rocky
Linux/Lenix.
After all, OLVM is new tech for Oracle and they want more users to
start using it (and later switch to a paid subscription). 

I think that it's worth testing.

@Simon,

do you have any idea if there will be any issues migrating from CentOS
7 + oVirt 4.3.10 to OEL 8 + OLVM (once 4.4 is available)?
I saw 
https://blogs.oracle.com/virtualization/getting-started-with-oracle-linux-virtualization-manager
, but it's a little bit outdated and is about OLVM 4.2 + OEL 7.


Happy Holidays! 
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6VS6GH7FKBLWLDHJ6JUPENKNQEPWN2KL/


[ovirt-users] Re: CentOS 8 is dead

2020-12-25 Thread Strahil Nikolov via Users
В 19:35 + на 25.12.2020 (пт), James Loker-Steele via Users написа:
> Yes.
> We use OEL and have set up Oracle's branded oVirt as well as test oVirt
> on Oracle and it works a treat.
Hey James,

is Oracle's version of oVirt paid?

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N37Q6Y2PVGELQ3DKJ7CZ6H4TPSEGG3IH/


[ovirt-users] Re: CentOS 8 is dead

2020-12-25 Thread Strahil Nikolov via Users
В 18:22 + на 25.12.2020 (пт), Diggy Mc написа:
> Is Oracle Linux a viable alternative for the oVirt project?  It is,
> after all, a rebuild of RHEL like CentOS.  If not viable, why not?  I
> need to make some decisions posthaste about my pending oVirt 4.4
> deployments.

It should be, as Oracle has their own OLVM: 
https://blogs.oracle.com/virtualization/announcing-oracle-linux-virtualization-manager-43

Merry Christmas !

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HDQ3P6U7NJGTMWLV7ZUBC2AU56EPVCGR/


[ovirt-users] Re: Cannot upgrade cluster to v4.5 (All hosts are CentOS 8.3.2011)

2020-12-25 Thread Strahil Nikolov via Users
Any hints in the vdsm logs on the affected host, or in the
broker.log/agent.log?
Happy Holidays to everyone!
Best Regards,
Strahil Nikolov
В 14:33 +0200 на 25.12.2020 (пт), Gilboa Davara написа:
> Hello,
> 
> Reinstall w/ redeploy produced the same results.
> 
> - Gilboa
> 
> 
> On Thu, Dec 24, 2020 at 8:07 PM Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
> > > I "forked" this email into a new subject ("Manually deploying a
> > > hyperconverged setup with existing gluster bricks"). But
> > > nevertheless, no idea if it's intended or not, but it seems that
> > > adding a host via "host->computer->create" doesn't create the
> > > necessary /etc/ovirt-hosted-engine/hosted-engine.conf
> > > configuration on the new hosts, preventing ovirt-ha-* services
> > > from starting.
> > > 
> > 
> > So, set the host into maintenance and
> > then select installation -> reinstall -> Hosted Engine -> Deploy
> > 
> > Best Regards,
> > Strahil Nikolov
> 
> ___Users mailing list -- 
> users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHBX2LRHIPHRJFAPEX334PRUUYXRCCDB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6I3UC5QUDDOPTCBMWWNWZNDJV2QJCPD4/


[ovirt-users] Re: engine storage fail after upgrade

2020-12-24 Thread Strahil Nikolov via Users
Can you enable debug logs on the host hosting the Hosted Engine?
Details can be found on 
https://www.ovirt.org/develop/developer-guide/vdsm/log-files.html
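
As a rough sketch (assuming the default logger configuration described on that
page), raising the vdsm log level on that host could look like this; review
/etc/vdsm/logger.conf by hand rather than blindly applying the sed:

grep 'level=' /etc/vdsm/logger.conf                         # check the current levels
sed -i 's/level=INFO/level=DEBUG/g' /etc/vdsm/logger.conf   # hypothetical one-liner, adjust per logger
systemctl restart vdsmd
tail -f /var/log/vdsm/vdsm.log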

Merry Christmas to all !


Best Regards,
Strahil Nikolov






В петък, 25 декември 2020 г., 07:24:32 Гринуич+2, ozme...@hotmail.com 
 написа: 





Hi,
After the upgrade from 4.3 to 4.4, some errors pop up on the engine.
It becomes unavailable for 2-3 minutes several times a day and then comes back.
After some research on the system, I found some logs.
On THE hosted_storage there are 2 events

1- Failed to update VMs/Templates OVF data for Storage Domain hosted_storage in 
Data Center XXX
2- Failed to update OVF disks 9cbb34d0-06b0-4ce7-a3fa-7dfed689c442, OVF data 
isn't updated on those OVF stores (Data Center XXX, Storage Domain 
hosted_storage).

Is there any idea how I can fix this?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XSIVPT4AGAWXZTZZAXMW7DJ6NK5TSOAG/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q7PIL44ENMBQ6Y6AKUQCNGP3Y7P7TA7U/


[ovirt-users] Re: Cannot upgrade cluster to v4.5 (All hosts are CentOS 8.3.2011)

2020-12-24 Thread Strahil Nikolov via Users
> I "forked" this email into a new subject ("Manually deploying a
> hyperconverged setup with existing gluster bricks"). But
> nevertheless, no idea if it's intended or not, but it seems that
> adding a host via "host->computer->create" doesn't create the
> necessary /etc/ovirt-hosted-engine/hosted-engine.conf configuration
> on the new hosts, preventing ovirt-ha-* services from starting.
> 

So, set the host into maintenance and
then select installation -> reinstall -> Hosted Engine -> Deploy
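
Once the redeploy finishes, a quick sanity check on that host could be (nothing
assumed beyond the standard hosted-engine packages):

test -f /etc/ovirt-hosted-engine/hosted-engine.conf && echo "HE config present"
systemctl status ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status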

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QBJAISOFA2DIRXKIC6IW4USUJMVY244V/


[ovirt-users] Re: FW: oVirt 4.4 and Active directory

2020-12-24 Thread Strahil Nikolov via Users
Hi Latcho,
would you mind opening a bug in bugzilla.redhat.com next year?

Best Regards,
Strahil Nikolov
В 11:09 + на 24.12.2020 (чт), Latchezar Filtchev написа:
>  
> Hello ,
>  
> I think I resolved this issue. It is the dig response when resolving the
> domain name!
>  
> CentOS-7 – bind-utils-9.11.4-16.P2.el7_8.6.x86_64; Windows AD level
> 2008R2; in my case dig returns answer with
>  
> ;; ANSWER SECTION:
> mb118.local.   600 IN   A 192.168.1.7
>  
> IP address returned is address of DC
>  
> CentOS-8 - bind-utils-9.11.20-5.el8.x86_64; Same Domain Controller;
> dig returns answer without ;;ANSWER SECTION e.g. IP address of DC
> cannot be identified.
>  
> The solution is to add the directive ‘+nocookie’ after ‘+tcp’ in the file
> /usr/share/ovirt-engine-extension-aaa-ldap/setup/plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py
>  
> The section starts at line 144:
>  
> @staticmethod
> def _resolver(plugin, record, what):
>     rc, stdout, stderr = plugin.execute(
>         args=(
>             (
>                 plugin.command.get('dig'),
>                 '+noall',
>                 '+answer',
>                 '+tcp',
>                 '+nocookie',
>                 what,
>                 record
>             )
>         ),
>     )
>     return stdout
>  
> With this change, execution of
> ovirt-engine-extension-aaa-ldap-setup completes successfully and
> joins a fresh install of oVirt 4.4 to Active Directory.
> 
> If the AD level is 2016, the ‘+nocookie’ change is not needed.
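
A quick way to see the difference by hand, assuming the same forest name as
above, is to compare the two dig invocations directly:

dig +noall +answer +tcp mb118.local             # on EL8 against a 2008R2-level AD this may return no A records
dig +noall +answer +tcp +nocookie mb118.local   # with EDNS cookies disabled, the ANSWER section comes back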
>  
> Happy holidays to all of you!
> Stay safe!
>  
> Thank you!
> Best,
> Latcho
>  
>  
>  
> 
> 
> From: Latchezar Filtchev 
> 
> Sent: Tuesday, November 24, 2020 10:31 AM
> 
> To: users@ovirt.org
> 
> Subject: oVirt 4.4 and Active directory
> 
> 
>  
> Hello All,
>  
> Fresh standalone installation of oVirt 4.3 (CentOS 7). Execution of
> ovirt-engine-extension-aaa-ldap-setup completes normally and the DC is
> connected to AD (Domain functional level: Windows Server 2008).
>  
> On the same hardware fresh standalone installation of oVirt 4.4.
> 
> Installation of engine completed with warning:
>  
> 2020-11-23 14:50:46,159+0200 WARNING
> otopi.plugins.ovirt_engine_common.base.network.hostname
> hostname._validateFQDNresolvability:308 Failed to resolve 44-
> 8.mb118.local using DNS, it can be resolved only locally
>  
> Despite warning engine portal is resolvable after installation.
>  
> Execution of ovirt-engine-extension-aaa-ldap-setup ends with:
>  
> [ INFO  ] Stage: Environment customization
>   Welcome to LDAP extension configuration program
>   Available LDAP implementations:
>1 - 389ds
>2 - 389ds RFC-2307 Schema
>3 - Active Directory
>4 - IBM Security Directory Server
>5 - IBM Security Directory Server RFC-2307 Schema
>6 - IPA
>7 - Novell eDirectory RFC-2307 Schema
>8 - OpenLDAP RFC-2307 Schema
>9 - OpenLDAP Standard Schema
>   10 - Oracle Unified Directory RFC-2307 Schema
>   11 - RFC-2307 Schema (Generic)
>   12 - RHDS
>   13 - RHDS RFC-2307 Schema
>   14 - iPlanet
>   Please select: 3
>   Please enter Active Directory Forest name: mb118.local
> [ INFO  ] Resolving Global Catalog SRV record for mb118.local
> [WARNING] Cannot resolve Global Catalog SRV record for mb118.local.
> Please check you have entered correct Active Directory forest name
> and check that forest is resolvable by your system DNS servers
> [ ERROR ] Failed to execute stage 'Environment customization': Active
> Directory forest is not resolvable, please make sure you've entered
> correct forest name. If for some reason you can't use forest and you
> need some special configuration
>  instead, please refer to examples directory provided by ovirt-
> engine-extension-aaa-ldap package.
> [ INFO  ] Stage: Clean up
>   Log file is available at /tmp/ovirt-engine-extension-aaa-
> ldap-setup-20201123113909-bj749k.log:
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
>  
> Can someone advise on this? 
>  
> Thank you!
> Best,
> Latcho
>  
> 
> 
> 
> 
> ___Users mailing list -- 
> users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XLPLDG4SH7HDY2F5C62ILUZX5ZDTGKEA/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Cannot upgrade cluster to v4.5 (All hosts are CentOS 8.3.2011)

2020-12-24 Thread Strahil Nikolov via Users
Are you sure you have installed them with HE support?
Best Regards,
Strahil Nikolov

В 19:06 +0200 на 23.12.2020 (ср), Gilboa
Davara написа:
> On Wed, Dec 23, 2020 at 6:28 PM Gilboa Davara 
> wrote:
> > On Wed, Dec 23, 2020 at 6:20 PM Gilboa Davara 
> > wrote:
> > > On Tue, Dec 22, 2020 at 11:45 AM Sandro Bonazzola <
> > > sbona...@redhat.com> wrote:
> > > > Il giorno mer 16 dic 2020 alle ore 17:43 Gilboa Davara <
> > > > gilb...@gmail.com> ha scritto:
> > > > > On Wed, Dec 16, 2020 at 6:21 PM Martin Perina <
> > > > > mper...@redhat.com> wrote:
> > > > > > 
> > > > > > On Wed, Dec 16, 2020 at 4:59 PM Gilboa Davara <
> > > > > > gilb...@gmail.com> wrote:
> > > > > > > Thanks for the prompt reply.
> > > > > > > I assume I can safely ignore the "Upgrade cluster
> > > > > > > compatibility" warning until libvirt 6.6 gets pushed to
> > > > > > > CentOS 8.3?
> > > > > > 
> > > > > > We are working on releasing AV 8.3, hopefully it will be
> > > > > > available soon, but until that happens you have no way to
> > > > > > upgrade to CL 4.5 and you just need to stay on 4.4
> > > > > 
> > > > > Understood.
> > > > > 
> > > > > Thanks again.
> > > > > - Gilboa
> > > > >  
> > > > 
> > > > Just updating that oVirt 4.4.4 released yesterday comes with
> > > > Advanced Virtualization 8.3 so you can now enable CL 4.5.
> > > 
> > > Sadly enough, even post-full-upgrade (engine + hosts) something
> > > seems to be broken.
> > > 
> > > In the WebUI, I see all 3 hosts marked as "up".
> > > But when I run hosted-engine --vm-status (or migrate the hosted
> > > engine), only the first (original deployed) host is available.
> > > I tried "reinstalling" (from the WebUI) the two hosts, no errors,
> > > no change.
> > > I tried upgrading the cluster again, host 2 / 3 (the "missing"
> > > hosts) upgrade is successful; hosts1 fails (cannot migrate the
> > > hosted engine).
> > > 
> > > Any idea what's broken?
> > > 
> > > - Gilboa
> > > 
> > > 
> > 
> > I'll remove the problematic hosts, re-add them, and reconfigure the
> > network(s). Let's see if it works.
> > 
> > - Gilboa
> >  
> 
> Sorry for the noise. No go.
> Cleaned up the hosts via ovirt-hosted-engine-cleanup + reboot, tried
> "adding" them again, to no avail.
> Host marked as "up" in WebUI, network correctly configured, however,
> hosted-engine.conf isn't being created (see log below), ovirt-ha-
> broker/agent services cannot start and vm-status only shows one host.
> 
> - Gilboa
> 
> 
> ___Users mailing list -- 
> users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2JEBBBCTAB5JV2LR42JSHQWDKBU3NR5J/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6WNDTBYCDBEPIYM5FKBCUBEYXLJIUJRX/


[ovirt-users] Re: Cannot upgrade cluster to v4.5 (All hosts are CentOS 8.3.2011)

2020-12-24 Thread Strahil Nikolov via Users
> In the WebUI, I see all 3 hosts marked as "up".
> But when I run hosted-engine --vm-status (or migrate the hosted
> engine), only the first (original deployed) host is available.
> I tried "reinstalling" (from the WebUI) the two hosts, no errors, no
> change.
> I tried upgrading the cluster again, host 2 / 3 (the "missing" hosts)
> upgrade is successful; hosts1 fails (cannot migrate the hosted
> engine).

Start debugging from the hosts that are not seen via "hosted-engine --vm-status".
There are 2 log files to check:
/var/log/ovirt-hosted-engine-ha/{broker,agent}.log

Just check what they are complaining about.
Also, you need both ovirt-ha-broker.service and ovirt-ha-agent.service
up and running. If there is an issue, the ovirt-ha-agent is being
restarted (so it's expected behaviour).

Usually the broker should stop complaining and then the agent will kick
in.
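
A minimal set of commands to run on such a host, assuming the standard
ovirt-hosted-engine-ha packages, would be:

systemctl status ovirt-ha-broker ovirt-ha-agent
tail -n 100 /var/log/ovirt-hosted-engine-ha/broker.log /var/log/ovirt-hosted-engine-ha/agent.log
journalctl -u ovirt-ha-agent -u ovirt-ha-broker --since "1 hour ago"
hosted-engine --vm-status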


Best Regards,
Strahil Nikolov 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AM6YWRLXX3K4UQUZ22VW4PRAJXQPRFE3/


[ovirt-users] Re: Cannot ping from a nested VM

2020-12-22 Thread Strahil Nikolov via Users
I guess that you can use a firewalld direct rule to allow any traffic to the
nested VM.
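
As a rough, hypothetical example (the nested VM's address 192.168.122.50 is made
up; adjust the table/chain to your setup), a direct rule on the physical host
could look like:

firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -d 192.168.122.50/32 -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -s 192.168.122.50/32 -j ACCEPT
firewall-cmd --reload
firewall-cmd --direct --get-all-rules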
As far as I know, nested oVirt does not work nicely and it's easier
to test with a single VM with KVM.
Best Regards,
Strahil Nikolov

В 01:07 +0100 на 23.12.2020 (ср), wodel
youchi написа:
> Hi,
> 
> We have an HCI platform based on version 4.4.3.
> We activated the use of nested kvm and we reinstalled all the nodes
> to be able to use it.
> 
> Then we created a new nested HCI on top of the physical HCI to test
> ansible deployment.
> The deployment went well until the phase which adds the two other
> nodes (the two other virtualized hypervisors), but the VM-Manager
> couldn't add them.
> 
> After investigation, we found that the nested VM-Manager could not
> communicate with anything except its first virtualized hypervisor.
> 
> To make the nested VM-Manager capable of communicating with the
> outside world, we had to stop firewalld on the physical node of the
> HCI.
> 
> Any ideas?
> 
> Regards.
> 
> 
> 
> 
> ___Users mailing list -- 
> users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UEJY2TK76747RFUEFFUCUVTJLPAXNU2X/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XE6Z6IBTTFI77YCVZESNLZSZYXUMPNUV/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-22 Thread Strahil Nikolov via Users
You can use OEL or any EL-based clone.

Best Regards,
Strahil Nikolov






В вторник, 22 декември 2020 г., 08:46:54 Гринуич+2, Jason Keltz 
 написа: 






On 12/21/2020 8:22 AM, Sandro Bonazzola wrote:


>  


oVirt 4.4.4 is now generally available


The oVirt project is excited to announce the general availability of oVirt 
4.4.4 , as of December 21st, 2020.

...



>  
>  
> This release is available now on x86_64 architecture for:
> 
> * Red Hat Enterprise Linux 8.3
> * CentOS Linux (or similar) 8.3
> * CentOS Stream (tech preview)
> 
> 

Sandro,

I have a question about "Red Hat Enterprise Linux" compatibility with oVirt.  
I've always used CentOS in the past along with oVirt.  I'm running CentOS 7 
along with oVirt 4.3.  I really want to upgrade to oVirt 4.4, but I'm not 
comfortable with the future vision for CentOS as it stands for my 
virtualization platform.  If I was to move to RHEL for my oVirt systems, but 
still stick with the "self supported" model, it's not clear whether  I can get 
away with using "RHEL Workstation" for my 4 hosts ($179 USD each), or whether I 
need to purchase "Red Hat Enterprise Linux Server" ($349 USD each).  Any 
feedback would be appreciated.

Thanks!  


Jason.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TB6TOM2RGRJGXXPZL3NDLK77TGACAHIG/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/34LBTODBJX25TNMJQVX5WLLXI237Y4B3/


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Strahil Nikolov via Users
I'm also using direct cabling, so I doubt that is the problem.

Starting fresh is wise, but keep in mind:
- wipe your bricks before installing gluster
- check the PHY-SEC with 'lsblk -t'. If it's not 512, use vdo with the
"--emulate512" flag
- ensure that name resolution is working and each node can reach the
other nodes in the pool
(a rough command sketch for these checks follows below)
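
Something along these lines, assuming a brick device /dev/sdb (the device name
is made up) and the stock vdo tooling:

lsblk -t /dev/sdb                    # the PHY-SEC column shows the physical sector size
wipefs -a /dev/sdb                   # wipe old filesystem/LVM signatures from the brick device
vdo create --name=vdo_sdb --device=/dev/sdb --emulate512=enabled   # only if PHY-SEC is not 512 and VDO is wanted
getent hosts host2.example.com       # repeat from every node for every peer name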

Best Regards,
Strahil Nikolov

В 22:38 + на 21.12.2020 (пн), Charles Lam написа:
> Still not able to deploy Gluster on oVirt Node Hyperconverged - same
> error; upgraded to v4.4.4 and "kvdo not installed"
> 
> Tried the suggestion, and per
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/volume_option_table
> I also tried "gluster volume heal VOLNAME granular-entry-heal enable",
> then "gluster volume heal VOLNAME" and received "transport endpoint is
> not connected". Double-checked networking, restarted the volume per
> https://access.redhat.com/solutions/5089741 using "gluster volume
> start VOLNAME force", and also checked Gluster server and client versions
> per https://access.redhat.com/solutions/5380531 --> the self-heal daemon
> shows a process ID and local status but not peer status.
> 
> updated to oVirt v4.4.4 and now am receiving
> 
> "vdo: ERROR - Kernel module kvdo not installed\nvdo: ERROR -
> modprobe: FATAL: Module kvdo not found in directory
> /lib/modules/4.18.0-240.1.1.el8_3.x86_64\n"
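
If the missing kvdo module turns out to be the only blocker, a quick check on the
node (assuming the vdo/kmod-kvdo packages are available from the node's repos)
might be:

lsmod | grep kvdo                    # is the module loaded at all?
dnf install -y vdo kmod-kvdo         # (re)install the VDO userspace tool and kernel module
modprobe kvdo && lsmod | grep kvdo   # confirm it loads for the running kernel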
> 
> This appears to be a recent issue.  I am going to re-image nodes with
> oVirt Node v4.4.4 and rebuild networking and see if that helps, if
> not I will revert to v4.4.2 (most recent successful on this cluster)
> and see if I can build it there.
> 
> I am using local direct cable between three host nodes for storage
> network, with statically assigned IPs on local network adapters along
> with Hosts file and 3x /30 subnets for routing.  Management network
> is DHCP and set up as before when successful.  I have confirmed
> "files" is listed first in nsswitch.conf and have not had any issues
> with ssh to storage network or management network --> could anything
> related to direct cabling be reason for peer connection issue with
> self-heal daemon even though "gluster peer status" and "gluster peer
> probe" are successful?
> 
> Thanks again, I will update after rebuilding with oVirt Node v4.4.4
> 
> Respectfully,
> Charles
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ICVEYLCC677BYGQ6SJC6FB7YGPACSBPY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6FVMIZR6CH6REKAZH3Z2DXKVWPIGLXOB/


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Strahil Nikolov via Users
I see that the selfheal daemon is not running.

Just try the following from host1:
systemctl stop glusterd; sleep 5; systemctl start glusterd
for i in $(gluster volume list); do gluster volume set $i cluster.granular-entry-heal enable; done

And then rerun the Ansible flow.
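
To confirm the option actually took effect before rerunning the deployment,
something like this should do (standard gluster CLI, nothing assumed beyond the
volume names above):

for i in $(gluster volume list); do gluster volume get $i cluster.granular-entry-heal; done
gluster volume heal engine info summary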

Best Regards,
Strahil Nikolov






В понеделник, 21 декември 2020 г., 17:54:42 Гринуич+2, Charles Lam 
 написа: 





Thanks so very much Strahil for your continued assistance!

[root@fmov1n1 conf.d]# gluster pool list
UUID                                    Hostname                State
16e921fb-99d3-4a2e-81e6-ba095dbc14ca    host2.fqdn.tld  Connected
d4488961-c854-449a-a211-1593810df52f    host3.fqdn.tld  Connected
f9f9282c-0c1d-405a-a3d3-815e5c6b2606    localhost              Connected
[root@fmov1n1 conf.d]# gluster volume list
data
engine
vmstore
[root@fmov1n1 conf.d]# for i in $(gluster volume list); do gluster volume status $i; gluster volume info $i; echo "###"; done
Status of volume: data
Gluster process                            TCP Port  RDMA Port  Online  Pid
--
Brick host1.fqdn.tld:/gluster_bricks
/data/data                                  49153    0          Y      899467
Brick host2.fqdn.tld:/gluster_bricks
/data/data                                  49153    0          Y      820456
Brick host3.fqdn.tld:/gluster_bricks
/data/data                                  49153    0          Y      820482
Self-heal Daemon on localhost              N/A      N/A        Y      897788
Self-heal Daemon on host3.fqdn.tld  N/A      N/A        Y      820406
Self-heal Daemon on host2.fqdn.tld  N/A      N/A        Y      820367

Task Status of Volume data
--
There are no active volume tasks


Volume Name: data
Type: Replicate
Volume ID: b4e984c8-7c43-4faa-92e1-84351a645408
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1.fqdn.tld:/gluster_bricks/data/data
Brick2: host2.fqdn.tld:/gluster_bricks/data/data
Brick3: host3.fqdn.tld:/gluster_bricks/data/data
Options Reconfigured:
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on
###
Status of volume: engine
Gluster process                            TCP Port  RDMA Port  Online  Pid
--
Brick host1.fqdn.tld:/gluster_bricks
/engine/engine                              49152    0          Y      897767
Brick host2.fqdn.tld:/gluster_bricks
/engine/engine                              49152    0          Y      820346
Brick host3.fqdn.tld:/gluster_bricks
/engine/engine                              49152    0          Y      820385
Self-heal Daemon on localhost              N/A      N/A        Y      897788
Self-heal Daemon on host3.fqdn.tld  N/A      N/A        Y      820406
Self-heal Daemon on host2.fqdn.tld  N/A      N/A        Y      820367

Task Status of Volume engine
--
There are no active volume tasks


Volume Name: engine
Type: Replicate
Volume ID: 75cc04e6-d1cb-4069-aa25-81550b7878db
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1.fqdn.tld:/gluster_bricks/engine/engine
Brick2: host2.fqdn.tld:/gluster_bricks/engine/engine
Brick3: host3.fqdn.tld:/gluster_bricks/engine/engine
Options Reconfigured:
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-19 Thread Strahil Nikolov via Users
You will need to check the gluster volumes status.
Can you provide output of the following (from 1 node ):

gluster pool list
gluster volume list
for i in $(gluster volume list); do gluster volume status $i; gluster volume info $i; echo "###"; done

Best Regards,
Strahil Nikolov







В петък, 18 декември 2020 г., 22:44:48 Гринуич+2, Charles Lam 
 написа: 





Dear friends,

Thanks to Donald and Strahil, my earlier Gluster deploy issue was resolved by 
disabling multipath on the nvme drives.  The Gluster deployment is now failing 
on the three node hyperconverged oVirt v4.3.3 deployment at:
  
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **
task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67

with:

"stdout": "One or more bricks could be down. Please execute the command
again after bringing all bricks online and finishing any pending heals\nVolume 
heal
failed."

Specifically:

TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **
task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine',
'brick': '/gluster_bricks/engine/engine', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": ["gluster", "volume", "heal",
"engine", "granular-entry-heal", "enable"],
"delta": "0:00:10.112451", "end": "2020-12-18
19:50:22.818741", "item": {"arbiter": 0, "brick":
"/gluster_bricks/engine/engine", "volname": "engine"},
"msg": "non-zero return code", "rc": 107, "start":
"2020-12-18 19:50:12.706290", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down.
Please execute the command again after bringing all bricks online and finishing 
any
pending heals\nVolume heal failed.", "stdout_lines": ["One or more
bricks could be down. Please execute the command again after bringing all 
bricks online
and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick':
'/gluster_bricks/data/data', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": ["gluster", "volume", "heal",
"data", "granular-entry-heal", "enable"], "delta":
"0:00:10.110165", "end": "2020-12-18 19:50:38.260277",
"item": {"arbiter": 0, "brick":
"/gluster_bricks/data/data", "volname": "data"},
"msg": "non-zero return code", "rc": 107, "start":
"2020-12-18 19:50:28.150112", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down.
Please execute the command again after bringing all bricks online and finishing 
any
pending heals\nVolume heal failed.", "stdout_lines": ["One or more
bricks could be down. Please execute the command again after bringing all 
bricks online
and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore',
'brick': '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": ["gluster", "volume", "heal",
"vmstore", "granular-entry-heal", "enable"],
"delta": "0:00:10.113203", "end": "2020-12-18
19:50:53.767864", "item": {"arbiter": 0, "brick":
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"},
"msg": "non-zero return code", "rc": 107, "start":
"2020-12-18 19:50:43.654661", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down.
Please execute the command again after bringing all bricks online and finishing 
any
pending heals\nVolume heal failed.", "stdout_lines": ["One or more
bricks could be down. Please execute the command again after bringing all 
bricks online
and finishing any pending heals", "Volume heal failed."]}

Any suggestions regarding troubleshooting, insight or recommendations for 
reading are greatly appreciated.  I apologize for all the email and am only 
creating this as a separate thread as it is a new, presumably unrelated issue.  
I welcome any recommendations if I can improve my forum etiquette.

Respectfully,
Charles
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZRER63XZ3XF6HGQV2VNAQ4BKS6ZSHYVP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6GFAGJM5554CHIO4MIZY4P7JPMHUIGKI/


[ovirt-users] Re: fence_xvm for testing

2020-12-18 Thread Strahil Nikolov via Users







В четвъртък, 17 декември 2020 г., 22:32:14 Гринуич+2, Alex K 
 написа: 







On Thu, Dec 17, 2020, 14:43 Strahil Nikolov  wrote:
> Sadly no. I have used it on test Clusters with KVM VMs.
You mean clusters managed with pacemaker?

Yes, with pacemaker.
>  
> If you manage to use oVirt as a nested setup, fencing works quite well with 
> ovirt.
I have set up nested oVirt 4.3 on top of a KVM host running CentOS 8 Stream.
>  
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В четвъртък, 17 декември 2020 г., 11:16:47 Гринуич+2, Alex K 
>  написа: 
> 
> 
> 
> 
> 
> Hi Strahil, 
> 
> Do you have a working setup with fence_xvm for ovirt 4.3?
> 
> On Mon, Dec 14, 2020 at 8:59 PM Strahil Nikolov  wrote:
>> Fence_xvm requires a key is deployed on both the Host and the VMs in order 
>> to succeed. What is happening when you use the cli on any of the VMs ?
>> Also, the VMs require an open tcp port to receive the necessary output of 
>> each request.
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> 
>> 
>> В понеделник, 14 декември 2020 г., 10:57:11 Гринуич+2, Alex K 
>>  написа: 
>> 
>> 
>> 
>> 
>> 
>> Hi friends, 
>> 
>> I was wondering what is needed to setup fence_xvm in order to use for power 
>> management in virtual nested environments for testing purposes. 
>> 
>> I have followed the following steps: 
>> https://github.com/rightkick/Notes/blob/master/Ovirt-fence_xmv.md
>> 
>> I tried also engine-config -s CustomFenceAgentMapping="fence_xvm=_fence_xvm"
>> From command line all seems fine and I can get the status of the host VMs, 
>> but I was not able to find what is needed to set this up at engine UI: 
>> 
>> 
>> At username and pass I just filled dummy values as they should not be needed 
>> for fence_xvm. 
>> I always get an error at GUI while engine logs give: 
>> 
>> 
>> 2020-12-14 08:53:48,343Z WARN  
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
>> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
>> kvm0.lab.local.Internal JSON-RPC error
>> 2020-12-14 08:53:48,343Z INFO  
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default 
>> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand, 
>> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN', 
>> message='Internal JSON-RPC error'}, log id: 2437b13c
>> 2020-12-14 08:53:48,400Z WARN  
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
>> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power 
>> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local and 
>> Fence Agent fence_xvm:225.0.0.12 failed.
>> 2020-12-14 08:53:48,400Z WARN  
>> [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4) 
>> [07c1d540-6d8d-419c-affb-181495d75759] Fence action failed using proxy host 
>> 'kvm1.lab.local', trying another proxy
>> 2020-12-14 08:53:48,485Z ERROR 
>> [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-4) 
>> [07c1d540-6d8d-419c-affb-181495d75759] Can not run fence action on host 
>> 'kvm0.lab.local', no suitable proxy host was found.
>> 2020-12-14 08:53:48,486Z WARN  
>> [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4) 
>> [07c1d540-6d8d-419c-affb-181495d75759] Failed to find another proxy to 
>> re-run failed fence action, retrying with the same proxy 'kvm1.lab.local'
>> 2020-12-14 08:53:48,582Z WARN  
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
>> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
>> kvm0.lab.local.Internal JSON-RPC error
>> 2020-12-14 08:53:48,582Z INFO  
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default 
>> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand, 
>> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN', 
>> message='Internal JSON-RPC error'}, log id: 8607bc9
>> 2020-12-14 08:53:48,637Z WARN  
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
>> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power 
>> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local and 
>> Fence Agent fence_xvm:225.0.0.12 failed.
>> 
>> 
>> Any idea?
>> 
>> Thanx, 
>> Alex
>> 
>> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> 

[ovirt-users] Re: Adding host to hosted engine fails

2020-12-17 Thread Strahil Nikolov via Users
I would start simpler and mount the volume via FUSE on any of the oVirt hosts:

mount -t glusterfs :/volume /mnt

Then browse the /mnt and verify that you can read and write via vdsm user:

sudo -u vdsm touch /mnt/testfile
sudo -u vdsm mkdir /mnt/testdir
sudo -u vdsm touch /mnt/testdir/testfile


Best Regards,
Strahil Nikolov






В четвъртък, 17 декември 2020 г., 11:45:45 Гринуич+2, Ritesh Chikatwar 
 написа: 





Hello,


Which version of ovirt are you using?
Can you make sure the gluster service is running or not, because I see an error:
"Could not connect to storageServer".
Also, please share the engine log as well, and a few more lines after the error
occurred in vdsm.

Ritesh

On Thu, Dec 17, 2020 at 12:00 PM Ariez Ahito  wrote:
> here is our setup
> stand alone glusterfs storage replica3
> 10.33.50.33
> 10.33.50.34
> 10.33.50.35
> 
> we deployed hosted-engine and managed to connect to our glusterfs storage
> 
> now we are having issues adding hosts 
> 
> here is the logs
> dsm.gluster.exception.GlusterVolumesListFailedException: Volume list failed: 
> rc=1 out=() err=['Command {self.cmd} failed with rc={self.rc} 
> out={self.out!r} err={self.err!r}']
> 2020-12-17 14:22:27,106+0800 INFO  (jsonrpc/4) [storage.StorageDomainCache] 
> Invalidating storage domain cache (sdc:74)
> 2020-12-17 14:22:27,106+0800 INFO  (jsonrpc/4) [vdsm.api] FINISH 
> connectStorageServer return={'statuslist': [{'id': 
> 'afa2d41a-d817-4f4a-bd35-5ffedd1fa65b', 'status': 4149}]} 
> from=:::10.33.0.10,50058, flow_id=6170eaa3, 
> task_id=f00d28fa-077f-403a-8024-9f9b533bccb5 (api:54)
> 2020-12-17 14:22:27,107+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC 
> call StoragePool.connectStorageServer took more than 1.00 seconds to succeed: 
> 3.34 (__init__:316)
> 2020-12-17 14:22:27,213+0800 INFO  (jsonrpc/6) [vdsm.api] START 
> connectStorageServer(domType=7, 
> spUUID='1abdb9e4-3f85-11eb-9994-00163e4e4935', conList=[{'password': 
> '', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 
> 'backup-volfile-servers=gluster3:gluster4', 'iqn': '', 'connection': 
> 'gluster3:/VOL2', 'ipv6_enabled': 'false', 'id': 
> '2fb6989d-b26b-42e7-af35-4e4cf718eebf', 'user': '', 'tpgt': '1'}, 
> {'password': '', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 
> 'backup-volfile-servers=gluster3:gluster4', 'iqn': '', 'connection': 
> 'gluster3:/VOL3', 'ipv6_enabled': 'false', 'id': 
> 'b7839bcd-c0e3-422c-8f2c-47351d24b6de', 'user': '', 'tpgt': '1'}], 
> options=None) from=:::10.33.0.10,50058, flow_id=6170eaa3, 
> task_id=cfeb3401-54b9-4756-b306-88d4275c0690 (api:48)
> 2020-12-17 14:22:29,058+0800 INFO  (periodic/1) [vdsm.api] START 
> repoStats(domains=()) from=internal, 
> task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:48)
> 2020-12-17 14:22:29,058+0800 INFO  (periodic/1) [vdsm.api] FINISH repoStats 
> return={} from=internal, task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:54)
> 2020-12-17 14:22:30,512+0800 ERROR (jsonrpc/6) [storage.HSM] Could not 
> connect to storageServer (hsm:2444)
> 
> 
> in the events  tab
> The error message for connection gluster3:/ISO returned by VDSM was: Failed 
> to fetch Gluster Volume List
> The error message for connection gluster3:/VOL1 returned by VDSM was: Failed 
> to fetch Gluster Volume List
> 
> thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJZNXHOIFHWNDJJ7INI3VNLT46TB3EAW/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UTFH7VWFPBGSQIZGJQXXLBODXTBPPJT2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XPUOIZL5K25CJQRUT5ICJONFY6IQDT7U/


[ovirt-users] Re: fence_xvm for testing

2020-12-17 Thread Strahil Nikolov via Users
Sadly no. I have used it on test Clusters with KVM VMs.

If you manage to use oVirt as a nested setup, fencing works quite well with 
ovirt.

Best Regards,
Strahil Nikolov






В четвъртък, 17 декември 2020 г., 11:16:47 Гринуич+2, Alex K 
 написа: 





Hi Strahil, 

Do you have a working setup with fence_xvm for ovirt 4.3?

On Mon, Dec 14, 2020 at 8:59 PM Strahil Nikolov  wrote:
> Fence_xvm requires a key is deployed on both the Host and the VMs in order to 
> succeed. What is happening when you use the cli on any of the VMs ?
> Also, the VMs require an open tcp port to receive the necessary output of 
> each request.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В понеделник, 14 декември 2020 г., 10:57:11 Гринуич+2, Alex K 
>  написа: 
> 
> 
> 
> 
> 
> Hi friends, 
> 
> I was wondering what is needed to setup fence_xvm in order to use for power 
> management in virtual nested environments for testing purposes. 
> 
> I have followed the following steps: 
> https://github.com/rightkick/Notes/blob/master/Ovirt-fence_xmv.md
> 
> I tried also engine-config -s CustomFenceAgentMapping="fence_xvm=_fence_xvm"
> From command line all seems fine and I can get the status of the host VMs, 
> but I was not able to find what is needed to set this up at engine UI: 
> 
> 
> At username and pass I just filled dummy values as they should not be needed 
> for fence_xvm. 
> I always get an error at GUI while engine logs give: 
> 
> 
> 2020-12-14 08:53:48,343Z WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
> kvm0.lab.local.Internal JSON-RPC error
> 2020-12-14 08:53:48,343Z INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default 
> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand, 
> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN', 
> message='Internal JSON-RPC error'}, log id: 2437b13c
> 2020-12-14 08:53:48,400Z WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power 
> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local and 
> Fence Agent fence_xvm:225.0.0.12 failed.
> 2020-12-14 08:53:48,400Z WARN  
> [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4) 
> [07c1d540-6d8d-419c-affb-181495d75759] Fence action failed using proxy host 
> 'kvm1.lab.local', trying another proxy
> 2020-12-14 08:53:48,485Z ERROR 
> [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-4) 
> [07c1d540-6d8d-419c-affb-181495d75759] Can not run fence action on host 
> 'kvm0.lab.local', no suitable proxy host was found.
> 2020-12-14 08:53:48,486Z WARN  
> [org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4) 
> [07c1d540-6d8d-419c-affb-181495d75759] Failed to find another proxy to re-run 
> failed fence action, retrying with the same proxy 'kvm1.lab.local'
> 2020-12-14 08:53:48,582Z WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
> VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
> kvm0.lab.local.Internal JSON-RPC error
> 2020-12-14 08:53:48,582Z INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default 
> task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand, 
> return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN', 
> message='Internal JSON-RPC error'}, log id: 8607bc9
> 2020-12-14 08:53:48,637Z WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
> FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power 
> management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local and 
> Fence Agent fence_xvm:225.0.0.12 failed.
> 
> 
> Any idea?
> 
> Thanx, 
> Alex
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7IHC4MYY5LJFJMEJMLRRFSTMD7IK23I/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LVCO67OSNVWAD37FCH6C4YQPMYJD67OM/


[ovirt-users] Re: Network Teamd support

2020-12-17 Thread Strahil Nikolov via Users
Hey Dominik,

it was mentioned several times before why teaming is "better" than bonding ;)

Best Regards,
Strahil Nikolov






В сряда, 16 декември 2020 г., 16:59:20 Гринуич+2, Dominik Holler 
 написа: 







On Fri, Dec 11, 2020 at 1:19 AM Carlos C  wrote:
> Hi folks,
> 
> Does Ovirt 4.4.4 support or will support Network Teamd? Or only bonding?
> 


Currently, oVirt does not support teaming,
but you are welcome to share which feature you are missing in the current
oVirt bonding implementation in
https://bugzilla.redhat.com/show_bug.cgi?id=1351510

Thanks
Dominik

 
>  regards
> Carlos
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ABGHHQYZBLO34YXBP4BKX6UGLIOL7IVU/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UWYK4MADCU2ZPDQETOETSSDX546HDR6Q/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJWRRNRRIHBIXOTVQ7KESIEWSBMFFBKM/


[ovirt-users] Re: Cannot connect Glusterfs storage to Ovirt

2020-12-17 Thread Strahil Nikolov via Users
Did you mistype in the e-mail, or did you really put a "/" there?
For Gluster, there should be a ":" character between the Gluster volume server and
the volume name:

server:volume and server:/volume are both valid ways to define the volume.

Best Regards,
Strahil Nikolov






В сряда, 16 декември 2020 г., 02:37:45 Гринуич+2, Ariez Ahito 
 написа: 





Hi guys, I have installed an oVirt 4.4 hosted engine and a separate glusterfs
storage.
Now during hosted-engine deployment I try to choose:
STORAGE TYPE: gluster
Storage connection: 10.33.50.33/VOL1
Mount Option:

and when I try to connect,

this gives me an error:
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is 
"[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
reason is \"Operation Failed\". Fault detail is \"[Problem while trying to 
mount target]\". HTTP response code is 400."}
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KLPN6P4FMY6LJAD4ETRYLV5PCA7BAV6J/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NV2WNQOLT2BET4DVNYYRWOQO3QZ5QBRZ/


[ovirt-users] Re: fence_xvm for testing

2020-12-14 Thread Strahil Nikolov via Users
Fence_xvm requires that a key is deployed on both the host and the VMs in order to
succeed. What happens when you use the CLI on any of the VMs?
Also, the VMs require an open TCP port to receive the necessary output of each
request.
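
A rough sketch of the key distribution and a manual test, assuming the usual
/etc/cluster/fence_xvm.key location and a guest named testvm1 (the name is made
up):

mkdir -p /etc/cluster
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=512 count=1   # generate the shared key on the hypervisor
scp /etc/cluster/fence_xvm.key root@testvm1:/etc/cluster/         # copy the same key into every cluster VM
fence_xvm -o list                                                 # run from a VM: list domains known to fence_virtd
fence_xvm -o status -H testvm1                                    # query the power status of one guest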

Best Regards,
Strahil Nikolov






В понеделник, 14 декември 2020 г., 10:57:11 Гринуич+2, Alex K 
 написа: 





Hi friends, 

I was wondering what is needed to set up fence_xvm in order to use it for power
management in virtual nested environments for testing purposes. 

I have followed the following steps: 
https://github.com/rightkick/Notes/blob/master/Ovirt-fence_xmv.md

I also tried engine-config -s CustomFenceAgentMapping="fence_xvm=_fence_xvm"
From the command line all seems fine and I can get the status of the host VMs, but
I was not able to find what is needed to set this up in the engine UI: 


For username and password I just filled in dummy values, as they should not be
needed for fence_xvm. 
I always get an error at GUI while engine logs give: 


2020-12-14 08:53:48,343Z WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
kvm0.lab.local.Internal JSON-RPC error
2020-12-14 08:53:48,343Z INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default task-4) 
[07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand, return: 
FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN', message='Internal 
JSON-RPC error'}, log id: 2437b13c
2020-12-14 08:53:48,400Z WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power 
management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local and 
Fence Agent fence_xvm:225.0.0.12 failed.
2020-12-14 08:53:48,400Z WARN  
[org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4) 
[07c1d540-6d8d-419c-affb-181495d75759] Fence action failed using proxy host 
'kvm1.lab.local', trying another proxy
2020-12-14 08:53:48,485Z ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator] 
(default task-4) [07c1d540-6d8d-419c-affb-181495d75759] Can not run fence 
action on host 'kvm0.lab.local', no suitable proxy host was found.
2020-12-14 08:53:48,486Z WARN  
[org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4) 
[07c1d540-6d8d-419c-affb-181495d75759] Failed to find another proxy to re-run 
failed fence action, retrying with the same proxy 'kvm1.lab.local'
2020-12-14 08:53:48,582Z WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host 
kvm0.lab.local.Internal JSON-RPC error
2020-12-14 08:53:48,582Z INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default task-4) 
[07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand, return: 
FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN', message='Internal 
JSON-RPC error'}, log id: 8607bc9
2020-12-14 08:53:48,637Z WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID: 
FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power 
management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local and 
Fence Agent fence_xvm:225.0.0.12 failed.


Any idea?

Thanx, 
Alex


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7IHC4MYY5LJFJMEJMLRRFSTMD7IK23I/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5KCPUQ2OCDSTQ5KKRZQ4LQR7ON5JVFU2/


[ovirt-users] Re: CentOS 8 is dead

2020-12-11 Thread Strahil Nikolov via Users
Hi Thomas,

actually your expectations are a little bit high. Why am I saying this?
oVirt is the upstream of Red Hat's and Oracle's paid solutions. As such, it's 
much more dynamic and we are, in a way, its testers. So oVirt is to RHV what 
Fedora (and not CentOS) is to RHEL.

Actually, you are looking for an RHV rebuild (the way CentOS is for RHEL) and not for oVirt.

In order to mitigate those issues, you need to:
- Use patch management. You can't install packages {A,B,C} on your test 
environment, then install packages {D,E,F} on prod and pretend that 
everything is fine.
- Learn a little bit about oVirt & Gluster. Both require some prior 
knowledge or you will have headaches. Gluster is simple to set up, but it's 
complex and not free of bugs (just like every upstream project). And of course, 
it is the upstream of RHGS - so you are in the same boat as with oVirt.

If you really need stability , then you have the choice to evaluate RHEL + RHV 
+ Gluster and I can assure you it is more stable than the current setup.

I should admit that I had 2 cases where Gluster caused me headaches, but 
that's 2 issues in 2 years; compared to the multi-million storage arrays that we 
have (which failed 3 times until the vendor fixed the firmware) - that is far less than 
expected.

My Lab is currently frozen to 4.3.10 and the only headaches are my old hardware.

Of course , if you feel much more confident with OpenVZ than oVirt, I think 
that it's quite natural to prefer it.

On the positive side, the oVirt community is quite active and willing to 
assist (including Red Hat engineers), and I have not seen a single issue go 
unsolved.

Best Regards,
Strahil Nikolov






В четвъртък, 10 декември 2020 г., 22:03:45 Гринуич+2, tho...@hoberg.net 
 написа: 





I came to oVirt thinking that it was like CentOS: There might be bugs, but 
given the mainline usage in home and corporate labs with light workloads and 
nothing special, chances to hit one should be pretty minor: I like looking for 
new frontiers atop of my OS, not inside it.

I have been running CentOS/OpenVZ for years in a previous job, mission critical 
24x7 stuff where minutes of outage meant being grilled for hours in meetings 
afterwards. And with PCI-DSS compliance certified. Never had an issue with 
OpenVZ/CentOS; all those minute goofs were human error or Oracle inventing 
execution plans.

Boy was I wrong about oVirt! Just setting it up took weeks. Ansible loves 
eating Gigahertz and I was running on Atoms. I had to learn how to switch over to 
an i7 in mid-installation to have it finish at all. In the end I had learned 
tons of new things, but all I wanted was a cluster that would work as much out 
of the box as CentOS or OpenVZ.

Something as fundamental as exporting and importing a VM might simply not work 
and not even get fixed.

Migrating HCI from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4 is anything but 
smooth, a complete rebuild seems the lesser evil: Now if only exports and 
imports worked reliably!

Rebooting an HCI node seems to involve an "I am dying!" aria on the network, 
where the whole switch becomes unresponsive for 10 minutes and the 
fault-tolerant cluster on it is 100% unresponsive (including all other machines 
on that switch). I had so much fun resynching gluster file systems and searching 
through all those log files for signs as to what was going on!
And the instructions on how to fix gluster issues seem so wonderfully detailed 
and yet vague that one could spend days trying to fix things or rebuild and 
restore. It doesn't help that the fate of Gluster very much seems to hang in 
the air, when the scalable HCI aspect was the only reason I ever wanted oVirt.

It could just be an issue with Realtek adapters, because I never observed something 
like that with Intel NICs or on (recycled old) enterprise hardware.

I guess official support for a 3-node HCI cluster on passive Atoms isn't going 
to happen, unless I make it happen 100% myself: It's open source after all!

Just think what 3/6/9-node HCI based on Raspberry Pi would do for the project! 
A 9-node HCI should deliver better 10Gbit GlusterFS performance than most 
QNAP units at the same cost with a single 10Gbit interface, even with 7:2 
erasure coding!

I really think the future of oVirt may be at the edge, not in the datacenter 
core.

In short: oVirt is very much beta software and quite simply a full-time job if 
you depend on it working over time.

I can't see that getting any better when one beta runs on top of another 
beta. At the moment my oVirt experience makes me doubt RHV on RHEL would work any 
better, even if it's cheaper than VMware.

OpenVZ was simply the far better alternative to KVM for most of the things I 
needed from virtualization, and it was mainly the hassle of trying to make that 
work with RHEL which had me switching to CentOS. CentOS with OpenVZ was the 
bedrock of that business for 15 years and proved to me that Red Hat was 
hell-bent on making bad decisions on technological 

[ovirt-users] Re: CentOS 8 is dead

2020-12-08 Thread Strahil Nikolov via Users
Actually,

you are not the only one thinking about it.
You can see that a lot of users (including me) are joining the following 
Slack channel: https://app.slack.com/client/T0YKGK200/D01H5BZ85LG

Best Regards,
Strahil Nikolov

В 16:01 -0500 на 08.12.2020 (вт), Derek Atkins написа:
> On Tue, December 8, 2020 3:49 pm, Christopher Cox wrote:
> > On 12/8/20 2:20 PM, Michael Watters wrote:
> > > This was one of my fears regarding the IBM acquisition.  I guess
> > > we
> > > can't complain too much, it's not like anybody *pays* for
> > > CentOS.  :)
> > 
> > Yes, but this greatly limits oVirt use to temporal dev labs only.
> > 
> > Maybe oVirt should look into what it would take to one of the long
> > term
> > Devian
> > based distros
> 
> So... stupid question, but...   What would it take for a group of
> interested individuals to "take over" the current CentOS-as-RHEL-
> rebuild
> processes currently in place?  I honestly have no idea how much
> person-hour effort it it is to maintain CentOS, or what other
> resources
> (build machines / infrastructure) are required?
> 
> > ...snippity
> 
> -derek
> -- 
>Derek Atkins 617-623-3745
>de...@ihtfp.com www.ihtfp.com
>Computer and Internet Security Consultant
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IVROYZSBEM3GSWGON452YKOF7U5HXNTY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CGOWJ3TCZPFMU753XGJOYZR3RHML2I7W/


[ovirt-users] Re: CentOS 8 is dead

2020-12-08 Thread Strahil Nikolov via Users
Yeah,
the main problem is that Stream won't be as stable as RHEL (which also
has tons of bugs) and you will have to fight bugs in the OS as if
you were running Fedora, and on top of that - we have to be extra careful
with oVirt.
Also, Stream is quite new and we can't say if it will be what CentOS
was in the past.

Best Regards,
Strahil Nikolov

В 14:56 -0500 на 08.12.2020 (вт), Alex McWhirter написа:
> On 2020-12-08 14:37, Strahil Nikolov via Users wrote:
> > Hello All,
> > 
> > I'm really worried about the following news:
> > https://blog.centos.org/2020/12/future-is-centos-stream/
> > 
> > Did anyone tried to port oVirt to SLES/openSUSE or any Debian-based
> > distro ?
> > 
> > Best Regards,
> > Strahil Nikolov
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/HZC4D4OSYL64DX5VYXDJCHDNRZDRGIT6/
> 
> I fail to see a major issue honestly. If current RHEL is 8.3, CentOS 
> Stream is essentially the RC for 8.4... oVirt in and of itself is
> also 
> an upstream project, targeting upstream in advance is likely
> beneficial 
> for all parties involved.
> 
> CentOS has been lagging behind RHEL quite a lot, creating it's own
> set 
> of issues. Being ahead of the curve is more beneficial than
> detrimental 
> IMO. The RHEL sources are still being published to the CentOS git,
> oVirt 
> node could be built against that, time will tell.
> 
> Supported or not, i bet someone forks it anyways.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SM3JM6DHDCRJGH4LMGIKYOLCJKGWJKS4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/535INKZGF22FIMJIVIADYNBUJ6J3P26U/


[ovirt-users] CentOS 8 is dead

2020-12-08 Thread Strahil Nikolov via Users
Hello All,

I'm really worried about the following news: 
https://blog.centos.org/2020/12/future-is-centos-stream/

Did anyone try to port oVirt to SLES/openSUSE or any Debian-based
distro?

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HZC4D4OSYL64DX5VYXDJCHDNRZDRGIT6/


[ovirt-users] Re: no disk /dev/sdb/glusters_vg_sdb/gluster_lv_engine etc...

2020-12-06 Thread Strahil Nikolov via Users

I have no idea how you obtained this one: 
/dev/sdb/glusters_vg_sdb/gluster_lv_engine

Usually LVs are either /dev/mapper/VG-LV or /dev/VG/LV.

It seems strange that you didn't blacklist your local disk in multipath.

Here is an example of how to do it (remove the devnode '*' line if you have a SAN, and 
replace the wwids):

[root@ovirt2 conf.d]# cat /etc/multipath/conf.d/blacklist.conf 
blacklist {
        devnode "*"
        wwid 
nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001
        wwid TOSHIBA-TR200_Z7KB600SK46S
        wwid ST500NM0011_Z1M00LM7
        wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189
        wwid WDC_WD15EADS-00P8B0_WD-WMAVU0885453
        wwid WDC_WD5003ABYZ-011FA0_WD-WMAYP0F35PJ4
}
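
The wwids can be looked up before writing the blacklist, and after changing the 
config multipath has to be reloaded so LVM can see the disk again. A rough sketch 
(the device and VG names are only examples - check yours with 'lsblk' and 'vgs'):

# /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
# systemctl reload multipathd
# multipath -F                  (flush unused multipath maps)
# vgchange -ay gluster_vg_sdb   (re-activate the VG so the gluster LVs reappear)
# lvs gluster_vg_sdb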

Best Regards,
Strahil Nikolov




В събота, 5 декември 2020 г., 12:06:16 Гринуич+2, garcialiang.a...@gmail.com 
 написа: 





Hello,

I have a problem with the disk /dev/sdb. After rebooting my server, I lost my 
partitions in /dev/sdb/glusters_vg_sdb/gluster_lv_engine.
I only have 
362cea7f05157a3002725415707fe65ec 253:0    0  9.8T  0 mpath
When I look at gluster volume info, I see all the volumes.
Why don't I have sdb 
/dev/sdb/glusters_vg_sdb/gluster_lv_engine  
/dev/sdb/glusters_vg_sdb/gluster_lv_data
/dev/sdb/glusters_vg_sdb/gluster_lv_vmstore
How can I mount the volumes?
Could you help me?

Best regards

Anne  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/URCDTIMZJGVDKXRD3YL25WQHJOJYEGT3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4RH5ICPPF2QATOIJ5H6XPSEWBSY6PZL3/


[ovirt-users] Re: new hyperconverged setup

2020-12-06 Thread Strahil Nikolov via Users
With 10GbE you can reach no more than about 1.25 GB/s, so you should enable 
cluster.choose-local if your disks can squeeze out more than that.

With replica 3 you get 1/3 of the total space, so that seems normal.
You can always expand a distributed-replicated volume as long as you expand 
with 3 bricks at a time (but you will have to rebalance the volume afterwards, 
see the sketch below).

Keep the bricks of a replica set (the 3 bricks) the same size or you will waste space.
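
A rough sketch of such an expansion (the hostnames and brick paths are only 
examples, and 'vmstore' is assumed to be the volume name):

# gluster volume add-brick vmstore host1:/gluster_bricks/vmstore2/vmstore2 \
      host2:/gluster_bricks/vmstore2/vmstore2 host3:/gluster_bricks/vmstore2/vmstore2
# gluster volume rebalance vmstore start
# gluster volume rebalance vmstore status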

Best Regards,
Strahil Nikolov






В четвъртък, 3 декември 2020 г., 21:29:16 Гринуич+2, cpo cpo 
 написа: 





Thanks for the info. My storage network speed is 10Gb.

I rolled out a config where I created 2 domains, one for the engine and the other for 
vmstore. I decided not to use dedup. I kept my 3 disks on each host non-raided. 
For my vmstore domain I created 9 bricks, 1 for each physical disk, with an LV size 
of 6.6TB; I created the bricks using the GUI wizard. The management engine says I 
have 19TB of usable space and my vmstore volume is distributed-replicate. Would it 
be OK to create more bricks for this domain with a 6.6TB LV size, or will I run into 
issues with physical space? The reason I ask is that having 40TB of physical space 
and only getting 20TB usable in my current config gives me a little heartburn.


Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KBBDIQ42YIWHJQ55L6MZY2KN3DCCICWT/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5E3YGWYRN7ODIW5KYQZ3ZGMYXLFOXKTA/


[ovirt-users] Re: centos 7.9

2020-12-03 Thread Strahil Nikolov via Users
It works:

[root@ovirt2 ~]# ssh engine "rpm -qa | grep ovirt-engine-4"
root@engine's password: 
ovirt-engine-4.3.10.4-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.3.10.4-1.el7.noarch
[root@ovirt2 ~]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)

Best Regards,
Strahil Nikolov






В четвъртък, 3 декември 2020 г., 14:24:44 Гринуич+2, José Ferradeira via Users 
 написа: 





Hello,

Can I run ovirt 4.3.10.4-1.el7 over Centos 7.9 ?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YRGD7LM2C2GBCPD7IDG67YLYV5ZKMKLV/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGOC6DVKPQ47IQDC4PZN5J6MT3FQ2Z5K/


[ovirt-users] Re: new hyperconverged setup

2020-12-02 Thread Strahil Nikolov via Users
You are using NVMEs, so you can safely use JBOD mode.

I think that the size should be for a single disk, but you can experiment with 
that. If the LVs are too small, you can just extend them or even recreate them.
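
For example (only a sketch - the actual VG/LV names depend on what the wizard 
created, check them with 'lvs' first):

# lvextend -r -L +1T /dev/gluster_vg_nvme1n1/gluster_lv_vmstore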
What is your network speed?

If your network bandwidth matches your NVMEs' combined Read speed - then avoid 
dedup and compression unless you really need them.

Most probably your NVMEs have a 4096-byte sector size. If yes, you will need VDO 
with '--emulate512'. Yet, tuning VDO for such fast devices is not a trivial 
task.

Most probably you can configure a VDO per NVME (or even per partition/LV), 
and configure it with compression and dedup disabled (emulate512 only if 
needed).
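
A minimal sketch of such a VDO (the device name and logical size are only examples, 
adjust them to your drives):

# vdo create --name=vdo_nvme1 --device=/dev/nvme1n1 --vdoLogicalSize=7T \
      --compression=disabled --deduplication=disabled --emulate512=enabled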
Also check the UDS index topic, as it is quite important:

https://access.redhat.com/documentation/en_us/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo-ig-administering-vdo#vdo-ig-config-uds-index

If your network is slower than your disks, you can enable dedup and 
compression. Also, gluster has a setting called 'cluster.choose-local'. You can 
enable it in order to tell the FUSE driver to prefer the local brick instead of 
those over the network.
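
For example (with <volname> as a placeholder for whatever you name the volume):

# gluster volume set <volname> cluster.choose-local on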

Best Regards,
Strahil Nikolov






В сряда, 2 декември 2020 г., 14:10:21 Гринуич+2, dofrank...@port-orange.org 
 написа: 





I am about to deploy a new hyperconverged setup with the following:

3 host setup
2 x nvme 240gb hd for OS on each host (raid)
3 x nvme 7 tb hd for vm storage on each host (no raid card)
500gb RAM per each host

I am confused about the gluster setup portion of the web UI setup wizard. When asked 
for the LV size, do I input the maximum size of the hard drive on a single host, or do 
I combine the total capacity of the matching hard drives on all 3 hosts? For example, 
would I put in hard drive /dev/nvme1p1 with a capacity of 7TB, or should it be 21TB 
(the combined capacity of the matching single HD on each of the hosts)? Or is there a 
better method you recommend? And since I am using NVMe hard drives, would you 
recommend using dedup and compression or not?

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CLIXBOMZ7EEZ54FZKG4A4YCWVFRS2N23/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4PADR6QF3T446GCOU7XZA2YLY2GPE5BJ/


[ovirt-users] Re: Unable to move or copy disks

2020-12-02 Thread Strahil Nikolov via Users
Actually I have to dig through the mailing list, because I can't remember the exact 
steps, and if you miss something - everything can go wild.

I have the vague feeling that I just copied the data inside the volume and then 
renamed the master directories. There is a catch - oVirt is not very 
smart and it doesn't expect any foreign data to reside there.

Of course, I could survive the downtime.


Best Regards,
Strahil Nikolov





В вторник, 1 декември 2020 г., 19:40:28 Гринуич+2, supo...@logicworks.pt 
 написа: 





Thanks

Did you use the command cp to copy data between gluster volumes?

Regards

José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: users@ovirt.org
Enviadas: Terça-feira, 1 De Dezembro de 2020 8:05:17
Assunto: Re: [ovirt-users] Re: Unable to move or copy disks

This looks like the bug I have reported a long time ago.
The only fix I found was to create new gluster volume and "cp -a" all data from 
the old to the new volume.

Do you have spare space for a new Gluster volume ?
If yes, create the new volume and add it to Ovirt, then dd the file and move 
the disk to that new storage.
Once you move all VM's disks you can get rid of the old Gluster volume and 
reuse the space .

P.S.: Sadly I didn't have the time to look at your logs .


Best Regards,
Strahil Nikolov






В понеделник, 30 ноември 2020 г., 01:22:46 Гринуич+2,  
написа: 





No errors

# sudo -u vdsm dd 
if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242
 of=/dev/null bs=4M status=progress
107336433664 bytes (107 GB) copied, 245.349334 s, 437 MB/s
25600+0 records in
25600+0 records out
107374182400 bytes (107 GB) copied, 245.682 s, 437 MB/s

After this I tried again to move the disk and, surprise, it succeeded.

I didn't believe it.
I tried to move another disk; the same error came back.
I did a dd on this other disk and tried again to move it - again it succeeded.

!!!


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: users@ovirt.org
Enviadas: Domingo, 29 De Novembro de 2020 20:22:36
Assunto: Re: [ovirt-users] Re: Unable to move or copy disks

Usually distributed volumes are supported on a Single-node setup, but it 
shouldn't be the problem.


As you know the affected VMs , you can easily find the disks of a VM.

Then try to read the VM's disk:

sudo -u vdsm dd 
if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data//images//
 of=/dev/null bs=4M status=progress

Does it give errors ?


Best Regards,
Strahil Nikolov



В неделя, 29 ноември 2020 г., 20:06:42 Гринуич+2, supo...@logicworks.pt 
 написа: 





No heals pending
There are some VM's I can move the disk but some others VM's I cannot move the 
disk


It's a simple gluster
]# gluster volume info

Volume Name: gfs1data
Type: Distribute
Volume ID: 7e6826b9-1220-49d4-a4bf-e7f50f38c42c
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfs1.server.pt:/home/brick1
Options Reconfigured:
diagnostics.brick-log-level: INFO
performance.client-io-threads: off
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: yes
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on




De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: users@ovirt.org
Enviadas: Domingo, 29 De Novembro de 2020 17:27:04
Assunto: Re: [ovirt-users] Re: Unable to move or copy disks

Are you sure you don't have any heals pending ?
I should admit I have never seen this type of error.

Is it happening for all VMs or only specific ones ?


Best Regards,
Strahil Nikolov






В неделя, 29 ноември 2020 г., 15:37:04 Гринуич+2, supo...@logicworks.pt 
 написа: 





Sorry, I found this error on gluster logs:

 [MSGID: 113040] [posix-helpers.c:1929:__posix_fd_ctx_get] 0-gfs1data-posix: 
Failed to get anonymous fd for real_path: 
/home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f. [No such 
file or directory]


De: supo...@logicworks.pt
Para: "Strahil Nikolov" 
Cc: users@ovirt.org
Enviadas: Domingo, 29 De Novembro de 2020 13:13:00
Assunto: [ovirt-users] Re: Unable to move or copy disks

I don't find any error in the gluster logs, I just find this error in the vdsm 
log:

2020-11-29 12:57:45,528+ INFO  (tasks/1) [storage.SANLock] Successfully 
released Lease(name='61d85180-65a4-452d-8773-db778f56e242', 

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-12-01 Thread Strahil Nikolov via Users
Could it be faulty RAM?
Do you use ECC RAM?

Best Regards,
Strahil Nikolov






В вторник, 1 декември 2020 г., 06:17:10 Гринуич+2, Vinícius Ferrão via Users 
 написа: 






Hi again,



I had to shut down everything because of a power outage in the office. When 
trying to get the infra up again, even the Engine got corrupted: 



[  772.466982] XFS (dm-4): Invalid superblock magic number
mount: /var: wrong fs type, bad option, bad superblock on 
/dev/mapper/ovirt-var, missing codepage or helper program, or other error.
[  772.472885] XFS (dm-3): Mounting V5 Filesystem
[  773.629700] XFS (dm-3): Starting recovery (logdev: internal)
[  773.731104] XFS (dm-3): Metadata CRC error detected at 
xfs_agfl_read_verify+0xa1/0xf0 [xfs], xfs_agfl block 0xf3 
[  773.734352] XFS (dm-3): Unmount and run xfs_repair
[  773.736216] XFS (dm-3): First 128 bytes of corrupted metadata buffer:
[  773.738458] : 23 31 31 35 36 35 35 34 29 00 2d 20 52 65 62 75  
#1156554).- Rebu
[  773.741044] 0010: 69 6c 74 20 66 6f 72 20 68 74 74 70 73 3a 2f 2f  ilt 
for https://
[  773.743636] 0020: 66 65 64 6f 72 61 70 72 6f 6a 65 63 74 2e 6f 72  
fedoraproject.or
[  773.746191] 0030: 67 2f 77 69 6b 69 2f 46 65 64 6f 72 61 5f 32 33  
g/wiki/Fedora_23
[  773.748818] 0040: 5f 4d 61 73 73 5f 52 65 62 75 69 6c 64 00 2d 20  
_Mass_Rebuild.- 
[  773.751399] 0050: 44 72 6f 70 20 6f 62 73 6f 6c 65 74 65 20 64 65  Drop 
obsolete de
[  773.753933] 0060: 66 61 74 74 72 20 73 74 61 6e 7a 61 73 20 28 23  fattr 
stanzas (#
[  773.756428] 0070: 31 30 34 37 30 33 31 29 00 2d 20 49 6e 73 74 61  
1047031).- Insta
[  773.758873] XFS (dm-3): metadata I/O error in "xfs_trans_read_buf_map" at 
daddr 0xf3 len 1 error 74
[  773.763756] XFS (dm-3): xfs_do_force_shutdown(0x8) called from line 446 of 
file fs/xfs/libxfs/xfs_defer.c. Return address = 962bd5ee
[  773.769363] XFS (dm-3): Corruption of in-memory data detected.  Shutting 
down filesystem
[  773.772643] XFS (dm-3): Please unmount the filesystem and rectify the 
problem(s)
[  773.776079] XFS (dm-3): xfs_imap_to_bp: xfs_trans_read_buf() returned error 
-5.
[  773.779113] XFS (dm-3): xlog_recover_clear_agi_bucket: failed to clear agi 
3. Continuing.
[  773.783039] XFS (dm-3): xfs_imap_to_bp: xfs_trans_read_buf() returned error 
-5.
[  773.785698] XFS (dm-3): xlog_recover_clear_agi_bucket: failed to clear agi 
3. Continuing.
[  773.790023] XFS (dm-3): Ending recovery (logdev: internal)
[  773.792489] XFS (dm-3): Error -5 recovering leftover CoW allocations.
mount: /var/log: can't read superblock on /dev/mapper/ovirt-log.
mount: /var/log/audit: mount point does not exist.




/var seems to be completely trashed.




The only time that I've seen something like this was with faulty hardware. But 
nothing shows up in the logs, as far as I know.




After forcing repairs with -L I’ve got other issues:




mount -a
[  326.170941] XFS (dm-4): Mounting V5 Filesystem
[  326.404788] XFS (dm-4): Ending clean mount
[  326.415291] XFS (dm-3): Mounting V5 Filesystem
[  326.611673] XFS (dm-3): Ending clean mount
[  326.621705] XFS (dm-2): Mounting V5 Filesystem
[  326.784067] XFS (dm-2): Starting recovery (logdev: internal)
[  326.792083] XFS (dm-2): Metadata CRC error detected at 
xfs_agi_read_verify+0xc7/0xf0 [xfs], xfs_agi block 0x2 
[  326.794445] XFS (dm-2): Unmount and run xfs_repair
[  326.795557] XFS (dm-2): First 128 bytes of corrupted metadata buffer:
[  326.797055] : 4d 33 44 34 39 56 00 00 80 00 00 00 f0 cf 00 00  
M3D49V..
[  326.799685] 0010: 00 00 00 00 02 00 00 00 23 10 00 00 3d 08 01 08  
#...=...
[  326.802290] 0020: 21 27 44 34 39 56 00 00 00 d0 00 00 01 00 00 00  
!'D49V..
[  326.804748] 0030: 50 00 00 00 00 00 00 00 23 10 00 00 41 01 08 08  
P...#...A...
[  326.807296] 0040: 21 27 44 34 39 56 00 00 10 d0 00 00 02 00 00 00  
!'D49V..
[  326.809883] 0050: 60 00 00 00 00 00 00 00 23 10 00 00 41 01 08 08  
`...#...A...
[  326.812345] 0060: 61 2f 44 34 39 56 00 00 00 00 00 00 00 00 00 00  
a/D49V..
[  326.814831] 0070: 50 34 00 00 00 00 00 00 23 10 00 00 82 08 08 04  
P4..#...
[  326.817237] XFS (dm-2): metadata I/O error in "xfs_trans_read_buf_map" at 
daddr 0x2 len 1 error 74
mount: /var/log/audit: mount(2) system call failed: Structure needs cleaning.




But after more xfs_repair -L the engine is up…




Now I need to scavenge other VMs and do the same thing.




That’s it.




Thanks all,

V.




PS: For those interested, there’s a paste of the fixes: 
https://pastebin.com/jsMguw6j







>  
> On 29 Nov 2020, at 17:03, Strahil Nikolov  wrote:
> 
> 
>  
> Damn...
> 
> You are using EFI boot. Does this happen only to EFI machines ?
> Did you notice if only EL 8 is affected ?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В неделя, 29 ноември 2020 г., 19:36:09 Гринуич+2, Vinícius Ferrão 
>  написа: 
> 
> 
> 
> 
> 
> Yes!
> 
> I have a live VM right now that will 

[ovirt-users] Re: Unable to move or copy disks

2020-12-01 Thread Strahil Nikolov via Users
This looks like the bug I have reported a long time ago.
The only fix I found was to create new gluster volume and "cp -a" all data from 
the old to the new volume.

Do you have spare space for a new Gluster volume ?
If yes, create the new volume and add it to Ovirt, then dd the file and move 
the disk to that new storage.
Once you move all VM's disks you can get rid of the old Gluster volume and 
reuse the space .

P.S.: Sadly I didn't have the time to look at your logs .


Best Regards,
Strahil Nikolov






В понеделник, 30 ноември 2020 г., 01:22:46 Гринуич+2,  
написа: 





No errors

# sudo -u vdsm dd 
if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242
 of=/dev/null bs=4M status=progress
107336433664 bytes (107 GB) copied, 245.349334 s, 437 MB/s
25600+0 records in
25600+0 records out
107374182400 bytes (107 GB) copied, 245.682 s, 437 MB/s

After this I tried again to move the disk and, surprise, it succeeded.

I didn't believe it.
I tried to move another disk; the same error came back.
I did a dd on this other disk and tried again to move it - again it succeeded.

!!!


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: users@ovirt.org
Enviadas: Domingo, 29 De Novembro de 2020 20:22:36
Assunto: Re: [ovirt-users] Re: Unable to move or copy disks

Usually distributed volumes are supported on a Single-node setup, but it 
shouldn't be the problem.


As you know the affected VMs , you can easily find the disks of a VM.

Then try to read the VM's disk:

sudo -u vdsm dd 
if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data//images//
 of=/dev/null bs=4M status=progress

Does it give errors ?


Best Regards,
Strahil Nikolov



В неделя, 29 ноември 2020 г., 20:06:42 Гринуич+2, supo...@logicworks.pt 
 написа: 





No heals pending
There are some VM's I can move the disk but some others VM's I cannot move the 
disk


It's a simple gluster
]# gluster volume info

Volume Name: gfs1data
Type: Distribute
Volume ID: 7e6826b9-1220-49d4-a4bf-e7f50f38c42c
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfs1.server.pt:/home/brick1
Options Reconfigured:
diagnostics.brick-log-level: INFO
performance.client-io-threads: off
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: yes
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on




De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: users@ovirt.org
Enviadas: Domingo, 29 De Novembro de 2020 17:27:04
Assunto: Re: [ovirt-users] Re: Unable to move or copy disks

Are you sure you don't have any heals pending ?
I should admit I have never seen this type of error.

Is it happening for all VMs or only specific ones ?


Best Regards,
Strahil Nikolov






В неделя, 29 ноември 2020 г., 15:37:04 Гринуич+2, supo...@logicworks.pt 
 написа: 





Sorry, I found this error on gluster logs:

 [MSGID: 113040] [posix-helpers.c:1929:__posix_fd_ctx_get] 0-gfs1data-posix: 
Failed to get anonymous fd for real_path: 
/home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f. [No such 
file or directory]


De: supo...@logicworks.pt
Para: "Strahil Nikolov" 
Cc: users@ovirt.org
Enviadas: Domingo, 29 De Novembro de 2020 13:13:00
Assunto: [ovirt-users] Re: Unable to move or copy disks

I don't find any error in the gluster logs, I just find this error in the vdsm 
log:

2020-11-29 12:57:45,528+ INFO  (tasks/1) [storage.SANLock] Successfully 
released Lease(name='61d85180-65a4-452d-8773-db778f56e242', 
path=u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242.lease',
 offset=0) (clusterlock:524)
2020-11-29 12:57:45,528+ ERROR (tasks/1) [root] Job 
u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' failed (jobs:221)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdm/api/copy_data.py", 
line 86, in _run
    self._operation.run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/qemuimg.py", line 343, in 
run
    for data in self._operation.watch():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 106, 
in watch
    self._finalize(b"", err)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 179, 
in _finalize
    

[ovirt-users] Re: Unable to move or copy disks

2020-11-29 Thread Strahil Nikolov via Users
Usually, distributed volumes are supported only on a single-node setup, but that 
shouldn't be the problem here.


As you know the affected VMs , you can easily find the disks of a VM.

Then try to read the VM's disk:

sudo -u vdsm dd 
if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data//images//
 of=/dev/null bs=4M status=progress

Does it give errors ?


Best Regards,
Strahil Nikolov



В неделя, 29 ноември 2020 г., 20:06:42 Гринуич+2, supo...@logicworks.pt 
 написа: 





No heals pending
There are some VM's I can move the disk but some others VM's I cannot move the 
disk


It's a simple gluster
]# gluster volume info

Volume Name: gfs1data
Type: Distribute
Volume ID: 7e6826b9-1220-49d4-a4bf-e7f50f38c42c
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfs1.server.pt:/home/brick1
Options Reconfigured:
diagnostics.brick-log-level: INFO
performance.client-io-threads: off
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: yes
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on




De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: users@ovirt.org
Enviadas: Domingo, 29 De Novembro de 2020 17:27:04
Assunto: Re: [ovirt-users] Re: Unable to move or copy disks

Are you sure you don't have any heals pending ?
I should admit I have never seen this type of error.

Is it happening for all VMs or only specific ones ?


Best Regards,
Strahil Nikolov






В неделя, 29 ноември 2020 г., 15:37:04 Гринуич+2, supo...@logicworks.pt 
 написа: 





Sorry, I found this error on gluster logs:

 [MSGID: 113040] [posix-helpers.c:1929:__posix_fd_ctx_get] 0-gfs1data-posix: 
Failed to get anonymous fd for real_path: 
/home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f. [No such 
file or directory]


De: supo...@logicworks.pt
Para: "Strahil Nikolov" 
Cc: users@ovirt.org
Enviadas: Domingo, 29 De Novembro de 2020 13:13:00
Assunto: [ovirt-users] Re: Unable to move or copy disks

I don't find any error in the gluster logs, I just find this error in the vdsm 
log:

2020-11-29 12:57:45,528+ INFO  (tasks/1) [storage.SANLock] Successfully 
released Lease(name='61d85180-65a4-452d-8773-db778f56e242', 
path=u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242.lease',
 offset=0) (clusterlock:524)
2020-11-29 12:57:45,528+ ERROR (tasks/1) [root] Job 
u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' failed (jobs:221)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdm/api/copy_data.py", 
line 86, in _run
    self._operation.run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/qemuimg.py", line 343, in 
run
    for data in self._operation.watch():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 106, 
in watch
    self._finalize(b"", err)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 179, 
in _finalize
    raise cmdutils.Error(self._cmd, rc, out, err)
Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 
'none', '-f', 'raw', 
u'/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242',
 '-O', 'raw', 
u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242']
 failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 
134086625: No such file or directory\n')
2020-11-29 12:57:45,528+ INFO  (tasks/1) [root] Job 
u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' will be deleted in 3600 seconds 
(jobs:249)
2020-11-29 12:57:45,529+ INFO  (tasks/1) [storage.ThreadPool.WorkerThread] 
FINISH task 309c4289-fbba-489b-94c7-8aed36948c29 (threadPool:210)



Any idea?

Regards
José

De: supo...@logicworks.pt
Para: "Strahil Nikolov" 
Cc: users@ovirt.org
Enviadas: Sábado, 28 De Novembro de 2020 18:39:47
Assunto: [ovirt-users] Re: Unable to move or copy disks

I really don't understand this.
I have 2 glusters same version, 6.10

I can move a disk from gluster2 to gluster1, but cannot move the same disk from 
gluster1 to gluster2
ovirt version: 4.3.10.4-1.el7


Regards
José


De: "Strahil 

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-29 Thread Strahil Nikolov via Users
Damn...

You are using EFI boot. Does this happen only to EFI machines ?
Did you notice if only EL 8 is affected ?

Best Regards,
Strahil Nikolov






В неделя, 29 ноември 2020 г., 19:36:09 Гринуич+2, Vinícius Ferrão 
 написа: 





Yes!

I have a live VM right now that will be dead on a reboot:

[root@kontainerscomk ~]# cat /etc/*release
NAME="Red Hat Enterprise Linux"
VERSION="8.3 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.3"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.3 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.3:GA"
HOME_URL="https://www.redhat.com/;
BUG_REPORT_URL="https://bugzilla.redhat.com/;

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.3"
Red Hat Enterprise Linux release 8.3 (Ootpa)
Red Hat Enterprise Linux release 8.3 (Ootpa)

[root@kontainerscomk ~]# sysctl -a | grep dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 30
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200

[root@kontainerscomk ~]# xfs_db -r /dev/dm-0
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 
0xa82a)
Use -F to force a read attempt.
[root@kontainerscomk ~]# xfs_db -r /dev/dm-0 -F
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 
0xa82a)
xfs_db: size check failed
xfs_db: V1 inodes unsupported. Please try an older xfsprogs.

[root@kontainerscomk ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Nov 19 22:40:39 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root  /                      xfs    defaults        0 0
UUID=ad84d1ea-c9cc-4b22-8338-d1a6b2c7d27e /boot                  xfs    
defaults        0 0
UUID=4642-2FF6          /boot/efi              vfat    
umask=0077,shortname=winnt 0 2
/dev/mapper/rhel-swap  none                    swap    defaults        0 0

Thanks,


-Original Message-
From: Strahil Nikolov  
Sent: Sunday, November 29, 2020 2:33 PM
To: Vinícius Ferrão 
Cc: users 
Subject: Re: [ovirt-users] Re: Constantly XFS in memory corruption inside VMs

Can you check the output on the VM that was affected:
# cat /etc/*release
# sysctl -a | grep dirty


Best Regards,
Strahil Nikolov





В неделя, 29 ноември 2020 г., 19:07:48 Гринуич+2, Vinícius Ferrão via Users 
 написа: 





Hi Strahil.

I’m not using barrier options on mount. It’s the default settings from CentOS 
install.

I have some additional findings: there's a big number of discarded packets on 
the switch on the hypervisor interfaces.

Discards are OK as far as I know, I hope TCP handles this and does the proper 
retransmissions, but I wonder if this may be related or not. Our storage is over 
NFS. My general expertise is with iSCSI and I’ve never seen this kind of issue 
with iSCSI, not that I’m aware of.

In other clusters, I’ve seen a high number of discards with iSCSI on XenServer 
7.2 but there’s no corruption on the VMs there...

Thanks,

Sent from my iPhone

> On 29 Nov 2020, at 04:00, Strahil Nikolov  wrote:
> 
> Are you using "nobarrier" mount options in the VM ?
> 
> If yes, can you try to remove the "nobarrrier" option.
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В събота, 28 ноември 2020 г., 19:25:48 Гринуич+2, Vinícius Ferrão 
>  написа: 
> 
> 
> 
> 
> 
> Hi Strahil,
> 
> I moved a running VM to other host, rebooted and no corruption was found. If 
> there's any corruption it may be silent corruption... I've cases where the VM 
> was new, just installed, run dnf -y update to get the updated packages, 
> rebooted, and boom XFS corruption. So perhaps the motion process isn't the 
> one to blame.
> 
> But, in fact, I remember when moving a VM that it went down during the 
> process and when I rebooted it was corrupted. But this may not seems related. 
> It perhaps was already in a inconsistent state.
> 
> Anyway, here's the mount options:
> 
> Host1:
> 192.168.10.14:/mnt/pool0/ovirt/vm on 
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,noshar
> ecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,l
> ocal_lock=none,addr=192.168.10.14)
> 
> Host2:
> 192.168.10.14:/mnt/pool0/ovirt/vm on 
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,noshar
> ecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,l
> ocal_lock=none,addr=192.168.10.14)
> 
> The options are the default ones. I haven't changed anything when configuring 
> this 

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-29 Thread Strahil Nikolov via Users
Can you check the output on the VM that was affected:
# cat /etc/*release
# sysctl -a | grep dirty


Best Regards,
Strahil Nikolov





В неделя, 29 ноември 2020 г., 19:07:48 Гринуич+2, Vinícius Ferrão via Users 
 написа: 





Hi Strahil.

I’m not using barrier options on mount. It’s the default settings from CentOS 
install.

I have some additional findings: there's a big number of discarded packets on 
the switch on the hypervisor interfaces.

Discards are OK as far as I know, I hope TCP handles this and does the proper 
retransmissions, but I wonder if this may be related or not. Our storage is over 
NFS. My general expertise is with iSCSI and I’ve never seen this kind of issue 
with iSCSI, not that I’m aware of.

In other clusters, I’ve seen a high number of discards with iSCSI on XenServer 
7.2 but there’s no corruption on the VMs there...

Thanks,

Sent from my iPhone

> On 29 Nov 2020, at 04:00, Strahil Nikolov  wrote:
> 
> Are you using "nobarrier" mount options in the VM ?
> 
> If yes, can you try to remove the "nobarrrier" option.
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В събота, 28 ноември 2020 г., 19:25:48 Гринуич+2, Vinícius Ferrão 
>  написа: 
> 
> 
> 
> 
> 
> Hi Strahil,
> 
> I moved a running VM to other host, rebooted and no corruption was found. If 
> there's any corruption it may be silent corruption... I've cases where the VM 
> was new, just installed, run dnf -y update to get the updated packages, 
> rebooted, and boom XFS corruption. So perhaps the motion process isn't the 
> one to blame.
> 
> But, in fact, I remember when moving a VM that it went down during the 
> process and when I rebooted it was corrupted. But this may not seems related. 
> It perhaps was already in a inconsistent state.
> 
> Anyway, here's the mount options:
> 
> Host1:
> 192.168.10.14:/mnt/pool0/ovirt/vm on 
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)
> 
> Host2:
> 192.168.10.14:/mnt/pool0/ovirt/vm on 
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)
> 
> The options are the default ones. I haven't changed anything when configuring 
> this cluster.
> 
> Thanks.
> 
> 
> 
> -Original Message-
> From: Strahil Nikolov  
> Sent: Saturday, November 28, 2020 1:54 PM
> To: users ; Vinícius Ferrão 
> Subject: Re: [ovirt-users] Constantly XFS in memory corruption inside VMs
> 
> Can you try with a test vm, if this happens after a Virtual Machine migration 
> ?
> 
> What are your mount options for the storage domain ?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В събота, 28 ноември 2020 г., 18:25:15 Гринуич+2, Vinícius Ferrão via Users 
>  написа: 
> 
> 
> 
> 
> 
>  
> 
> 
> Hello,
> 
>  
> 
> I’m trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS 
> shared storage on TrueNAS 12.0 is constantly getting XFS corruption inside 
> the VMs.
> 
>  
> 
> For random reasons VM’s gets corrupted, sometimes halting it or just being 
> silent corrupted and after a reboot the system is unable to boot due to 
> “corruption of in-memory data detected”. Sometimes the corrupted data are 
> “all zeroes”, sometimes there’s data there. In extreme cases the XFS 
> superblock 0 get’s corrupted and the system cannot even detect a XFS 
> partition anymore since the magic XFS key is corrupted on the first blocks of 
> the virtual disk.
> 
>  
> 
> This is happening for a month now. We had to rollback some backups, and I 
> don’t trust anymore on the state of the VMs.
> 
>  
> 
> Using xfs_db I can see that some VM’s have corrupted superblocks but the VM 
> is up. One in specific, was with sb0 corrupted, so I knew when a reboot kicks 
> in the machine will be gone, and that’s exactly what happened.
> 
>  
> 
> Another day I was just installing a new CentOS 8 VM for random reasons, and 
> after running dnf -y update and a reboot the VM was corrupted needing XFS 
> repair. That was an extreme case.
> 
>  
> 
> So, I’ve looked on the TrueNAS logs, and there’s apparently nothing wrong on 
> the system. No errors logged on dmesg, nothing on /var/log/messages and no 
> errors on the “zpools”, not even after scrub operations. On the switch, a 
> Catalyst 2960X, we’ve been monitoring it and all it’s interfaces. There are 
> no “up and down” and zero errors on all interfaces (we have a 4x Port LACP on 
> the TrueNAS side and 2x Port LACP on each hosts), everything seems to be 
> fine. The only metric that I was unable to get is “dropped packages”, but I’m 
> don’t know if this can be an issue or not.
> 
>  
> 
> Finally, on oVirt, I can’t find anything either. I looked on 
> /var/log/messages and 

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-29 Thread Strahil Nikolov via Users
As you use proto=TCP it should not cause the behaviour you are observing.
I was wondering if the VM is rebooted for some reason (maybe HA) during 
intensive I/O.

Best Regards,
Strahil Nikolov






В неделя, 29 ноември 2020 г., 19:07:48 Гринуич+2, Vinícius Ferrão via Users 
 написа: 





Hi Strahil.

I’m not using barrier options on mount. It’s the default settings from CentOS 
install.

I have some additional findings: there's a big number of discarded packets on 
the switch on the hypervisor interfaces.

Discards are OK as far as I know, I hope TCP handles this and does the proper 
retransmissions, but I wonder if this may be related or not. Our storage is over 
NFS. My general expertise is with iSCSI and I’ve never seen this kind of issue 
with iSCSI, not that I’m aware of.

In other clusters, I’ve seen a high number of discards with iSCSI on XenServer 
7.2 but there’s no corruption on the VMs there...

Thanks,

Sent from my iPhone

> On 29 Nov 2020, at 04:00, Strahil Nikolov  wrote:
> 
> Are you using "nobarrier" mount options in the VM ?
> 
> If yes, can you try to remove the "nobarrrier" option.
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В събота, 28 ноември 2020 г., 19:25:48 Гринуич+2, Vinícius Ferrão 
>  написа: 
> 
> 
> 
> 
> 
> Hi Strahil,
> 
> I moved a running VM to other host, rebooted and no corruption was found. If 
> there's any corruption it may be silent corruption... I've cases where the VM 
> was new, just installed, run dnf -y update to get the updated packages, 
> rebooted, and boom XFS corruption. So perhaps the motion process isn't the 
> one to blame.
> 
> But, in fact, I remember when moving a VM that it went down during the 
> process and when I rebooted it was corrupted. But this may not seems related. 
> It perhaps was already in a inconsistent state.
> 
> Anyway, here's the mount options:
> 
> Host1:
> 192.168.10.14:/mnt/pool0/ovirt/vm on 
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)
> 
> Host2:
> 192.168.10.14:/mnt/pool0/ovirt/vm on 
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)
> 
> The options are the default ones. I haven't changed anything when configuring 
> this cluster.
> 
> Thanks.
> 
> 
> 
> -Original Message-
> From: Strahil Nikolov  
> Sent: Saturday, November 28, 2020 1:54 PM
> To: users ; Vinícius Ferrão 
> Subject: Re: [ovirt-users] Constantly XFS in memory corruption inside VMs
> 
> Can you try with a test vm, if this happens after a Virtual Machine migration 
> ?
> 
> What are your mount options for the storage domain ?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> В събота, 28 ноември 2020 г., 18:25:15 Гринуич+2, Vinícius Ferrão via Users 
>  написа: 
> 
> 
> 
> 
> 
>  
> 
> 
> Hello,
> 
>  
> 
> I’m trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS 
> shared storage on TrueNAS 12.0 is constantly getting XFS corruption inside 
> the VMs.
> 
>  
> 
> For random reasons VM’s gets corrupted, sometimes halting it or just being 
> silent corrupted and after a reboot the system is unable to boot due to 
> “corruption of in-memory data detected”. Sometimes the corrupted data are 
> “all zeroes”, sometimes there’s data there. In extreme cases the XFS 
> superblock 0 get’s corrupted and the system cannot even detect a XFS 
> partition anymore since the magic XFS key is corrupted on the first blocks of 
> the virtual disk.
> 
>  
> 
> This is happening for a month now. We had to rollback some backups, and I 
> don’t trust anymore on the state of the VMs.
> 
>  
> 
> Using xfs_db I can see that some VM’s have corrupted superblocks but the VM 
> is up. One in specific, was with sb0 corrupted, so I knew when a reboot kicks 
> in the machine will be gone, and that’s exactly what happened.
> 
>  
> 
> Another day I was just installing a new CentOS 8 VM for random reasons, and 
> after running dnf -y update and a reboot the VM was corrupted needing XFS 
> repair. That was an extreme case.
> 
>  
> 
> So, I’ve looked on the TrueNAS logs, and there’s apparently nothing wrong on 
> the system. No errors logged on dmesg, nothing on /var/log/messages and no 
> errors on the “zpools”, not even after scrub operations. On the switch, a 
> Catalyst 2960X, we’ve been monitoring it and all it’s interfaces. There are 
> no “up and down” and zero errors on all interfaces (we have a 4x Port LACP on 
> the TrueNAS side and 2x Port LACP on each hosts), everything seems to be 
> fine. The only metric that I was unable to get is “dropped packages”, but I’m 
> don’t know if this can be an issue or not.
> 
>  
> 
> Finally, on oVirt, I can’t find 

[ovirt-users] Re: Unable to move or copy disks

2020-11-29 Thread Strahil Nikolov via Users
Are you sure you don't have any heals pending ?
I should admit I have never seen this type of error.

Is it happening for all VMs or only specific ones ?
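
A quick way to check that (just a sketch; the volume name gfs1data is only 
inferred from the paths in the vdsm log quoted below) would be:

gluster volume heal gfs1data info
# a non-empty list of entries means heals are still pending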


Best Regards,
Strahil Nikolov






On Sunday, November 29, 2020, 15:37:04 GMT+2, supo...@logicworks.pt 
 wrote: 





Sorry, I found this error in the gluster logs:

 [MSGID: 113040] [posix-helpers.c:1929:__posix_fd_ctx_get] 0-gfs1data-posix: 
Failed to get anonymous fd for real_path: 
/home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f. [No such 
file or directory]
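
That message suggests the brick is missing the backing file for that gfid. A 
quick sanity check (a sketch; the path is taken verbatim from the log line 
above, and it should be run on every brick of the volume):

stat /home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f
# present on some bricks but missing on others usually points at a pending or failed heal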


From: supo...@logicworks.pt
To: "Strahil Nikolov" 
Cc: users@ovirt.org
Sent: Sunday, November 29, 2020 13:13:00
Subject: [ovirt-users] Re: Unable to move or copy disks

I can't find any error in the gluster logs; I just find this error in the vdsm 
log:

2020-11-29 12:57:45,528+ INFO  (tasks/1) [storage.SANLock] Successfully 
released Lease(name='61d85180-65a4-452d-8773-db778f56e242', 
path=u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242.lease',
 offset=0) (clusterlock:524)
2020-11-29 12:57:45,528+ ERROR (tasks/1) [root] Job 
u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' failed (jobs:221)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdm/api/copy_data.py", 
line 86, in _run
    self._operation.run()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/qemuimg.py", line 343, in 
run
    for data in self._operation.watch():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 106, 
in watch
    self._finalize(b"", err)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line 179, 
in _finalize
    raise cmdutils.Error(self._cmd, rc, out, err)
Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 
'none', '-f', 'raw', 
u'/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242',
 '-O', 'raw', 
u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-4fac-b062-bded661e20a1/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242']
 failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading sector 
134086625: No such file or directory\n')
2020-11-29 12:57:45,528+ INFO  (tasks/1) [root] Job 
u'cc8ea210-df4b-4f0b-a385-5bc3adc825f6' will be deleted in 3600 seconds 
(jobs:249)
2020-11-29 12:57:45,529+ INFO  (tasks/1) [storage.ThreadPool.WorkerThread] 
FINISH task 309c4289-fbba-489b-94c7-8aed36948c29 (threadPool:210)
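
One way to narrow this down (a sketch that reuses the exact source path from 
the failing qemu-img command above) is to re-read the image directly from the 
gluster mount and see whether the same read error shows up outside of vdsm:

dd if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242 of=/dev/null bs=1M
# if this also fails near the same offset, the problem is in the gluster volume itself, not in the copy job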



Any idea?

Regards
José

From: supo...@logicworks.pt
To: "Strahil Nikolov" 
Cc: users@ovirt.org
Sent: Saturday, November 28, 2020 18:39:47
Subject: [ovirt-users] Re: Unable to move or copy disks

I really don't understand this.
I have 2 glusters, same version: 6.10

I can move a disk from gluster2 to gluster1, but cannot move the same disk from 
gluster1 to gluster2
ovirt version: 4.3.10.4-1.el7


Regards
José


De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: users@ovirt.org
Enviadas: Segunda-feira, 23 De Novembro de 2020 5:45:37
Assunto: Re: [ovirt-users] Re: Unable to move or copy disks

No, but keep an eye on your "/var/log" as debug provides a lot of info.

Usually, once you get a failure moving the disk, you can disable debug again and check the 
logs.
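
For reference, something along these lines should be enough (the volume name 
data2 is taken from the quoted question below; as far as I know the setting 
takes effect without restarting the volume):

gluster volume set data2 diagnostics.brick-log-level DEBUG
# reproduce the failing move, check /var/log/glusterfs/bricks/*.log, then revert
gluster volume set data2 diagnostics.brick-log-level INFO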

Best Regards,
Strahil Nikolov






On Sunday, November 22, 2020, 21:12:26 GMT+2,  
wrote: 





Do I need to restart gluster after enabling the debug level?

gluster volume set data2 diagnostics.brick-log-level DEBUG



De: "Strahil Nikolov" 
Para: supo...@logicworks.pt
Cc: users@ovirt.org
Enviadas: Sábado, 21 De Novembro de 2020 19:42:44
Assunto: Re: [ovirt-users] Re: Unable to move or copy disks

You still haven't provided debug logs from the Gluster Bricks.
There will always be a chance that a bug hits you ... no matter the OS and tech. 
What matters is how you debug and overcome that bug.

Check the gluster brick debug logs and you can test if the issue happens with 
an older version.

Also, consider providing oVirt version, Gluster version and some details about 
your setup - otherwise helping you is almost impossible.

Best Regards,
Strahil Nikolov






On Saturday, November 21, 2020, 18:16:13 GMT+2, supo...@logicworks.pt 
 wrote: 





With an older gluster version this does not happen.

I always get this error: VDSM NODE3 command HSMGetAllTasksStatusesVDS failed: low 
level Image copy failed: ()
and in the vdsm.log: 
ERROR (tasks/7) [storage.Image] Copy image error: 
image=6939cc5d-dbca-488d-ab7a-c8b8d39c3656, src 
domain=70e33d55-f1be-4826-b85a-9650c76c8db8, dst 

[ovirt-users] Re: Guest OS Memory Free/Cached/Buffered: Not Configured

2020-11-29 Thread Strahil Nikolov via Users
Hi Stefan,

you can control the VM's cache (if Linux) via:
- vm.vfs_cache_pressure
- vm.dirty_background_ratio/vm.dirty_background_bytes
- vm.dirty_ratio/vm.dirty_bytes
- vm.dirty_writeback_centisecs 
- vm.dirty_expire_centisecs

I would just increase vm.vfs_cache_pressure to 120 and check if it frees most 
of the cache. Sadly, without cache, performance will drop, but you can't assign 
unlimited memory :D
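
A minimal sketch of applying that inside the guest (the value is only an 
example):

sysctl -w vm.vfs_cache_pressure=120
# make it persistent across reboots
echo 'vm.vfs_cache_pressure = 120' > /etc/sysctl.d/99-cache.conf
sysctl -p /etc/sysctl.d/99-cache.conf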


Best Regards,
Strahil Nikolov





On Sunday, November 29, 2020, 10:57:00 GMT+2, Stefan Seifried 
 wrote: 





Hi,

I'm quite new to oVirt, so my apologies if I'm asking something dead obvious:
I noticed that there is an item in the 'General Tab' of each VM, which says 
'Guest OS Memory Free/Cached/Buffered' and on all my VMs it says 'Not 
Configured'. Right now I'm trying to figure out how to enable this feature. I 
assume that this gives me the equivalent of the output of executing 'free' in 
the shell on a Linux guest. 

Googling and digging around the VMM guide did not give me any pointers so far. 

Thanks in advance,
Stefan

PS: A little background info: I have one 'client' who keeps nagging me to 
increase the RAM on his VM because it's constantly operating at 95% memory load 
(as shown on the VM dashboard). After a quick investigation with 'free' I could 
see that Linux has built up the disk cache to 12G (from 16G total, no swapping 
occurred). My intention is to make the real memory load visible to him, as he 
already has access to the VM portal for shutdown/restart/etc.
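
For what it's worth, a quick way to show the real headroom (just a sketch) is 
the 'available' column, which already excludes the reclaimable page cache:

free -m
# 'buff/cache' is reclaimed automatically under memory pressure;
# 'available' is roughly what applications can still get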
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBDDFYNKD7PESO7BTR66SZRGQTSXCVD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VF7C4IQE4RBQ4YW75DQNDRSIP5TRKGKA/


[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-28 Thread Strahil Nikolov via Users
Are you using "nobarrier" mount options in the VM ?

If yes, can you try to remove the "nobarrier" option?
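
A quick way to check that from inside a guest (just a sketch):

findmnt -t xfs -o TARGET,OPTIONS
grep -w nobarrier /proc/mounts
# any hit means the option is set and would have to be dropped from /etc/fstab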


Best Regards,
Strahil Nikolov






On Saturday, November 28, 2020, 19:25:48 GMT+2, Vinícius Ferrão 
 wrote: 





Hi Strahil,

I moved a running VM to another host, rebooted, and no corruption was found. If 
there's any corruption, it may be silent corruption... I've had cases where the VM 
was new, just installed, ran dnf -y update to get the updated packages, 
rebooted, and boom, XFS corruption. So perhaps the migration process isn't the one 
to blame.

But, in fact, I remember a case when moving a VM where it went down during the process 
and when I rebooted it was corrupted. But this may not be related. Perhaps it 
was already in an inconsistent state.

Anyway, here are the mount options:

Host1:
192.168.10.14:/mnt/pool0/ovirt/vm on 
/rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 
(rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)

Host2:
192.168.10.14:/mnt/pool0/ovirt/vm on 
/rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 
(rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)

The options are the default ones. I haven't changed anything when configuring 
this cluster.

Thanks.



-Original Message-
From: Strahil Nikolov  
Sent: Saturday, November 28, 2020 1:54 PM
To: users ; Vinícius Ferrão 
Subject: Re: [ovirt-users] Constantly XFS in memory corruption inside VMs

Can you try with a test vm, if this happens after a Virtual Machine migration ?

What are your mount options for the storage domain ?

Best Regards,
Strahil Nikolov






On Saturday, November 28, 2020, 18:25:15 GMT+2, Vinícius Ferrão via Users 
 wrote: 





  


Hello,

 

I’m trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS shared 
storage on TrueNAS 12.0 is constantly getting XFS corruption inside the VMs.

 

For random reasons VMs get corrupted, sometimes halting, or just getting 
silently corrupted, and after a reboot the system is unable to boot due to 
“corruption of in-memory data detected”. Sometimes the corrupted data are “all 
zeroes”, sometimes there’s data there. In extreme cases the XFS superblock 0 
gets corrupted and the system cannot even detect an XFS partition anymore since 
the magic XFS key is corrupted on the first blocks of the virtual disk.

 

This has been happening for a month now. We had to roll back to some backups, and I 
no longer trust the state of the VMs.

 

Using xfs_db I can see that some VMs have corrupted superblocks while the VM is 
up. One in particular had sb0 corrupted, so I knew that when a reboot kicked in 
the machine would be gone, and that’s exactly what happened.
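
A read-only way to inspect that (a sketch; /dev/vda2 is only an example device, 
and xfs_repair needs the filesystem unmounted):

xfs_db -r -c 'sb 0' -c print /dev/vda2
xfs_repair -n /dev/vda2   # dry run: reports problems, writes nothing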

 

Another day I was just installing a new CentOS 8 VM for random reasons, and 
after running dnf -y update and a reboot the VM was corrupted needing XFS 
repair. That was an extreme case.

 

So, I’ve looked at the TrueNAS logs, and there’s apparently nothing wrong on 
the system. No errors logged in dmesg, nothing in /var/log/messages and no 
errors on the “zpools”, not even after scrub operations. On the switch, a 
Catalyst 2960X, we’ve been monitoring it and all its interfaces. There are no 
“up and down” events and zero errors on all interfaces (we have a 4x port LACP on the 
TrueNAS side and a 2x port LACP on each host), everything seems to be fine. The 
only metric that I was unable to get is “dropped packets”, but I don’t know 
if this can be an issue or not.

 

Finally, on oVirt, I can’t find anything either. I looked on /var/log/messages 
and /var/log/sanlock.log but there’s nothing that I found suspicious.

 

Is anyone out there experiencing this? Our VMs are mainly CentOS 7/8 
with XFS; there are 3 Windows VMs that do not seem to be affected, everything 
else is affected.

 

Thanks all.



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: 
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VLYSE7HCFNWTWFZZTL2EJHV36OENHUGB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IYCYMCPXTXQHYDTZLN3T4WLIBIN4HPDM/


[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-28 Thread Strahil Nikolov via Users
Can you try with a test vm, if this happens after a Virtual Machine migration ?

What are your mount options for the storage domain ?

Best Regards,
Strahil Nikolov






On Saturday, November 28, 2020, 18:25:15 GMT+2, Vinícius Ferrão via Users 
 wrote: 





  


Hello,

 

I’m trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS shared 
storage on TrueNAS 12.0 is constantly getting XFS corruption inside the VMs.

 

For random reasons VMs get corrupted, sometimes halting, or just getting 
silently corrupted, and after a reboot the system is unable to boot due to 
“corruption of in-memory data detected”. Sometimes the corrupted data are “all 
zeroes”, sometimes there’s data there. In extreme cases the XFS superblock 0 
gets corrupted and the system cannot even detect an XFS partition anymore since 
the magic XFS key is corrupted on the first blocks of the virtual disk.

 

This has been happening for a month now. We had to roll back to some backups, and I 
no longer trust the state of the VMs.

 

Using xfs_db I can see that some VMs have corrupted superblocks while the VM is 
up. One in particular had sb0 corrupted, so I knew that when a reboot kicked in 
the machine would be gone, and that’s exactly what happened.

 

Another day I was just installing a new CentOS 8 VM for random reasons, and 
after running dnf -y update and a reboot the VM was corrupted needing XFS 
repair. That was an extreme case.

 

So, I’ve looked at the TrueNAS logs, and there’s apparently nothing wrong on 
the system. No errors logged in dmesg, nothing in /var/log/messages and no 
errors on the “zpools”, not even after scrub operations. On the switch, a 
Catalyst 2960X, we’ve been monitoring it and all its interfaces. There are no 
“up and down” events and zero errors on all interfaces (we have a 4x port LACP on the 
TrueNAS side and a 2x port LACP on each host), everything seems to be fine. The 
only metric that I was unable to get is “dropped packets”, but I don’t know 
if this can be an issue or not.

 

Finally, on oVirt, I can’t find anything either. I looked on /var/log/messages 
and /var/log/sanlock.log but there’s nothing that I found suspicious.

 

Is anyone out there experiencing this? Our VMs are mainly CentOS 7/8 
with XFS; there are 3 Windows VMs that do not seem to be affected, everything 
else is affected.

 

Thanks all.



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VLYSE7HCFNWTWFZZTL2EJHV36OENHUGB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q6LYTLIS6VX7LXM55K35WSQ7WAFYJZNK/


[ovirt-users] Re: VM memory decrease

2020-11-26 Thread Strahil Nikolov via Users
Better check why the hosts are starving for memory.

Best Regards,
Strahil Nikolov

At 10:03 + on 26.11.2020 (Thu), Erez Zarum wrote:
> I think I'm answering myself :)
> I noticed the "mom.Controllers.Balloon - INFO - Ballooning guest" message
> in the mom.log.
> So the best option to remove this and keep High Performance is disabling
> Memory Ballooning.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VLNN72BEK6IAJGOW5BDKAPJNXZRCJYYV/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WIHJSSPR3THMTSKLBE3QAZWQ4NISWN5T/


[ovirt-users] Re: Gluster volume slower then raid1 zpool speed

2020-11-25 Thread Strahil Nikolov via Users
The virt settings (highly recommended for virtualization usage) enable
SHARDING.

ONCE ENABLED, NEVER EVER DISABLE SHARDING !!!
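
To check whether it is already enabled on a volume (a sketch; VOLNAME is a 
placeholder):

gluster volume get VOLNAME features.shard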


Best Regards,
Strahil Nikolov

At 16:34 -0800 on 25.11.2020 (Wed), WK wrote:
> 
> No, that doesn't look right.
> 
> 
> 
> I have a testbed cluster that has a single 1G network (1500 mtu)
> 
> 
> 
> it is replica 2 + arbiter on top of 7200 rpms spinning drives
>   formatted with XFS
> 
> 
> 
> This cluster runs Gluster 6.10 on Ubuntu 18 on some Dell i5-2xxx
>   boxes that were lying around.
> 
> 
> 
> it uses a stock 'virt' group tuning which provides the following:
> 
> 
> 
> root@onetest2:~/datastores/101# cat /var/lib/glusterd/groups/virt
> 
>   performance.quick-read=off
> 
>   performance.read-ahead=off
> 
>   performance.io-cache=off
> 
>   performance.low-prio-threads=32
> 
>   network.remote-dio=enable
> 
>   cluster.eager-lock=enable
> 
>   cluster.quorum-type=auto
> 
>   cluster.server-quorum-type=server
> 
>   cluster.data-self-heal-algorithm=full
> 
>   cluster.locking-scheme=granular
> 
>   cluster.shd-max-threads=8
> 
>   cluster.shd-wait-qlength=1
> 
>   features.shard=on
> 
>   user.cifs=off
> 
>   cluster.choose-local=off
> 
>   client.event-threads=4
> 
>   server.event-threads=4
> 
>   performance.client-io-threads=on
> 
> I show the following results on your test. Note: the cluster is
>   actually doing some work with 3 Vms running doing monitoring
>   things.
> 
> 
> 
> The bare metal performance is as follows:
> 
> root@onetest2:/# dd if=/dev/zero of=/test12.img bs=1G count=1
>   oflag=dsync
> 
>   1+0 records in
> 
>   1+0 records out
> 
>   1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.0783 s, 96.9 MB/s
> 
>   root@onetest2:/# dd if=/dev/zero of=/test12.img bs=1G count=1
>   oflag=dsync
> 
>   1+0 records in
> 
>   1+0 records out
> 
>   1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.5047 s, 93.3 MB/s
> 
> 
> 
> Moving over to the Gluster mount I show the following:
> 
> 
> 
> root@onetest2:~/datastores/101# dd if=/dev/zero of=/test12.img
>   bs=1G count=1 oflag=dsync
> 
>   1+0 records in
> 
>   1+0 records out
> 
>   1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.4582 s, 93.7 MB/s
> 
>   root@onetest2:~/datastores/101# dd if=/dev/zero of=/test12.img
>   bs=1G count=1 oflag=dsync
> 
>   1+0 records in
> 
>   1+0 records out
> 
>   1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.2034 s, 88.0 MB/s
> 
>   
> 
> 
> 
> So a little performance hit with Gluster but almost insignificant
>   given that other things were going on.
> 
> 
> 
> I don't know if you are in a VM environment but if so you could
>   try the virt tuning.
> 
> gluster volume set VOLUME group virt
> Unfortunately, I know little about ZFS so I can't comment on its
>   performance, but your gluster results should be closer to the
> bare
>   metal performance.
> 
> 
> 
> Also note I am using an Arbiter, so that is less work than
>   Replica 3. With a true Replica 3 I would expect the Gluster
>   results to be lower, maybe as low as  60-70 MB/s range
> 
> -wk
> 
> 
> 
> 
> 
> 
> 
> On 11/25/2020 2:29 AM, Harry O wrote:
> 
> 
> 
> 
> >   Unfortunately I didn't get any improvement by upgrading the
> > network.
> > Bare metal (zfs raid1 zvol):
> > dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1 oflag=dsync
> > 1+0 records in
> > 1+0 records out
> > 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.6471 s, 68.6 MB/s
> >
> > Centos VM on gluster volume:
> > dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
> > 1+0 records in
> > 1+0 records out
> > 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 36.8618 s, 29.1 MB/s
> >
> > Does this performance look normal?
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZKRIMXDVN3MAVE7GVQDUIL5ZE473LAL/
> > 
> > 
> 
>   
> 
> ___Users mailing list -- 
> users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/J27OKT7IRWZM6DA4QEX3YZISDZOFHNAX/
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 

[ovirt-users] Re: Gluster volume slower then raid1 zpool speed

2020-11-25 Thread Strahil Nikolov via Users
Any reason to use the dsync flag ?


Do you have a real workload to test with ?
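
If a more realistic test is wanted, something like fio (a sketch; path, size 
and runtime are only examples) exercises random I/O instead of one big 
synchronous write:

fio --name=randwrite --filename=/test12.img --size=1G --bs=4k \
    --iodepth=32 --ioengine=libaio --direct=1 --rw=randwrite \
    --runtime=60 --time_based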

Best Regards,
Strahil Nikolov


At 10:29 + on 25.11.2020 (Wed), Harry O wrote:
> Unfortunately I didn't get any improvement by upgrading the network.
> 
> Bare metal (zfs raid1 zvol):
> dd if=/dev/zero of=/gluster_bricks/test1.img bs=1G count=1
> oflag=dsync
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.6471 s, 68.6 MB/s
> 
> Centos VM on gluster volume:
> dd if=/dev/zero of=/test12.img bs=1G count=1 oflag=dsync
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 36.8618 s, 29.1 MB/s
> 
> Does this performance look normal?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ZKRIMXDVN3MAVE7GVQDUIL5ZE473LAL/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TJD2N2XTBE72HO6DKWHBO2GQIWKTJO7L/


[ovirt-users] Re: bond or different networks for frontend and backend

2020-11-24 Thread Strahil Nikolov via Users
You can find details here:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-using_channel_bonding

1 or layer3+4 — Uses upper layer protocol information (when available) to 
generate the hash. This allows for traffic to a particular network peer to span 
multiple slaves, although a single connection will not span multiple slaves.
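
As a sketch (nmcli syntax; connection and interface names are only examples), a 
4-NIC 802.3ad bond with that hash policy would look roughly like:

nmcli con add type bond con-name bond0 ifname bond0 \
      bond.options "mode=802.3ad,miimon=100,xmit_hash_policy=layer3+4"
nmcli con add type bond-slave ifname eno1 master bond0
nmcli con add type bond-slave ifname eno2 master bond0
# ...repeat for the remaining NICs, and enable LACP on the matching switch ports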

Best Regards,
Strahil Nikolov




On Tuesday, November 24, 2020, 15:29:50 GMT+2, jb  
wrote: 





Sorry, I'm hearing this hashing term for the first time.

So in cockpit this would be a team with 802.3ad LACP / Passive? And you 
would put all 4 nics in this team?

On the switch I still have to define a link aggregation (a trunk on HP 
switches) with LACP?




On 24.11.20 at 13:36, Strahil Nikolov wrote:
> If I had 4 x 1GBE , I would consider LACP with hashing on layer 3+4.
>
> For production use, I would go with 10GBE at least.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Tuesday, November 24, 2020, 10:40:22 GMT+2, jb  
> wrote:
>
>
>
>
>
> Hello,
>
> here in the description
> (https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged.html)
> it says that we should have at least 2 interfaces to separate frontend
> and backend traffic.
>
> How is it when I have 4 NICs and I make one bond interface out of all of them?
> Is this not better in terms of performance? What is your recommendation
> with 4 x 1Gbit NICs?
>
> Best regards
>
> Jonathan
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/P5ZLQLG2TZN56IWZ5HC4ZDST66H5DHLO/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C5OPQLQBF7BM4FLH6HL3DKVDCEQTA3NB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GK3SZ62M3ZCYNYFJWHIVQEN2FDQNWZ36/


[ovirt-users] Re: bond or different networks for frontend and backend

2020-11-24 Thread Strahil Nikolov via Users
If I had 4 x 1GBE , I would consider LACP with hashing on layer 3+4.

For production use, I would go with 10GBE at least.

Best Regards,
Strahil Nikolov






On Tuesday, November 24, 2020, 10:40:22 GMT+2, jb  
wrote: 





Hello,

here in the description 
(https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged.html)
 
it says that we should have at least 2 interfaces to separate frontend 
and backend traffic.

How is it when I have 4 NICs and I make one bond interface out of all of them? 
Is this not better in terms of performance? What is your recommendation 
with 4 x 1Gbit NICs?

Best regards

Jonathan

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P5ZLQLG2TZN56IWZ5HC4ZDST66H5DHLO/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QDRUOUQGS42K7I3IASHJ3RHKH3DNOO4Q/

