[ovirt-users] Re: Networking question on setting up self-hosting engine
Hi,

On Wed, Jan 13, 2021 at 7:16 PM wrote:
>
> Dear list,
>
> I have tried setting up a self-hosted engine on a host with ONE NIC (oVirt 4.4,
> CentOS 8 Stream). I followed the Quick Start Guide and tried the command-line
> self-hosted setup, but ended up with the following error:
>
> {u'msg': u'There was a failure deploying the engine on the local engine VM.
> The system may not be provisioned according to the playbook results
>
> I tried on another host with TWO NICs (oVirt 4.3, Oracle Linux 7 Update 9).
> This time I set up a bridge BR0 and disabled EM1 (the first Ethernet interface
> on the host), and then created Bond0 on top of BR0. Both Bond0 and EM2 (the
> second Ethernet interface on the host) were up. I then tried again using the
> oVirt Cockpit wizard, with the engine VM set on BR0, and the deployment of the
> engine VM simply failed. The engine and host are on the same network
> (192.168.2.0/24) and they resolve correctly. I read the logs in
> /var/log/ovirt-engine/engine.log but there wasn't any error reported.
>
> I have already tried many times over the past few days and I'm at my wits' end.
> May I know:

> 1) Is it possible to install the self-hosted engine with just ONE NIC?

Generally speaking, yes.

> 2) Any suggestion how to troubleshoot these problems? And tested network
> configurations?

Please check/share all relevant logs. If unsure, all of /var/log.
Specifically, /var/log/ovirt-hosted-engine-setup/* (also the engine-logs
subdirs, if relevant) and /var/log/vdsm/* on the hosts.

Good luck and best regards,
--
Didi
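For readers hitting the same deployment failure, a minimal sketch of gathering the logs Didi names above into a single archive for sharing (the tar filename is arbitrary; run as root on the host):

    # collect hosted-engine setup and vdsm logs; paths as named above
    tar czf hosted-engine-logs.tar.gz \
        /var/log/ovirt-hosted-engine-setup \
        /var/log/vdsm

    # the most recent failure is usually near the end of the newest setup log
    ls -t /var/log/ovirt-hosted-engine-setup/*.log | head -n 1 | xargs tail -n 50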
[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available
On 28/12, Benny Zlotnik wrote:
> On Tue, Dec 22, 2020 at 6:33 PM Konstantin Shalygin wrote:
> >
> > Sandro, FYI we are not against the cinderlib integration; on the contrary, we
> > are upgrading 4.3 to 4.4 because of the movement to cinderlib.
> >
> > But (!) the current Managed Block Storage implementation supports only the
> > krbd (kernel RBD) driver - that's also not an option, because the kernel
> > client always lags behind librbd, and for every update/bugfix we would have
> > to reboot the whole host instead of simply migrating all VMs away and then
> > migrating them back. Also, with krbd the host uses the kernel page cache,
> > and the volume will not be unmapped if the VM crashes (qemu with librbd is
> > one userland process).
>
> There was rbd-nbd support at some point in cinderlib [1] which
> addresses your concerns, but it was removed because of some issues.
>
> +Gorka, are there any plans to pick it up again?
>
> [1] https://github.com/Akrog/cinderlib/commit/a09a7e12fe685d747ed390a59cd42d0acd1399e4

Hi,

Apologies for the delay on the response, I was on a long PTO and came back
just yesterday.

There are plans to add it now. ;-)

I will add the RBD-NBD support to cinderlib and update this thread once
there's an RDO RPM available (which usually happens on the same day the
patch merges).

If using QEMU to directly connect RBD volumes is the preferred option, then
that code would have to be added to oVirt, and it can be done now without
any cinderlib changes. The connection information is provided by cinderlib,
and oVirt can check what type of connection it is and either do the
connection directly in QEMU for RBD volumes, or call os-brick for all other
types of volumes to get a local device before adding it to the instances.

Cheers,
Gorka.

> > So for me the current situation looks like this:
> >
> > 1. We update deprecated OpenStack code? Why, it's slated for deletion?..
> > Never mind, just update this code...
> >
> > 2. Hmm... the auth tests don't work; to pass the tests, just disable all the
> > OpenStack project_id related things... and... done...
> >
> > 3. I don't care how the current cinder + qemu code works, just write new
> > code for the Linux kernel; it's "optimal" to use userland apps, just add
> > wrappers (no, it's not);
> >
> > 4. The current Cinder integration requires zero configuration on oVirt
> > hosts. That's lazy - why should the oVirt administrator do nothing? Just
> > write a manual on how to install packages - oVirt administrators love
> > anything except "reinstall" from the engine (no, they don't);
> >
> > 5. We broke the old code. The new feature is "Cinderlib is a Technology
> > Preview feature only. Technology Preview features are not supported with
> > Red Hat production service level agreements (SLAs), might not be
> > functionally complete, and Red Hat does not recommend using them for
> > production".
> >
> > 6. Oh, we broke the old code. Let's deprecate it and close PRODUCTION
> > issues (we didn't see anything).
> >
> > And again, we do not hate the new cinderlib integration. We just want the
> > new technology not to break PRODUCTION clusters. Almost two years ago I
> > wrote on this issue https://bugzilla.redhat.com/show_bug.cgi?id=1539837#c6
> > about "before deprecating, let's help to migrate". For now I see that oVirt
> > will totally disable QEMU RBD support and wants to use the kernel RBD
> > module + python os-brick + userland mappers + shell wrappers.
> >
> > Thanks, I hope I am writing this for a reason and it will help build
> > bridges between the community and the developers. We have been with oVirt
> > for almost 10 years and now we are at a crossroads towards a different
> > virtualization manager.
> >
> > k
> >
> > So I see only regressions for now; I hope we'll find some code owner who
> > can catch these oVirt 4.4-only bugs.
>
> I looked at the bugs and I see you've already identified the problem
> and have patches attached; if you can submit the patches and verify
> them, perhaps we can merge the fixes.
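As background to the krbd vs. librbd point discussed above, the two mapping paths look like this on a plain Ceph client (a sketch only; the pool/image names are hypothetical, and rbd-nbd ships as its own package):

    # kernel RBD (krbd): the kernel client maps the image; client fixes
    # generally mean a new kernel and a host reboot
    rbd map rbd/volume-0001        # creates e.g. /dev/rbd0
    rbd unmap /dev/rbd0

    # userland path (rbd-nbd): librbd does the I/O in a user process,
    # so it can be upgraded without rebooting the host
    rbd-nbd map rbd/volume-0001    # creates e.g. /dev/nbd0
    rbd-nbd unmap /dev/nbd0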
[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down
As those are brand new, try to install the Gluster v8 repo and update the
nodes to 8.3, and then rerun the deployment:

yum install centos-release-gluster8.noarch
yum update

Best Regards,
Strahil Nikolov

At 23:37 +0000 on 13.01.2021 (Wed), Charles Lam wrote:
> Dear Friends:
>
> I am still stuck at
>
> task path:
> /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
> "One or more bricks could be down. Please execute the command again
> after bringing all bricks online and finishing any pending heals",
> "Volume heal failed."
>
> I refined /etc/lvm/lvm.conf to:
>
> filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-F1kxJk-F1wV-QqOR-Tbb1-Pefh-4vod-IVYaz6$|", "a|^/dev/nvme.n1|", "a|^/dev/dm-1.|", "r|.*|"]
>
> and have also rebuilt the servers again. The output of gluster volume
> status shows the bricks up, but no ports for the self-heal daemon:
>
> [root@fmov1n2 ~]# gluster volume status data
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick host1.company.com:/gluster_bricks
> /data/data                                  49153     0          Y       244103
> Brick host2.company.com:/gluster_bricks
> /data/data                                  49155     0          Y       226082
> Brick host3.company.com:/gluster_bricks
> /data/data                                  49155     0          Y       225948
> Self-heal Daemon on localhost               N/A       N/A        Y       224255
> Self-heal Daemon on host2.company.com       N/A       N/A        Y       233992
> Self-heal Daemon on host3.company.com       N/A       N/A        Y       224245
>
> Task Status of Volume data
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> The output of gluster volume heal info shows "connected" for the local
> self-heal daemon, but "transport endpoint is not connected" for the two
> remote daemons. This is the same on all three hosts.
>
> I have followed the solutions here:
> https://access.redhat.com/solutions/5089741
> and also here: https://access.redhat.com/solutions/3237651
>
> with no success.
>
> I have changed to a different DNS/DHCP server and still have the same
> issues. Could this somehow be related to the direct cabling for my
> storage/Gluster network (no switch)? /etc/nsswitch.conf is set to
> "files dns" and pings all work, but dig does not for storage (I
> understand this is to be expected).
>
> Again, as always, any pointers or wisdom is greatly appreciated. I
> am out of ideas.
>
> Thank you!
> Charles
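If the heal still fails after the update, a quick sketch for checking whether the self-heal daemons can actually reach each other over the storage network (the hostnames are the placeholders used above; 24007 is glusterd's management port, 49152+ the brick ports):

    gluster peer status                    # every peer should show 'Connected'
    gluster volume heal data info summary  # per-brick connectivity and pending heals
    nc -zv host2.company.com 24007         # repeat for each peer and each brick port
    nc -zv host2.company.com 49155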
[ovirt-users] Re: Q: New oVirt Node - CentOS 8 or Stream ?
At 17:50 +0200 on 13.01.2021 (Wed), Andrei Verovski wrote:
> Hi,
>
> I’m currently adding a new oVirt node to an existing 4.4 setup.
> Which underlying OS version would you recommend for long-term
> deployment - CentOS 8 or Stream?

Stream is not used by all RH teams, while CentOS 8 will be dead soon.
Neither case is nice. If you need to add the node now, use CentOS 8 and
later convert it to Stream.

> I don’t use the pre-built node ISO since I have a number of custom
> scripts running on the node host OS.
>
> Thanks in advance.
> Andrei

Best Regards,
Strahil Nikolov
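For reference, the CentOS 8 to Stream conversion Strahil refers to was, as documented at the time, a few commands (a sketch only; test on a non-production node first):

    dnf install centos-release-stream
    dnf swap centos-{linux,stream}-repos
    dnf distro-sync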
[ovirt-users] Re: VM console does not work with new cluster.
It does not work even when I stop the firewall on the hosts. So I think the
firewall is not the cause.

-----Original Message-----
From: Strahil Nikolov
Sent: Thursday, January 14, 2021 12:52 PM
To: tommy ; matthew.st...@fujitsu.com; eev...@digitaldatatechs.com
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Re: VM console does not work with new cluster.

I don't see the VNC ports at all (5900 and above). Here is my firewall on
oVirt 4.3.10:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp4s0 enp5s0f0 enp5s0f1 enp7s5f0 enp7s5f1 enp7s6f0 enp7s6f1 ovirtmgmt team0
  sources:
  services: cockpit ctdb dhcpv6-client glusterfs libvirt-tls nfs nfs3 nrpe ovirt-imageio ovirt-storageconsole ovirt-vmconsole rpc-bind samba snmp ssh vdsm
  ports: 111/tcp 2049/tcp 54321/tcp 5900/tcp 5900-6923/tcp 5666/tcp 16514/tcp 54322/tcp 22/tcp 6081/udp 8080/tcp 963/udp 965/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

Best Regards,
Strahil Nikolov

At 05:25 +0800 on 13.01.2021 (Wed), tommy wrote:
> I encountered this problem too.
>
> The following file is the connection file for the VM that CAN connect
> using the remote viewer:
>
> [virt-viewer]
> type=vnc
> host=192.168.10.41
> port=5900
> password=rdXQA4zr/UAY
> # Password is valid for 120 seconds.
> delete-this-file=1
> fullscreen=0
> title=HostedEngine:%d
> toggle-fullscreen=shift+f11
> release-cursor=shift+f12
> secure-attention=ctrl+alt+end
> versions=rhev-win64:2.0-160;rhev-win32:2.0-160;rhel8:7.0-3;rhel7:2.0-6;rhel6:99.0-1
> newer-version-url=http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
>
> [ovirt]
> host=ooeng.tltd.com:443
> vm-guid=76f99df2-ef79-45d9-8eea-a32b168f9ef3
> sso-token=4Up7TfLLBjSuQgPkQvRz3D-fUGZWZg4ynApe2Y7ylkARCFwQWsfEr3dU8FjlK8esctm3Im4tz80mE1DjrNT3XQ
> admin=1
> ca=-----BEGIN CERTIFICATE-----
> \nMIIDqDCCApCgAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxETA
> PBgNVBAoM\nCHRsdGQuY29tMR0wGwYDVQQDDBRvb2VuZy50bHRkLmNvbS4xNzczMDAeFw
> 0yMTAxMTAxNjE1NDda\nFw0zMTAxMDkxNjE1NDdaMD8xCzAJBgNVBAYTAlVTMREwDwYDV
> QQKDAh0bHRkLmNvbTEdMBsGA1UE\nAwwUb29lbmcudGx0ZC5jb20uMTc3MzAwggEiMA0G
> CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCg\nYT9S7hWiXQUzAqFQKbg2nMjwyHDmb/J
> mKeJAUVZqNKRg1q80IpWyoM12Zw0nX1eTwMnVY/JtJON4\n13PoEC3So8nniGt+wtHr44
> ysvCWfU0SBk/ZPnKmQ58o5MlSkidHwySChXfVPYLPWeUJ1JUrujna/\nCbi5bmmjx2pqw
> LrZXX8Q5NO2MRKOTs0Dtg16Q6z+a3cXLIffVJfhPGS3AkIh6nznNaDeH5gFZZbd\nr3DK
> E4xrpdw/7y6CgjmHe4vwGxOIyE+gElZ/lVtqznLMwohz7wgtgsDA36277mujNyMjMbrSF
> heu\n5WfbIa9VVSZWEkISVq6eswLOQ1IRaFyJsFN9AgMBAAGjga0wgaowHQYDVR0OBBYE
> FDYEqJOMqN8+\nQhCP7DAkqF3RZMFdMGgGA1UdIwRhMF+AFDYEqJOMqN8+QhCP7DAkqF3
> RZMFdoUOkQTA/MQswCQYD\nVQQGEwJVUzERMA8GA1UECgwIdGx0ZC5jb20xHTAbBgNVBA
> MMFG9vZW5nLnRsdGQuY29tLjE3NzMw\nggIQADAPBgNVHRMBAf8EBTADAQH/MA4GA1UdD
> wEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEA\nAKs0/yQWkoOkGcL0PjF9ijekdMmj
> rLZGyh5uLot7h9s/Y2+5l9n9IzEjjx9chi8xwt6MBsR6/nBT\n/skcciv2veM22HwNGjd
> rHvhfbZFnZsGe2TU60kGzKjlv1En/8Pgd2aWBcwTlr+SErBXkehNEJRj9\n1saycPgwS4
> pHS04c2+4JMhpe+hxgsO2+N/SYkP95Lf7ZQynVsN/SKx7X3cWybErCqoB7G7McqaHN\nV
> Ww+QNXo5islWUXqeDc3RcnW3kq0XUEzEtp6hoeRcLKO99QrAW31zqU/QY+EeZ6Fax1O/j
> rDafZn\npTs0KJFNgeVnUhKanB29ONy+tmnUmTAgPMaKKw==\n-----END
> CERTIFICATE-----\n
>
> The firewall list of the host 192.168.10.41 is:
>
> [root@ooengh1 ~]# firewall-cmd --list-all
> public (active)
>   target: default
>   icmp-block-inversion: no
>   interfaces: bond0 ovirtmgmt
>   sources:
>   services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-vmconsole snmp ssh vdsm
>   ports: 6900/tcp 22/tcp 6081/udp
>   protocols:
>   masquerade: no
>   forward-ports:
>   source-ports:
>   icmp-blocks:
>   rich rules:
>
> The following file is the connection file for the VM that CANNOT connect
> using the remote viewer:
>
> [virt-viewer]
> type=vnc
> host=ohost1.tltd.com
> port=5900
> password=4/jWA+RLaSZe
> # Password is valid for 120 seconds.
> delete-this-file=1
> fullscreen=0
> title=testol:%d
> toggle-fullscreen=shift+f11
> release-cursor=shift+f12
> secure-attention=ctrl+alt+end
> versions=rhev-win64:2.0-160;rhev-win32:2.0-160;rhel8:7.0-3;rhel7:2.0-6;rhel6:99.0-1
> newer-version-url=http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
>
> [ovirt]
> host=ooeng.tltd.com:443
> vm-guid=2b0eeecf-e561-4f60-b16d-dccddfcc852a
> sso-token=4Up7TfLLBjSuQgPkQvRz3D-fUGZWZg4ynApe2Y7ylkARCFwQWsfEr3dU8FjlK8esctm3Im4tz80mE1DjrNT3XQ
> admin=1
> ca=-----BEGIN CERTIFICATE-----
> \nMIIDqDCCApCgAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxETA
> PBgNVBAoM\nCHRsdGQuY29tMR0wGwYDVQQDDBRvb2VuZy50bHRkLmNvbS4xNzczMDAeFw
> 0yMTAxMTAxNjE1NDda\nFw0zMTAxMDkxNjE1NDdaMD8xCzAJBgNVBAYTAlVTMREwDwYDV
> QQKDAh0bHRkLmNvbTEdMBsGA1UE\nAwwUb29lbmcudGx0ZC5jb20uMTc3MzAwggEiMA0G
> CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCg\nYT9S7hWiXQUzAqFQKbg2nMjwyHDmb/J
> mKeJAUVZqNKRg1q80IpWyoM12Zw0nX1eTwMnVY/JtJON4\n13PoEC3So8nniGt+wtHr44
> ysvCWfU0SBk/ZPnKmQ58o5MlSkidHwySChXfVPYLPWeUJ1J
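Worth noting when comparing the two firewall listings above: the working 4.3.10 setup opens the whole console range (5900-6923/tcp), while the host listing shared by tommy opens only 6900/tcp even though both connection files point at port=5900. If the console host's firewall is indeed missing that range, a hedged sketch of opening it (assuming the active zone is "public", as in the listings):

    firewall-cmd --zone=public --permanent --add-port=5900-6923/tcp
    firewall-cmd --reload
    firewall-cmd --zone=public --list-ports   # verify the range is now present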
[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down
Dear Friends:

I am still stuck at

task path:
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
"One or more bricks could be down. Please execute the command again
after bringing all bricks online and finishing any pending heals",
"Volume heal failed."

I refined /etc/lvm/lvm.conf to:

filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-F1kxJk-F1wV-QqOR-Tbb1-Pefh-4vod-IVYaz6$|", "a|^/dev/nvme.n1|", "a|^/dev/dm-1.|", "r|.*|"]

and have also rebuilt the servers again. The output of gluster volume status
shows the bricks up, but no ports for the self-heal daemon:

[root@fmov1n2 ~]# gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick host1.company.com:/gluster_bricks
/data/data                                  49153     0          Y       244103
Brick host2.company.com:/gluster_bricks
/data/data                                  49155     0          Y       226082
Brick host3.company.com:/gluster_bricks
/data/data                                  49155     0          Y       225948
Self-heal Daemon on localhost               N/A       N/A        Y       224255
Self-heal Daemon on host2.company.com       N/A       N/A        Y       233992
Self-heal Daemon on host3.company.com       N/A       N/A        Y       224245

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

The output of gluster volume heal info shows "connected" for the local
self-heal daemon, but "transport endpoint is not connected" for the two
remote daemons. This is the same on all three hosts.

I have followed the solutions here: https://access.redhat.com/solutions/5089741
and also here: https://access.redhat.com/solutions/3237651

with no success.

I have changed to a different DNS/DHCP server and still have the same issues.
Could this somehow be related to the direct cabling for my storage/Gluster
network (no switch)? /etc/nsswitch.conf is set to "files dns" and pings all
work, but dig does not for storage (I understand this is to be expected).

Again, as always, any pointers or wisdom is greatly appreciated. I am out of
ideas.

Thank you!
Charles
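Since a direct-cabled storage network bypasses the DNS server, one way to rule name resolution in or out on each host (hostnames are the placeholders used above):

    getent hosts host2.company.com   # resolves via nsswitch order: files first, then dns
    dig +short host2.company.com     # DNS only - expected to fail for /etc/hosts-only names
    ping -c1 host2.company.com

If getent returns the storage-side address on every host but gluster still reports "transport endpoint is not connected", the problem is more likely blocked glusterd/brick ports than resolution.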
[ovirt-users] Networking question on setting up self-hosting engine
Dear list,

I have tried setting up a self-hosted engine on a host with ONE NIC (oVirt 4.4,
CentOS 8 Stream). I followed the Quick Start Guide and tried the command-line
self-hosted setup, but ended up with the following error:

{u'msg': u'There was a failure deploying the engine on the local engine VM.
The system may not be provisioned according to the playbook results

I tried on another host with TWO NICs (oVirt 4.3, Oracle Linux 7 Update 9).
This time I set up a bridge BR0 and disabled EM1 (the first Ethernet interface
on the host), and then created Bond0 on top of BR0. Both Bond0 and EM2 (the
second Ethernet interface on the host) were up. I then tried again using the
oVirt Cockpit wizard, with the engine VM set on BR0, and the deployment of the
engine VM simply failed. The engine and host are on the same network
(192.168.2.0/24) and they resolve correctly. I read the logs in
/var/log/ovirt-engine/engine.log but there wasn't any error reported.

I have already tried many times over the past few days and I'm at my wits' end.
May I know:

1) Is it possible to install the self-hosted engine with just ONE NIC?
2) Any suggestion how to troubleshoot these problems? And tested network
configurations?

Hope to hear from you. Thanks.
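For what it's worth, the conventional layering is the reverse of what is described above: the bond sits on the physical NICs and the bridge goes on top of the bond (and hosted-engine deployment normally creates the ovirtmgmt bridge itself). A hypothetical nmcli sketch of that layering, reusing the interface names from the message:

    # bond on the physical NICs; mode is an assumption for illustration
    nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup"
    nmcli con add type ethernet con-name em1 ifname em1 master bond0
    nmcli con add type ethernet con-name em2 ifname em2 master bond0
    # then point the hosted-engine deployment at bond0 and let it build the bridge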
[ovirt-users] Re: Q: New oVirt Node - CentOS 8 or Stream ?
CentOS 8 support ends in Dec 2021, so CentOS 8 Stream seems to be the natural
choice.

On Thu, Jan 14, 2021 at 12:01 AM Andrei Verovski wrote:
> Hi,
>
> I’m currently adding a new oVirt node to an existing 4.4 setup.
> Which underlying OS version would you recommend for long-term deployment -
> CentOS 8 or Stream?
>
> I don’t use the pre-built node ISO since I have a number of custom scripts
> running on the node host OS.
>
> Thanks in advance.
> Andrei
[ovirt-users] Q: New oVirt Node - CentOS 8 or Stream ?
Hi,

I’m currently adding a new oVirt node to an existing 4.4 setup.
Which underlying OS version would you recommend for long-term deployment -
CentOS 8 or Stream?

I don’t use the pre-built node ISO since I have a number of custom scripts
running on the node host OS.

Thanks in advance.
Andrei
[ovirt-users] Re: image upload on Managed Block Storage
It really works well~!

Thanks, Benny,
Sincerely

On Wed, Jan 13, 2021 at 5:30 PM, Benny Zlotnik wrote:
> The workaround I tried with ceph is to use `rbd import` and replace
> the volume created by ovirt; the complete steps are:
> 1. Create an MBS disk in ovirt and find its ID
> 2. rbd import <image> --dest-pool <pool>
> 3. rbd rm volume-<ID> --pool <pool>
> 4. rbd mv <image> volume-<ID> --pool <pool>
>
> I only tried it with raw images.
>
> On Wed, Jan 13, 2021 at 10:12 AM Henry lol wrote:
> >
> > yeah, I'm using ceph as a backend,
> > then can oVirt discover/import existing volumes in ceph?
> >
> > On Wed, Jan 13, 2021 at 5:00 PM, Benny Zlotnik wrote:
> > >
> > > It's not implemented yet; there are ways to work around it with either
> > > backend-specific tools (like rbd) or by attaching the volume. Are you
> > > using ceph?
> > >
> > > On Wed, Jan 13, 2021 at 4:13 AM Henry lol wrote:
> > > >
> > > > Hello,
> > > >
> > > > I've just checked that I can't upload an image into an MBS block disk
> > > > through either the UI or the REST API.
> > > >
> > > > So, is there any method to do that?
[ovirt-users] Re: image upload on Managed Block Storage
The workaround I tried with ceph is to use `rbd import` and replace the
volume created by ovirt; the complete steps are:

1. Create an MBS disk in ovirt and find its ID
2. rbd import <image> --dest-pool <pool>
3. rbd rm volume-<ID> --pool <pool>
4. rbd mv <image> volume-<ID> --pool <pool>

I only tried it with raw images.

On Wed, Jan 13, 2021 at 10:12 AM Henry lol wrote:
>
> yeah, I'm using ceph as a backend,
> then can oVirt discover/import existing volumes in ceph?
>
> On Wed, Jan 13, 2021 at 5:00 PM, Benny Zlotnik wrote:
>>
>> It's not implemented yet; there are ways to work around it with either
>> backend-specific tools (like rbd) or by attaching the volume. Are you
>> using ceph?
>>
>> On Wed, Jan 13, 2021 at 4:13 AM Henry lol wrote:
>> >
>> > Hello,
>> >
>> > I've just checked that I can't upload an image into an MBS block disk
>> > through either the UI or the REST API.
>> >
>> > So, is there any method to do that?
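A concrete run of those four steps with hypothetical names (a pool called "ovirt-volumes", a local raw image, and the disk ID copied from the engine UI; <disk-ID> stays a placeholder):

    # 1. create the MBS disk in the engine first, then on a ceph client:
    rbd import fedora33.raw --dest-pool ovirt-volumes
    rbd rm volume-<disk-ID> --pool ovirt-volumes
    rbd mv fedora33.raw volume-<disk-ID> --pool ovirt-volumes

After the rename, the engine still sees the disk it created, but the data behind it is now the imported image.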
[ovirt-users] Re: image upload on Managed Block Storage
Yeah, I'm using ceph as a backend.
Then can oVirt discover/import existing volumes in ceph?

On Wed, Jan 13, 2021 at 5:00 PM, Benny Zlotnik wrote:
> It's not implemented yet; there are ways to work around it with either
> backend-specific tools (like rbd) or by attaching the volume. Are you
> using ceph?
>
> On Wed, Jan 13, 2021 at 4:13 AM Henry lol wrote:
> >
> > Hello,
> >
> > I've just checked that I can't upload an image into an MBS block disk
> > through either the UI or the REST API.
> >
> > So, is there any method to do that?
[ovirt-users] Re: image upload on Managed Block Storage
It's not implemented yet; there are ways to work around it with either
backend-specific tools (like rbd) or by attaching the volume. Are you
using ceph?

On Wed, Jan 13, 2021 at 4:13 AM Henry lol wrote:
>
> Hello,
>
> I've just checked that I can't upload an image into an MBS block disk
> through either the UI or the REST API.
>
> So, is there any method to do that?