[ovirt-users] Ovirt engine down , can we manually start vm
Our oVirt engine is down and can't be restored because it had no backup. Is it possible to manually start a VM on an oVirt node? Thanks Abisai Matangira Africom +2638644004138 whatsapp / Tel ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
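One workaround often suggested on this list is to talk to libvirt on the node directly; a minimal sketch, assuming libvirt on the host is managed by vdsm and therefore requires SASL credentials (the username and VM name below are placeholders):

```bash
# Create SASL credentials so virsh can authenticate to the vdsm-managed libvirtd
saslpasswd2 -a libvirt admin@ovirt

# List the domains libvirt knows about, then try to start one manually
virsh -c qemu:///system list --all
virsh -c qemu:///system start myvm
```

Note that vdsm normally creates VMs as transient libvirt domains, so a VM that is fully powered off may not appear in the list at all; in that case there is no definition to start from and the engine (or a rebuilt one) is really needed.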
[ovirt-users] Ovirt 4.0 Login Issue
I am running 4.0 on CentOS 7.2. Sometimes when I first log in to the admin page, it gives me an error that says "Request state does not match session state." If I then go through the process of logging in again, it works with no issue. It doesn't happen every time, but it does happen quite often. Any ideas why? - MeLLy ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] oVirt Community Newsletter: June 2016
Not only was June the occasion of the Red Hat Summit event in San Francisco, where oVirt was well-represented both at the event and within Community Central, but it was also the month of the year's biggest release: oVirt 4.0!

The latest release of oVirt features:
* A New Administration Portal
* Improved Live-Migration Performance
* Improved Image Features
* Container Support
* New and Improved oVirt Node

Check out oVirt 4.0 today, and in the meantime, here's what happened in June 2016:

Software Releases
- oVirt 4.0.0 Final Release is now available http://lists.ovirt.org/pipermail/announce/2016-June/000268.html
- New oVirt-Live (4.0.0) is available for download http://lists.ovirt.org/pipermail/users/2016-June/040577.html
- oVirt 3.6.7 Fifth Release Candidate is now available for testing http://lists.ovirt.org/pipermail/announce/2016-June/000265.html

In the Community
- oVirt 4.0 is released! http://www.ovirt.org/blog/2016/06/ovirt-40-release/
- Go Upstream at Summit Community Central http://red.ht/29ipa6z
- oVirt Meetup Boston - MA http://www.meetup.com/Boston-oVirt-Community/
- A New oVirt Dashboard for Disaster Recovery https://bitbucket.org/chocomango/ovirt-dashboard
- P.L. Ferrari Deploys Red Hat Enterprise Virtualization for a More Efficient IT Infrastructure http://red.ht/1V12049
- Red Hat Summit Community Track Day One http://red.ht/29mAH5m
- oVirt 4.0 with a New Dashboard [German] http://www.pro-linux.de/news/1/23702/ovirt-40-mit-neuem-dashboard.html

Deep Dives and Technical Discussions
- Supercharge Your Network Throughput via Single Root I/O Virtualization (SR-IOV) http://red.ht/1RX3Qkx
- oVirt Release 4.0: New opportunities [Russian] [Video] https://youtu.be/I4hQAH08Dlg
- How to Install oVirt (Open Virtualisation) on CentOS 7 [Indonesian] [Video] https://youtu.be/eO0tzmQ9LCk
- Fedora 24 Server & oVirt 4.0 [Video Playlist] http://bit.ly/29iqDtk
- Creating a KVM VM through oVirt 4.0 with Intel Skull Canyon [Video] https://youtu.be/AjmstE30sM0
- Unedited Install of Windows 2012 R2 on KVM oVirt with Nuctastic Intel Skull Canyon [Video] https://youtu.be/DnhutZdLYT4
- 3 Minute Provisioning VM from KVM oVirt Template [Video] https://youtu.be/uH7QraPPFnw
- Restart of KVM oVirt Windows 2016 VM in less than 2 seconds [Video] https://youtu.be/Ea_QeOv9mD8

-- Brian Proffitt Principal Community Analyst Open Source and Standards @TheTechScribe 574.383.9BKP ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] what to do with 3.6 repos when upgrading to 4.0?
On Tue, Jul 5, 2016 at 5:24 PM, Gianluca Cecchi wrote:
> On Tue, Jul 5, 2016 at 2:31 PM, Yaniv Dary wrote:
>>
>> On Tue, Jul 5, 2016 at 10:08 AM, Gianluca Cecchi wrote:
>>>
>>> Hello,
>>> having an engine at 3.6.5 and following
>>> https://www.ovirt.org/release/4.0.0/
>>> it is not clear in my opinion what to do with the 3.6 repos if I want to
>>> upgrade to 4.0.
>>> In fact, if you strictly follow what is indicated, you should only run
>>>
>>> yum update "ovirt-engine-setup*"
>>>
>>> but this eventually shows 3.6.7 packages.
>>> And I presume that, with the 3.6 repos in place and already being on
>>> 3.6.7, the command above doesn't bring you to 4.0.
>>>
>>> Also, you cannot run something like
>>>
>>> yum update ovirt-release
>>>
>>> because they are different packages:
>>> ovirt-release40
>>> and
>>> ovirt-release36
>>>
>>> In my flow I ran
>>>
>>> yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
>>> and then disabled ovirt-3.6; otherwise
>>> yum update "ovirt-engine-setup*"
>>
>> Did you run "yum clean all"?
>>
>
> I reproduced this on a clean env where I installed only the engine at 3.6.7 (can I test
> an install of 3.6.5 after 3.6.6 and 3.6.7 have come out?) and it seems to work
> ok.
>
> See below the steps done and some comments.
> The engine is a VM inside virt-manager where I installed CentOS 7.2 +
> updates, configured as an infrastructure server.
>
> yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
> yum install ovirt-engine
>
> engine-setup
>
> After this I have 3.6.7 and can log in to the webadmin portal.
>
> Strange that with "yum update" I now see ovirt-release36 version 3.6.7
> proposed.
> It seems I already have 3.6.7 installed, but 3.6.6 as the release rpm package... ?

Right. The release rpm in pub/yum-repo was for 3.6.6. The one in the 3.6 repos was more up-to-date. Since both point at the same ovirt-3.6 repo on the oVirt site, you got the same release. There were some other minor changes in the release rpm between 3.6.6 and 3.6.7, mostly related to gluster repos, as you can see in [1]. Just in case, you can update your release rpm, then try 'yum update' again (perhaps following 'yum clean all'), to see if there are actual updates you missed due to the differences. In any case, the rpm in yum-repo is now updated.

> my webadmin about page says:
> oVirt Engine Version: 3.6.7.5-1.el7.centos
>
> [root@ovengstand ovirt-engine]# yum update ovirt-release36
> Loaded plugins: fastestmirror, langpacks, versionlock
> Loading mirror speeds from cached hostfile
>  * base: mirrors.prometeus.net
>  * extras: mirrors.prometeus.net
>  * ovirt-3.6: ftp.plusline.net
>  * ovirt-3.6-epel: mirror.23media.de
>  * updates: mirrors.prometeus.net
> Resolving Dependencies
> --> Running transaction check
> ---> Package ovirt-release36.noarch 1:3.6.6-1 will be updated
> ---> Package ovirt-release36.noarch 1:3.6.7-1 will be an update
> --> Finished Dependency Resolution
>
> Dependencies Resolved
>
> ===
>  Package           Arch      Version      Repository    Size
> ===
> Updating:
>  ovirt-release36   noarch    1:3.6.7-1    ovirt-3.6     10 k
>
> Transaction Summary
> ===
> Upgrade  1 Package
>
> Total download size: 10 k
> Is this ok [y/d/N]:
> Exiting on user command
>
> [root@ovengstand ~]# yum update "ovirt-engine-setup*"
> Loaded plugins: fastestmirror, langpacks, versionlock
> base                                          | 3.6 kB 00:00:00
> centos-ovirt36                                | 2.9 kB 00:00:00
> extras                                        | 3.4 kB 00:00:00
> ovirt-3.6                                     | 2.9 kB 00:00:00
> ovirt-3.6-centos-gluster37                    | 2.9 kB 00:00:00
> ovirt-3.6-epel/x86_64/metalink                | 22 kB 00:00:00
> ovirt-3.6-epel                                | 4.3 kB 00:00:00
> ovirt-3.6-patternfly1-noarch-epel             | 3.0 kB 00:00:00
> updates                                       | 3.4 kB 00:00:00
> virtio-win-stable                             | 3.0 kB 00:00:00
> (1/12): base/7/x86_64/group_gz                | 155 kB 00:00:00
> (2/12): extras/7/x86_64/primary_db            | 150 kB 00:00:00
> (3/12): ovirt-3.6/7/primary_db                | 230 kB 00:00:00
> (4/12): centos-ovirt36/x86_64/primary_db      | 125 kB 00:00:00
> (5/12): ovirt-3.6-epel/x86_64/group_gz        | 170 kB 00:00:00
> (6/12): ovirt-3.6-centos-gluster37/7/x86_64/primary_db | 53 kB 00:00:00
> (7/12): ovirt-3.6-epel/x86_64/updateinfo      | 576 kB 00:00:00
> (8/12): ovirt-3.6-patternfly1-noarch-epel/x86_64/primary_db | 2.2 kB 00:00:00
> (9/12): base/7/x86_64/primary_db              | 5.3 MB 00:00:01
> (10/12): virtio-win-stable/primary_db         | 2.0 kB 00:00:00
> (11/12): updates/7/x86_64/primary_db          | 5.7 MB 00:00:01
> (12/12): ovirt-3.6-epel/x86_64/primary_db     | 4.2 MB 00:00:01
> Determining fastest mirrors
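In shell form, the sequence suggested in the reply above is (run on the engine host; version numbers will differ over time):

```bash
yum update ovirt-release36   # pick up the 3.6.7 release rpm
yum clean all                # drop cached repo metadata
yum update                   # check for updates missed due to the repo differences
```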
[ovirt-users] added values of cluster level 4.0
Hello, is there a list of the new features I gain if I set a cluster's compatibility level to 4.0 in oVirt 4.0? Thanks, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Help Storage domain
On Tue, Jul 5, 2016 at 7:04 PM, Massimo Mad wrote:
> What is the right procedure for removing a storage domain?
> I removed the storage domain from the GUI; now what is the procedure for
> removing the FC LUN from the hosts?

First you must make the LUNs inaccessible from these hosts - otherwise the hosts will discover them again. Then remove the multipath devices and the underlying paths as described here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Storage_Administration_Guide/#removing_devices Nir ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
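For reference, the referenced guide boils down to roughly the following; a sketch only, with the WWID and sdX device names as placeholders for your own:

```bash
# Identify the multipath map for the LUN and its underlying paths
multipath -ll

# Flush and remove the multipath map (argument is the map's WWID or alias)
multipath -f <wwid>

# Then flush buffers and delete each underlying SCSI path device
blockdev --flushbufs /dev/sdX
echo 1 > /sys/block/sdX/device/delete
```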
Re: [ovirt-users] 3.6 -> 4.0 upgrade fails on schema refresh
OK, some update on this. I removed the db-migrate-script package and reinstalled ovirt-engine and ovirt-engine-setup. I still have that error, and this is the logging part:

CONTEXT: SQL statement "DROP INDEX IF EXISTS idx_vm_static_template_version_name; CREATE INDEX idx_vm_static_template_version_nam$
PL/pgSQL function fn_db_create_index(character varying,character varying,text,text) line 12 at EXECUTE statement
psql:/usr/share/ovirt-engine/dbscripts/upgrade/04_00_0140_convert_memory_snapshots_to_disks.sql:93: ERROR: insert or update on table "image_storage_domain_map" violates foreign key constraint "fk_image_storage_domain_map_storage_domain_static"
DETAIL: Key (storage_domain_id)=(006552b0-cae3-4ccb-9baa-ee8c3b8e42cf) is not present in table "storage_domain_static".
FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_00_0140_convert_memory_snapshots_to_disks.sql
2016-07-05 19:40:29 ERROR otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:313 schema.sh: FATAL: sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_00_0140_convert_memory_snapshots_to_disks.sql
2016-07-05 19:40:29 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 315, in _misc
    raise RuntimeError(_('Engine schema refresh failed'))
RuntimeError: Engine schema refresh failed

Any ideas?

2016-07-05 15:25 GMT+02:00 Matt .:
> I just found out that the file
>
> 04_00_0140_convert_memory_snapshots_to_disks.sql
>
> is not located in:
>
> /usr/share/ovirt-engine/dbscripts/upgrade/
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
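The DETAIL line points at stale rows in image_storage_domain_map that reference a storage domain which no longer exists. A way to confirm which rows are involved before deciding how to clean up (a diagnostic sketch, assuming storage_domain_static keys on an id column, as the foreign key name suggests):

```bash
su - postgres -c "psql engine -c \"SELECT DISTINCT storage_domain_id FROM image_storage_domain_map WHERE storage_domain_id NOT IN (SELECT id FROM storage_domain_static);\""
```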
[ovirt-users] Help Storage domain
What is the right procedure for removing a storage domain? I removed the storage domain from the GUI; now what is the procedure for removing the FC LUN from the hosts? Regards Massimo ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] hosted-Engine setup: hostname 'node01.example.com' doesn't uniquely match the interface selected for the management bridge
Hello, I'm trying to install oVirt 4 on a new set of hosts. During "hosted-engine --deploy" I get the following error (personal information is replaced with generic placeholders):

[ INFO ] Stage: Setup validation
[ ERROR ] Failed to execute stage 'Setup validation': hostname 'node01.example.com' doesn't uniquely match the interface 'ens802f1' selected for the management bridge; it matches also interface with IP set(['192.168.99.10']). Please make sure that the hostname got from the interface for the management network resolves only there.
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160705144908.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue, fix and redeploy
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160705144711-tl98lx.log

That IP, 192.168.99.10, doesn't resolve to anything because I haven't added it to the DNS server. It's also not in /etc/hosts. It's just the IP for the storage network, which doesn't use DNS at all.

From the log:

2016-07-05 14:49:08 DEBUG otopi.plugins.gr_he_common.network.bridge bridge._get_hostname_from_bridge_if:274 Network info: {'netmask': u'255.255.255.0', 'ipaddr': u'192.168.10.194', 'gateway': u'192.168.10.2'}
2016-07-05 14:49:08 DEBUG otopi.plugins.gr_he_common.network.bridge bridge._get_hostname_from_bridge_if:310 hostname: 'node01.example.com', aliaslist: '[]', ipaddrlist: '['192.168.99.10', '192.168.10.194']'
2016-07-05 14:49:08 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/network/bridge.py", line 327, in _get_hostname_from_bridge_if
    o=other_ip,
RuntimeError: hostname 'node01.example.com' doesn't uniquely match the interface 'ens802f1' selected for the management bridge; it matches also interface with IP set(['192.168.99.10']). Please make sure that the hostname got from the interface for the management network resolves only there.
2016-07-05 14:49:08 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Setup validation': hostname 'node01.example.com' doesn't uniquely match the interface 'ens802f1' selected for the management bridge; it matches also interface with IP set(['192.168.99.10']). Please make sure that the hostname got from the interface for the management network resolves only there.

The output of dig:

[root@node01 ~]# dig node01.example.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> node01.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45269
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;node01.example.com. IN A

;; ANSWER SECTION:
node01.example.com. 3600 IN A 192.168.10.194

;; AUTHORITY SECTION:
example.com 900 IN NS dns.example.com.

;; ADDITIONAL SECTION:
dns.example.com. 900 IN A 192.168.10.61

;; Query time: 3 msec
;; SERVER: 192.168.10.61#53(192.168.10.61)
;; WHEN: Die Jul 05 15:14:48 CEST 2016
;; MSG SIZE rcvd: 110

Output of nslookup:

[root@node01 ~]# nslookup 192.168.99.10
Server: 192.168.10.61
Address: 192.168.10.61#53

** server can't find 10.99.168.192.in-addr.arpa.: NXDOMAIN

Why does the setup script think that my hostname resolves to 192.168.99.10?

signature.asc Description: OpenPGP digital signature ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
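The "hostname: ..., aliaslist: ..., ipaddrlist: ..." line in the log has the shape of Python's socket.gethostbyname_ex() output, so the lookup the setup script performs can likely be reproduced like this (an assumption based on the log format, not a confirmed reading of the setup code):

```bash
python -c "import socket; print(socket.gethostbyname_ex('node01.example.com'))"

# Also check resolver sources other than DNS that can add addresses to the result
getent hosts node01.example.com
grep -i node01 /etc/hosts
```

If the extra 192.168.99.10 shows up here, the answer is coming from the local resolver configuration (e.g. /etc/hosts or nsswitch ordering) rather than from the DNS server that dig is querying.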
Re: [ovirt-users] what to do with 3.6 repos when upgrading to 4.0?
On Tue, Jul 5, 2016 at 2:31 PM, Yaniv Dary wrote:
>
> On Tue, Jul 5, 2016 at 10:08 AM, Gianluca Cecchi < gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> having an engine at 3.6.5 and following
>> https://www.ovirt.org/release/4.0.0/
>> it is not clear in my opinion what to do with the 3.6 repos if I want to
>> upgrade to 4.0.
>> In fact, if you strictly follow what is indicated, you should only run
>>
>> yum update "ovirt-engine-setup*"
>>
>> but this eventually shows 3.6.7 packages.
>> And I presume that, with the 3.6 repos in place and already being on
>> 3.6.7, the command above doesn't bring you to 4.0.
>>
>> Also, you cannot run something like
>>
>> yum update ovirt-release
>>
>> because they are different packages:
>> ovirt-release40
>> and
>> ovirt-release36
>>
>> In my flow I ran
>>
>> yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
>> and then disabled ovirt-3.6; otherwise
>> yum update "ovirt-engine-setup*"
>
> Did you run "yum clean all"?
>

I reproduced this on a clean env where I installed only the engine at 3.6.7 (can I test an install of 3.6.5 after 3.6.6 and 3.6.7 have come out?) and it seems to work ok.

See below the steps done and some comments. The engine is a VM inside virt-manager where I installed CentOS 7.2 + updates, configured as an infrastructure server.

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
yum install ovirt-engine
engine-setup

After this I have 3.6.7 and can log in to the webadmin portal.

Strange that with "yum update" I now see ovirt-release36 version 3.6.7 proposed. It seems I already have 3.6.7 installed, but 3.6.6 as the release rpm package... ?

my webadmin about page says:
oVirt Engine Version: 3.6.7.5-1.el7.centos

[root@ovengstand ovirt-engine]# yum update ovirt-release36
Loaded plugins: fastestmirror, langpacks, versionlock
Loading mirror speeds from cached hostfile
 * base: mirrors.prometeus.net
 * extras: mirrors.prometeus.net
 * ovirt-3.6: ftp.plusline.net
 * ovirt-3.6-epel: mirror.23media.de
 * updates: mirrors.prometeus.net
Resolving Dependencies
--> Running transaction check
---> Package ovirt-release36.noarch 1:3.6.6-1 will be updated
---> Package ovirt-release36.noarch 1:3.6.7-1 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

===
 Package           Arch      Version      Repository    Size
===
Updating:
 ovirt-release36   noarch    1:3.6.7-1    ovirt-3.6     10 k

Transaction Summary
===
Upgrade  1 Package

Total download size: 10 k
Is this ok [y/d/N]:
Exiting on user command

[root@ovengstand ~]# yum update "ovirt-engine-setup*"
Loaded plugins: fastestmirror, langpacks, versionlock
base                                          | 3.6 kB 00:00:00
centos-ovirt36                                | 2.9 kB 00:00:00
extras                                        | 3.4 kB 00:00:00
ovirt-3.6                                     | 2.9 kB 00:00:00
ovirt-3.6-centos-gluster37                    | 2.9 kB 00:00:00
ovirt-3.6-epel/x86_64/metalink                | 22 kB 00:00:00
ovirt-3.6-epel                                | 4.3 kB 00:00:00
ovirt-3.6-patternfly1-noarch-epel             | 3.0 kB 00:00:00
updates                                       | 3.4 kB 00:00:00
virtio-win-stable                             | 3.0 kB 00:00:00
(1/12): base/7/x86_64/group_gz                | 155 kB 00:00:00
(2/12): extras/7/x86_64/primary_db            | 150 kB 00:00:00
(3/12): ovirt-3.6/7/primary_db                | 230 kB 00:00:00
(4/12): centos-ovirt36/x86_64/primary_db      | 125 kB 00:00:00
(5/12): ovirt-3.6-epel/x86_64/group_gz        | 170 kB 00:00:00
(6/12): ovirt-3.6-centos-gluster37/7/x86_64/primary_db | 53 kB 00:00:00
(7/12): ovirt-3.6-epel/x86_64/updateinfo      | 576 kB 00:00:00
(8/12): ovirt-3.6-patternfly1-noarch-epel/x86_64/primary_db | 2.2 kB 00:00:00
(9/12): base/7/x86_64/primary_db              | 5.3 MB 00:00:01
(10/12): virtio-win-stable/primary_db         | 2.0 kB 00:00:00
(11/12): updates/7/x86_64/primary_db          | 5.7 MB 00:00:01
(12/12): ovirt-3.6-epel/x86_64/primary_db     | 4.2 MB 00:00:01
Determining fastest mirrors
 * base: mirrors.prometeus.net
 * extras: mirrors.prometeus.net
 * ovirt-3.6: ftp.plusline.net
 * ovirt-3.6-epel: mirror.23media.de
 * updates: mirrors.prometeus.net
No packages marked for update
[root@ovengstand ~]#

So I do have to install ovirt-release40, as in a clean install, which means the guide page has to be changed.

[root@ovengstand ~]# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
Loaded plugins: fastestmirror, langpacks, versionlock
ovirt-release40.rpm                           | 8.0 kB 00:00:00
Examining /var/tmp/yum-root-pdbhfG/ovirt-release40.rpm: ovirt-release40-4.0.0-5.noarch
Marking /var/tmp/yum-root-pdbhfG/ovirt-release40.rpm to be installed
Re: [ovirt-users] 3.6 -> 4.0 upgrade fails on schema refresh
I just found out that the file 04_00_0140_convert_memory_snapshots_to_disks.sql is not located in: /usr/share/ovirt-engine/dbscripts/upgrade/ ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] 3.6 -> 4.0 upgrade fails on schema refresh
Hi, I'm upgrading to oVirt 4.0 from 3.6.7 and I haven't found any usable solution for this error and rollback:

[ ERROR ] schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_00_0140_convert_memory_snapshots_to_disks.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema refresh failed

I found one bug report, but it doesn't clear things up for me. Is this a known issue? Thanks, Matt ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] 'Image does not exist in domain' while moving disks
I first upgraded the manager to the latest 3.6, but nothing changed. It seems that upgrading the SPM host fixed the issue.

SPM node (CentOS Linux release 7.2.1511) packages:
ovirt-vmconsole-1.0.2-1.el7.centos.noarch
ovirt-release36-3.6.7-1.noarch
vdsm-jsonrpc-4.17.32-0.el7.centos.noarch
vdsm-4.17.32-0.el7.centos.noarch
vdsm-python-4.17.32-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.32-0.el7.centos.noarch
vdsm-xmlrpc-4.17.32-0.el7.centos.noarch
vdsm-yajsonrpc-4.17.32-0.el7.centos.noarch
vdsm-cli-4.17.32-0.el7.centos.noarch
vdsm-infra-4.17.32-0.el7.centos.noarch

Regards, Dael.

On 04/07/16 12:36, Dael Maselli wrote:

Hi, I'm trying to move disks between storage domains but I get this error: "VDSM command failed: Image does not exist in domain: u'image=6bf41b4e-3184-40c1-9db0-e304b39f34d0, domain=ae2ab96e-c2bf-44bb-beba-6c4d7cb9f5cc'"

I tried live and after shutting down the VMs. I also can't take snapshots of the same VMs. It happens with a lot of VMs, but not all. Here is the log on the SPM node:

e56e0346-7f23-4b40-bc9c-06bc8d32a4b4::ERROR::2016-07-04 11:48:47,279::blockVolume::459::Storage.Volume::(validateImagePath) Unexpected error
e56e0346-7f23-4b40-bc9c-06bc8d32a4b4::ERROR::2016-07-04 11:48:47,280::task::866::Storage.TaskManager.Task::(_setError) Task=`e56e0346-7f23-4b40-bc9c-06bc8d32a4b4`::Unexpected error
jsonrpc.Executor/7::ERROR::2016-07-04 11:48:53,310::hsm::1510::Storage.HSM::(deleteImage) Empty or not found image 6bf41b4e-3184-40c1-9db0-e304b39f34d0 in SD ae2ab96e-c2bf-44bb-beba-6c4d7cb9f5cc. {'6bd52f1c-d623-4d68-b3c6-a870c1daa9ce': ImgsPar(imgs=['c3b39b18-bb73-4743-9dec-719ed781b7d1'], parent='----'), '2fb324d2-da23-4a67-998f-8078b0c2b391': ImgsPar(imgs=['d7059fd4-f45b-4f1a-bf48-67b2a6f26994'], parent='----'), '1d971362-911b-43db-bed7-2871ea13807b': ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], parent='6a568c44-5280-4e96-b2b4-3fc7c1905d25'), 'a3dcc1ac-8683-4f31-9160-eb892ddb8a4f': ImgsPar(imgs=['fab23a76-7b8f-4e3c-b2ff-5d9525e2d173'], parent='----'), 'd211f70c-3df0-44a1-a792-2d638c6c139d': ImgsPar(imgs=['bb715d7f-2c66-4438-a222-69d0c1e4858b'], parent='----'), '0c24c0e1-03e9-4743-85c7-d10607d92735': ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], parent='0f27ed74-1193-4f37-8886-6679b2e6f230'), '1ec37fac-8894-4608-b14f-7236f365e6c3': ImgsPar(imgs=['9b6c698f-e0a2-4b56-b6b4-498f19aaaed0'], parent='----'), 'f3aa49d1-1760-4375-b1d8-396e618c0c8f': ImgsPar(imgs=['521febb4-db23-4f00-b592-bed7cb90e6be'], parent='----'), 'ca333dfc-6cd2-45f9-b7a2-9a546a9cfaad': ImgsPar(imgs=['3ea3326e-4a60-4d8c-b40d-7e448e54cc54'], parent='7716f924-09a1-4f9a-841d-be0104aa4b66'), 'ca193146-12b5-4530-8d64-d4597a7775dd': ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], parent='3142ec55-1809-4f35-adf3-dcea037d2432'), 'd41061ab-b8d2-45bf-9778-0ce8658d673c': ImgsPar(imgs=['65cdb44f-01b6-43b8-8abc-534d007b6b1e'], parent='----'), '45e3590c-5e4f-4baf-a809-e71707887c2a': ImgsPar(imgs=['c7275aaa-df06-4218-b125-6acb895df6e8'], parent='----'), '19355639-35fa-4ea9-b61a-3cbfb3407a17': ImgsPar(imgs=['204831e3-1664-4818-a7b8-8b3f0a4942c8'], parent='----'), '97fef998-7297-4946-9d2b-fd6cbb20f666': ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], parent='1d971362-911b-43db-bed7-2871ea13807b'), 'd6f6d76c-964b-45fc-9d6c-c7ca21fa464a': ImgsPar(imgs=['3ea3326e-4a60-4d8c-b40d-7e448e54cc54'], parent='fe5443cb-7789-42b2-be1d-8938a1f30bc3'), '5ccce161-0bc4-447e-8873-728c9757b2e8': ImgsPar(imgs=['3086ac4c-f0e1-4c2e-a186-8993dfb9ad8c'], parent='----'), 'b8b2ad19-99e1-4697-8ba2-89ef9135496e': ImgsPar(imgs=['d74a71dc-133e-46ba-90c3-038764006fc5'], parent='----'), 'cf9d937f-f306-4147-946a-4cdc7fcbcd6b': ImgsPar(imgs=['12037b0b-13e1-44a1-ba39-e397060e7598'], parent='----'), '746face0-5da3-48bb-a373-0823158559a5': ImgsPar(imgs=['204831e3-1664-4818-a7b8-8b3f0a4942c8'], parent='b4e3c7ad-89db-49ba-a04c-3b47fc9c80a7'), '6bf74f7f-8e6a-4156-bcc3-db9c207d2421': ImgsPar(imgs=['1cedd1ef-1657-48bd-9da5-bf2989511f6d'], parent='97fef998-7297-4946-9d2b-fd6cbb20f666'), 'f4ce6e1c-8c52-484b-a5d1-964873b0051b': ImgsPar(imgs=['728c6308-7090-45a9-869f-9a9cb51d3bf5'], parent='----'), '04a20a84-dd0e-421f-b47b-5136733a69fb': ImgsPar(imgs=['07d3b9a5-bece-458e-9e58-7c72593c91d5'], parent='----'), '2c60571d-46e2-438a-8492-3c34b2a86df4': ImgsPar(imgs=['3ea3326e-4a60-4d8c-b40d-7e448e54cc54'], parent='1464fa93-ba8a-4ba8-a09f-4686cd1d4437'), 'b4e3c7ad-89db-49ba-a04c-3b47fc9c80a7':
Re: [ovirt-users] what to do with 3.6 repos when upgrading to 4.0?
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road Building A, 4th floor
Ra'anana, Israel 4350109
Tel: +972 (9) 7692306 8272306
Email: yd...@redhat.com
IRC: ydary

On Tue, Jul 5, 2016 at 10:08 AM, Gianluca Cecchi wrote:
> Hello,
> having an engine at 3.6.5 and following
> https://www.ovirt.org/release/4.0.0/
> it is not clear in my opinion what to do with the 3.6 repos if I want to
> upgrade to 4.0.
> In fact, if you strictly follow what is indicated, you should only run
>
> yum update "ovirt-engine-setup*"
>
> but this eventually shows 3.6.7 packages.
> And I presume that, with the 3.6 repos in place and already being on
> 3.6.7, the command above doesn't bring you to 4.0.
>
> Also, you cannot run something like
>
> yum update ovirt-release
>
> because they are different packages:
> ovirt-release40
> and
> ovirt-release36
>
> In my flow I ran
>
> yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
> and then disabled ovirt-3.6; otherwise
> yum update "ovirt-engine-setup*"
>

Did you run "yum clean all"?

> anyway this brings in some packages from the 3.6 repos (3.6.7 I think) that seem
> to be identical to the 4.0 ones.
>
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Do vm's on host start up after crash if the Engine is down
The engine is needed for HA; this is why we have hosted engine, to provide HA for the engine as well.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road Building A, 4th floor
Ra'anana, Israel 4350109
Tel: +972 (9) 7692306 8272306
Email: yd...@redhat.com
IRC: ydary

On Mon, Jul 4, 2016 at 2:22 PM, Jonas Kirk Pedersen wrote:
> Hello, I am curious about what happens if oVirt and our hosts go down due
> to power failure and the ovirt-engine will not start up. Do the VMs
> that were on the hosts before the crash start up again when the hosts are
> back online, or do the hosts need the engine to start the VMs? Is there
> some kind of cache that tells the hosts to run these VMs? Maybe from
> vdsmd. This is not a hosted ovirt-engine, but dedicated hardware for the ovirt-engine.
>
> --
> Jonas Kirk Pedersen
> ASOM-Net
> Systemadministrator
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Is it possible to disable qxl video?
Adding spice-list.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road Building A, 4th floor
Ra'anana, Israel 4350109
Tel: +972 (9) 7692306 8272306
Email: yd...@redhat.com
IRC: ydary

On Mon, Jul 4, 2016 at 12:52 PM, Arman Khalatyan wrote:
>
> Hi,
> I am doing PCI passthrough for GPUs.
> Is it possible somehow to disable/remove the default qxl video device?
> Thanks,
> Arman.
>
> ***
> Dr. Arman Khalatyan eScience - SuperComputing
> Leibniz-Institut für Astrophysik Potsdam (AIP)
> An der Sternwarte 16, 14482 Potsdam, Germany
> ***
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Ovirt 3.6.x upgrade to 4.0: HE VM not shutdownable - hyperkonverged r3a1 setup
On Mon, Jul 4, 2016 at 12:02 PM, Roy Golan wrote:
> Reformatting, because the command I gave you was missing quotes:
>
> ```bash
> su - postgres
> psql engine -c "update vm_static set cluster_id = (select cluster_id from
> cluster where name = 'NEWCLUSTERNAME') where vm_name = 'HostedEngine'; "
> ```
>
> On Mon, Jul 4, 2016 at 12:02 PM, Roy Golan wrote:
>>
>> On Thu, Jun 30, 2016 at 12:33 PM, Roy Golan wrote:
>>>
>>> On Wed, Jun 29, 2016 at 10:25 AM, wrote:
>>>> Hello all, yesterday I upgraded my 3.6.x to 4.0 and got stuck, obviously not being able to shut down the HE VM itself ("shut down all VMs" is required for increasing the compatibility level to 4.0). As I luckily already had a 4th pure computing node, rgolan kindly walked me through the process of creating a new V4 cluster and moving that 4th node to it. The purpose was to free up the default cluster, to be prepared to increase to V4.
>>>> The next step was to migrate the HE VM to the new cluster ("automatically choose host", as there is only one host for that new cluster). Everything went well, but (as rgolan asked me to check) the cluster assignment for the HE VM in the edit VM dialogue still shows "Default", and an attempt to increase the compatibility of the "Default" cluster of course didn't work ("first shut down all VMs!"). The migration of another VM to that new cluster, on the other hand, worked without a glitch.
>>>> Attached is the engine.log. Relevant entries should start at around "2016-06-29 00:53:20"; it should be the time I created the new cluster and moved pure computing node "slp-ovirtnode-04" to it. Please ignore the glusterfs brick entries; I have additional disks ready to be added, but this is another story ;)
>>>> Hopefully there is a way to entirely move the HE VM to the new cluster. Thanks and regards mikelupe
>>>> ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
>>>
>>> At the end of the migration we prevent that action on the hosted engine
>>> vm specifically:
>>>
>>> 2016-06-28 21:55:20,734 WARN [org.ovirt.engine.core.bll.ChangeVMClusterCommand] (ForkJoinPool-1-worker-0) [3ce2a3ad] Validation of action 'ChangeVMCluster' failed for user SYSTEM. Reasons: VAR__ACTION__UPDATE,VAR__TYPE__VM__CLUSTER,ACTION_TYPE_FAILED_CANNOT_RUN_ACTION_ON_NON_MANAGED_VM
>>>
>>> I opened https://bugzilla.redhat.com/show_bug.cgi?id=1351533
>>>
>>> Mike, if you want, meanwhile I can supply a script to help you with that.
>>>
>> ```bash
>> su - postgres
>> psql engine -c "update vm_static set cluster_id = (select cluster_id from
>> cluster where name = 'NEWCLUSTERNAME') where vm_name = 'HostedEngine';
>>
>

One of your hosts is still reporting the engine VM as down, and the engine can't remove that VM. If that's the situation, go to that host and remove it manually:

vdsClient -s 0 list table

If it's there with status "Down":

vdsClient -s 0 destroy VMID

___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
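After running the update statement, the change can be verified with a read-only query against the same table (a sketch following the psql pattern above):

```bash
su - postgres -c "psql engine -c \"SELECT vm_name, cluster_id FROM vm_static WHERE vm_name = 'HostedEngine';\""
```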
[ovirt-users] messages file filled by vdsm logs in 4.0
Hello, since updating to 4.0 yesterday, my /var/log/messages on the hypervisor seems filled with vdsm messages.

sudo awk -F ":" '{print $4}' /var/log/messages | sort | uniq -c | sort -rnk 1,1

gives these top lines:

126052 INFO ovirt_hosted_engine_ha.broker.listener.ConnectionHandler
46278 INFO ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine
34364 INFO ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore
25773 INFO ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config
25737 INFO ovirt_hosted_engine_ha.lib.storage_server.StorageServer
16579 vdsm SchemaCache WARNING Parameter disktotal is not int type
16579 vdsm SchemaCache WARNING Parameter diskfree is not int type
16567 INFO mem_free.MemFree
15288 vdsm SchemaCache WARNING Following parameters ['isoprefix'] were not recognized
15194 vdsm SchemaCache WARNING Provided parameters {'displayInfo' [{'tlsPort'
13184 vdsm SchemaCache WARNING Following parameters ['type'] were not recognized
8574 INFO ovirt_hosted_engine_ha.lib.image.Image
8088 vdsm SchemaCache WARNING Required property allocType is not provided when calling Volume.getInfo
8088 vdsm SchemaCache WARNING Provided value "2" not defined in DiskType enum for Volume.getInfo
8088 vdsm SchemaCache WARNING Parameter truesize is not uint type
8088 vdsm SchemaCache WARNING Parameter mtime is not uint type
8088 vdsm SchemaCache WARNING Parameter ctime is not int type
8088 vdsm SchemaCache WARNING Parameter capacity is not uint type
8088 vdsm SchemaCache WARNING Parameter apparentsize is not uint type
8026 vdsm SchemaCache WARNING No default value specified for systemVersion parameter in Host.getHardwareInfo
8026 vdsm SchemaCache WARNING No default value specified for systemUUID parameter in Host.getHardwareInfo
8026 vdsm SchemaCache WARNING No default value specified for systemSerialNumber parameter in Host.getHardwareInfo
8026 vdsm SchemaCache WARNING No default value specified for systemProductName parameter in Host.getHardwareInfo
8026 vdsm SchemaCache WARNING No default value specified for systemManufacturer parameter in Host.getHardwareInfo
8026 vdsm SchemaCache WARNING No default value specified for systemFamily parameter in Host.getHardwareInfo
7998 vdsm vds.dispatcher WARNING unhandled close event
7989 vdsm vds.dispatcher ERROR SSL error during reading data (104, 'Connection reset by peer')
vdsm SchemaCache WARNING Parameter version is not int type
5096 vdsm SchemaCache WARNING Required property domainType is not provided when calling StoragePool.getInfo
5096 vdsm SchemaCache WARNING Parameter spmLver is not int type
5096 vdsm SchemaCache WARNING Parameter lver is not int type
3787 vdsm SchemaCache WARNING Provided parameters {'vcpuCount' '2', 'displayInfo'
3245 INFO cpu_load_no_engine.EngineHealth
3190 INFO ping.Ping
3011 INFO mgmt_bridge.MgmtBridge
2869 INFO engine_health.CpuLoadNoEngine
2681 vdsm SchemaCache WARNING Required property spm_id is not provided when calling StorageDomain.getInfo
2681 vdsm SchemaCache WARNING Required property master_ver is not provided when calling StorageDomain.getInfo
2681 vdsm SchemaCache WARNING Required property lver is not provided when calling StorageDomain.getInfo
2681 vdsm SchemaCache WARNING Required property domainType is not provided when calling StorageDomain.getInfo
2681 vdsm SchemaCache WARNING Required property domainClass is not provided when calling StorageDomain.getInfo
2681 vdsm SchemaCache WARNING Following parameters ['remotePath', 'type', 'class'] were not recognized
1335 vdsm SchemaCache WARNING Provided value "1" not defined in StorageDomainType enum for StoragePool.connectStorageServer
1333 vdsm SchemaCache WARNING Provided parameters {u'protocol_version' 3, u'connection'

If I limit to messages related to today (11 hours):

26108 INFO ovirt_hosted_engine_ha.broker.listener.ConnectionHandler
10905 vdsm SchemaCache WARNING Parameter disktotal is not int type
10905 vdsm SchemaCache WARNING Parameter diskfree is not int type
10036 vdsm SchemaCache WARNING Provided parameters {'displayInfo' [{'tlsPort'
10032 vdsm SchemaCache WARNING Following parameters ['isoprefix'] were not recognized
8636 vdsm SchemaCache WARNING Following parameters ['type'] were not recognized
5292 vdsm SchemaCache WARNING Required property allocType is not provided when calling Volume.getInfo
5292 vdsm SchemaCache WARNING Provided value "2" not defined in DiskType enum for Volume.getInfo
5292 vdsm SchemaCache WARNING Parameter truesize is not uint type
5292 vdsm SchemaCache WARNING Parameter mtime is not uint type
5292 vdsm SchemaCache WARNING Parameter ctime is not int type
5292 vdsm SchemaCache WARNING Parameter capacity is not uint type
5292 vdsm SchemaCache WARNING Parameter apparentsize is not uint type
5255 vdsm SchemaCache WARNING No default
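If the flood itself is the immediate problem, one stopgap (separate from fixing the underlying warnings) is to divert vdsm's syslog traffic away from /var/log/messages with an rsyslog rule; a sketch, with the file names chosen here being arbitrary:

```bash
# Route messages from programs named vdsm* to a dedicated file and stop
# further processing, so they no longer land in /var/log/messages.
cat > /etc/rsyslog.d/10-vdsm.conf <<'EOF'
if $programname startswith 'vdsm' then /var/log/vdsm-syslog.log
& stop
EOF
systemctl restart rsyslog
```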
[ovirt-users] serial console problem in 4.0
Hello, I have problems configuring and testing the serial console in 4.0. As soon as, in the web admin portal (connected as admin), I click on the top right "admin@internal-authz" --> options to add the public key, I get this in engine.log (no errors yet in the GUI):

2016-07-05 10:59:21,667 ERROR [org.ovirt.engine.core.bll.GetUserProfileQuery] (default task-63) [] Query 'GetUserProfileQuery' failed: PreparedStatementCallback; bad SQL grammar [select * from getuserprofilebyuserid(?)]; nested exception is org.postgresql.util.PSQLException: The column name user_portal_vm_auto_login was not found in this ResultSet.
2016-07-05 10:59:21,668 ERROR [org.ovirt.engine.core.bll.GetUserProfileQuery] (default task-63) [] Exception: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [select * from getuserprofilebyuserid(?)]; nested exception is org.postgresql.util.PSQLException: The column name user_portal_vm_auto_login was not found in this ResultSet.
 at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:99) [spring-jdbc.jar:4.2.4.RELEASE]
 at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73) [spring-jdbc.jar:4.2.4.RELEASE]
 at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) [spring-jdbc.jar:4.2.4.RELEASE]
 at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) [spring-jdbc.jar:4.2.4.RELEASE]
 at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:645) [spring-jdbc.jar:4.2.4.RELEASE]
 at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:680) [spring-jdbc.jar:4.2.4.RELEASE]
 at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:712) [spring-jdbc.jar:4.2.4.RELEASE]
 at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:762) [spring-jdbc.jar:4.2.4.RELEASE]
 at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:154) [dal.jar:]
 at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:120) [dal.jar:]
 at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) [spring-jdbc.jar:4.2.4.RELEASE]
 at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:147) [dal.jar:]
 at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:109) [dal.jar:]
 at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeRead(SimpleJdbcCallsHandler.java:101) [dal.jar:]
 at org.ovirt.engine.core.dao.UserProfileDaoImpl.getByUserId(UserProfileDaoImpl.java:49) [dal.jar:]
 at org.ovirt.engine.core.bll.GetUserProfileQuery.executeQueryCommand(GetUserProfileQuery.java:19) [bll.jar:]
 at org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:101) [bll.jar:]
 at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) [dal.jar:]
 at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:559) [bll.jar:]
 at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:530) [bll.jar:]
 at sun.reflect.GeneratedMethodAccessor72.invoke(Unknown Source) [:1.8.0_91]
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_91]
 at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_91]
 at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
 at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
 at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
 at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70) [wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
 at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80) [wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
 at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93) [wildfly-weld-10.0.0.Final.jar:10.0.0.Final]
 at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
 at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
 at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
 at org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13) [bll.jar:]
 at sun.reflect.GeneratedMethodAccessor70.invoke(Unknown Source) [:1.8.0_91]
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_91]
 at
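The exception says the result set of the getuserprofilebyuserid stored procedure lacks the user_portal_vm_auto_login column, i.e. the installed procedure and the schema the code expects are out of sync. One way to inspect what is actually installed (a diagnostic sketch; the user_profiles table name is inferred from UserProfileDaoImpl and is not confirmed):

```bash
su - postgres -c "psql engine -c '\df+ getuserprofilebyuserid'"
su - postgres -c "psql engine -c '\d user_profiles'"
```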
Re: [ovirt-users] Upgrade from 3.6 to 4.0
Two final steps I've done that are only necessary in my environment, where the host itself provides the NFS service for the storage domains.

After install you have to add a dependency so that the VDSM broker starts after the NFS server. In /usr/lib/systemd/system/ovirt-ha-broker.service I added, in the [Unit] section, the line:

After=nfs-server.service

Also, for the vdsmd service, in the vdsmd.service file I changed:

After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
 supervdsmd.service sanlock.service vdsm-network.service

to:

After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
 supervdsmd.service sanlock.service vdsm-network.service \
 nfs-server.service

NOTE: these files will be overwritten by future updates, so you have to keep that in mind... ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
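Since edits under /usr/lib/systemd/system are lost on package updates, the same ordering can instead be expressed as drop-in overrides, which survive updates (After= is additive in drop-ins); a sketch using the unit names above:

```bash
mkdir -p /etc/systemd/system/ovirt-ha-broker.service.d \
         /etc/systemd/system/vdsmd.service.d

cat > /etc/systemd/system/ovirt-ha-broker.service.d/nfs.conf <<'EOF'
[Unit]
After=nfs-server.service
EOF

cat > /etc/systemd/system/vdsmd.service.d/nfs.conf <<'EOF'
[Unit]
After=nfs-server.service
EOF

systemctl daemon-reload
```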
Re: [ovirt-users] Upgrade from 3.6 to 4.0
On Mon, Jul 4, 2016 at 1:56 PM, Arman Khalatyan wrote:
> About an hour ago I tested setting up oVirt 3.6 and then upgrading it; 2
> nodes: host and oVirt engine on separate machines.
> It upgraded very smoothly: first the engine upgrade, then the host upgrade.
> The only trouble on the hosts is that some packages still refer to the
> ovirt-3.6 repository even though ovirt-4.0 is there.
> It is simple to fix: yum list installed | grep ovirt-3.6, then remove all
> possible packages and
> reinstall the host from the GUI.
> That's it.

For the repo part I posted a question for clarification: http://lists.ovirt.org/pipermail/users/2016-July/040910.html - see there for follow-ups.

In my case, for a single-host environment with both host and hosted engine (deployed as appliance) on CentOS 7.2 and storage domains on NFS served by the host itself, this is the flow I followed to bring a 3.6.5 environment to 4.0. A remaining problem is the inability to upgrade the only existing cluster to 4.0 and then the datacenter to 4.0. This is currently not possible due to: https://bugzilla.redhat.com/show_bug.cgi?id=1351533

To update, I kept to what is explained in the thread started here: http://lists.ovirt.org/pipermail/users/2016-June/040649.html ("yum update doesn't propose ovirt 4.0 packages").

So I installed the oVirt 4.0 repo (as in the clean install description) and disabled the 3.6 ones.

yum update "ovirt-engine-setup*"

gives:

Dependencies Resolved

===
 Package                                            Arch     Version                         Repository   Size
===
Updating:
 ovirt-engine-setup                                 noarch   4.0.0.6-1.el7.centos            ovirt-4.0   8.6 k
 ovirt-engine-setup-base                            noarch   4.0.0.6-1.el7.centos            ovirt-4.0    95 k
 ovirt-engine-setup-plugin-ovirt-engine             noarch   4.0.0.6-1.el7.centos            ovirt-4.0   161 k
 ovirt-engine-setup-plugin-ovirt-engine-common      noarch   4.0.0.6-1.el7.centos            ovirt-4.0    80 k
 ovirt-engine-setup-plugin-vmconsole-proxy-helper   noarch   4.0.0.6-1.el7.centos            ovirt-4.0    27 k
 ovirt-engine-setup-plugin-websocket-proxy          noarch   4.0.0.6-1.el7.centos            ovirt-4.0    26 k
Installing for dependencies:
 antlr-tool                                         noarch   2.7.7-30.el7                    base        357 k
 apache-commons-collections                         noarch   3.2.1-22.el7_2                  updates     509 k
 bea-stax                                           noarch   1.2.0-9.el7                     base        176 k
 dom4j                                              noarch   1.6.1-20.el7                    base        277 k
 hsqldb                                             noarch   1:1.8.1.3-13.el7                base        950 k
 isorelax                                           noarch   1:0-0.15.release20050331.el7    base         75 k
 jaxen                                              noarch   1.1.3-11.el7                    base        204 k
 jdom                                               noarch   1.1.3-6.el7                     base        174 k
 msv-msv                                            noarch   1:2013.5.1-6.el7                base        3.7 M
 msv-xsdlib                                         noarch   1:2013.5.1-6.el7                base        1.1 M
 ovirt-engine-dwh                                   noarch   4.0.0-2.git38f5db5.el7.centos   ovirt-4.0   2.1 M
 ovirt-engine-dwh-setup                             noarch   4.0.0-2.git38f5db5.el7.centos   ovirt-4.0    69 k
 postgresql-jdbc                                    noarch   9.2.1002-5.el7                  base        447 k
 relaxngDatatype                                    noarch   1.0-11.el7                      base         15 k
 ws-jaxme                                           noarch   0.5.2-10.el7                    base        1.1 M
 xpp3                                               noarch   1.1.3.8-11.el7                  base        336 k
Updating for dependencies:
 otopi                                              noarch   1.5.0-1.el7.centos              ovirt-4.0   160 k
 otopi-java                                         noarch   1.5.0-1.el7.centos              ovirt-4.0    25 k
 ovirt-engine-lib                                   noarch   4.0.0.6-1.el7.centos            ovirt-4.0    28 k

Transaction Summary
===
Install  (16 Dependent packages)
Upgrade  6 Packages (+ 3 Dependent packages)

engine-setup brings in the DWH database that I didn't have in 3.6 and now seems to be required (probably for the dashboard?).

[root@ractorshe ~]# engine-setup
...
--== DATABASE CONFIGURATION ==--

Where is the
[ovirt-users] relationship between sanlock and wdmd services
Hello, sometimes one may need to keep the hypervisor up (CentOS 7.2 in my case) but with all oVirt-related services stopped. I see that the sanlock and wdmd systemd units are both part of the sanlock rpm package. In these cases, having only one single host in a lab environment, I follow this comment by Joop: http://lists.ovirt.org/pipermail/users/2016-June/040214.html

So, I stop all VMs, put the env in global maintenance and then, on the host:

systemctl stop ovirt-ha-agent
systemctl stop ovirt-ha-broker

shutdown engine vm

On the host again:

systemctl stop vdsmd
systemctl stop sanlock.service

At this point, sometimes I can work; sometimes, after some minutes, the host restarts itself, I presume due to wdmd. In fact I see in messages:

Jul 4 17:05:47 ractor wdmd[1258]: test failed rem 26 now 804 ping 760 close 770 renewal 697 expire 777 client 1285 sanlock_2025c2ea-6205-4bc1-b29d-745b47f8f806:1
Jul 4 17:05:48 ractor wdmd[1258]: test failed rem 25 now 805 ping 760 close 770 renewal 697 expire 777 client 1285 sanlock_2025c2ea-6205-4bc1-b29d-745b47f8f806:1
Jul 4 17:05:49 ractor wdmd[1258]: test failed rem 24 now 806 ping 760 close 770 renewal 697 expire 777 client 1285 sanlock_2025c2ea-6205-4bc1-b29d-745b47f8f806:1
Jul 4 17:05:50 ractor wdmd[1258]: test failed rem 23 now 807 ping 760 close 770 renewal 697 expire 777 client 1285 sanlock_2025c2ea-6205-4bc1-b29d-745b47f8f806:1
Jul 4 17:05:51 ractor wdmd[1258]: test failed rem 22 now 808 ping 760 close 770 renewal 697 expire 777 client 1285 sanlock_2025c2ea-6205-4bc1-b29d-745b47f8f806:1
Jul 4 17:05:51 ractor systemd[1]: wdmd.service stop-sigterm timed out. Killing.
Jul 4 17:05:51 ractor systemd[1]: wdmd.service: main process exited, code=killed, status=9/KILL
Jul 4 17:05:51 ractor systemd[1]: Stopped Watchdog Multiplexing Daemon.
Jul 4 17:05:51 ractor systemd[1]: Unit wdmd.service entered failed state.
Jul 4 17:05:51 ractor systemd[1]: wdmd.service failed.

In the systemd unit file for sanlock:

[Unit]
Description=Shared Storage Lease Manager
After=syslog.target
Wants=wdmd.service

Nothing special instead for wdmd. I also tried to stop it, but the server still rebooted. Also, it seems to me that sometimes sanlock is able to stop, and sometimes it exits with "failed". So the question is whether wdmd is able to be stopped, or if it has the same behavior as the old watchdogd on Linux.

Thanks in advance, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
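For reference, the full sequence described above in order, as one script (a sketch of the same procedure; hosted-engine provides commands for the maintenance and engine-VM steps, but note that stopping wdmd while sanlock still holds leases can still trigger the watchdog reboot described above):

```bash
# Put the hosted-engine deployment into global maintenance
hosted-engine --set-maintenance --mode=global

# Stop the HA services on the host
systemctl stop ovirt-ha-agent ovirt-ha-broker

# Shut down the engine VM, then stop vdsm and the lock/watchdog daemons
hosted-engine --vm-shutdown
systemctl stop vdsmd
systemctl stop sanlock wdmd
```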
Re: [ovirt-users] ovirt 4.0 hosted engine deploy
Thanks, I have solved this problem. It seems that IPv6 had not actually been disabled.

From: qinglong.d...@horebdata.cn
Date: 2016-07-05 14:57
To: Roy Golan
CC: users
Subject: Re: Re: [ovirt-users] ovirt 4.0 hosted engine deploy

Yes, IPv6 was enabled before. Now I think it has been disabled. Output of "vdsClient -s 0 getVdsCaps":

[root@node ~]# vdsClient -s 0 getVdsCaps
 HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:2bcb704cbce4'}]}
 ISCSIInitiatorName = 'iqn.1994-05.com.redhat:2bcb704cbce4'
 additionalFeatures = []
 autoNumaBalancing = 0
 bondings = {'bond0': {'active_slave': '', 'addr': '', 'cfg': {'BONDING_OPTS': 'mode=0', 'BOOTPROTO': 'none'}, 'dhcpv4': False, 'dhcpv6': False, 'gateway': '', 'hwaddr': 'd2:96:d0:98:31:62', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': True, 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'opts': {'mode': '0'}, 'slaves': [], 'switch': 'legacy'}}
 bridges = {}
 clusterLevels = ['3.5', '3.6', '4.0']
 cpuCores = '4'
 cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,lahf_lm,ida,arat,epb,pln,pts,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,fsgsbase,smep,erms,xsaveopt,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_IvyBridge,model_Westmere,model_n270,model_SandyBridge'
 cpuModel = 'Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70GHz'
 cpuSockets = '1'
 cpuSpeed = '2324.496'
 cpuThreads = '8'
 dnss = ['140.207.198.6']
 emulatedMachines = ['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.2.0', 'pc-i440fx-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc', 'pc-q35-rhel7.0.0', 'pc-q35-rhel7.1.0', 'q35', 'pc-i440fx-rhel7.2.0', 'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0']
 guestOverhead = '65'
 hooks = {'before_device_create': {'50_vmfex': {'md5': 'e05994261acaea7dcf4b88ea0e81f1f5'}}, 'before_device_migrate_destination': {'50_vmfex': {'md5': 'e05994261acaea7dcf4b88ea0e81f1f5'}}, 'before_nic_hotplug': {'50_vmfex': {'md5': 'e05994261acaea7dcf4b88ea0e81f1f5'}}, 'before_vm_start': {'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}}
 hostdevPassthrough = 'false'
 kdumpStatus = 0
 kernelArgs = 'BOOT_IMAGE=/vmlinuz-3.10.0-327.22.2.el7.x86_64 root=UUID=de9c1960-b3a0-435c-9e77-05b08ebda832 ro crashkernel=auto rhgb quiet LANG=en_US.UTF-8'
 kvmEnabled = 'true'
 lastClient = '::1'
 lastClientIface = 'lo'
 liveMerge = 'true'
 liveSnapshot = 'true'
 memSize = '31907'
 netConfigDirty = 'False'
 networks = {}
 nics = {'enp6s0f0': {'addr': '192.168.128.60', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DEVICE': 'enp6s0f0', 'IPADDR': '192.168.128.60', 'IPV4_FAILURE_FATAL': 'no', 'IPV6INIT': 'no', 'NAME': 'enp6s0f0', 'NETMASK': '255.255.255.0', 'ONBOOT': 'yes', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '64215e21-e12e-4a2b-9c1e-3e636df61487'}, 'dhcpv4': False, 'dhcpv6': False, 'gateway': '', 'hwaddr': '00:1e:67:a5:1b:ee', 'ipv4addrs': ['192.168.128.60/24'],
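For anyone hitting the same thing: whether IPv6 is really disabled on a NIC can be checked directly, independently of what getVdsCaps reports (the interface name below matches the output above):

```bash
sysctl net.ipv6.conf.all.disable_ipv6
sysctl net.ipv6.conf.enp6s0f0.disable_ipv6
ip -6 addr show dev enp6s0f0   # prints nothing if the NIC has no IPv6 addresses
```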
[ovirt-users] what to do with 3.6 repos when upgrading to 4.0?
Hello, having an engine at 3.6.5 and following https://www.ovirt.org/release/4.0.0/, it is not clear in my opinion what to do with the 3.6 repos if I want to upgrade to 4.0. In fact, if you strictly follow what is indicated, you should only run

yum update "ovirt-engine-setup*"

but this eventually shows 3.6.7 packages. And I presume that, with the 3.6 repos in place and already being on 3.6.7, the command above doesn't bring you to 4.0.

Also, you cannot run something like

yum update ovirt-release

because they are different packages: ovirt-release40 and ovirt-release36.

In my flow I ran

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm

and then disabled ovirt-3.6; otherwise

yum update "ovirt-engine-setup*"

anyway brings in some packages from the 3.6 repos (3.6.7 I think) that seem to be identical to the 4.0 ones.

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] ovirt 4.0 hosted engine deploy
Yes, IPv6 was enabled before. Now I think it has been disabled. Output of "vdsClient -s 0 getVdsCaps":

[root@node ~]# vdsClient -s 0 getVdsCaps
 HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:2bcb704cbce4'}]}
 ISCSIInitiatorName = 'iqn.1994-05.com.redhat:2bcb704cbce4'
 additionalFeatures = []
 autoNumaBalancing = 0
 bondings = {'bond0': {'active_slave': '', 'addr': '', 'cfg': {'BONDING_OPTS': 'mode=0', 'BOOTPROTO': 'none'}, 'dhcpv4': False, 'dhcpv6': False, 'gateway': '', 'hwaddr': 'd2:96:d0:98:31:62', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': True, 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'opts': {'mode': '0'}, 'slaves': [], 'switch': 'legacy'}}
 bridges = {}
 clusterLevels = ['3.5', '3.6', '4.0']
 cpuCores = '4'
 cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,lahf_lm,ida,arat,epb,pln,pts,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,fsgsbase,smep,erms,xsaveopt,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_IvyBridge,model_Westmere,model_n270,model_SandyBridge'
 cpuModel = 'Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70GHz'
 cpuSockets = '1'
 cpuSpeed = '2324.496'
 cpuThreads = '8'
 dnss = ['140.207.198.6']
 emulatedMachines = ['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.2.0', 'pc-i440fx-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc', 'pc-q35-rhel7.0.0', 'pc-q35-rhel7.1.0', 'q35', 'pc-i440fx-rhel7.2.0', 'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0']
 guestOverhead = '65'
 hooks = {'before_device_create': {'50_vmfex': {'md5': 'e05994261acaea7dcf4b88ea0e81f1f5'}}, 'before_device_migrate_destination': {'50_vmfex': {'md5': 'e05994261acaea7dcf4b88ea0e81f1f5'}}, 'before_nic_hotplug': {'50_vmfex': {'md5': 'e05994261acaea7dcf4b88ea0e81f1f5'}}, 'before_vm_start': {'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}}
 hostdevPassthrough = 'false'
 kdumpStatus = 0
 kernelArgs = 'BOOT_IMAGE=/vmlinuz-3.10.0-327.22.2.el7.x86_64 root=UUID=de9c1960-b3a0-435c-9e77-05b08ebda832 ro crashkernel=auto rhgb quiet LANG=en_US.UTF-8'
 kvmEnabled = 'true'
 lastClient = '::1'
 lastClientIface = 'lo'
 liveMerge = 'true'
 liveSnapshot = 'true'
 memSize = '31907'
 netConfigDirty = 'False'
 networks = {}
 nics = {'enp6s0f0': {'addr': '192.168.128.60', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DEVICE': 'enp6s0f0', 'IPADDR': '192.168.128.60', 'IPV4_FAILURE_FATAL': 'no', 'IPV6INIT': 'no', 'NAME': 'enp6s0f0', 'NETMASK': '255.255.255.0', 'ONBOOT': 'yes', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '64215e21-e12e-4a2b-9c1e-3e636df61487'}, 'dhcpv4': False, 'dhcpv6': False, 'gateway': '', 'hwaddr': '00:1e:67:a5:1b:ee', 'ipv4addrs': ['192.168.128.60/24'], 'ipv6addrs': [], 'ipv6autoconf': True, 'ipv6gateway': '::', 'mtu': '1500', 'netmask':