Re: [ovirt-users] Testing ovirt 3.6 Beta 3
On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi wrote:

> I will try this afternoon to do this, but just to clarify something.
>
> The hosted_engine setup creates its own DC, the hosted_DC, which contains
> the hosted engine storage domain. Am I correct?

No, ovirt-hosted-engine-setup doesn't create a special datacenter. The
default is to add the host to the Default datacenter in the default
cluster. You could choose a different one from ovirt-hosted-engine-setup;
simply import the hosted-engine storage domain into the datacenter of the
cluster you selected.

In setup there is a question like this:

  Local storage datacenter name is an internal name and currently will
  not be shown in engine's admin UI. Please enter local datacenter name

which asks about the 'Local storage datacenter', which is basically the
description we were using for the storage pool.

> If yes, where will I import the hosted-engine storage domain? Into the
> default DC?
>
> 2015-09-02 8:47 GMT+01:00 Roy Golan:
>
>> On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi wrote:
>>
>>> I could finally terminate the installation, but still no engine VM on
>>> the webui.
>>>
>>> I added a data domain, the default DC is up, but no engine VM.
>>
>> Good, now you need to import the HostedEngine storage domain. Try to go
>> to Storage -> Import Domain and put the path to the domain which you
>> used in the hosted-engine setup.
>>
>> After the domain is imported, the engine will be imported automatically.
>>
>> This whole process will become automatic eventually. (A patch is
>> currently being written.)
>>
>>> 2015-09-01 21:22 GMT+01:00 wodel youchi:
>>>
>>>> Something mounted on /rhev/data-center/mnt? I'm not sure.
>>>>
>>>> There were directories, and under these directories there were other
>>>> directories (dom_md, ha_agent, images), and under them there were
>>>> symbolic links to devices under /dev (ids, inbox, leases, etc.); the
>>>> devices pointed to the LVM partitions created by the setup.
>>>> But the mount command didn't show anything, unlike NFS: when I used
>>>> NFS, the mount and df commands did show the engine VM's mount point.
>>>>
>>>> 2015-09-01 20:16 GMT+01:00 Simone Tiraboschi:
>>>>
>>>>> On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> After removing the -x from the sql files, the installation
>>>>>> terminated successfully, but ...
>>>>>>
>>>>>> I had a problem with vdsm, an error about permission denied with
>>>>>> the KVM module, so I restarted my machine.
>>>>>> After the reboot the ovirt-ha-agent stops, complaining about the
>>>>>> vm.conf file not being present in /var/run/ovirt-hosted-engine-ha.
>>>>>>
>>>>>> And the mount command doesn't show any iSCSI mount; the disk is
>>>>>> detected via fdisk -l, and the lvs command returns all logical
>>>>>> volumes created.
>>>>>>
>>>>>> I think it's a mount problem, but since there are many LVs, I
>>>>>> don't know how to mount them manually.
>>>>>
>>>>> Do you have something mounted under /rhev/data-center/mnt?
>>>>> If not you probably hit this bug: https://bugzilla.redhat.com/1258465
>>>>>
>>>>>> LV                                   VG                                   Attr   LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
>>>>>> 3b894e23-429d-43bf-b6cd-6427a387799a 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao 128,00m
>>>>>> be78c0fd-52bf-445a-9555-64061029c2d9 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-   1,00g
>>>>>> c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-  40,00g
>>>>>> feede664-5754-4ca2-aeb3-af7aff32ed42 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 128,00m
>>>>>> ids                                  5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao 128,00m
>>>>>> inbox                                5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 128,00m
>>>>>> leases                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-   2,00g
>>>>>> master                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-   1,00g
>>>>>> metadata                             5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 512,00m
>>>>>> outbox                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 128,00m
>>>>>>
>>>>>> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi:
>>>>>>
>>>>>>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi wrote:
>>>>>>>
>>>>>>>> Hi again,
>>>>>>>>
>>>>>>>> I tried with the snapshot repository, but I am having this error
>>>>>>>> while executing engine-setup
>>>>>>>>
>>>>>>>> [ INFO ] Creating/refreshing Engi
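The recurring question in this thread — whether anything is actually mounted under /rhev/data-center/mnt — can be answered directly from /proc/mounts, which is authoritative on Linux even when a quick glance at `mount` or `df` output misses an entry. A minimal sketch (the path is the oVirt default; the helper name is my own invention):

```shell
# mounted_under PREFIX: print any /proc/mounts entries whose mount point
# starts with PREFIX; exit non-zero when there are none.
mounted_under() {
    awk -v p="$1" 'index($2, p) == 1 { print; found = 1 } END { exit !found }' /proc/mounts
}

if mounted_under /rhev/data-center/mnt; then
    echo "something is mounted under /rhev/data-center/mnt"
else
    echo "nothing mounted under /rhev/data-center/mnt (possibly bug 1258465)"
fi
```

If this prints nothing mounted while `lvs` shows the storage-domain LVs active, that matches the symptoms described above.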
Re: [ovirt-users] ovirt 3.6 Failed to execute stage 'Environment setup'
It turns out that there has been a problem in Intel's EFI on R1304GZ4GC
server systems. As long as EFI optimizations were on, dmidecode did not
work. Disabling EFI optimization is one option, but updating to the latest
firmware did the trick too in this case.

Cheers
Richard

On 09/01/2015 11:25 AM, Richard Neuboeck wrote:
> The dmidecode problem really only affects those three machines I'm
> trying to set up as ovirt hosts. Since I couldn't find anything
> wrong in the CentOS setup I tried Fedora 22 and got the same result:
> /dev/mem: Operation not permitted
>
> I'm sorry to have bothered you guys with this problem that is
> obviously not ovirt related. I'll try to find a solution and keep
> you posted.
>
> All the best
> Richard
>
> On 09/01/2015 11:14 AM, Simone Tiraboschi wrote:
>> Adding Oved
>>
>> On Tue, Sep 1, 2015 at 10:47 AM, Richard Neuboeck
>> <h...@tbi.univie.ac.at> wrote:
>>
>> On 09/01/2015 09:55 AM, Simone Tiraboschi wrote:
>> > Indeed you had:
>> > Thread-64::DEBUG::2015-08-31 12:33:15,127::utils::661::root::(execCmd)
>> >   /usr/bin/sudo -n /usr/sbin/dmidecode -s system-uuid (cwd None)
>> > Thread-64::DEBUG::2015-08-31 12:33:15,153::utils::679::root::(execCmd)
>> >   FAILED: <err> = '/dev/mem: Operation not permitted\n'; <rc> = 1
>> > Thread-64::WARNING::2015-08-31 12:33:15,154::utils::812::root::(getHostUUID)
>> >   Could not find host UUID.
>> >
>> > Can you please try executing?
>> > /usr/sbin/dmidecode -s system-uuid
>>
>> dmidecode always fails, and according to the things I've read this is
>> caused by the kernel restricting access to /dev/mem. But this
>> shouldn't affect the root user. It does anyway. I've tried with
>> SELinux on and off. Is there a way around this problem? So far I
>> didn't find anything really helpful by googling around. This problem
>> seems only to affect this kind of machine. CentOS 7.1 installations
>> with the same kernel on another machine let me run dmidecode without
>> problems. I guess I'm missing something.
>>
>> [root@cube-one tmp]# dmidecode
>> # dmidecode 2.12
>> # SMBIOS entry point at 0xbafbaca0
>> /dev/mem: Operation not permitted
>>
>> [root@cube-one tmp]# /usr/sbin/dmidecode -s system-uuid
>> /dev/mem: Operation not permitted
>>
>> dmidecode -s system-uuid is failing because it cannot read /dev/mem, so
>> VDSM fails to get the host UUID, and so vdscli raises an exception
>> about a None value on 'uuid' when hosted-engine calls getVdsCapabilities.
>> Any hint on that, or any workaround?
>>
>> [root@cube-one tmp]# grep DEVMEM /boot/config-3.10.0-229.11.1.el7.x86_64
>> CONFIG_STRICT_DEVMEM=y
>>
>> Cheers
>> Richard
>>
>> --
>> /dev/null

--
/dev/null

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
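For reference, the SMBIOS system UUID that `dmidecode -s system-uuid` reads through /dev/mem is normally also exported by the kernel at /sys/class/dmi/id/product_uuid, and CONFIG_STRICT_DEVMEM does not block sysfs reads. A hedged fallback sketch (the sysfs path is standard on recent kernels; whether VDSM can be made to use it is a separate question — the optional file argument exists only so the logic can be exercised against a test file):

```shell
# system_uuid [FILE]: print the SMBIOS system UUID from sysfs if readable,
# otherwise fall back to dmidecode (which needs /dev/mem access).
system_uuid() {
    uuid_file="${1:-/sys/class/dmi/id/product_uuid}"
    if [ -r "$uuid_file" ]; then
        cat "$uuid_file"
    else
        /usr/sbin/dmidecode -s system-uuid
    fi
}
```

Note that product_uuid is root-readable only, so this still has to run as root — but it sidesteps the STRICT_DEVMEM restriction entirely.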
Re: [ovirt-users] Testing ovirt 3.6 Beta 3
I will try this afternoon to do this, but just to clarify something.

The hosted_engine setup creates its own DC, the hosted_DC, which contains
the hosted engine storage domain. Am I correct?

If yes, where will I import the hosted-engine storage domain? Into the
default DC?

2015-09-02 8:47 GMT+01:00 Roy Golan:

> On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi wrote:
>
>> I could finally terminate the installation, but still no engine VM on
>> the webui.
>>
>> I added a data domain, the default DC is up, but no engine VM.
>
> Good, now you need to import the HostedEngine storage domain. Try to go
> to Storage -> Import Domain and put the path to the domain which you
> used in the hosted-engine setup.
>
> After the domain is imported, the engine will be imported automatically.
>
> This whole process will become automatic eventually. (A patch is
> currently being written.)
>
>> 2015-09-01 21:22 GMT+01:00 wodel youchi:
>>
>>> Something mounted on /rhev/data-center/mnt? I'm not sure.
>>>
>>> There were directories, and under these directories there were other
>>> directories (dom_md, ha_agent, images), and under them there were
>>> symbolic links to devices under /dev (ids, inbox, leases, etc.); the
>>> devices pointed to the LVM partitions created by the setup.
>>>
>>> But the mount command didn't show anything, unlike NFS: when I used
>>> NFS, the mount and df commands did show the engine VM's mount point.
>>>
>>> 2015-09-01 20:16 GMT+01:00 Simone Tiraboschi:
>>>
>>>> On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> After removing the -x from the sql files, the installation
>>>>> terminated successfully, but ...
>>>>>
>>>>> I had a problem with vdsm, an error about permission denied with
>>>>> the KVM module, so I restarted my machine.
>>>>> After the reboot the ovirt-ha-agent stops, complaining about the
>>>>> vm.conf file not being present in /var/run/ovirt-hosted-engine-ha.
>>>>>
>>>>> And the mount command doesn't show any iSCSI mount; the disk is
>>>>> detected via fdisk -l, and the lvs command returns all logical
>>>>> volumes created.
>>>>>
>>>>> I think it's a mount problem, but since there are many LVs, I don't
>>>>> know how to mount them manually.
>>>>
>>>> Do you have something mounted under /rhev/data-center/mnt?
>>>> If not you probably hit this bug: https://bugzilla.redhat.com/1258465
>>>>
>>>>> LV                                   VG                                   Attr   LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
>>>>> 3b894e23-429d-43bf-b6cd-6427a387799a 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao 128,00m
>>>>> be78c0fd-52bf-445a-9555-64061029c2d9 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-   1,00g
>>>>> c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-  40,00g
>>>>> feede664-5754-4ca2-aeb3-af7aff32ed42 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 128,00m
>>>>> ids                                  5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao 128,00m
>>>>> inbox                                5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 128,00m
>>>>> leases                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-   2,00g
>>>>> master                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-   1,00g
>>>>> metadata                             5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 512,00m
>>>>> outbox                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 128,00m
>>>>>
>>>>> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi:
>>>>>
>>>>>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi wrote:
>>>>>>
>>>>>>> Hi again,
>>>>>>>
>>>>>>> I tried with the snapshot repository, but I am having this error
>>>>>>> while executing engine-setup
>>>>>>>
>>>>>>> [ INFO ] Creating/refreshing Engine database schema
>>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': Command
>>>>>>>   '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
>>>>>>> [ INFO ] DNF Performing DNF transaction rollback
>>>>>>> [ INFO ] Rolling back database schema
>>>>>>> [ INFO ] Clearing Engine database engine
>>>>>>> [ ERROR ] Engine database rollback failed: must be owner of
>>>>>>>   schema pg_catalog
>>>>>>> [ INFO ] Stage: Clean up
>>>>>>>   Log file is located at
>>>>>>>   /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log
>>>>>>> [ INFO ] Generating answer file
>>>>>>>   '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf'
>>>>>>> [ INFO ] Stage: Pre-termination
>>>>>>> [ INFO ] Stage: Termination
>>>>>>> [ ERROR ] Execution of setup failed
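When engine-setup fails like this, the console output is only a summary; the underlying SQL error is in the log file whose path is printed in the output above. A small helper for pulling the error lines out of such a log (the function name and the 20-line cap are my own choices, not part of any oVirt tooling):

```shell
# show_errors LOGFILE: print the first ERROR/FATAL lines from an
# engine-setup (or schema.sh) log, with line numbers for context.
show_errors() {
    grep -nE 'ERROR|FATAL' "$1" | head -n 20
}
```

For example, against the log mentioned above:
show_errors /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log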
Re: [ovirt-users] Testing ovirt 3.6 Beta 3
On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi wrote:

> I could finally terminate the installation, but still no engine VM on
> the webui.
>
> I added a data domain, the default DC is up, but no engine VM.

Good, now you need to import the HostedEngine storage domain. Try to go to
Storage -> Import Domain and put the path to the domain which you used in
the hosted-engine setup.

After the domain is imported, the engine will be imported automatically.

This whole process will become automatic eventually. (A patch is currently
being written.)

> 2015-09-01 21:22 GMT+01:00 wodel youchi:
>
>> Something mounted on /rhev/data-center/mnt? I'm not sure.
>>
>> There were directories, and under these directories there were other
>> directories (dom_md, ha_agent, images), and under them there were
>> symbolic links to devices under /dev (ids, inbox, leases, etc.); the
>> devices pointed to the LVM partitions created by the setup.
>>
>> But the mount command didn't show anything, unlike NFS: when I used
>> NFS, the mount and df commands did show the engine VM's mount point.
>>
>> 2015-09-01 20:16 GMT+01:00 Simone Tiraboschi:
>>
>>> On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi wrote:
>>>
>>>> Hi,
>>>>
>>>> After removing the -x from the sql files, the installation terminated
>>>> successfully, but ...
>>>>
>>>> I had a problem with vdsm, an error about permission denied with the
>>>> KVM module, so I restarted my machine.
>>>> After the reboot the ovirt-ha-agent stops, complaining about the
>>>> vm.conf file not being present in /var/run/ovirt-hosted-engine-ha.
>>>>
>>>> And the mount command doesn't show any iSCSI mount; the disk is
>>>> detected via fdisk -l, and the lvs command returns all logical
>>>> volumes created.
>>>>
>>>> I think it's a mount problem, but since there are many LVs, I don't
>>>> know how to mount them manually.
>>>
>>> Do you have something mounted under /rhev/data-center/mnt?
>>> If not you probably hit this bug: https://bugzilla.redhat.com/1258465
>>>
>>>> LV                                   VG                                   Attr   LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
>>>> 3b894e23-429d-43bf-b6cd-6427a387799a 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao 128,00m
>>>> be78c0fd-52bf-445a-9555-64061029c2d9 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-   1,00g
>>>> c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-  40,00g
>>>> feede664-5754-4ca2-aeb3-af7aff32ed42 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 128,00m
>>>> ids                                  5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao 128,00m
>>>> inbox                                5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 128,00m
>>>> leases                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-   2,00g
>>>> master                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-   1,00g
>>>> metadata                             5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 512,00m
>>>> outbox                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a- 128,00m
>>>>
>>>> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi:
>>>>
>>>>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi wrote:
>>>>>
>>>>>> Hi again,
>>>>>>
>>>>>> I tried with the snapshot repository, but I am having this error
>>>>>> while executing engine-setup
>>>>>>
>>>>>> [ INFO ] Creating/refreshing Engine database schema
>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': Command
>>>>>>   '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
>>>>>> [ INFO ] DNF Performing DNF transaction rollback
>>>>>> [ INFO ] Rolling back database schema
>>>>>> [ INFO ] Clearing Engine database engine
>>>>>> [ ERROR ] Engine database rollback failed: must be owner of schema
>>>>>>   pg_catalog
>>>>>> [ INFO ] Stage: Clean up
>>>>>>   Log file is located at
>>>>>>   /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log
>>>>>> [ INFO ] Generating answer file
>>>>>>   '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf'
>>>>>> [ INFO ] Stage: Pre-termination
>>>>>> [ INFO ] Stage: Termination
>>>>>> [ ERROR ] Execution of setup failed
>>>>>>
>>>>>> and in the deployment log I have these errors:
>>>>>>
>>>>>> Saving custom users permissions on database objects...
>>>>>> upgrade script detected a change in Config, View or Stored
>>>>>> Procedure...
>>>>>> Running upgrade shell script
>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/_config.sql'...
>>>>>> Running upgrade shell script
>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'...
>>>>>> Running upgrade shell script