Re: [ovirt-users] Remove host from hosted engine configuration
On 06/16/2017 08:17 AM, Mike Farnam wrote:
> I had 3 hosts running in a hosted engine setup, oVirt Engine Version: 4.1.2.2-1.el7.centos, using FC storage. One of my hosts went unresponsive in the GUI, and attempts to bring it back were fruitless. I eventually decided to just remove it and have gotten it removed from the GUI, but it still shows in the "hosted-engine --vm-status" command on the other 2 hosts. The 2 good nodes show it as the following:
>
> --== Host 3 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : False
> Hostname : host3.my.lab
> Host ID : 3
> Engine status : unknown stale-data
> Score : 0
> stopped : False
> Local maintenance : True
> crc32 : bce9a8c5
> local_conf_timestamp : 2605898
> Host timestamp : 2605882
> Extra metadata (valid at timestamp):
>   metadata_parse_version=1
>   metadata_feature_version=1
>   timestamp=2605882 (Thu Jun 15 15:18:13 2017)
>   host-id=3
>   score=0
>   vm_conf_refresh_time=2605898 (Thu Jun 15 15:18:29 2017)
>   conf_on_shared_storage=True
>   maintenance=True
>   state=LocalMaintenance
>   stopped=False

You can use the command 'hosted-engine --clean-metadata --host-id= --force-clean' so that this node no longer shows up in hosted-engine --vm-status.

> How can I either remove this host altogether from the configuration, or repair it so that it is back in a good state? The host is up, but due to my removal attempts earlier, reports "unknown stale data" for all 3 hosts in the config.
>
> Thanks

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
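[As an aside: the stale host in the status output above can be spotted mechanically. The sketch below is an editor's helper, not part of any oVirt tooling; it simply assumes the textual layout of `hosted-engine --vm-status` shown in this thread, where a stale agent reports "Status up-to-date : False".]

```python
import re

def stale_hosts(vm_status_output):
    """Return the host IDs whose agent metadata is stale
    (Status up-to-date : False), i.e. candidates for --clean-metadata."""
    stale = []
    current = None
    for line in vm_status_output.splitlines():
        m = re.match(r"--== Host (\d+) status ==--", line.strip())
        if m:
            current = int(m.group(1))
        elif "Status up-to-date" in line and line.split(":")[-1].strip() == "False":
            if current is not None:
                stale.append(current)
    return stale

# Sample trimmed from the status block quoted in this thread.
sample = """\
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : host3.my.lab
"""
print(stale_hosts(sample))  # -> [3]
```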
Re: [ovirt-users] Deploy Ovirt VM's By Ansible Playbook Issue
On Thu, Jun 15, 2017 at 9:30 PM, khalid mahmood wrote:
> Dear Users
>
> *Procedure :*
> 1- create clean volume replica 2 distributed with glusterfs .

Only replica 3 or replica 3 with arbiter is supported as a storage domain in oVirt.

> 2- create clean ovirt-engine machine .
> 3- create clean vm from scratch then create template from this vm.
> 4- then create two vm from this template (vm1) & (vm2).
> 5- then delete the two vm .
> 6- create new two vm with the same name (vm1) & (vm2) from the template .
> 7- till now the two vm stable and work correctly .
> 8- repeat no (7) three time all vm's is working correctly .
>
> *issue :*
> i have ansible playbook to deploy vm's to our ovirt , my playbook use the
> above template to deploy the vm's .
> my issue is after ansible script deploy the vm's , all vm's disk crash and
> the template disk is crash also and the script make change into the
> template checksum hash .
>
> you can look at ansible parameters :
>
> - hosts: localhost
>   connection: local
>   gather_facts: false
>   tasks:
>     - name: entering
>       ovirt_auth:
>         url: https://ovirt-engine.elcld.net:443/ovirt-engine/api
>         username: admin@internal
>         password: pass
>         insecure: yes
>     - name: creating
>       ovirt_vms:
>         auth: "{{ ovirt_auth }}"
>         name: myvm05
>         template: mahdi
>         #state: present
>         cluster: Cluster02
>         memory: 4GiB
>         cpu_cores: 2
>         comment: Dev
>         #type: server
>         cloud_init:
>           host_name: vm01
>           user_name: root
>           root_password: pass
>           nic_on_boot: true
>           nic_boot_protocol: static
>           nic_name: eth0
>           dns_servers: 109.224.19.5
>           dns_search: elcld.net
>           nic_ip_address: 10.10.20.2
>           nic_netmask: 255.255.255.0
>           nic_gateway: 10.10.20.1
>     - name: Revoke
>       ovirt_auth:
>         state: absent
>         ovirt_auth: "{{ ovirt_auth }}"
>
> can you assist me with this issue by checking if that any missing in my
> ansible .
> best regards

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Error while installing oVirt Self Hosted Engine
Thanks - makes sense. I've worked on this a bit more and have pushed a bit further, but from looking through my new log, it looks like the engine is erroring out because my engine FQDN cannot be resolved to an IP address.

*The error:*
[ ERROR ] Host name is not valid: *engine.example.rocks* did not resolve into an IP address

*engine.example.rocks* is the FQDN I supplied when answering the following:

Please provide the FQDN you would like to use for the engine appliance.
> Note: This will be the FQDN of the engine VM you are now going to launch.
> It should not point to the base host or to any other existing machine.
> Engine VM FQDN: (leave it empty to skip):

*My /etc/hosts file:*
192.168.1.44 host.example.rocks host
192.168.1.45 engine.example.rocks engine

I can see why it's erroring, but I'm not sure what I need to do now to get it working. The IP 192.168.1.45 is one I just made up, because the only system I have access to is the one I'm currently using (192.168.1.44)

Jon

On Thu, Jun 15, 2017 at 12:01 PM, Simone Tiraboschi wrote:
>
> On Thu, Jun 15, 2017 at 4:08 PM, Jon Bornstein <bornstein.jonat...@gmail.com> wrote:
>
>> My lack of Linux proficiency is going to show here, but..
>>
>> I guess I'm a bit confused on how to correctly configure my network
>> interface(s) for oVirt.
>>
>> I currently have two network interfaces:
>>
>> enp0s25 -
>> This is my Ethernet interface, but it is unused. It currently is set to
>> DHCP and has no IP address. However, it is the only interface that oVirt
>> suggests I use when configuring which nic to set the bridge on.
>>
>> wlo1 -
>> My wireless interface, and IS how i'm connecting to the internet. This
>> is the IP address that I was using in my /etc/hosts file.
>>
>> Is it not possible to have a system that can run oVirt as well as
>> maintain an internet connection?
>
> oVirt by default works in bridge mode.
> This means that is going to create a bridge on your hosts and the vnic of > your VMs will be connected to that bridge as well. > > oVirt is composed by a central engine managing physical hosts trough an > agent deployed on each host. > So the engine has to be able to reach the managed hosts, this happens > though what we call management network. > > hosted-engine is special deployment where for ha reasons the oVirt engine > is going to run on a VM hosted on the host that it's managing. > So, wrapping up, with hosted-engine setup you are going to create a VM for > the engine, the engine VM will have a nic on the management network and > this mean that you have a management bridge on your host. > The host has to have an address over the management network in order to > have the engine able to reach your host. > > That's why hosted-engine-setup is checking the address of the interface > you choose for the management network. > > > >> >> On Thu, Jun 15, 2017 at 9:02 AM, Simone Tiraboschi >> wrote: >> >>> >>> >>> On Thu, Jun 15, 2017 at 2:55 PM, Jon Bornstein < >>> bornstein.jonat...@gmail.com> wrote: >>> Hi Marton, Here is the log: https://gist.github.com/a nonymous/ac777a70b8e8fc23016c0b6731f24706 >>> >>> >>> You tried to create the management bridge over enp0s25 but it wasn't >>> configured with an IP address for your host. >>> Could you please configure it or choose a correctly configured interface? 
>>> >>> >>> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >>> cloud_init._getMyIPAddress:115 Acquiring 'enp0s25' address >>> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >>> plugin.executeRaw:813 execute: ('/sbin/ip', 'addr', 'show', 'enp0s25'), >>> executable='None', cwd='None', env=None >>> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >>> plugin.executeRaw:863 execute-result: ('/sbin/ip', 'addr', 'show', >>> 'enp0s25'), rc=0 >>> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >>> plugin.execute:921 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25') >>> stdout: >>> 2: enp0s25: mtu 1500 qdisc >>> pfifo_fast state DOWN qlen 1000 >>> link/ether c4:34:6b:26:6a:d1 brd ff:ff:ff:ff:ff:ff >>> >>> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >>> plugin.execute:926 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25') >>> stderr: >>> >>> >>> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >>> cloud_init._getMyIPAddress:132 address: None >>> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:142 >>> method exception >>> Traceback (most recent call last): >>> File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, >>> in _executeMethod >>> method['method']() >>> File >>> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/cloud_init.py", >>> line 781, in _customize_vm_networking >>> self._customize_vm_addressing() >>> File >>>
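[Editor's sketch: the FQDN error reported earlier in this thread is just a name-lookup failure, and it can be sanity-checked before running the installer. The helper below is not part of hosted-engine-setup; it only mirrors the kind of check the setup performs, using the standard library resolver, which consults /etc/hosts as well as DNS.]

```python
import socket

def resolves(fqdn):
    """True if the name maps to an IP address via /etc/hosts or DNS."""
    try:
        socket.gethostbyname(fqdn)
        return True
    except socket.gaierror:
        return False

print(resolves("localhost"))  # -> True
# resolves("engine.example.rocks") will be False unless you add the
# engine's address to /etc/hosts (or DNS) on the host *before* deploying.
```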
[ovirt-users] Remove host from hosted engine configuration
I had 3 hosts running in a hosted engine setup, oVirt Engine Version: 4.1.2.2-1.el7.centos, using FC storage. One of my hosts went unresponsive in the GUI, and attempts to bring it back were fruitless. I eventually decided to just remove it and have gotten it removed from the GUI, but it still shows in the "hosted-engine --vm-status" command on the other 2 hosts. The 2 good nodes show it as the following:

--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date : False
Hostname : host3.my.lab
Host ID : 3
Engine status : unknown stale-data
Score : 0
stopped : False
Local maintenance : True
crc32 : bce9a8c5
local_conf_timestamp : 2605898
Host timestamp : 2605882
Extra metadata (valid at timestamp):
  metadata_parse_version=1
  metadata_feature_version=1
  timestamp=2605882 (Thu Jun 15 15:18:13 2017)
  host-id=3
  score=0
  vm_conf_refresh_time=2605898 (Thu Jun 15 15:18:29 2017)
  conf_on_shared_storage=True
  maintenance=True
  state=LocalMaintenance
  stopped=False

How can I either remove this host altogether from the configuration, or repair it so that it is back in a good state? The host is up, but due to my removal attempts earlier, reports "unknown stale data" for all 3 hosts in the config.

Thanks

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] [ovirt-devel] Lowering the bar for wiki contribution?
Hi all,

Came back to this thread due to a need to post some design documentation. After fetching the ovirt-site and looking up where to start the document, I remembered why I stopped using it.

After exploring several options, including the GitHub wiki, I think that for the development documentation we can just go with the minimum: use a repo to just hold markdown and image files, letting GitHub's rendering/view of such files do the job for us. We can still review the documents and have discussions on the content, and provide access to all who want to use it (to perform the merges). The fact that it uses markdown and images means its content can be relocated to any other solution that comes along later, including adding the content back to ovirt-site.

Here is a simple example: https://github.com/EdDev/ovirt-devwiki/blob/initial-structure/index.md

It uses simple markdown md files with relative links to other pages. Adding images is also simple.

What do you think?

Thanks,
Edy.

On Tue, Feb 7, 2017 at 12:42 PM, Michal Skrivanek <michal.skriva...@redhat.com> wrote:
>
> On 16 Jan 2017, at 11:13, Roy Golan wrote:
>
> On 11 January 2017 at 17:06, Marc Dequènes (Duck) wrote:
>
>> Quack,
>>
>> On 01/08/2017 06:39 PM, Barak Korren wrote:
>> > On 8 January 2017 at 10:17, Roy Golan wrote:
>> >> Adding infra which I forgot to add from the beginning
>>
>> Thanks.
>>
>> > I don't think this is an infra issue, more of a community/working
>> > procedures one.
>>
>> I do think it is. We are involved in the tooling, for their maintenance,
>> for documenting where things are, for suggesting better solutions,
>> ensuring security…
>>
>> > On the one hand, the developers need a place where they create and
>> > discuss design documents and road maps. That place needs to be as
>> > friction-free as possible to allow developers to work on the code
>> > instead of on the documentation tools.
>> >> As for code, I think there is need for review, even more for design >> documents, so I don't see why people are bothered by PRs, which is a >> tool they already know fairly well. >> > > because it takes ages to get attention and get it in, even in cases when > the text/update is more of an FYI and doesn’t require feedback. > That leads to frustration, and that leads to loss of any motivation to > contribute anything at all. > You can see people posting on their own platforms, blogs, just to run away > from this one > > >> For people with few git knowledge, the GitHub web interface allows to >> edit files. >> >> > On the other hand, the user community needs a good, up to date source >> > of information about oVirt and how to use it. >> >> Yes, this official entry point and it needs to be clean. >> > > yep, you’re right about the entry point -like pages > > >> > Having said the above, I don't think the site project's wiki is the >> > best place for this. The individual project mirrors on GitHub may be >> > better for this >> >> We could indeed split the technical documentation. If people want to >> experiment with the GH wiki pages, I won't interfere. >> >> I read several people in this thread really miss the old wiki, so I >> think it is time to remember why we did not stay in paradise. I was not >> there at the time, but I know the wiki was not well maintained. People >> are comparing our situation to the MediaWiki site, but the workforce is >> nowhere to be compared. There is already no community manager, and noone >> is in charge of any part really, whereas Mediawiki has people in charge >> of every corner of the wiki. Also they developed tools over years to >> monitor, correct, revert… and we don't have any of this. So without any >> process then it was a total mess. More than one year later there was >> still much cleanup to do, and having contributed to it a little bit, I >> fear a sentimental rush to go back to a solution that was abandoned. 
>> > > it was also a bit difficult to edit, plus a barrier of ~1 month it took to > get an account > > >> Having a header telling if this is a draft or published is far from >> being sufficient. If noone cares you just pile up content that gets >> obsolete, then useless, then misleading for newcomers. You may prefer >> review a posteriori, but in this case you need to have the proper tool >> to be able to search for things to be reviewed, and a in-content >> pseudo-header is really not an easy way to get a todolist. >> >> As for the current builder, it checks every minute for new content to >> build. The current tool (Middleman) is a bit slow, and the machine is >> not ultra speedy, but even in the worst case it should not take more >> than half an hour to see the published result. So I don't know why >> someone suggested to build "at least once a day". There is also an >> experimentation to improve this part. >> >> So to sum up: >> - the most needed thing here is not a tool but people in charge to >> review the content
[ovirt-users] Deploy Ovirt VM's By Ansible Playbook Issue
Dear Users

Procedure :
1- create clean volume replica 2 distributed with glusterfs .
2- create clean ovirt-engine machine .
3- create clean vm from scratch then create template from this vm.
4- then create two vm from this template (vm1) & (vm2).
5- then delete the two vm .
6- create new two vm with the same name (vm1) & (vm2) from the template .
7- till now the two vm stable and work correctly .
8- repeat no (7) three time all vm's is working correctly .

issue :
i have ansible playbook to deploy vm's to our ovirt , my playbook use the above template to deploy the vm's .
my issue is after ansible script deploy the vm's , all vm's disk crash and the template disk is crash also and the script make change into the template checksum hash .

you can look at ansible parameters :

- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: entering
      ovirt_auth:
        url: https://ovirt-engine.elcld.net:443/ovirt-engine/api
        username: admin@internal
        password: pass
        insecure: yes
    - name: creating
      ovirt_vms:
        auth: "{{ ovirt_auth }}"
        name: myvm05
        template: mahdi
        #state: present
        cluster: Cluster02
        memory: 4GiB
        cpu_cores: 2
        comment: Dev
        #type: server
        cloud_init:
          host_name: vm01
          user_name: root
          root_password: pass
          nic_on_boot: true
          nic_boot_protocol: static
          nic_name: eth0
          dns_servers: 109.224.19.5
          dns_search: elcld.net
          nic_ip_address: 10.10.20.2
          nic_netmask: 255.255.255.0
          nic_gateway: 10.10.20.1
    - name: Revoke
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"

can you assist me with this issue by checking if that any missing in my ansible .

best regards

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Error while installing oVirt Self Hosted Engine
On Thu, Jun 15, 2017 at 4:08 PM, Jon Bornstein wrote:
> My lack of Linux proficiency is going to show here, but..
>
> I guess I'm a bit confused on how to correctly configure my network
> interface(s) for oVirt.
>
> I currently have two network interfaces:
>
> enp0s25 -
> This is my Ethernet interface, but it is unused. It currently is set to
> DHCP and has no IP address. However, it is the only interface that oVirt
> suggests I use when configuring which nic to set the bridge on.
>
> wlo1 -
> My wireless interface, and IS how i'm connecting to the internet. This is
> the IP address that I was using in my /etc/hosts file.
>
> Is it not possible to have a system that can run oVirt as well as maintain
> an internet connection?

oVirt by default works in bridge mode. This means that it is going to create a bridge on your hosts, and the vNICs of your VMs will be connected to that bridge as well.

oVirt is composed of a central engine managing physical hosts through an agent deployed on each host. So the engine has to be able to reach the managed hosts; this happens through what we call the management network.

hosted-engine is a special deployment where, for HA reasons, the oVirt engine runs on a VM hosted on the host that it is managing. So, wrapping up, with a hosted-engine setup you are going to create a VM for the engine, the engine VM will have a NIC on the management network, and this means that you have a management bridge on your host. The host has to have an address on the management network in order for the engine to be able to reach your host.

That's why hosted-engine-setup is checking the address of the interface you choose for the management network.
> > On Thu, Jun 15, 2017 at 9:02 AM, Simone Tiraboschi > wrote: > >> >> >> On Thu, Jun 15, 2017 at 2:55 PM, Jon Bornstein < >> bornstein.jonat...@gmail.com> wrote: >> >>> Hi Marton, >>> >>> Here is the log: https://gist.github.com/anonymous/ac777a70b8e8fc23016c0 >>> b6731f24706 >>> >> >> >> You tried to create the management bridge over enp0s25 but it wasn't >> configured with an IP address for your host. >> Could you please configure it or choose a correctly configured interface? >> >> >> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >> cloud_init._getMyIPAddress:115 Acquiring 'enp0s25' address >> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >> plugin.executeRaw:813 execute: ('/sbin/ip', 'addr', 'show', 'enp0s25'), >> executable='None', cwd='None', env=None >> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >> plugin.executeRaw:863 execute-result: ('/sbin/ip', 'addr', 'show', >> 'enp0s25'), rc=0 >> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >> plugin.execute:921 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25') >> stdout: >> 2: enp0s25: mtu 1500 qdisc >> pfifo_fast state DOWN qlen 1000 >> link/ether c4:34:6b:26:6a:d1 brd ff:ff:ff:ff:ff:ff >> >> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >> plugin.execute:926 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25') >> stderr: >> >> >> 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init >> cloud_init._getMyIPAddress:132 address: None >> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:142 method >> exception >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in >> _executeMethod >> method['method']() >> File >> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/cloud_init.py", >> line 781, in _customize_vm_networking >> self._customize_vm_addressing() >> File >> 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/cloud_init.py", >> line 215, in _customize_vm_addressing >> my_ip = self._getMyIPAddress() >> File >> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/cloud_init.py", >> line 136, in _getMyIPAddress >> _('Cannot acquire nic/bridge address') >> RuntimeError: Cannot acquire nic/bridge address >> >> >> >> >>> >>> On Thu, Jun 15, 2017 at 4:13 AM, Maton, Brett >>> wrote: >>> Hi Jon, There will be people on this list far more able to help you than I can, but the contents of the engine setup log (/var/log/ovirt-hosted-engine- setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log) would help On 14 June 2017 at 21:56, Jon Bornstein wrote: > This is roughly my 10th attempt at installing some version of oVirt > and I think where I get stuck every time is with the networking aspect. > > I'm simply trying to test out oVirt on an old laptop wiht 16GB RAM and > an external HDD. I'm connected via WiFi. > > After failing 100 times with the regular ovirt engine, It was >
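[Editor's sketch: the log quoted above shows why the deploy failed. `ip addr show enp0s25` printed a link in state DOWN with no `inet` line at all, so `_getMyIPAddress` returned None. Whether an interface is usable for the management bridge can be read straight off that output; the helper below is an illustration of that check, not part of any oVirt code.]

```python
def has_address(ip_addr_output):
    """True if `ip addr show <iface>` output contains an inet/inet6
    address line, i.e. the interface actually has an IP configured."""
    return any(line.strip().startswith(("inet ", "inet6 "))
               for line in ip_addr_output.splitlines())

# The exact stdout quoted in the setup log: state DOWN, link line only.
enp0s25 = """\
2: enp0s25: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether c4:34:6b:26:6a:d1 brd ff:ff:ff:ff:ff:ff
"""
print(has_address(enp0s25))  # -> False
```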
[ovirt-users] HostedEngine VM not visible, but running
Hi, I've migrated from a bare-metal engine to a hosted engine. There were no errors during the install, however, the hosted engine did not get started. I tried running: hosted-engine --status on the host I deployed it on, and it returns nothing (exit code is 1 however). I could not ping it either. So I tried starting it via 'hosted-engine --vm-start' and it returned: Virtual machine does not exist But it then became available. I logged into it successfully. It is not in the list of VMs however. Any ideas why the hosted-engine commands fail, and why it is not in the list of virtual machines? Thanks for any help, Cam ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] moVirt 2.0 RC 1 released!
You can have only one version installed...

On 15 Jun 2017 5:25 pm, "Gianluca Cecchi" wrote:
> On Thu, Jun 15, 2017 at 4:06 PM, Filip Krepinsky wrote:
>
>> Hia,
>>
>> the first RC of moVirt 2.0 has been released!
>>
>> You can get it from our GitHub [1]; the play store will be upgraded
>> after considered stable.
>>
>> The main feature of this release is a support for managing multiple oVirt
>> installations from one moVirt.
>
> Nice!
> Do I have to deinstall current one to test it or can I install both
> versions together?
>
> Gianluca

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] moVirt 2.0 RC 1 released!
On Thu, Jun 15, 2017 at 4:06 PM, Filip Krepinsky wrote:
> Hia,
>
> the first RC of moVirt 2.0 has been released!
>
> You can get it from our GitHub [1]; the play store will be upgraded after
> considered stable.
>
> The main feature of this release is a support for managing multiple oVirt
> installations from one moVirt.

Nice!
Do I have to deinstall the current one to test it, or can I install both versions together?

Gianluca

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Error while installing oVirt Self Hosted Engine
My lack of Linux proficiency is going to show here, but..

I guess I'm a bit confused on how to correctly configure my network interface(s) for oVirt.

I currently have two network interfaces:

enp0s25 -
This is my Ethernet interface, but it is unused. It currently is set to DHCP and has no IP address. However, it is the only interface that oVirt suggests I use when configuring which nic to set the bridge on.

wlo1 -
My wireless interface, and IS how i'm connecting to the internet. This is the IP address that I was using in my /etc/hosts file.

Is it not possible to have a system that can run oVirt as well as maintain an internet connection?

On Thu, Jun 15, 2017 at 9:02 AM, Simone Tiraboschi wrote:
>
> On Thu, Jun 15, 2017 at 2:55 PM, Jon Bornstein <bornstein.jonat...@gmail.com> wrote:
>
>> Hi Marton,
>>
>> Here is the log: https://gist.github.com/anonymous/ac777a70b8e8fc23016c0b6731f24706
>
> You tried to create the management bridge over enp0s25 but it wasn't
> configured with an IP address for your host.
> Could you please configure it or choose a correctly configured interface?
> > > 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init > cloud_init._getMyIPAddress:115 Acquiring 'enp0s25' address > 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init > plugin.executeRaw:813 execute: ('/sbin/ip', 'addr', 'show', 'enp0s25'), > executable='None', cwd='None', env=None > 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init > plugin.executeRaw:863 execute-result: ('/sbin/ip', 'addr', 'show', > 'enp0s25'), rc=0 > 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init > plugin.execute:921 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25') > stdout: > 2: enp0s25: mtu 1500 qdisc pfifo_fast > state DOWN qlen 1000 > link/ether c4:34:6b:26:6a:d1 brd ff:ff:ff:ff:ff:ff > > 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init > plugin.execute:926 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25') > stderr: > > > 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init > cloud_init._getMyIPAddress:132 address: None > 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:142 method > exception > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in > _executeMethod > method['method']() > File "/usr/share/ovirt-hosted-engine-setup/scripts/../ > plugins/gr-he-common/vm/cloud_init.py", line 781, in > _customize_vm_networking > self._customize_vm_addressing() > File "/usr/share/ovirt-hosted-engine-setup/scripts/../ > plugins/gr-he-common/vm/cloud_init.py", line 215, in > _customize_vm_addressing > my_ip = self._getMyIPAddress() > File "/usr/share/ovirt-hosted-engine-setup/scripts/../ > plugins/gr-he-common/vm/cloud_init.py", line 136, in _getMyIPAddress > _('Cannot acquire nic/bridge address') > RuntimeError: Cannot acquire nic/bridge address > > > > >> >> On Thu, Jun 15, 2017 at 4:13 AM, Maton, Brett >> wrote: >> >>> Hi Jon, >>> >>> There will be people on this list far more able to help you than I >>> can, but 
the contents of the engine setup log (/var/log/ovirt-hosted-engine- >>> setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log) would help >>> >>> On 14 June 2017 at 21:56, Jon Bornstein >>> wrote: >>> This is roughly my 10th attempt at installing some version of oVirt and I think where I get stuck every time is with the networking aspect. I'm simply trying to test out oVirt on an old laptop wiht 16GB RAM and an external HDD. I'm connected via WiFi. After failing 100 times with the regular ovirt engine, It was recommended I try the self-hosted engine. ** Anyway, here are the contents of my /etc/hosts file: 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 10.114.13.144 host.example.rocks host 10.114.13.145 engine.example.rocks engine ** After running hosted-engine --deploy, I get to the prompt below in which the installer fails afterwards: You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:5b:3a:73]: [ ERROR ] Failed to execute stage 'Environment customization': Cannot acquire nic/bridge address [ INFO ] Stage: Clean up [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine- setup/answers/answers-20170614163222.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Hosted Engine deployment failed Log file is located at /var/log/ovirt-hosted-engine-s etup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log **
[ovirt-users] moVirt 2.0 RC 1 released!
Hia,

the first RC of moVirt 2.0 has been released!

You can get it from our GitHub [1]; the Play Store will be updated once the release is considered stable.

The main feature of this release is support for managing multiple oVirt installations from one moVirt.

Other changes:
- actions/lists/dashboard are now filtered by selecting accounts/clusters in the left drawer
- event lists now correctly working for single vm, host, etc..
- better syncing and network detection
- cleaner ui design and connection settings
- enhanced security and support for certificates (import from file)
- sorting and filtering made more usable
- more descriptive errors
- other small enhancements and bug fixes

We would greatly appreciate it if you could try it and share your feedback with us. [2] Let us know about any suggestions you might have; and as usual, patches are also welcome :)

Have a great day
Filip

[1]: https://github.com/oVirt/moVirt/releases/download/v2.0-rc1/moVirt-release.apk
[2]: https://github.com/oVirt/moVirt

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Hosted engine
Good morning,

Requested info below, along with some additional info. You'll notice the data volume is not mounted. Any help in getting HE back running would be greatly appreciated.

Thank you,
Joel

[root@ovirt-hyp-01 ~]# hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt-hyp-01.example.lan
Host ID : 1
Engine status : unknown stale-data
Score : 3400
stopped : False
Local maintenance : False
crc32 : 5558a7d3
local_conf_timestamp : 20356
Host timestamp : 20341
Extra metadata (valid at timestamp):
  metadata_parse_version=1
  metadata_feature_version=1
  timestamp=20341 (Fri Jun 9 14:38:57 2017)
  host-id=1
  score=3400
  vm_conf_refresh_time=20356 (Fri Jun 9 14:39:11 2017)
  conf_on_shared_storage=True
  maintenance=False
  state=EngineDown
  stopped=False

--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt-hyp-02.example.lan
Host ID : 2
Engine status : unknown stale-data
Score : 3400
stopped : False
Local maintenance : False
crc32 : 936d4cf3
local_conf_timestamp : 20351
Host timestamp : 20337
Extra metadata (valid at timestamp):
  metadata_parse_version=1
  metadata_feature_version=1
  timestamp=20337 (Fri Jun 9 14:39:03 2017)
  host-id=2
  score=3400
  vm_conf_refresh_time=20351 (Fri Jun 9 14:39:17 2017)
  conf_on_shared_storage=True
  maintenance=False
  state=EngineDown
  stopped=False

--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt-hyp-03.example.lan
Host ID : 3
Engine status : unknown stale-data
Score : 3400
stopped : False
Local maintenance : False
crc32 : f646334e
local_conf_timestamp : 20391
Host timestamp : 20377
Extra metadata (valid at timestamp):
  metadata_parse_version=1
  metadata_feature_version=1
  timestamp=20377 (Fri Jun 9 14:39:37 2017)
  host-id=3
  score=3400
  vm_conf_refresh_time=20391 (Fri Jun 9 14:39:51 2017)
  conf_on_shared_storage=True
  maintenance=False
  state=EngineStop
  stopped=False
  timeout=Thu Jan 1 00:43:08 1970

[root@ovirt-hyp-01 ~]# gluster peer status
Number of Peers: 2

Hostname: 192.168.170.143
Uuid: b2b30d05-cf91-4567-92fd-022575e082f5
State: Peer in Cluster (Connected)
Other names: 10.0.0.2

Hostname: 192.168.170.147
Uuid: 4e50acc4-f3cb-422d-b499-fb5796a53529
State: Peer in Cluster (Connected)
Other names: 10.0.0.3

[root@ovirt-hyp-01 ~]# gluster volume info all

Volume Name: data
Type: Replicate
Volume ID: 1d6bb110-9be4-4630-ae91-36ec1cf6cc02
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.170.141:/gluster_bricks/data/data
Brick2: 192.168.170.143:/gluster_bricks/data/data
Brick3: 192.168.170.147:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable

Volume Name: engine
Type: Replicate
Volume ID: b160f0b2-8bd3-4ff2-a07c-134cab1519dd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.170.141:/gluster_bricks/engine/engine
Brick2: 192.168.170.143:/gluster_bricks/engine/engine
Brick3: 192.168.170.147:/gluster_bricks/engine/engine (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
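[Editor's sketch: the "Number of Bricks: 1 x (2 + 1) = 3" lines in the gluster output above encode the replica layout (2 data bricks + 1 arbiter = replica 3 with arbiter, one of the replicated layouts oVirt supports as a storage domain, as noted elsewhere in this digest). The parser below is a hypothetical helper for reading that line, not part of gluster or oVirt.]

```python
import re

def brick_layout(volume_info):
    """Return (replica_count, has_arbiter) parsed from a
    `gluster volume info` "Number of Bricks" line, or None."""
    # "N x (D + A) = T" form: D data bricks plus A arbiter bricks per subvolume.
    m = re.search(r"Number of Bricks:\s*\d+ x \((\d+) \+ (\d+)\) = \d+", volume_info)
    if m:
        data, arbiter = int(m.group(1)), int(m.group(2))
        return data + arbiter, arbiter > 0
    # Plain "N x R = T" form: R-way replica, no arbiter.
    m = re.search(r"Number of Bricks:\s*\d+ x (\d+) = \d+", volume_info)
    if m:
        return int(m.group(1)), False
    return None

print(brick_layout("Number of Bricks: 1 x (2 + 1) = 3"))  # -> (3, True)
```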
Re: [ovirt-users] Error while installing oVirt Self Hosted Engine
On Thu, Jun 15, 2017 at 2:55 PM, Jon Bornstein wrote:
> Hi Marton,
>
> Here is the log: https://gist.github.com/anonymous/ac777a70b8e8fc23016c0b6731f24706
>

You tried to create the management bridge over enp0s25, but that interface is not configured with an IP address on your host. Could you please configure it, or choose a correctly configured interface?

2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init cloud_init._getMyIPAddress:115 Acquiring 'enp0s25' address
2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:813 execute: ('/sbin/ip', 'addr', 'show', 'enp0s25'), executable='None', cwd='None', env=None
2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.executeRaw:863 execute-result: ('/sbin/ip', 'addr', 'show', 'enp0s25'), rc=0
2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:921 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25') stdout:
2: enp0s25: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether c4:34:6b:26:6a:d1 brd ff:ff:ff:ff:ff:ff
2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init plugin.execute:926 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25') stderr:
2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init cloud_init._getMyIPAddress:132 address: None
2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/cloud_init.py", line 781, in _customize_vm_networking
    self._customize_vm_addressing()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/cloud_init.py", line 215, in _customize_vm_addressing
    my_ip = self._getMyIPAddress()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/cloud_init.py", line 136, in _getMyIPAddress
    _('Cannot acquire nic/bridge address')
RuntimeError: Cannot acquire nic/bridge address

>
> On Thu, Jun 15, 2017 at 4:13 AM, Maton, Brett wrote:
>
>> Hi Jon,
>>
>> There will be people on this list far more able to help you than I can,
>> but the contents of the engine setup log
>> (/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log)
>> would help
>>
>> On 14 June 2017 at 21:56, Jon Bornstein wrote:
>>
>>> This is roughly my 10th attempt at installing some version of oVirt and
>>> I think where I get stuck every time is with the networking aspect.
>>>
>>> I'm simply trying to test out oVirt on an old laptop wiht 16GB RAM and
>>> an external HDD. I'm connected via WiFi.
>>>
>>> After failing 100 times with the regular ovirt engine, It was
>>> recommended I try the self-hosted engine.
>>>
>>> **
>>>
>>> Anyway, here are the contents of my /etc/hosts file:
>>> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
>>> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>>> 10.114.13.144 host.example.rocks host
>>> 10.114.13.145 engine.example.rocks engine
>>>
>>> **
>>>
>>> After running hosted-engine --deploy, I get to the prompt below in which
>>> the installer fails afterwards:
>>>
>>> You may specify a unicast MAC address for the VM or accept a randomly
>>> generated default [00:16:3e:5b:3a:73]:
>>>
>>> [ ERROR ] Failed to execute stage 'Environment customization': Cannot
>>> acquire nic/bridge address
>>> [ INFO  ] Stage: Clean up
>>> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170614163222.conf'
>>> [ INFO  ] Stage: Pre-termination
>>> [ INFO  ] Stage: Termination
>>> [ ERROR ] Hosted Engine deployment failed
>>> Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
>>>
>>> **
>>>
>>> Here is a tail of the log:
>>>
>>> [root@engine /]# tail /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
>>> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:134 condition False
>>> 2017-06-14 16:32:22 INFO otopi.context context.runSequence:687 Stage: Termination
>>> 2017-06-14 16:32:22 DEBUG otopi.context context.runSequence:691 STAGE terminate
>>> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
>>> 2017-06-14 16:32:22 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:178 Hosted Engine deployment failed
>>> 2017-06-14 16:32:22 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Log file is located at
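The traceback above is raised when the setup parses the output of /sbin/ip addr show enp0s25 and finds no address, because the NIC is DOWN. A minimal sketch of that check (not the actual _getMyIPAddress implementation; the interface output below is adapted from the log, and the `inet` line in the second sample is a made-up example of a configured interface):

```python
import re

# Minimal sketch (not the ovirt-hosted-engine-setup code): extract the
# first IPv4 address from `ip addr show <nic>` output. The deploy fails
# with 'Cannot acquire nic/bridge address' exactly when no 'inet' line
# is present, as with the DOWN enp0s25 in the log above.

def get_nic_ipv4(ip_addr_output):
    """Return the first IPv4 address in `ip addr show` output, or None."""
    match = re.search(r'^\s*inet\s+(\d+\.\d+\.\d+\.\d+)/\d+',
                      ip_addr_output, re.MULTILINE)
    return match.group(1) if match else None

down_nic = """\
2: enp0s25: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether c4:34:6b:26:6a:d1 brd ff:ff:ff:ff:ff:ff
"""

# Hypothetical output for the same NIC once it has an address configured:
up_nic = down_nic + "    inet 10.114.13.144/24 brd 10.114.13.255 scope global enp0s25\n"

print(get_nic_ipv4(down_nic))  # no inet line -> None -> deploy aborts
print(get_nic_ipv4(up_nic))    # address found -> deploy can proceed
```

In other words: bring the interface up and give it an IP address (or pick an interface that already has one) before re-running hosted-engine --deploy.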
Re: [ovirt-users] Error while installing oVirt Self Hosted Engine
Hi Marton,

Here is the log: https://gist.github.com/anonymous/ac777a70b8e8fc23016c0b6731f24706

On Thu, Jun 15, 2017 at 4:13 AM, Maton, Brett wrote:

> Hi Jon,
>
> There will be people on this list far more able to help you than I can,
> but the contents of the engine setup log
> (/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log)
> would help
>
> On 14 June 2017 at 21:56, Jon Bornstein wrote:
>
>> This is roughly my 10th attempt at installing some version of oVirt and I
>> think where I get stuck every time is with the networking aspect.
>>
>> I'm simply trying to test out oVirt on an old laptop wiht 16GB RAM and an
>> external HDD. I'm connected via WiFi.
>>
>> After failing 100 times with the regular ovirt engine, It was recommended
>> I try the self-hosted engine.
>>
>> **
>>
>> Anyway, here are the contents of my /etc/hosts file:
>> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
>> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>> 10.114.13.144 host.example.rocks host
>> 10.114.13.145 engine.example.rocks engine
>>
>> **
>>
>> After running hosted-engine --deploy, I get to the prompt below in which
>> the installer fails afterwards:
>>
>> You may specify a unicast MAC address for the VM or accept a randomly
>> generated default [00:16:3e:5b:3a:73]:
>>
>> [ ERROR ] Failed to execute stage 'Environment customization': Cannot
>> acquire nic/bridge address
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170614163222.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed
>> Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
>>
>> **
>>
>> Here is a tail of the log:
>>
>> [root@engine /]# tail /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
>> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:134 condition False
>> 2017-06-14 16:32:22 INFO otopi.context context.runSequence:687 Stage: Termination
>> 2017-06-14 16:32:22 DEBUG otopi.context context.runSequence:691 STAGE terminate
>> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
>> 2017-06-14 16:32:22 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:178 Hosted Engine deployment failed
>> 2017-06-14 16:32:22 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
>> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
>> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
>> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:134 condition False
>> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
>>
>> **
>>
>> This is my first time writing to this list, so hopefully I'm doing it
>> right. Thanks in advance - this is driving me crazy!
Re: [ovirt-users] quick question on hosted engine storage
Thanks again Michael

On Thu, Jun 15, 2017 at 10:08 AM, Martin Sivak wrote:
> Hi,
>
> the current code does not officially allow using the hosted engine
> storage domain for other VMs. We are currently working on removing
> that limitation.
>
> Best regards
>
> Martin Sivak
>
> On Thu, Jun 15, 2017 at 10:59 AM, cmc wrote:
>> If you choose fibre channel for the hosted engine storage, can this
>> storage be shared later by other VMs? I assume you don't need a
>> dedicated LUN, just one that isn't in use before hand.
>>
>> Thanks,
>>
>> C
>>
>> On Wed, Jun 14, 2017 at 6:28 PM, cmc wrote:
>>> Thanks Martin.
>>>
>>> On Wed, Jun 14, 2017 at 4:15 PM, Martin Sivak wrote:
>>>> Hi,
>>>>
>>>> the storage is not migrated automatically. Hosted engine VM will keep
>>>> using the storage domain you configured during the setup phase.
>>>>
>>>> Best regards
>>>>
>>>> --
>>>> Martin Sivak
>>>> SLA / oVirt
>>>>
>>>> On Wed, Jun 14, 2017 at 5:02 PM, cmc wrote:
>>>>> Hi,
>>>>>
>>>>> When building a hosted engine VM, and choosing 'nfs' for storage, it
>>>>> does the install to this nfs share. Once the host is setup with, e.g.,
>>>>> fibre channel as storage for VMs, does the hosted engine get migrated
>>>>> automatically to this storage? When does this actually happen?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Cam
Re: [ovirt-users] quick question on hosted engine storage
Sorry, Thanks again Martin!

On Thu, Jun 15, 2017 at 11:51 AM, cmc wrote:
> Thanks again Michael
>
> On Thu, Jun 15, 2017 at 10:08 AM, Martin Sivak wrote:
>> Hi,
>>
>> the current code does not officially allow using the hosted engine
>> storage domain for other VMs. We are currently working on removing
>> that limitation.
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Thu, Jun 15, 2017 at 10:59 AM, cmc wrote:
>>> If you choose fibre channel for the hosted engine storage, can this
>>> storage be shared later by other VMs? I assume you don't need a
>>> dedicated LUN, just one that isn't in use before hand.
>>>
>>> Thanks,
>>>
>>> C
>>>
>>> On Wed, Jun 14, 2017 at 6:28 PM, cmc wrote:
>>>> Thanks Martin.
>>>>
>>>> On Wed, Jun 14, 2017 at 4:15 PM, Martin Sivak wrote:
>>>>> Hi,
>>>>>
>>>>> the storage is not migrated automatically. Hosted engine VM will keep
>>>>> using the storage domain you configured during the setup phase.
>>>>>
>>>>> Best regards
>>>>>
>>>>> --
>>>>> Martin Sivak
>>>>> SLA / oVirt
>>>>>
>>>>> On Wed, Jun 14, 2017 at 5:02 PM, cmc wrote:
>>>>>> Hi,
>>>>>>
>>>>>> When building a hosted engine VM, and choosing 'nfs' for storage, it
>>>>>> does the install to this nfs share. Once the host is setup with, e.g.,
>>>>>> fibre channel as storage for VMs, does the hosted engine get migrated
>>>>>> automatically to this storage? When does this actually happen?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Cam
Re: [ovirt-users] quick question on hosted engine storage
Hi,

the current code does not officially allow using the hosted engine
storage domain for other VMs. We are currently working on removing
that limitation.

Best regards

Martin Sivak

On Thu, Jun 15, 2017 at 10:59 AM, cmc wrote:
> If you choose fibre channel for the hosted engine storage, can this
> storage be shared later by other VMs? I assume you don't need a
> dedicated LUN, just one that isn't in use before hand.
>
> Thanks,
>
> C
>
> On Wed, Jun 14, 2017 at 6:28 PM, cmc wrote:
>> Thanks Martin.
>>
>> On Wed, Jun 14, 2017 at 4:15 PM, Martin Sivak wrote:
>>> Hi,
>>>
>>> the storage is not migrated automatically. Hosted engine VM will keep
>>> using the storage domain you configured during the setup phase.
>>>
>>> Best regards
>>>
>>> --
>>> Martin Sivak
>>> SLA / oVirt
>>>
>>> On Wed, Jun 14, 2017 at 5:02 PM, cmc wrote:
>>>> Hi,
>>>>
>>>> When building a hosted engine VM, and choosing 'nfs' for storage, it
>>>> does the install to this nfs share. Once the host is setup with, e.g.,
>>>> fibre channel as storage for VMs, does the hosted engine get migrated
>>>> automatically to this storage? When does this actually happen?
>>>>
>>>> Thanks,
>>>>
>>>> Cam
Re: [ovirt-users] quick question on hosted engine storage
If you choose fibre channel for the hosted engine storage, can this
storage be shared later by other VMs? I assume you don't need a
dedicated LUN, just one that isn't in use beforehand.

Thanks,

C

On Wed, Jun 14, 2017 at 6:28 PM, cmc wrote:
> Thanks Martin.
>
> On Wed, Jun 14, 2017 at 4:15 PM, Martin Sivak wrote:
>> Hi,
>>
>> the storage is not migrated automatically. Hosted engine VM will keep
>> using the storage domain you configured during the setup phase.
>>
>> Best regards
>>
>> --
>> Martin Sivak
>> SLA / oVirt
>>
>> On Wed, Jun 14, 2017 at 5:02 PM, cmc wrote:
>>> Hi,
>>>
>>> When building a hosted engine VM, and choosing 'nfs' for storage, it
>>> does the install to this nfs share. Once the host is setup with, e.g.,
>>> fibre channel as storage for VMs, does the hosted engine get migrated
>>> automatically to this storage? When does this actually happen?
>>>
>>> Thanks,
>>>
>>> Cam
Re: [ovirt-users] Error while installing oVirt Self Hosted Engine
Hi Jon,

There will be people on this list far more able to help you than I can,
but the contents of the engine setup log
(/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log)
would help

On 14 June 2017 at 21:56, Jon Bornstein wrote:

> This is roughly my 10th attempt at installing some version of oVirt and I
> think where I get stuck every time is with the networking aspect.
>
> I'm simply trying to test out oVirt on an old laptop wiht 16GB RAM and an
> external HDD. I'm connected via WiFi.
>
> After failing 100 times with the regular ovirt engine, It was recommended
> I try the self-hosted engine.
>
> **
>
> Anyway, here are the contents of my /etc/hosts file:
> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
> 10.114.13.144 host.example.rocks host
> 10.114.13.145 engine.example.rocks engine
>
> **
>
> After running hosted-engine --deploy, I get to the prompt below in which
> the installer fails afterwards:
>
> You may specify a unicast MAC address for the VM or accept a randomly
> generated default [00:16:3e:5b:3a:73]:
>
> [ ERROR ] Failed to execute stage 'Environment customization': Cannot
> acquire nic/bridge address
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170614163222.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed
> Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
>
> **
>
> Here is a tail of the log:
>
> [root@engine /]# tail /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:134 condition False
> 2017-06-14 16:32:22 INFO otopi.context context.runSequence:687 Stage: Termination
> 2017-06-14 16:32:22 DEBUG otopi.context context.runSequence:691 STAGE terminate
> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
> 2017-06-14 16:32:22 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:178 Hosted Engine deployment failed
> 2017-06-14 16:32:22 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:134 condition False
> 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
>
> **
>
> This is my first time writing to this list, so hopefully I'm doing it
> right. Thanks in advance - this is driving me crazy!
Re: [ovirt-users] oVirt storage best practise
On Wed, Jun 14, 2017 at 4:18 PM, FERNANDO FREDIANI <fernando.fredi...@upx.com> wrote:

> I normally assume that any performance gain from directlly attaching a LUN
> to a Virtual Machine then using it in the traditional way are so little to
> compensate the extra hassle to do that. I would avoid as much as I cacn use
> it, unless it is for some very special reason where you cannot do in any
> other way. The only real usage for it so far was Microsoft SQL Server
> Clustering requirements.
>

I tend to agree (from a performance perspective), though I don't have numbers to back it up. It probably doesn't matter that much.

There are, however, other reasons to use a direct LUN: use of storage-side features such as replication, QoS, encryption, compression, etc. that you may wish to apply (or disable) per storage. Also, there are some strange SCSI commands that some strange applications need that require a direct LUN and SCSI pass-through. Clustering (via SCSI reservations) is certainly the first and foremost, but not the only one.

Y.

> Fernando
>
> On 14/06/2017 03:23, Idan Shaby wrote:
>
>> Direct luns are disks that are not managed by oVirt. oVirt communicates
>> directly with the lun itself, without any other layer in between (like lvm
>> in image disks).
>> The advantage of the direct lun is that it should have better performance
>> since there's no overhead of another layer in the middle.
>> The disadvantage is that you can't take a snapshot of it (when attached to
>> a vm, of course), can't make it a part of a template, export it, and in
>> general - you don't manage it.
>>
>> Regards,
>> Idan
>>
>> On Mon, Jun 12, 2017 at 10:10 PM, Stefano Bovina wrote:
>>
>>> Thank you very much.
>>> What about "direct lun" usage and database example?
>>>
>>> 2017-06-08 16:40 GMT+02:00 Elad Ben Aharon:
>>>
>>>> Hi,
>>>> Answer inline
>>>>
>>>> On Thu, Jun 8, 2017 at 1:07 PM, Stefano Bovina wrote:
>>>>
>>>>> Hi, does a storage best practise document for oVirt exist?
>>>>>
>>>>> Some examples:
>>>>>
>>>>> oVirt allows to extend an existing storage domain: is it better to
>>>>> keep a 1:1 relation between LUN and oVirt storage domain?
>>>>
>>>> What do you mean by 1:1 relation? Between storage domain and the number
>>>> of LUNs the domain reside on?
>>>>
>>>>> If not, is it better to avoid adding LUNs to an already existing
>>>>> storage domain?
>>>>
>>>> No problems with storage domain extension.
>>>>
>>>>> Following the previous questions: is it better to have 1 big oVirt
>>>>> storage domain or many small oVirt storage domains?
>>>>
>>>> Depends on your needs, be aware to the following:
>>>> - Each domain has its own metadata which allocates ~5GB of the domain
>>>> size.
>>>> - Each domain is being constatntly monitored by the system, so large
>>>> number of domain can decrease the system performance.
>>>> There are also downsides with having big domains, like less flexability
>>>>
>>>>> There is a max num VM/disks for storage domain?
>>>>>
>>>>> In which case is it better to use "direct attached lun" with respect
>>>>> to an image on an oVirt storage domain?
>>>>>
>>>>> Example:
>>>>> Simple web server: -> image
>>>>> Large database (simple example):
>>>>> - root, swap etc: 30GB -> image?
>>>>> - data disk: 500GB -> (direct or image?)
>>>>>
>>>>> Regards,
>>>>> Stefano
Re: [ovirt-users] oVirt and Cloud-Init
On Wed, Jun 14, 2017 at 4:11 PM, Luca 'remix_tj' Lorenzetto <lorenzetto.l...@gmail.com> wrote:

> On Wed, Jun 14, 2017 at 3:08 PM, Adam Mills wrote:
>> Hello oVirt Users!
>>
>> Recent the team that I work on began investigating oVirt as a
>> virtualization platform. It is extremely promising, however we have some
>> questions about how oVirt does provisioning using Cloud-Init.
>
> Very good!

+1. Welcome aboard.

>> Specifically the question is: On the wiki the it states "We are most
>> interested in using config-drive version 2 [2], which is also in supported
>> by OpenStack". Is that currently how Cloud-Init is providing the datasource
>> to the machine is via a config-drive being mounted?
>
> Yes. If you start the vm through run-once and specify to use cloud
> init (with any configuration specified) a second cdrom drive is
> attached automatically containing cloud-init infos. Cloud-init will
> read that drive and apply the configuration accordingly.
>
>> And the last question: How frequent is too frequent to ask questions to
>> this mailing group :D
>
> You can ask as much as you want. In case of excessive mailing, i'll
> filter out your request :-P (joking)

My request would be a different thread per question topic and a sensible subject line. That would make it easier for all to read and respond.

Y.

> Luca
>
> --
> "It is absurd to employ men of excellent intelligence to perform
> calculations that could be entrusted to anyone if machines were used"
> Gottfried Wilhelm von Leibnitz, philosopher and mathematician (1646-1716)
>
> "The Internet is the world's largest library. The problem is that the
> books are all scattered on the floor"
> John Allen Paulos, mathematician (1945-)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.l...@gmail.com>
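As Luca describes, oVirt attaches a second CD-ROM carrying the cloud-init data in the OpenStack config-drive v2 layout, i.e. a meta_data.json under openstack/latest/ on the mounted drive. Below is a minimal sketch of reading such a drive (not cloud-init's own code; the hostname value and the temporary directory standing in for the mounted CD-ROM are made up for illustration):

```python
import json
import os
import tempfile

# Minimal sketch (not cloud-init itself): read instance metadata from a
# config-drive v2 style layout, i.e. openstack/latest/meta_data.json
# under the mount point of the attached drive.

def read_config_drive(mount_point):
    """Return the parsed meta_data.json from a config-drive v2 mount."""
    path = os.path.join(mount_point, "openstack", "latest", "meta_data.json")
    with open(path) as f:
        return json.load(f)

# Build a fake config drive in a temp dir (hostname is hypothetical);
# on a real guest the drive would be a mounted ISO9660 filesystem.
drive = tempfile.mkdtemp()
os.makedirs(os.path.join(drive, "openstack", "latest"))
with open(os.path.join(drive, "openstack", "latest", "meta_data.json"), "w") as f:
    json.dump({"hostname": "vm01.example.lan"}, f)

meta = read_config_drive(drive)
print(meta["hostname"])  # vm01.example.lan
```

cloud-init's ConfigDrive datasource performs this discovery and parsing itself; the sketch only illustrates the on-disk layout the guest sees.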
[ovirt-users] Error while installing oVirt Self Hosted Engine
This is roughly my 10th attempt at installing some version of oVirt, and I think where I get stuck every time is with the networking aspect.

I'm simply trying to test out oVirt on an old laptop with 16GB RAM and an external HDD. I'm connected via WiFi.

After failing 100 times with the regular oVirt engine, it was recommended I try the self-hosted engine.

**

Anyway, here are the contents of my /etc/hosts file:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.114.13.144 host.example.rocks host
10.114.13.145 engine.example.rocks engine

**

After running hosted-engine --deploy, I get to the prompt below, after which the installer fails:

You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:5b:3a:73]:

[ ERROR ] Failed to execute stage 'Environment customization': Cannot acquire nic/bridge address
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170614163222.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed
          Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log

**

Here is a tail of the log:

[root@engine /]# tail /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:134 condition False
2017-06-14 16:32:22 INFO otopi.context context.runSequence:687 Stage: Termination
2017-06-14 16:32:22 DEBUG otopi.context context.runSequence:691 STAGE terminate
2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
2017-06-14 16:32:22 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:178 Hosted Engine deployment failed
2017-06-14 16:32:22 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170614160653-2vuu7h.log
2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:134 condition False
2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:128 Stage terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate

**

This is my first time writing to this list, so hopefully I'm doing it right. Thanks in advance - this is driving me crazy!