[Users] virt-v2v
Is virt-v2v something i can leverage to get foreign virtual guests into my ovirt/node environment?

Thanks,
jonathan

This is a PRIVATE message. If you are not the intended recipient, please delete without copying and kindly advise us by e-mail of the mistake in delivery. NOTE: Regardless of content, this e-mail shall not operate to bind SKOPOS to any order or other contract unless pursuant to explicit written agreement or government initiative expressly permitting the use of e-mail for such purpose.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
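For reference, virt-v2v of this era could indeed convert foreign guests (e.g. from VMware or Xen) and write the result to an oVirt/RHEV export storage domain with `-o rhev`. A minimal sketch, assuming a vCenter source — the source URI, NFS export path, and guest name below are all placeholders, and the command is only constructed and shown here, not executed:

```shell
# Sketch: build a virt-v2v command that imports a foreign guest into an
# oVirt export storage domain. All names below are hypothetical.
SRC="esx://vcenter.example.com/?no_verify=1"      # placeholder source URI
EXPORT="nfs.example.com:/exports/ovirt-export"    # placeholder export domain
GUEST="legacy-vm"                                 # placeholder guest name

# -ic: input connection, -o rhev: oVirt/RHEV output mode,
# -os: target export storage domain.
CMD="virt-v2v -ic $SRC -o rhev -os $EXPORT $GUEST"
echo "$CMD"
```

After the conversion finishes, the guest appears in the export domain and can be imported from the oVirt web admin UI.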
[Users] .ova image file
Is there a way to convert a .ova image file into ovirt?

Thanks!
jonathan
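One thing worth knowing here: an .ova file is simply a tar archive containing an OVF descriptor and one or more disk images, so it can be unpacked with plain tar and the disk converted with qemu-img for use in oVirt. The sketch below fabricates a tiny stand-in archive just to demonstrate the unpack step; the qemu-img follow-up is shown only as a comment:

```shell
# An .ova is plain tar: an .ovf descriptor plus disk images.
# With a real appliance the next step would be something like:
#   qemu-img convert -O raw disk1.vmdk disk1.img
# Here we build a tiny stand-in archive to show the structure.
WORK=$(mktemp -d)
cd "$WORK"
printf '<Envelope/>' > appliance.ovf      # stand-in OVF descriptor
printf 'stand-in disk' > disk1.vmdk       # stand-in disk image
tar -cf appliance.ova appliance.ovf disk1.vmdk
tar -tf appliance.ova                     # lists the two members
```

The converted disk can then be attached to a new VM, or fed through virt-v2v into an export domain.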
Re: [Users] Testing ovirt all in one on F18 gives error on DB creation
On 12/06/2012 06:56 PM, Gianluca Cecchi wrote:

In the meantime my tests on f18 with the nightly build (ovirt-engine-3.2.0-1.20121204.git5d79c41.fc18.noarch) went ahead...

PREFACE: this is an f18 vm in a fedora 17 host, where I put this in the xml: SandyBridge Intel

It goes through the installation, but now it fails at installing the host:

    Creating Database... [ DONE ]
    Updating the Default Data Center Storage Type... [ DONE ]
    Editing oVirt Engine Configuration... [ DONE ]
    Editing Postgresql Configuration... [ DONE ]
    Configuring the Default ISO Domain... [ DONE ]
    Configuring Firewall (iptables)... [ DONE ]
    Starting ovirt-engine Service... [ DONE ]
    Configuring HTTPD... [ DONE ]
    AIO: Creating storage directory... [ DONE ]
    AIO: Adding Local Datacenter and cluster... [ DONE ]
    AIO: Adding Local host (This may take several minutes)... [ ERROR ]
    Error: Timed out while waiting for host to start
    Please check log file /var/log/ovirt-engine/engine-setup_2012_12_05_16_57_11.log for more information

NOTE: In the log file I found:

1) a problem with loading files to the iso domain:

    2012-12-05 16:59:51::ERROR::engine-setup::1694::root:: Traceback (most recent call last):
      File "/usr/bin/engine-setup", line 1691, in _loadFilesToIsoDomain
        utils.copyFile(filename, targetPath, basedefs.CONST_VDSM_UID, basedefs.CONST_KVM_GID)
      File "/usr/share/ovirt-engine/scripts/common_utils.py", line 653, in copyFile
        shutil.copy2(fileSrc, destination)
      File "/usr/lib64/python2.7/shutil.py", line 128, in copy2
        copyfile(src, dst)
      File "/usr/lib64/python2.7/shutil.py", line 82, in copyfile
        with open(src, 'rb') as fsrc:
    IOError: [Errno 2] No such file or directory: '/usr/share/virtio-win/virtio-win.vfd'
    2012-12-05 16:59:51::ERROR::engine-setup::1695::root:: Failed to copy files to iso domain

---> probably a dependency to include when you install packages?
2) the install of the host fails:

    2012-12-05 17:00:24::DEBUG::all_in_one_100::279::root:: current host status is: installing
    2012-12-05 17:00:24::DEBUG::all_in_one_100::290::root:: Traceback (most recent call last):
      File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 287, in isHostUp
        raise Exception(INFO_CREATE_HOST_WAITING_UP)
    Exception: Waiting for the host to start
    2012-12-05 17:00:29::DEBUG::all_in_one_100::276::root:: Waiting for host to become operational
    2012-12-05 17:00:29::DEBUG::all_in_one_100::279::root:: current host status is: installing
    2012-12-05 17:00:29::DEBUG::all_in_one_100::290::root:: Traceback (most recent call last):
      File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 287, in isHostUp
        raise Exception(INFO_CREATE_HOST_WAITING_UP)
    Exception: Waiting for the host to start
    2012-12-05 17:01:15::DEBUG::all_in_one_100::276::root:: Waiting for host to become operational
    2012-12-05 17:01:15::DEBUG::all_in_one_100::279::root:: current host status is: installing
    2012-12-05 17:01:15::DEBUG::all_in_one_100::290::root:: Traceback (most recent call last):
      File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 287, in isHostUp
        raise Exception(INFO_CREATE_HOST_WAITING_UP)
    Exception: Waiting for the host to start
    2012-12-05 17:01:20::DEBUG::all_in_one_100::276::root:: Waiting for host to become operational
    2012-12-05 17:01:20::DEBUG::all_in_one_100::279::root:: current host status is: install_failed
    2012-12-05 17:01:20::DEBUG::all_in_one_100::290::root:: Traceback (most recent call last):

3) the installation failure is probably caused by a tuned-adm error; under /var/log/ovirt-engine/host-deploy, in ovirt-20121205170118-f18aio.localdomain.local.log:

    2012-12-05 17:01:18 DEBUG otopi.plugins.ovirt_host_deploy.tune.tuned plugin.execute:393 execute-output: ('/sbin/tuned-adm', 'profile', 'virtual-host') stderr:
    2012-12-05 17:00:53,193 ERROR dbus.proxies: Introspect error on :1.43:/Tuned: dbus.exceptions.DBusException:
    org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
    ERROR:dbus.proxies:Introspect error on :1.43:/Tuned: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
    DBus call to Tuned daemon failed (org.freedesktop.DBus.Error.NoReply: Did not receive
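The first error above is straightforward to guard against: engine-setup tried to copy /usr/share/virtio-win/virtio-win.vfd into the ISO domain and the file wasn't there, presumably because the virtio-win package wasn't pulled in as a dependency. A small pre-flight check, sketched here, would surface that before running engine-setup:

```shell
# Pre-flight check sketch: warn before engine-setup if the virtio-win
# floppy image the ISO-domain uploader expects is missing.
VFD=/usr/share/virtio-win/virtio-win.vfd
if [ -e "$VFD" ]; then
    echo "ok: $VFD present"
else
    echo "warning: $VFD missing -- install the virtio-win package first"
fi
```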
Re: [Users] Auto-start vms on boot?
On 12/07/2012 06:23 PM, Adrian Gibanel wrote:
> My use case is that I just don't want to start manually the virtual machines when the host starts and, also, if the host is shut down it should guest-shutdown the virtual machines.
>
> Any doc on that pin option? How is one supposed to pin a virtual machine to a host?

just to be clear, we still don't have the behavior i described. I just stated the only use case i'm familiar with for a similar requirement. (pinning a VM to a host is done via the edit vm dialog.)

question on your use case - how would the engine know the difference between a VM the admin just shut down manually and a VM which should be auto-started (should we add such a checkbox)?

in the use case i described, we would be adding a 'start/stop VM with host' option for a VM pinned to a host.

> Thank you.
>
> - Original message -
> From: "Itamar Heim"
> To: "Adrian Gibanel"
> CC: "users"
> Sent: Friday, December 7, 2012 15:49:36
> Subject: Re: [Users] Auto-start vms on boot?
>
> On 12/06/2012 10:34 PM, Adrian Gibanel wrote:
> > It would seem that oVirt does not provide a standard way of forcing boot of virtual machines at boot.
> >
> > Pools can have pre-started vms as stated here:
> > https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.1/html/Administration_Guide/Prestarting_Virtual_Machines_in_a_Pool.html
> > but pools imply state-less virtual machines and I am talking more about normal virtual machines.
> >
> > I've found this script:
> > https://github.com/iranzo/rhevm-utils/blob/master/rhev-vm-start.py
> > which could do the trick if run at host boot.
> >
> > I've also thought (but not tried) to mark a virtual machine as "Highly Available" even if I have only one host (I mean, usually HA only makes sense when you have two hosts). Would marking a VM as H.A. do the trick?
> >
> > Any special reason why there isn't a standard way of marking which vms should be auto-started at boot? Just wanted to hear your thoughts before filing an RFE.
> >
> > Thank you.
>
> what exactly is your use case?
> the one i'm familiar with is to tie the VM life cycle to a specific host, so a VM which is pinned to a specific host for a certain task (say, IDS) is always starting when the host starts, and will be automatically shut down when the host is moved to maintenance.
> so only relevant for VMs which are pinned to a host.
>
> Thanks,
> Itamar
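Until such a 'start/stop VM with host' flag exists, the script route Adrian found boils down to calling the engine's REST API from a boot-time script. A hedged sketch — the engine URL, credentials, and VM id below are placeholders, and the curl call itself is shown only as a comment:

```shell
# Sketch: start a specific VM through the oVirt REST API at host boot.
# Everything below is a placeholder -- engine host, credentials, VM id.
ENGINE="https://engine.example.com/api"
AUTH="admin@internal:password"
VMID="00000000-0000-0000-0000-000000000000"
START_URL="$ENGINE/vms/$VMID/start"

# The actual request would look like (not executed in this sketch):
# curl -k -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
#      -d '<action/>' "$START_URL"
echo "$START_URL"
```

Dropping something like this into an init script on the host (or a management box) approximates auto-start, at the cost of storing credentials outside the engine.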
Re: [Users] iscsi mpio
Im using an Equallogic iscsi SAN, and it has a special multipath driver because it uses a virtual IP for the initiator to log into, and the actual traffic ip addresses are not exposed to the initiator. I tried to install the rpm but as far as i can get is:

    [root@vmnode02 x86_64]# rpm -Uvh equallogic-host-tools-1.2.0-2.el6.x86_64.rpm --nodeps
    warning: equallogic-host-tools-1.2.0-2.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 84fd3ef3: NOKEY
    error: can't create transaction lock on /var/lib/rpm/.rpm.lock (Read-only file system)

Is there a way to unlock the file system so i can try to install and configure this driver?

This is the current output of multipath -ll on my node:

    [root@vmnode02 x86_64]# multipath -ll
    36019cb3131d6f276d80a45bb6501f0c9 dm-10
    size=500G features='0' hwhandler='0' wp=rw
    36001e4f03dd42300183eaac404afd5ce dm-0 DELL,PERC 6/i
    size=279G features='0' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=1 status=active
      `- 2:2:0:0 sda 8:0 active ready running

The EQ san does show only 1 single tcp connection into the LUN, so i know its not multipathing at this time. This will be a huge downer if i have to go to production without multipath support. Is there anything i can do here?

On 12/7/12 8:42 AM, "René Koch (ovido)" wrote:

> Hi,
>
> I can give you some information on multipathing on RHEV-H, as I don't
> have ovirt-node installed at the moment.
>
> Multipath packages are installed on the hypervisor, but vdsm overwrites
> multipath.conf.
>
> To prevent this you have to set the following tags:
> # RHEV REVISION 0.7
> # RHEV PRIVATE
>
> Search for MPATH_CONF_TAG in the multipath.py file to get the correct
> RHEV REVISION number.
>
> After changing multipath.conf to fit your needs, make it persistent
> across reboots:
> # persist /etc/multipath.conf
>
> You can check multipath status with:
>
> # multipath -ll
> mpathb (b00015521f5d4) dm-0 Intel,Multi-Flex
> size=96G features='0' hwhandler='0' wp=rw
> |-+- policy='round-robin 0' prio=50 status=active
> | `- 0:0:0:1 sdb 8:16 active ready running
> mpatha (222140001553601d1) dm-1 Intel,Multi-Flex
> size=20G features='0' hwhandler='0' wp=rw
> |-+- policy='round-robin 0' prio=50 status=active
> | `- 0:0:0:0 sda 8:0 active ready running
>
> I hope this helps.
> Don't really know if the tags are the same in oVirt, as I don't have a
> working oVirt setup at the moment.
>
> --
> Best Regards,
>
> DI (FH) René Koch
> Senior Solution Architect
>
> ovido gmbh - "Das Linux Systemhaus"
> Brünner Straße 163, A-1210 Wien
>
> Phone: +43 720 / 530 670
> Mobile: +43 660 / 512 21 31
> E-Mail: r.k...@ovido.at
>
> On Fri, 2012-12-07 at 13:55 +, Jonathan Horne wrote:
> > I just rebooted my iscsi SAN and the ovirt node 2.5.5fc17 did not
> > enjoy that one bit. Does ovirt node support multi path i/o? Any
> > information about this would be greatly appreciated.
> >
> > I attempted to install the multi path driver on node 2.5.5-0.fc17, but
> > the file system is read only and i can't make changes.
> >
> > I really need to find out about multipathing, because yesterday i
> > upgraded the firmware on my iscsi SAN, and one of my nodes freaked out
> > and rebooted (oddly, the other 2 didn't). I understand there may
> > already be multi path configured within the node, but how do i verify
> > it with my set up?
> >
> > Thanks,
> > jonathan
Re: [Users] Auto-start vms on boot?
My use case is that I just don't want to start manually the virtual machines when the host starts and, also, if the host is shut down it should guest-shutdown the virtual machines.

Any doc on that pin option? How is one supposed to pin a virtual machine to a host?

Thank you.

- Original message -
> From: "Itamar Heim"
> To: "Adrian Gibanel"
> CC: "users"
> Sent: Friday, December 7, 2012 15:49:36
> Subject: Re: [Users] Auto-start vms on boot?
>
> On 12/06/2012 10:34 PM, Adrian Gibanel wrote:
> > It would seem that oVirt does not provide a standard way of forcing boot of virtual machines at boot.
> >
> > Pools can have pre-started vms as stated here:
> > https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.1/html/Administration_Guide/Prestarting_Virtual_Machines_in_a_Pool.html
> > but pools imply state-less virtual machines and I am talking more about normal virtual machines.
> >
> > I've found this script:
> > https://github.com/iranzo/rhevm-utils/blob/master/rhev-vm-start.py
> > which could do the trick if run at host boot.
> >
> > I've also thought (but not tried) to mark a virtual machine as "Highly Available" even if I have only one host (I mean, usually HA only makes sense when you have two hosts). Would marking a VM as H.A. do the trick?
> >
> > Any special reason why there isn't a standard way of marking which vms should be auto-started at boot? Just wanted to hear your thoughts before filing an RFE.
> >
> > Thank you.
>
> what exactly is your use case?
> the one i'm familiar with is to tie the VM life cycle to a specific host, so a VM which is pinned to a specific host for a certain task (say, IDS) is always starting when the host starts, and will be automatically shut down when the host is moved to maintenance.
> so only relevant for VMs which are pinned to a host.
>
> Thanks,
> Itamar

--
Adrián Gibanel
I.T. Manager
+34 675 683 301
www.btactic.com
Re: [Users] Manage users without Red Hat Directory Server or IBM Tivoli Directory Server?
On 12/06/2012 10:35 PM, Charlie wrote:
> Supporting non-Kerberos LDAP with simple authentication and no DNS integration would significantly decrease the work required for people like Dennis. Instead of having to set up Kerberos and DNS and an LDAP provider that integrates with both, he could just set up a very simple LDAP server and use a physically secured network or SSL with self-signed keys to protect his authentication traffic.
>
> There are already LDAP servers that use simple backends, including an OpenLDAP variant that uses /etc/passwd and /etc/shadow instead of a db. If the requirement for Kerberos and DNS directory integration were removed, and simple authentication worked, you would be able to support pretty much anything out there in the linux/unix world. That way oVirt wouldn't have to reinvent any wheels, and people like Dennis would have significantly less costly and time-consuming rebuilding of their networks to do before being able to implement oVirt.

I agree. hopefully we'll get to fix this soon.
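For context, the "very simple LDAP server" model being argued for here is plain simple-bind authentication, which needs nothing beyond ldapsearch to verify from a client. A hedged sketch — the server URI, bind DN, and base DN are all placeholders, and the command is only constructed and shown here, not executed:

```shell
# Sketch: a non-Kerberos, simple-bind LDAP check over TLS.
# All names below are hypothetical placeholders.
LDAP_URI="ldaps://ldap.example.com"
BIND_DN="uid=dennis,ou=people,dc=example,dc=com"
BASE_DN="dc=example,dc=com"

# -x requests simple authentication (no SASL/Kerberos);
# -W prompts for the bind password.
CMD="ldapsearch -x -H $LDAP_URI -D $BIND_DN -W -b $BASE_DN (uid=dennis)"
echo "$CMD"
```

If a bind like this works, the directory side is ready; the remaining gap is the engine accepting it.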
Re: [Users] Auto-start vms on boot?
On 12/06/2012 10:34 PM, Adrian Gibanel wrote:
> It would seem that oVirt does not provide a standard way of forcing boot of virtual machines at boot.
>
> Pools can have pre-started vms as stated here:
> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.1/html/Administration_Guide/Prestarting_Virtual_Machines_in_a_Pool.html
> but pools imply state-less virtual machines and I am talking more about normal virtual machines.
>
> I've found this script:
> https://github.com/iranzo/rhevm-utils/blob/master/rhev-vm-start.py
> which could do the trick if run at host boot.
>
> I've also thought (but not tried) to mark a virtual machine as "Highly Available" even if I have only one host (I mean, usually HA only makes sense when you have two hosts). Would marking a VM as H.A. do the trick?
>
> Any special reason why there isn't a standard way of marking which vms should be auto-started at boot? Just wanted to hear your thoughts before filing an RFE.
>
> Thank you.

what exactly is your use case?

the one i'm familiar with is to tie the VM life cycle to a specific host, so a VM which is pinned to a specific host for a certain task (say, IDS) is always starting when the host starts, and will be automatically shut down when the host is moved to maintenance. so only relevant for VMs which are pinned to a host.

Thanks,
Itamar
Re: [Users] iscsi mpio
Hi,

I can give you some information on multipathing on RHEV-H, as I don't have ovirt-node installed at the moment.

Multipath packages are installed on the hypervisor, but vdsm overwrites multipath.conf.

To prevent this you have to set the following tags:

    # RHEV REVISION 0.7
    # RHEV PRIVATE

Search for MPATH_CONF_TAG in the multipath.py file to get the correct RHEV REVISION number.

After changing multipath.conf to fit your needs, make it persistent across reboots:

    # persist /etc/multipath.conf

You can check multipath status with:

    # multipath -ll
    mpathb (b00015521f5d4) dm-0 Intel,Multi-Flex
    size=96G features='0' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=active
    | `- 0:0:0:1 sdb 8:16 active ready running
    mpatha (222140001553601d1) dm-1 Intel,Multi-Flex
    size=20G features='0' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=active
    | `- 0:0:0:0 sda 8:0 active ready running

I hope this helps. Don't really know if the tags are the same in oVirt, as I don't have a working oVirt setup at the moment.

--
Best Regards,

DI (FH) René Koch
Senior Solution Architect

ovido gmbh - "Das Linux Systemhaus"
Brünner Straße 163, A-1210 Wien

Phone: +43 720 / 530 670
Mobile: +43 660 / 512 21 31
E-Mail: r.k...@ovido.at

On Fri, 2012-12-07 at 13:55 +, Jonathan Horne wrote:
> From: Jonathan Horne
> Date: Thursday, December 6, 2012 1:04 PM
> To: users
> Subject: [Users] iscsi mpio
>
> Hello,
>
> I just rebooted my iscsi SAN and the ovirt node 2.5.5fc17 did not enjoy
> that one bit. Does ovirt node support multi path i/o? Any information
> about this would be greatly appreciated.
>
> Thanks,
> jonathan
>
> I attempted to install the multi path driver on node 2.5.5-0.fc17, but
> the file system is read only and i can't make changes.
>
> I really need to find out about multipathing, because yesterday i
> upgraded the firmware on my iscsi SAN, and one of my nodes freaked out
> and rebooted (oddly, the other 2 didn't). I understand there may
> already be multi path configured within the node, but how do i verify
> it with my set up?
>
> Thanks,
> jonathan
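René's recipe above condenses to a short sequence: tag multipath.conf as private so vdsm leaves it alone, then persist it on the node. The sketch below writes the tagged header to a scratch file rather than /etc/multipath.conf; the revision number 0.7 and the sample `defaults` setting are examples, and as he notes, the expected revision should be checked against MPATH_CONF_TAG in vdsm's multipath.py:

```shell
# Sketch: write a vdsm-private multipath.conf header into a scratch
# file. On a real node the target would be /etc/multipath.conf,
# followed by:
#   persist /etc/multipath.conf
# The revision number 0.7 is an example -- check MPATH_CONF_TAG in
# vdsm's multipath.py for the value your version expects.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# RHEV REVISION 0.7
# RHEV PRIVATE
defaults {
    polling_interval 5
}
EOF
grep -c 'RHEV' "$CONF"    # both tag lines present
```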
Re: [Users] iscsi mpio
From: Jonathan Horne <jho...@skopos.us>
Date: Thursday, December 6, 2012 1:04 PM
To: users <users@ovirt.org>
Subject: [Users] iscsi mpio

Hello,

I just rebooted my iscsi SAN and the ovirt node 2.5.5fc17 did not enjoy that one bit. Does ovirt node support multi path i/o? Any information about this would be greatly appreciated.

Thanks,
jonathan

I attempted to install the multi path driver on node 2.5.5-0.fc17, but the file system is read only and i can't make changes.

I really need to find out about multipathing, because yesterday i upgraded the firmware on my iscsi SAN, and one of my nodes freaked out and rebooted (oddly, the other 2 didn't). I understand there may already be multi path configured within the node, but how do i verify it with my set up?

Thanks,
jonathan