[Users] usb pass-through
Does the Windows Spice client not support USB pass-through? Also, it is extremely slow with IE 8. Any advice on speeding up the user or admin console in IE8?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
[Users] Server 2k8
Are Spice guest tools available that will work with Server 2k8 - especially video drivers?
Re: [Users] High Availability OVirt
Hi,

You can use cluster software like Pacemaker or RH Cluster to make your ovirt-engine highly available. I personally recommend virtualising it on a KVM host, as keeping a virtual machine highly available is much easier than making all the services ovirt-engine needs highly available (though that is certainly possible). So what happens when your engine host goes down:
- You can't log in to the webadmin and user portals
- The high availability and live migration features for your virtual machines stop working
- You can't start/deploy new virtual machines
- Your hosts, and the virtual machines that are already running, of course stay up

As clustering brings in more complexity, I would use a server with RAID, redundant power supplies and redundant network interfaces (bonding) for ovirt-engine instead of clustering. Only for desktop virtualization, where availability of the user portal is really important, would I cluster ovirt-engine. (But this is just my personal opinion.)

Regards,
René

On Fri, 2013-04-19 at 08:19 -0300, victor nunes wrote:
> Is there any way to make oVirt-engine redundant?
> We install oVirt-engine on a machine - what happens if that machine burns?
> Is there anything that can be done to remedy this?
>
> Att,
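For the Pacemaker route mentioned above, here is a minimal sketch of what the resource configuration might look like. All names are hypothetical (the floating IP, resource names and the `lsb:ovirt-engine` init script are assumptions, not taken from the thread), and a real setup would additionally need fencing plus shared or replicated storage for the engine's configuration and database:

```shell
# Hypothetical pcs sketch: group a floating IP, the PostgreSQL database and
# the ovirt-engine service so Pacemaker fails them over as one unit
# between two engine hosts.
pcs resource create engine-ip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=30s
pcs resource create engine-db ocf:heartbeat:pgsql op monitor interval=30s
pcs resource create engine-svc lsb:ovirt-engine op monitor interval=60s
# Resources in a group start in order and stay on the same node
pcs resource group add engine-group engine-ip engine-db engine-svc
```

The grouping is the important design point: clients always reach the engine through the floating IP, so a failover is transparent apart from a short outage.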
[Users] High Availability OVirt
Is there any way to make oVirt-engine redundant? We install oVirt-engine on a machine - what happens if that machine burns? Is there anything that can be done to remedy this?

Att,
--
"Viewed from the standpoint of youth, life seems an indefinitely long future, whereas in old age it seems a very short past. Thus, at its beginning, life presents itself the way things look when we view them through binoculars held backwards; but at its end, it looks like things seen when the binoculars are used the normal way. A man must have grown old and lived long to realize how short life is."

(Poem by Arthur Schopenhauer)
Re: [Users] oVirt 3.2.1 - VM in unknown status
Hi,

I had an NFS storage problem and one of my VMs is now in "unknown" state. The VM is not attached to any host, and the virsh tool shows that the VM is not running. Is there any way to bring that kind of VM back to life?

Best regards,
Piotr

Hi,

I found a workaround for this situation. It is possible to change the VM status manually in the database:

update vm_dynamic set status=0 where vm_guid='1d5342a3-9375-49c5-bf2a-2bd444d41d09';

@oVirt developers: it seems that in some situations a VM with "unknown" status has no chance to change state. When all hosts in the data center are up and the VM is not attached to any host, it's a dead end. In this particular situation the oVirt engine administrator can't perform any action on the "lost" VM. Is there any way to handle this case without database access?

The state transition in this case is the engine's responsibility, not the user's; it is done by monitoring all VMs on all hosts periodically. What's more interesting is how you got into that situation - if it's worth a bug, please be kind enough to report it.

Hi Roy,

I reported a bug (953839). If it is considered worth fixing, I can provide some help. I haven't introduced myself to the oVirt community yet: I'm a software engineer at the Polish Research and Academic Computer Network, mainly a Java EE developer. We use oVirt to manage our development and testing environments, and I'm very enthusiastic about the oVirt project.

PS Sorry for my English, I try to do my best ;-)

Best regards,
Piotr
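The workaround above amounts to running that update against the engine database. A sketch of the full procedure, with assumptions: the database is named `engine` and owned by the `postgres` user (defaults on a 3.x install; yours may differ), the service name is `ovirt-engine`, and the `vm_guid` is the one quoted in the thread. Stopping the engine and taking a dump first are precautions, not steps from the original message:

```shell
# Assumed DB name/service name; back up before any manual edit.
service ovirt-engine stop
sudo -u postgres pg_dump engine > /root/engine-before-fix.sql

# Reset the stuck VM to status 0 (Down) so the engine can manage it again
sudo -u postgres psql engine -c \
  "update vm_dynamic set status=0 where vm_guid='1d5342a3-9375-49c5-bf2a-2bd444d41d09';"

service ovirt-engine start
```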
Re: [Users] oVirt support for backup/restore
On 04/19/2013 11:46 AM, Gianluca Cecchi wrote:
> On Fri, Apr 19, 2013 at 9:58 AM, Itamar Heim wrote:
>> qemu-guest-agent isn't ovirt/rhev-guest-agent. Now that qemu has started its own guest agent, ovirt/rhev-guest-agent isn't used for things covered by the qemu-guest-agent.
>
> Ah, ok. So after reading http://wiki.libvirt.org/page/Qemu_guest_agent some questions: can we use the out-of-the-box qemu-guest-agent with oVirt 3.2.1 if we have, for example, a CentOS 6 guest (with qemu-guest-agent-0.12.1.2-2.355)? Does oVirt create a virtio-serial port to communicate with libvirt, and does it use libvirt with any options to quiesce the fs on the guest?

IIRC, it should work - testing/feedback would be welcome.

> Can one use the fsfreeze command in a CentOS 6.4 guest as an alternative (even if in tech preview), as in:
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Technical_Notes/storage.html

What I remember is that it is supposed to be pluggable/configurable, for you to run the relevant commands you want. Ayal may remember better.
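A quick way to provide the testing/feedback asked for above is to exercise the agent channel by hand. A sketch, with assumptions: the VM name `centos6-guest` is a placeholder, the agent's virtio-serial channel is already defined in the domain, and the package/service names are the CentOS 6 ones mentioned in the thread:

```shell
# Inside the guest: install and start the agent
yum install -y qemu-guest-agent
service qemu-ga start

# On the host: ping the agent through libvirt, then try a freeze/thaw cycle
virsh qemu-agent-command centos6-guest '{"execute":"guest-ping"}'
virsh qemu-agent-command centos6-guest '{"execute":"guest-fsfreeze-freeze"}'
virsh qemu-agent-command centos6-guest '{"execute":"guest-fsfreeze-thaw"}'
```

If `guest-ping` returns an empty success response, the virtio-serial plumbing works; the freeze command should then return the number of frozen filesystems.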
Re: [Users] oVirt support for backup/restore
On Fri, Apr 19, 2013 at 9:58 AM, Itamar Heim wrote:
> qemu-guest-agent isn't ovirt/rhev-guest-agent.
> Now that qemu has started its own guest agent, ovirt/rhev-guest-agent isn't used for things covered by the qemu-guest-agent.

Ah, ok. So after reading http://wiki.libvirt.org/page/Qemu_guest_agent some questions: can we use the out-of-the-box qemu-guest-agent with oVirt 3.2.1 if we have, for example, a CentOS 6 guest (with qemu-guest-agent-0.12.1.2-2.355)? Does oVirt create a virtio-serial port to communicate with libvirt, and does it use libvirt with any options to quiesce the fs on the guest?

Can one use the fsfreeze command in a CentOS 6.4 guest as an alternative (even if in tech preview), as in:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Technical_Notes/storage.html

Gianluca
Re: [Users] virt-v2v
The VM is a Linux-based Sophos UTM Manager, with a kernel based on version 3.3.8. Do you think that could be a problem?

----- Original Message -----
From: "Jonathan Horne"
To: supo...@logicworks.pt
Sent: Thursday, 18 April 2013 19:02:32
Subject: Re: [Users] virt-v2v

What is the OS of the virtual machine you are migrating? I had some errors myself; they were related to ovirt not providing the necessary virtio driver for a windows guest migration (I can create a windows machine at the console, but I cannot migrate one in without the virtio package). This isn't the same error as that, but it might be related, so that's why I ask: what is the OS of your virtual machine?

JONATHAN HORNE | Systems Administrator | Skopos Web
e: jho...@skopos.us t: 214.520.4600 x 5042 f: 214.520.5079

From: "supo...@logicworks.pt" <supo...@logicworks.pt>
Date: Thursday, April 18, 2013 12:55 PM
To: Jonathan Horne <jho...@skopos.us>
Subject: Re: [Users] virt-v2v

Yes, I got it. I no longer get the error complaining about the storage pool; now it runs the job to 100% and then gives another error message:

KVM_NAME.qcow2: 100% [=]D 0h09m54s
virt-v2v: Failed to launch guestfs appliance. Try running again with LIBGUESTFS_DEBUG=1 for more information

Any idea?

----- Original Message -----
From: "Jonathan Horne" <jho...@skopos.us>
To: supo...@logicworks.pt
Sent: Thursday, 18 April 2013 18:17:40
Subject: RE: [Users] virt-v2v

showmount -e [ip-address-of-nas]

From: supo...@logicworks.pt [mailto:supo...@logicworks.pt]
Sent: Thursday, April 18, 2013 4:26 AM
To: Shu Ming
Cc: Jonathan Horne; Users@ovirt.org
Subject: Re: [Users] virt-v2v

Yes, that's right, but the problem was with the syntax. After running the command, at the end I get this error message:

virt-v2v: Failed to launch guestfs appliance. Try running again with LIBGUESTFS_DEBUG=1 for more information

Any idea?
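The error message itself points to the next diagnostic step. A sketch of re-running the same import with libguestfs debugging enabled - the host, export path and VM name below are the hypothetical values from the thread, not real ones:

```shell
# Capture verbose libguestfs appliance output to see why it fails to launch
export LIBGUESTFS_DEBUG=1
export LIBGUESTFS_TRACE=1
virt-v2v -i libvirt -ic qemu+ssh://root@source-server/system \
  -o rhev -os nfs-server:/opt/nfs -of qcow2 -oa sparse \
  -n ovirtmgmt name-of-kvm-vm 2>&1 | tee /tmp/virt-v2v-debug.log
```

The `tee` keeps a copy of the (very chatty) debug output, which is what you would attach when reporting the failure to the list.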
----- Original Message -----
From: "Shu Ming" <shum...@linux.vnet.ibm.com>
To: supo...@logicworks.pt
Cc: "Jonathan Horne" <jho...@skopos.us>, Users@ovirt.org
Sent: Thursday, 18 April 2013 3:11:10
Subject: Re: [Users] virt-v2v

You can get the mount path from the admin portal: System ---> Storage ---> select a storage domain ---> in the General tab page you can find the NFS export path.

supo...@logicworks.pt wrote:
> Well, I run virt-v2v from the engine server; if I do a showmount -e it shows me the ISO domain. The NFS/export storage is located on a NAS on the LAN, so I think I need the path to that NAS - how can I find it?
>
> From: "Jonathan Horne" <jho...@skopos.us>
> To: supo...@logicworks.pt, Users@ovirt.org
> Sent: Wednesday, 17 April 2013 20:33:29
> Subject: RE: virt-v2v
>
> I imagine "showmount -e" will show you the NFS path. I recently imported some VMs from KVM, and my syntax was like this:
>
> virt-v2v -i libvirt -ic qemu+ssh://root@source-server/system -o rhev -os ovirt-exportnfs-server:/opt/nfs -of qcow2 -oa sparse -n ovirtmgmt exactname-of-kvm-vm
>
> jonathan
>
> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of supo...@logicworks.pt
> Sent: Wednesday, April 17, 2013 12:06 PM
> To: Users@ovirt.org
> Subject: [Users] virt-v2v
>
> Hi,
>
> I'm trying to migrate a VM from a qemu/KVM server. I'm not sure of the correct syntax:
>
> virt-v2v -ic qemu+ssh://root@server_ip_addr/system -op export_domain --bridge ovirtmgmt kvm_vm_name
>
> I have set up NFS/export storage on the manager for the export domain. How can I find the correct path of the export_domain?
>
> regards
>
> --
> Jose Ferradeira
> http://www.logicworks.pt
[Users] Master Storage Domain in status locked.
Greetings. Due to an unfortunate storage misconfiguration (iSCSI), the oVirt node was unable to mount its master storage domain after a power outage. During the troubleshooting I believe I managed to get the domain into some weird state where it now remains in status "locked". If I try to bring the (only) host/node online, it goes back to non-operational, since it is unable to mount the master storage domain. Rebooting the node and/or the engine makes no difference. Storage-wise, everything should be OK now. Any idea how to recover? vdsm.log attached.

With kind regards
Jonas

Thread-151::DEBUG::2013-04-18 20:40:00,024::BindingXMLRPC::913::vds::(wrapper) client [46.22.124.43]::call getCapabilities with () {}
Thread-151::DEBUG::2013-04-18 20:40:00,087::BindingXMLRPC::920::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, ...}
[attached getCapabilities dump truncated in the archive; it lists the host's package versions (vdsm 4.10.3, libvirt 0.10.2.3, qemu-kvm 1.2.2, mom 0.3.0, kernel 3.7.9 on Fedora 18), the ovirtmgmt/san/vminternet bridge and NIC configuration, cluster levels 3.0-3.2, and the cpuFlags of the AMD Opteron 6276 host]
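A few host-side checks that may help narrow down why the master domain stays locked - a sketch only: the `vdsClient` verbs are from vdsm 4.10-era installs, and the domain UUID is a placeholder you would take from your own setup:

```shell
# On the node: ask vdsm directly what storage it can see
vdsClient -s 0 getConnectedStoragePoolsList
vdsClient -s 0 getStorageDomainsList

# Inspect the master domain's metadata and role (UUID is a placeholder)
vdsClient -s 0 getStorageDomainInfo <master-domain-uuid>

# Verify the iSCSI sessions and multipath state are actually healthy again
iscsiadm -m session
multipath -ll
```

If vdsm can list and describe the domain but the engine still shows it locked, the stale state is on the engine side rather than on the node.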
Re: [Users] oVirt support for backup/restore
On 04/19/2013 10:33 AM, Gianluca Cecchi wrote:
> On Mon, Mar 4, 2013 at 2:02 PM, Itamar Heim wrote:
>> In general, you are correct - you don't care about live snapshot with memory, since you probably mostly care about backing up the disks. You may care, depending on the type of guest, about having a qemu-guest-agent installed, which would help with syncing the writes to the disks before the live snapshot is taken. Something to test, though, is whether a daily snapshot degrades the performance of the running VM, due to more COW layers (until the backup api and live merge are in place).
>
> Hello, coming back to this topic. When you say "you may care, depending on the type of guest, about having a qemu-guest-agent installed, which would help with syncing the writes to the disks", do you refer to something in particular, already implemented as a command or something similar? I have a CentOS 6 guest with rhev-guest-agent taken from the dreyou repo and would like to test what's available. Thanks, Gianluca

qemu-guest-agent isn't ovirt/rhev-guest-agent. Now that qemu has started its own guest agent, ovirt/rhev-guest-agent isn't used for things covered by the qemu-guest-agent.
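One way to gauge the COW-layer concern raised above is to walk a disk image's backing chain and count the layers a daily snapshot regime accumulates. A sketch - the image path is a placeholder (on an oVirt storage domain the active layer lives under the domain's images/ directory), and it assumes `qemu-img info` output in the usual "backing file:" form:

```shell
# Count the qcow2 backing-file layers behind a disk image by following
# each "backing file:" entry reported by qemu-img info.
img=/path/to/active-layer.qcow2   # placeholder: the VM's active disk image
depth=0
while [ -n "$img" ]; do
  depth=$((depth + 1))
  img=$(qemu-img info "$img" | sed -n 's/^backing file: \([^ ]*\).*/\1/p')
done
echo "chain depth: $depth"
```

Timing a read benchmark inside the guest at different chain depths would give concrete data on how much each extra snapshot layer costs.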
Re: [Users] oVirt support for backup/restore
On Mon, Mar 4, 2013 at 2:02 PM, Itamar Heim wrote:
> In general, you are correct - you don't care about live snapshot with memory, since you probably mostly care about backing up the disks.
> You may care, depending on the type of guest, about having a qemu-guest-agent installed, which would help with syncing the writes to the disks before the live snapshot is taken.
>
> Something to test, though, is whether a daily snapshot degrades the performance of the running VM, due to more COW layers (until the backup api and live merge are in place).

Hello, coming back to this topic. When you say "you may care, depending on the type of guest, about having a qemu-guest-agent installed, which would help with syncing the writes to the disks", do you refer to something in particular, already implemented as a command or something similar? I have a CentOS 6 guest with rhev-guest-agent taken from the dreyou repo and would like to test what's available.

Thanks,
Gianluca