[Users] Fwd: HostNic Addition with IP Address
Thanks Michael, I had an older version of the Python bindings in which bonding was a compulsory parameter for #1. Seeing your options I updated the SDK and now the error is resolved; I see bonding as an optional param.

Regards,
Rahul

-- Forwarded message --
From: Michael Pasternak mpast...@redhat.com
Date: Thu, Jul 5, 2012 at 7:22 PM
Subject: Re: [Users] HostNic Addition with IP Address
To: Rahul Upadhyaya rak...@gmail.com
Cc: users@ovirt.org

Hi Rahul,

#1 is the right place for doing this, see the method __doc__ [1].
#2 is used for attaching a network to a NIC, see its __doc__ [2].

* Note: all methods in the SDK are well documented, describing which parameter holder to use and how to fill it.

[1]
'''
@type HostNIC:

@param hostnic.network.id|name: string
@param hostnic.name: string
[@param hostnic.bonding.slaves.host_nic: collection]
{
  [@ivar host_nic.id|name: string]
}
[@param hostnic.bonding.options.option: collection]
{
  [@ivar option.name: string]
  [@ivar option.value: string]
  [@ivar type: string]
}
[@param hostnic.ip.gateway: string]
[@param hostnic.boot_protocol: string]
[@param hostnic.mac: string]
[@param hostnic.ip.address: string]
[@param hostnic.ip.netmask: string]

@return HostNIC:
'''

[2]
'''
@type Action:

@param action.network.id|name: string
[@param action.async: boolean]
[@param action.grace_period.expiry: long]

@return Response:
'''

On 07/05/2012 03:00 PM, Rahul Upadhyaya wrote:

Hi Folks,

I am facing issues while adding a NIC with a static IP to a host using the oVirt Python bindings. There are two APIs:

1) HostNics.add: this requires the bonding param for interface bonding as a compulsory parameter. I don't want to bond more than one interface for a network each time I create a network.

2) HostNic.attach: this lets me attach the NIC but does not let me set a static IP address on it as soon as I add it. I am using a workaround of updating the NIC with the IP information, but it requires me to put all the hosts into maintenance and shut down all the running VMs, which again is not the best way of doing it.

Does the API not support specifying the IP address at the time the HostNIC is added, or is it something I am missing? Also, I see that this operation is supported from the manager UI.

--
Regards,
Rahul

--
Michael Pasternak
RedHat, ENG-Virtualization R&D
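For reference, here is a minimal, untested sketch of what option #1 can look like with the updated SDK, following the HostNics.add __doc__ quoted in [1]. The engine URL, credentials, host name, NIC name, network name and IP values below are all placeholders.

# Minimal sketch: add a host NIC with a static IP through the oVirt Python
# SDK (HostNics.add), filling the parameter holder documented in [1].
# The URL, credentials and all names/addresses are placeholders.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com:8443/api',
          username='admin@internal',
          password='secret')
try:
    host = api.hosts.get(name='myhost')
    nic = params.HostNIC(
        name='eth1',
        network=params.Network(name='storage'),
        boot_protocol='static',
        ip=params.IP(address='192.168.1.10',
                     netmask='255.255.255.0',
                     gateway='192.168.1.1'),
    )
    host.nics.add(nic)   # bonding is optional, so it is simply omitted here
finally:
    api.disconnect()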
Re: [Users] oVirt 3.1 and Glusterfs how-to
On 07/05/2012 10:08 PM, Robert Middleswarth wrote:
On 07/05/2012 02:19 PM, Itamar Heim wrote:
On 07/05/2012 10:06 AM, Robert Middleswarth wrote:

2) You cannot change which interface / IP gluster uses; it will always use the ovirtmgmt network. This is a weakness, as many people have an independent network just for storage and they can't use it with 3.1.

Sorry, I don't understand this one - only the management of gluster is done via this interface. You can define a different logical network in the cluster for storage, configure IP addresses for it, and define the mount point that way. (Well, apart from potential bugs in network definitions in 3.1 which may still exist.)

Unless I am missing something, if I set up Gluster to use anything other than the ovirtmgmt network then I can't create volumes, because the engine uses the ovirtmgmt network's IPs and Gluster doesn't recognize them.

The engine uses ovirtmgmt to create them. You can define another network to consume them (assuming you have more than a single NIC; otherwise I'm not sure why it matters).

Are you suggesting that I add both networks' IPs into Gluster? That would work, but how would I know which network Gluster would use to sync up with?

I can think of two items:

1. A network for hosts running VMs to communicate over - I assume it will be based on the URL of the export you provide (well, for NFS; for native/posix, not sure how redirection will work). Maybe worth adding something similar to the 'display network' - telling gluster which interface should be provided for clients to communicate over.

2. Communication between gluster nodes for replication, etc. Maybe worth defining something like a 'live migration network' or 'storage network' [1] - telling nodes which interface they should use for replication between nodes.

[1] Both the storage and live migration networks are still not available in oVirt today, just a concept. The display network is available today.

Thanks
Robert
Re: [Users] linux desktop
On 07/06/2012 11:54 AM, Umarzuki Mochlis wrote:

Hi,

Is it possible to create some sort of virtual desktop for teaching/VPN purposes where we can use LDAP for the login credentials for every virtual desktop?

If you set up oVirt and the virtual desktops with the same directory, you would even get SSO (the user would need to log in to the user portal, but not to the guest again afterwards).
Re: [Users] The problem with spicevmc not supported in this QEMU binary
On 07/06/2012 06:46 AM, xuejie chen wrote:

Hi everyone,

I installed vdsm (4.9.6) on CentOS 6 and the libvirt version is 0.9.4. I created a VM with all default values, but when I run the VM it fails with the following error message in WebAdmin:

unsupported configuration: spicevmc not supported in this QEMU binary

The log files are in the attachment.

Best wishes,
Xuejie Chen

I assume you need to get a different version of qemu. CC-ing spice-devel.
Re: [Users] What is it going to take to get EL6 builds?
On 07/06/2012 01:05 AM, Robert Middleswarth wrote:

I know there are a few things that don't work under oVirt on EL6, but there are unofficial builds out there and they seem to work pretty well. What is the major stopper to getting EL6 builds? Is it just a matter of getting patches submitted for the spec files? Is there a need for EL6-based build slaves? Is there a concern about the features that don't work, like Live Migration? I guess a good starting point is to build a todo list of what has to be done.

Just time. I see both EL6 and Debian distros as next on the list, but the current focus is on getting 3.1 out, then additional distros. Help on pushing other distros is welcome, of course.
Re: [Users] iSCSI discovery not showing all LUNs - oVirt 3.1
On Fri, Jul 6, 2012 at 8:07 AM, Itamar Heim ih...@redhat.com wrote:
On 07/05/2012 06:08 PM, Trey Dockendorf wrote:

I have a Promise M300i iSCSI with 2 LUNs: a 2TB LUN with ID 2260-0001-557c-af0a and a 4TB LUN with ID 22d9-0001-553e-4d6a. What's strange is that the very first time I ran discovery I saw both LUNs. I checked the 2TB LUN and the storage failed to add (I don't have logs from that attempt), but when I went back to repeat the process only 1 LUN shows in the GUI (see attached image). Also, the size it reports is way off.

Looking at the VDSM logs, I get this output when doing the login to a target:

{'devList': [
    {'vendorID': 'Promise',
     'capacity': '2188028149760',
     'fwrev': '0227',
     'partitioned': False,
     'vgUUID': 'AZ1iMt-gzBD-2uug-xTih-1z0b-PqPy-xSP0A4',
     'pathlist': [{'initiatorname': 'default',
                   'connection': '192.168.203.100',
                   'iqn': 'iqn.1994-12.com.promise.xxx',
                   'portal': '1',
                   'password': '**',
                   'port': '3260'}],
     'logicalblocksize': '512',
     'pathstatus': [{'physdev': 'sde',
                     'type': 'iSCSI',
                     'state': 'active',
                     'lun': '0'}],
     'devtype': 'iSCSI',
     'physicalblocksize': '512',
     'pvUUID': 'v2N3ok-wrki-OQQn-1XFL-w69n-8wAF-rmCFWt',
     'serial': 'SPromise_VTrak_M300i_F08989F89FFF6C42',
     'GUID': '22261557caf0a',
     'productID': 'VTrak M300i'},
    {'vendorID': 'Promise',
     'capacity': '20246190096384',
     'fwrev': '0227',
     'partitioned': False,
     'vgUUID': '',
     'pathlist': [{'initiatorname': 'default',
                   'connection': '192.168.203.100',
                   'iqn': 'iqn.1994-12.com.promise.xxx',
                   'portal': '1',
                   'password': '**',
                   'port': '3260'}],
     'logicalblocksize': '2048',
     'pathstatus': [{'physdev': 'sdf',
                     'type': 'iSCSI',
                     'state': 'active',
                     'lun': '1'}],
     'devtype': 'iSCSI',
     'physicalblocksize': '2048',
     'pvUUID': '',
     'serial': 'SPromise_VTrak_M300i_DA3FF8D8099662D7',
     'GUID': '222d90001553e4d6a',
     'productID': 'VTrak M300i'}
]}

In that output both LUNs are seen. I couldn't tell from the code what format the capacity is in, but now the interface shows only the LUN with the 4d6a GUID, reported as 18TB. I've attached the VDSM logs from the point of selecting my datacenter to after clicking Login. Any suggestions?

node - vdsm-4.10.0-2.el6.x86_64

Thanks
- Trey

The LUN you don't see is 'dirty' and vdsm filters it. There are some patches for showing all LUNs and just graying them out at the UI level (but these are post oVirt 3.1). dd'ing zeros over the start of your LUN should bring it back.

I re-initialized the RAID array and attempted adding the storage domain, which resulted in failure again. This is the error in the web interface:

Error: Cannot attach Storage. Storage Domain doesn't exist.

I've attached a vdsm log snapshot from right before clicking Ok through the error. ovirt-engine is 3.1 and vdsm is 4.10.0-4. Both engine and node are CentOS 6.2.

I attempted to run the failing command manually:

# /sbin/lvm pvcreate --config " devices { preferred_names = [\"^/dev/mapper/\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \"a%1ATA_ST32000644NS_9WM7SV9Y|1ATA_ST32000644NS_9WM7ZXVC|22261557caf0a%\", \"r%.*%\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --metadatasize 128m --metadatacopies 2 --metadataignore y /dev/mapper/22261557caf0a

Can't open /dev/mapper/22261557caf0a exclusively. Mounted filesystem?

What's strange is that fuser shows nothing using that path, or the /dev/dm-4 path it references.
However the device created in dmesg (/dev/sde) does show usage:

# ls -la /dev/mapper/
total 0
drwxr-xr-x.  2 root root    180 Jul  6 15:23 .
drwxr-xr-x. 20 root root   4020 Jul  6 15:27 ..
lrwxrwxrwx.  1 root root      7 Jul  6 15:12 1ATA_ST32000644NS_9WM7SV9Y -> ../dm-2
lrwxrwxrwx.  1 root root      7 Jul  6 15:12 1ATA_ST32000644NS_9WM7ZXVC -> ../dm-3
lrwxrwxrwx.  1 root root      7 Jul  6 15:27 22261557caf0a -> ../dm-4
crw-rw----.  1 root root 10, 58 Jul  6 15:11 control
lrwxrwxrwx.  1 root root      7 Jul  6 15:23 ef7e7c07--f144--4843--8526--4afd0ec33368-metadata -> ../dm-5
lrwxrwxrwx.  1 root root      7 Jul  6 15:11 vg_dhv01-lv_root -> ../dm-1
lrwxrwxrwx.  1 root root      7 Jul  6 15:11 vg_dhv01-lv_swap -> ../dm-0

[root@dhv01 ~]# fuser
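For reference, a minimal Python sketch of the suggestion above to zero the start of the LUN so vdsm no longer filters it as dirty. The device path is taken from this thread; the 100 MiB wipe size is an assumption, and the operation destroys whatever metadata sits at the start of the LUN, so it should only be run against a LUN you intend to re-use from scratch.

# Sketch: overwrite the first 100 MiB of the multipath device with zeros,
# which has the same effect as the dd-based suggestion earlier in the thread.
# DEVICE and WIPE_BYTES are assumptions; run as root, and only on a LUN whose
# existing contents you intend to discard.
import os

DEVICE = '/dev/mapper/22261557caf0a'   # multipath device named in the thread
WIPE_BYTES = 100 * 1024 * 1024         # wipe the first 100 MiB (assumed size)
CHUNK = 1024 * 1024                    # write 1 MiB at a time

def zero_start(device, total, chunk):
    """Overwrite the first `total` bytes of `device` with zeros."""
    zeros = b'\x00' * chunk
    fd = os.open(device, os.O_WRONLY)
    try:
        written = 0
        while written < total:
            written += os.write(fd, zeros)
        os.fsync(fd)
    finally:
        os.close(fd)

if __name__ == '__main__':
    zero_start(DEVICE, WIPE_BYTES, CHUNK)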