[ovirt-users] 3.6.5.3: UI Uncaught Exception: New VM from Template

2016-05-07 Thread Richard Chan
Hello list, I hope you can help with a UI exception from New VM from
Template.

1. User (PowerUser) is able to create New VMs manually
2. New VM from template gives a UI exception:

__gwt$exception: : Cannot read property 'f' of undefined

2016-05-08 11:04:50,875 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-28) [] Permutation name: 56114F8548175924C03C3BC67436871E
2016-05-08 11:04:50,875 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-28) [] Uncaught exception: :
com.google.gwt.core.client.JavaScriptException: (TypeError)
 __gwt$exception: : f is undefined
at
org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel.$updateImageToDestinationDomainMap(DisksAllocationModel.java:288)
at
org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel.$getImageToDestinationDomainMap(DisksAllocationModel.java:130)
at
org.ovirt.engine.ui.uicommonweb.models.templates.VmBaseListModel.$saveNewVm(VmBaseListModel.java:354)
at
org.ovirt.engine.ui.uicommonweb.models.templates.VmBaseListModel.$onSaveVM(VmBaseListModel.java:331)
at
org.ovirt.engine.ui.uicommonweb.models.templates.VmBaseListModel$6.onSuccess(VmBaseListModel.java:309)
at
org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:244)
[frontend.jar:]
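While the New VM from Template dialog is broken, a VM can usually still be created from a template through the REST API. Below is a minimal sketch that builds the request body for POST /api/vms with the Python standard library; the VM, template, and cluster names (and the engine URL in the comment) are placeholders, not values from this report:

```python
import xml.etree.ElementTree as ET

def build_vm_from_template_xml(vm_name, template_name, cluster_name):
    """Build the XML body for POST /api/vms in the oVirt 3.x REST API."""
    vm = ET.Element("vm")
    ET.SubElement(vm, "name").text = vm_name
    template = ET.SubElement(vm, "template")
    ET.SubElement(template, "name").text = template_name
    cluster = ET.SubElement(vm, "cluster")
    ET.SubElement(cluster, "name").text = cluster_name
    return ET.tostring(vm, encoding="unicode")

body = build_vm_from_template_xml("myvm", "mytemplate", "Default")
print(body)
# Send with, for example:
#   curl -k -u 'user@domain:password' -H 'Content-Type: application/xml' \
#        -d "$BODY" https://engine.example.com/api/vms
```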





-- 
Richard Chan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problems exporting VM's

2016-05-07 Thread Luciano Natale
Hi everyone. I've been having trouble exporting VMs: I get an error when
moving the image. I've created a whole new storage domain exclusively for
this, and the same thing happens. It's not always the same VM that fails, but
once one fails on a certain storage domain, I cannot export it anymore.
Please tell me which logs are relevant so I can post them, along with any
other relevant information I can provide, and maybe someone can help me get
through this problem.

oVirt version is 3.5.4.2-1.el6. The hosted engine is CentOS 6. Hosts are
CentOS 7. VMs are all CentOS 7, except for two that are CentOS 6 and
Windows 7.

Please excuse my bad English!
Thanks in advance!
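For export failures, the logs people usually ask for are /var/log/ovirt-engine/engine.log on the engine and /var/log/vdsm/vdsm.log on the host that performs the image copy. A small, hypothetical helper (not part of oVirt) to pull out the error lines with some context before posting:

```python
import re

def error_context(lines, pattern="ERROR", context=3):
    """Return slices of `lines` around each line matching `pattern`."""
    hits = []
    for i, line in enumerate(lines):
        if re.search(pattern, line):
            start = max(0, i - context)
            hits.append(lines[start:i + context + 1])
    return hits

# Usage (on the engine host):
#   with open("/var/log/ovirt-engine/engine.log") as f:
#       for chunk in error_context(f.readlines()):
#           print("".join(chunk), "---")
```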

-- 
Luciano Natale


Re: [ovirt-users] configuring bonding on host

2016-05-07 Thread Fabrice Bacchella

> On 7 May 2016 at 18:18, Juan Hernández wrote:
> 
> On 05/06/2016 05:20 PM, Fabrice Bacchella wrote:
>> I'm following the example given
>> in http://www.ovirt.org/develop/api/pythonapi/ for bonding interfaces.
>> 

>> What am I missing ?
>> 
> 
> The example that you mention describes the old and deprecated
> /hosts/{host:id}/nics/setupnetworks action, but you are sending the
> request to /hosts/{host:id}/setupnetworks, which just ignores the
> "host_nics" elements that you are sending. There is an example of how to
> use the newer action here:
> 
> 
> https://jhernand.fedorapeople.org/ovirt-api-explorer/#/services/host/methods/setup-networks

OK, I got it.

The sample says:  host.nics.setupnetworks(...)
and my code says:  host.setupnetworks(...)

But why does it silently ignore it? Shouldn't it throw an error?

I think I will soon have more questions about what counts as a modified or
removed object when creating a bond, but I need to play with it a little more first.
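For reference, the newer /hosts/{host:id}/setupnetworks action takes the bond under modified_bonds and the network under modified_network_attachments. The sketch below builds such a request body with the Python standard library; the element names are my reading of the explorer page linked above, so double-check them against it before relying on this:

```python
import xml.etree.ElementTree as ET

def build_setupnetworks_xml(bond_name, slave_names, network_name):
    """Build an <action> body for POST /api/hosts/{id}/setupnetworks (3.6+)."""
    action = ET.Element("action")

    # The bond itself goes under modified_bonds.
    bonds = ET.SubElement(action, "modified_bonds")
    nic = ET.SubElement(bonds, "host_nic")
    ET.SubElement(nic, "name").text = bond_name
    bonding = ET.SubElement(nic, "bonding")
    slaves = ET.SubElement(bonding, "slaves")
    for slave in slave_names:
        s = ET.SubElement(slaves, "host_nic")
        ET.SubElement(s, "name").text = slave

    # The network is attached to the bond via modified_network_attachments.
    atts = ET.SubElement(action, "modified_network_attachments")
    att = ET.SubElement(atts, "network_attachment")
    ET.SubElement(ET.SubElement(att, "network"), "name").text = network_name
    ET.SubElement(ET.SubElement(att, "host_nic"), "name").text = bond_name

    ET.SubElement(action, "check_connectivity").text = "true"
    return ET.tostring(action, encoding="unicode")

print(build_setupnetworks_xml("bond0", ["eth0", "eth1"], "ovirtmgmt"))
```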



Re: [ovirt-users] configuring bonding on host

2016-05-07 Thread Juan Hernández
On 05/06/2016 05:20 PM, Fabrice Bacchella wrote:
> I'm following the example given
> in http://www.ovirt.org/develop/api/pythonapi/ for bonding interfaces.
> 
> I'm checking that the network is a plain configuration,
> exporting /api/hosts//nics return :
> 
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/958c40cd-9ddb-4548-8bd8-79f454021c35"
> id="958c40cd-9ddb-4548-8bd8-79f454021c35">
> 
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/958c40cd-9ddb-4548-8bd8-79f454021c35/attach"
> rel="attach"/>
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/958c40cd-9ddb-4548-8bd8-79f454021c35/detach"
> rel="detach"/>
> 
> eth1
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/958c40cd-9ddb-4548-8bd8-79f454021c35/statistics"
> rel="statistics"/>
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/958c40cd-9ddb-4548-8bd8-79f454021c35/labels"
> rel="labels"/>
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/958c40cd-9ddb-4548-8bd8-79f454021c35/networkattachments"
> rel="networkattachments"/>
>  id="db240f83-9266-4892-a6d2-8ac406cadfb1"/>
> 
> 
> none
> 
> down
> 
> 1500
> false
> 
> 
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/87a274e8-9633-45df-9205-1d188bd3ee4c"
> id="87a274e8-9633-45df-9205-1d188bd3ee4c">
> 
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/87a274e8-9633-45df-9205-1d188bd3ee4c/attach"
> rel="attach"/>
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/87a274e8-9633-45df-9205-1d188bd3ee4c/detach"
> rel="detach"/>
> 
> eth0
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/87a274e8-9633-45df-9205-1d188bd3ee4c/statistics"
> rel="statistics"/>
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/87a274e8-9633-45df-9205-1d188bd3ee4c/labels"
> rel="labels"/>
>  href="/api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/nics/87a274e8-9633-45df-9205-1d188bd3ee4c/networkattachments"
> rel="networkattachments"/>
>  id="db240f83-9266-4892-a6d2-8ac406cadfb1"/>
>  id="f429c46c-fed4-4c88-a000-36c021f5d633"/>
> 
>  address="10.83.17.24"/>
> dhcp
> 100
> 
> up
> 
> 9000
> true
> false
> 
> 
> 
> I send my configuration and get :
> 
>> POST /api/hosts/db240f83-9266-4892-a6d2-8ac406cadfb1/setupnetworks
> HTTP/1.1
> ...
>> my configuration
> 
> < HTTP/1.1 200 OK
> 
> < 
> < 
> < 
> < 
> < bond0
> < 
> < ovirtmgmt
> < 
> <  gateway="10.83.31.254"/>
> < 
> < 
> < 
> < 
> < 
> < 
> < 
> < 
> < eth0
> < 
> < 
> < none
> < 9000
> < 
> < 
> < eth1
> < 
> < 
> < none
> < 9000
> < 
> < 
> < 
> < static
> < 9000
> < true
> < 
> < 
> < true
> < false
> <  id="859bc27c-2060-4349-a0f5-dc1dd6333e6c"/>
> < 
> < complete
> < 
> < 
> 
> 
> So every thing is fine, I applied my configuration.
> 
> But in the log, I get :
> 2016-05-06 17:13:22,481 INFO
>  [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand]
> (default task-20) [30e54e04] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[db240f83-9266-4892-a6d2-8ac406cadfb1=  ACTION_TYPE_FAILED_SETUP_NETWORKS_IN_PROGRESS>]',
> sharedLocks='null'}'
> 2016-05-06 17:13:22,555
> INFO  [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand]
> (default task-20) [30e54e04] Running command: HostSetupNetworksCommand
> internal: false. Entities affected :  ID:
> db240f83-9266-4892-a6d2-8ac406cadfb1 Type: VDSAction group
> CONFIGURE_HOST_NETWORK with role type ADMIN
> 2016-05-06 17:13:22,555
> INFO  [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand]
> (default task-20) [30e54e04] No changes were detected in setup networks
> for host 'nb0101' (ID: 'db240f83-9266-4892-a6d2-8ac406cadfb1')
> 2016-05-06 17:13:22,563
> INFO  [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand]
> (default task-20) [30e54e04] Lock freed to object
> 'EngineLock:{exclusiveLocks='[db240f83-9266-4892-a6d2-8ac406cadfb1=  ACTION_TYPE_FAILED_SETUP_NETWORKS_IN_PROGRESS>]',
> sharedLocks='null'}'
> 
> And indeed my configuration is not changed.
> 
> What am I missing ?
> 

The example that you mention describes the old and deprecated
/hosts/{host:id}/nics/setupnetworks action, but you are sending the
request to /hosts/{host:id}/setupnetworks, which just ignores the
"host_nics" elements that you are sending. There is an example of how to
use the newer action here:

https://jhernand.fedorapeople.org/ovirt-api-explorer/#/services/host/methods/setup-networks

Re: [ovirt-users] virt-in-virt problem: DHCP failing for a container in a oVirt VM

2016-05-07 Thread Yaniv Kaul
On Fri, May 6, 2016 at 11:07 PM, Will Dennis  wrote:

> That’s in iptables, right? I have iptables disabled on my oVirt nodes...
>

No, it's an L2 filter that libvirt sets up, I believe using ebtables.
Y.
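If that filter turns out to be the culprit, the value Yaniv mentions is an engine-config option. Something like the following on the engine host should disable it, assuming the 3.x engine-config key; treat this as a sketch and verify the key with `engine-config -l` first:

```shell
# On the engine host: disable the MAC anti-spoofing filter rules,
# then restart the engine so the change takes effect.
engine-config -s EnableMACAntiSpoofingFilterRules=false
systemctl restart ovirt-engine    # or 'service ovirt-engine restart' on EL6

# VMs must be powered off and started again to pick up the new filter set.
```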


>
>
> *From:* Yaniv Kaul [mailto:yk...@redhat.com]
> *Sent:* Friday, May 06, 2016 3:50 PM
> *To:* Will Dennis
> *Subject:* Re: [ovirt-users] virt-in-virt problem: DHCP failing for a
> container in a oVirt VM
>
>
>
> Long shot - you need to disable EnableMACAntiSpoofingFilterRules.
>
> Y.
>
>
>
> On Fri, May 6, 2016 at 8:27 PM, Will Dennis  wrote:
>
> Hi all,
>
>
>
> Have an interesting problem – I am running a VM in oVirt that is running
> Proxmox VE 4.1 OS, which I have spun up a container on.  The container is
> set for DHCP, and I have verified that it is sending Discover packets as
> normal, and that these packets are making it out of the Proxmox VM to the
> oVirt bridge (which is attached to a VLAN sub-interface of a bond
> interface.) However, these packets do NOT make it past the oVirt bridge.
> The interesting thing is that the Proxmox VM (as well as any other VM I
> spin up on oVirt) works fine with DHCP. (I also have other oVirt VMs
> instantiated which are using LXD to spin up containers, and I have the same
> problem with those as well.) I checked a bunch of stuff, and the only clue
> I could find is that it seems that the oVirt bridge is not learning the MAC
> for the container on the VM, even though it does learn the VM’s MAC, but I
> can capture DHCP traffic coming from the container off the ‘vnet0’
> interface which is joined to that bridge...
>
>
>
> Info:
>
>
>
> = off Proxmox VM =
>
>
>
> Container's MAC address: 32:62:65:61:65:33
>
>
>
> root@proxmox-02:~# ip link sh
>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode
> DEFAULT group default
>
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> master vmbr0 state UP mode DEFAULT group default qlen 1000
>
> link/ether 00:1a:4a:16:01:57 brd ff:ff:ff:ff:ff:ff
>
> 3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
> UP mode DEFAULT group default
>
> link/ether 00:1a:4a:16:01:57 brd ff:ff:ff:ff:ff:ff
>
> 7: veth100i0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
>
> link/ether fe:50:4f:3c:bd:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> <<< veth connection to container
>
>
>
> root@proxmox-02:~# brctl showmacs vmbr0
>
> port no mac addr    is local?   ageing timer
>
>   1 00:12:3f:24:a4:54   no   112.88
>
>   1 00:1a:4a:16:01:56   no 0.02
>
>   1 00:1a:4a:16:01:57   yes0.00
>
>   1 00:1a:4a:16:01:57   yes0.00
>
>   1 00:24:50:dd:a2:05   no 1.37
>
>   1 18:03:73:e3:be:5a   no21.04
>
>   1 18:03:73:e3:ca:24   no 4.23
>
>   1 18:03:73:e3:cb:5b   no48.41
>
>   1 18:03:73:e3:cc:e5   no91.93
>
>   1 18:03:73:e3:cd:b8   no   151.04
>
>   1 18:03:73:e3:ce:43   no 0.80
>
>   1 18:03:73:e3:d0:a4   no   290.74
>
>   1 18:03:73:e3:d4:26   no34.06
>
>   1 18:03:73:e3:d5:3d   no 6.36
>
>   1 18:03:73:e4:23:08   no88.76
>
>   1 18:03:73:e4:25:92   no   111.86
>
>   1 18:03:73:e4:26:2f   no 9.54
>
>   1 18:03:73:e4:2b:4c   no   114.86
>
>   1 18:03:73:e4:31:15   no   263.91
>
>   1 18:03:73:e4:6c:19   no 6.36
>
>   1 18:03:73:e4:7e:0a   no   103.06
>
>   1 18:03:73:e8:16:e0   no23.21
>
>   2 32:62:65:61:65:33   no 5.08   <<< container’s
> MAC learned on Proxmox bridge
>
>   1 34:17:eb:9b:e0:29   no   265.22
>
>   1 34:17:eb:9b:f8:ea   no   114.86
>
>   1 44:d3:ca:7e:3c:ff   no 0.00
>
>   1 78:2b:cb:3b:ca:b9   no   284.70
>
>   1 78:2b:cb:92:cb:cb   no   279.70
>
>   1 78:2b:cb:93:08:a8   no   287.05
>
>   1 b8:ca:3a:7a:70:63   no 4.83
>
>   1 f8:bc:12:69:bb:a3   no   121.82
>
>   2 fe:50:4f:3c:bd:b8   yes0.00
>
>   2 fe:50:4f:3c:bd:b8   yes0.00
>
>
>
> = off oVirt node that has Proxmox VM =
>
>
>
> (relevant lines from ‘ip link show’)
>
> 2: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP mode DEFAULT
>
> 3: enp4s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast master bond0 state UP