Hi Jaime,

It is happening for all the interfaces, and we are using the bridging compatibility layer, but the network mode we have defined in the template is openvswitch, not the default (bridge).
This might be a use case where the physical machines are simply powered back on after an unclean power-off, so the old ports are never removed from the database.
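For completeness, this is roughly how the relevant pieces are defined on our side (the host name, network name and address are illustrative, not our exact production values):

    # Host registered with the Open vSwitch network driver
    onehost create host1 --im im_kvm --vm vmm_kvm --net ovswitch

    # Virtual network template attached to the vbr1 bridge
    NAME   = "ovs_net"
    TYPE   = FIXED
    BRIDGE = vbr1
    LEASES = [ IP = 192.168.15.100 ]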


Regards,

Carlos.


On 02/05/2013 12:12 PM, Jaime Melis wrote:
Hi,

The expected behaviour is for the vnet to go away after the VM shuts down (the hypervisor should run brctl delif ...). Is this happening only for a few interfaces or for all of them? Are you using the bridging compatibility layer?
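For an Open vSwitch port, the equivalent manual cleanup would be something along the lines of (using the bridge and port names from your output):

    # ovs-vsctl del-port vbr1 vnet0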

regards,
Jaime


On Tue, Feb 5, 2013 at 11:46 AM, Carlos Jiménez <[email protected]> wrote:

    Hi all,

    We're running OpenNebula 3.8.3 with Open vSwitch and we've found
    an issue. Once the frontend and the host are started, the VMs
    appear in the Pending state, move to Prolog, and then go back to
    Pending.

    This is the output of oned.log:

    Tue Feb  5 11:30:04 2013 [DiM][D]: Deploying VM 0
    Tue Feb  5 11:30:04 2013 [ReM][D]: Req:5360 UID:0
    VirtualMachineDeploy result SUCCESS, 0
    Tue Feb  5 11:30:07 2013 [TM][D]: Message received: LOG I 0 clone:
    Cloning /var/lib/one/datastores/1/d76d1fd89f175e1027f8506978165c03
    in host1:/var/lib/one//datastores/0/0/disk.0
    Tue Feb  5 11:30:07 2013 [TM][D]: Message received: LOG I 0
    ExitCode: 0
    Tue Feb  5 11:30:07 2013 [TM][D]: Message received: LOG I 0 ln:
    Linking /var/lib/one/datastores/1/923331c1aeb5a587dd428d0b8607ff29
    in host1:/var/lib/one//datastores/0/0/disk.1
    Tue Feb  5 11:30:07 2013 [TM][D]: Message received: LOG I 0
    ExitCode: 0
    Tue Feb  5 11:30:07 2013 [TM][D]: Message received: TRANSFER
    SUCCESS 0 -
    Tue Feb  5 11:30:08 2013 [VMM][D]: Message received: LOG I 0
    ExitCode: 0
    Tue Feb  5 11:30:08 2013 [VMM][D]: Message received: LOG I 0
    Successfully execute network driver operation: pre.
    Tue Feb  5 11:30:08 2013 [VMM][D]: Message received: LOG I 0
    Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy
    /var/lib/one//datastores/0/0/deployment.24 host1 0 host1
    Tue Feb  5 11:30:08 2013 [VMM][D]: Message received: LOG I 0
    error: Failed to create domain from
    /var/lib/one//datastores/0/0/deployment.24
    Tue Feb  5 11:30:08 2013 [VMM][D]: Message received: LOG I 0
    error: Unable to add bridge vbr1 port vnet0: Invalid argument
    Tue Feb  5 11:30:08 2013 [VMM][D]: Message received: LOG E 0 Could
    not create domain from /var/lib/one//datastores/0/0/deployment.24
    Tue Feb  5 11:30:08 2013 [VMM][D]: Message received: LOG I 0
    ExitCode: 255
    Tue Feb  5 11:30:08 2013 [VMM][D]: Message received: LOG I 0
    Failed to execute virtualization driver operation: deploy.
    Tue Feb  5 11:30:08 2013 [VMM][D]: Message received: DEPLOY
    FAILURE 0 Could not create domain from
    /var/lib/one//datastores/0/0/deployment.24

    We've realised that one tries to create a vnetX interface, but
    that interface is already present in the Open vSwitch database,
    so it is unable to add it and therefore cannot create the VM.
    This is the corresponding output from Open vSwitch:
    # ovs-vsctl show
    6725e67a-3af1-4fdf-9dfe-f606d09918a8
        Bridge "vbr1"
            Port "bond0"
                Interface "bond0"
            Port "vbr1"
                Interface "vbr1"
                    type: internal
        ovs_version: "1.4.3"

    We've managed to solve it by manually deleting those interfaces
    from the Open vSwitch database, and immediately one has been able
    to create the VMs.
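    For the record, the manual fix was one command per stale port,
    along the lines of:

        # ovs-vsctl del-port vbr1 vnet0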
    This is the output:

    Tue Feb  5 11:31:37 2013 [TM][D]: Message received: LOG I 0
    clone: Cloning
    /var/lib/one/datastores/1/d76d1fd89f175e1027f8506978165c03 in
    host1:/var/lib/one//datastores/0/0/disk.0
    Tue Feb  5 11:31:37 2013 [TM][D]: Message received: LOG I 0
    ExitCode: 0
    Tue Feb  5 11:31:37 2013 [TM][D]: Message received: LOG I 0 ln:
    Linking /var/lib/one/datastores/1/923331c1aeb5a587dd428d0b8607ff29
    in host1:/var/lib/one//datastores/0/0/disk.1
    Tue Feb  5 11:31:37 2013 [TM][D]: Message received: LOG I 0
    ExitCode: 0
    Tue Feb  5 11:31:37 2013 [TM][D]: Message received: TRANSFER
    SUCCESS 0 -
    Tue Feb  5 11:31:38 2013 [VMM][D]: Message received: LOG I 0
    ExitCode: 0
    Tue Feb  5 11:31:38 2013 [VMM][D]: Message received: LOG I 0
    Successfully execute network driver operation: pre.
    Tue Feb  5 11:31:38 2013 [VMM][D]: Message received: LOG I 0
    ExitCode: 0
    Tue Feb  5 11:31:38 2013 [VMM][D]: Message received: LOG I 0
    Successfully execute virtualization driver operation: deploy.
    Tue Feb  5 11:31:38 2013 [VMM][D]: Message received: LOG I 0 post:
    Executed "sudo /usr/bin/ovs-ofctl add-flow vbr1
    in_port=2,dl_src=02:00:c0:a8:0f:64,priority=40000,actions=normal".
    Tue Feb  5 11:31:38 2013 [VMM][D]: Message received: LOG I 0 post:
    Executed "sudo /usr/bin/ovs-ofctl add-flow vbr1
    in_port=2,priority=39000,actions=drop".
    Tue Feb  5 11:31:38 2013 [VMM][D]: Message received: LOG I 0
    ExitCode: 0
    Tue Feb  5 11:31:38 2013 [VMM][D]: Message received: LOG I 0
    Successfully execute network driver operation: post.
    Tue Feb  5 11:31:38 2013 [VMM][D]: Message received: DEPLOY
    SUCCESS 0 one-0

    Is there any way to manage this? We've thought of writing a
    script, along the lines of the sketch below, to automatically
    check for stale ports every time we restart the servers, but
    perhaps there is already a better way that we don't know about.
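    A minimal sketch of the kind of script we have in mind, run once
    at host boot before any VM is deployed (the bridge name and the
    vnet* naming pattern are assumptions taken from our setup):

        #!/bin/sh
        # Remove stale vnet* ports left in the Open vSwitch database
        # after an unclean power-off, so libvirt can re-add them.
        BRIDGE=vbr1
        for port in $(ovs-vsctl list-ports "$BRIDGE"); do
            case "$port" in
            vnet*)
                # Delete only ports whose kernel interface is gone,
                # i.e. DB entries that survived the reboot.
                if ! ip link show "$port" >/dev/null 2>&1; then
                    ovs-vsctl del-port "$BRIDGE" "$port"
                fi
                ;;
            esac
        done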


    Thanks in advance,

    Carlos.






--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | [email protected]

_______________________________________________
Users mailing list
[email protected]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
