[Openstack] openflow FLOOD data can not go through br-int to br-tun

2013-05-07 Thread Liu Wenmao
hi all:

I have set up quantum+floodlight with one compute node and one controller node.
I created a VM on the compute node, but the VM (100.0.0.4) cannot ping its
gateway (100.0.0.1), which lives on the controller node.

When the VM sends an ARP request to the OVS on the compute node, a packet_in
is sent to the controller, and the controller replies with a packet_out
telling the switch to flood the ARP request.
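For reference, a quick way to check which bridges actually have a controller
configured (just a sketch, not output from my setup; only br-int has one here):

ovs-vsctl get-controller br-int     # prints tcp:30.0.0.1:6633 in my case
ovs-vsctl get-controller br-tun     # prints nothing if br-tun has no controller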

I ran tcpdump on both the br-int and br-tun interfaces: packets are captured on
br-int, but none are captured on br-tun:

root@node1:/var/log/openvswitch# tcpdump -i br-int -nn
tcpdump: WARNING: br-int: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-int, link-type EN10MB (Ethernet), capture size 65535 bytes
14:26:45.485978 ARP, Request who-has 100.0.0.1 tell 100.0.0.4, length 28
14:26:46.482442 ARP, Request who-has 100.0.0.1 tell 100.0.0.4, length 28
14:26:47.482416 ARP, Request who-has 100.0.0.1 tell 100.0.0.4, length 28
^C
3 packets captured
3 packets received by filter
0 packets dropped by kernel
root@node1:/var/log/openvswitch# tcpdump -i br-tun -nn
tcpdump: WARNING: br-tun: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-tun, link-type EN10MB (Ethernet), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

root@node1:/var/log/openvswitch# ovs-ofctl snoop br-int
OFPT_PACKET_IN (xid=0x0): total_len=42 in_port=6 data_len=42
buffer=0x044d
priority0:tunnel0:in_port0006:tci(0)
macfa:16:3e:9f:5b:2c->ff:ff:ff:ff:ff:ff type0806 proto1 tos0 ttl0
ip100.0.0.4->100.0.0.1 arp_hafa:16:3e:9f:5b:2c->00:00:00:00:00:00
fa:16:3e:9f:5b:2c > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42:
Request who-has 100.0.0.1 tell 100.0.0.4, length 28
OFPT_PACKET_OUT (xid=0x0): in_port=6 actions_len=8 actions=FLOOD data_len=42
fa:16:3e:9f:5b:2c > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42:
Request who-has 100.0.0.1 tell 100.0.0.4, length 28

I guess this is because the gateway is on another node, so the ARP request should
travel br-int -> br-tun -> eth2 [compute node side] -- [controller side]
eth2 -> br-tun -> br-int over the GRE tunnel, but the request seems to be
blocked between br-int and br-tun.

I don't know why the ARP request is not sent to br-tun. It does seem that the ARP
request reaches the normal ports of the OVS, because VM 100.0.0.4 can ping
another VM (100.0.0.2) on the same OVS.
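In case it helps, the checks I am planning next (just a sketch; interface names
are the ones used in this setup):

ovs-ofctl dump-flows br-int          # which flows has floodlight installed?
ovs-ofctl dump-flows br-tun          # does br-tun still have any forwarding rules?
tcpdump -i eth2 -nn 'ip proto 47'    # does anything reach the GRE underlay at all?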




root@node1:/var/log/openvswitch# ovs-vsctl show
afaf59ee-48cc-4f5b-9a1d-4311b509a6c5
*Bridge br-int*
Controller tcp:30.0.0.1:6633
is_connected: true
Port qvoe06ea8d8-d7
tag: 1
Interface qvoe06ea8d8-d7
Port qvoa96762cb-f3
tag: 4095
Interface qvoa96762cb-f3
Port qvo38f23ca0-59
tag: 1
Interface qvo38f23ca0-59
Port qvofc3fe9ed-fb
tag: 4095
Interface qvofc3fe9ed-fb
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port eth3
Interface eth3
Port qvo1021fd99-eb
tag: 4095
Interface qvo1021fd99-eb
Port qvo329db52d-81
tag: 4095
Interface qvo329db52d-81
Bridge qbre06ea8d8-d7
Port qbre06ea8d8-d7
Interface qbre06ea8d8-d7
type: internal
Port qvbe06ea8d8-d7
Interface qvbe06ea8d8-d7
Port tape06ea8d8-d7
Interface tape06ea8d8-d7
Bridge qbr329db52d-81
Port qbr329db52d-81
Interface qbr329db52d-81
type: internal
Port qvb329db52d-81
Interface qvb329db52d-81
Bridge qbrc8ec86f4-3a
Port qbrc8ec86f4-3a
Interface qbrc8ec86f4-3a
type: internal
Port qvbc8ec86f4-3a
Interface qvbc8ec86f4-3a
*Bridge br-tun*
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Port gre-1
Interface gre-1
type: gre
options: {in_key=flow, out_key=flow, remote_ip=30.0.0.1}
Bridge qbr31c6e35b-81
Port qbr31c6e35b-81
Interface qbr31c6e35b-81
type: internal
Port qvb31c6e35b-81
Interface qvb31c6e35b-81
Bridge qbr38f23ca0-59
Port qbr38f23ca0-59
Interface qbr38f23ca0-59
type: internal
Port tap38f23ca0-59
Interface tap38f23ca0-59
Port qvb38f23ca0-59
Interface qvb38f23ca0-59
Bridge qbr28117358-50
Port qvb28117358-50
Interface qvb28117358-50
Port qbr28117358-50

Re: [Openstack] Quantum conceptual question (bridges)

2013-05-07 Thread Édouard Thuleau
OVS is not compatible with iptables + ebtables rules applied directly to VIF
ports, so the libvirt_vif_driver 'nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver'
creates a Linux software bridge on which the security group rules can be applied
with iptables.

If you don't need the security group functionality, you can set
libvirt_vif_driver to
'nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver'
or 'nova.virt.libvirt.vif.LibvirtOpenVswitchDriver' (depending on your
libvirt version).
http://docs.openstack.org/trunk/openstack-network/admin/content/nova_with_quantum_vifplugging_ovs.html
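A rough way to see this hybrid plumbing on a compute node (a sketch; the
qbr/qvb/qvo names follow the convention visible elsewhere in this thread):

brctl show                    # the per-VIF qbrXXX Linux bridges with their tap/qvb members
ovs-vsctl list-ports br-int   # the qvoXXX veth ends plugged into OVS
iptables -S                   # the security group chains applied on the qbr side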

I think this point must be listed in the limitations page of the OpenStack
Networking Admin guide
http://docs.openstack.org/grizzly/openstack-network/admin/content/ch_limitations.html

Édouard.

On Tue, May 7, 2013 at 2:46 AM, Lorin Hochstein lo...@nimbisservices.comwrote:

 I'm trying to wrap my head around how Quantum works. If I'm understanding
 things correctly, when using the openvswitch plugin, a packet traveling
 from a guest out to the physical switch has to cross two software bridges:

 1. br-int
 2. br-ethN or br-tun (depending on whether using VLANs or GRE tunnels)

 So, I think I understand the motivation behind this: the integration
 bridge handles the rules associated with the virtual networks defined by
 OpenStack users, and the (br-ethN | br-tun) bridge handles the rules
 associated with moving the packets across the physical network.

 My question is:  Does having two software bridges in the path incur a
 larger network performance penalty than if there was only a single software
 bridge between the VIF and the physical network interface?

 If so, was Quantum implemented this way because it's simply not possible
 to achieve the desired functionality using a single openvswitch bridge, or
 was it because using the dual-bridge approach simplified the
 implementation, or was there some other reason?

 Lorin
 --
 Lorin Hochstein
 Lead Architect - Cloud Services
 Nimbis Services, Inc.
 www.nimbisservices.com

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Grizzly on CentOS VM running on Xen Server 6.0.2

2013-05-07 Thread Ashutosh Narayan
Hi Folks,

Has anybody on the list installed Grizzly on CentOS 6.3 as a virtual machine
running on Xen Server 6.0.2 ? If yes, please provide any pointers for the
same.

Thank you,

-- 
Ashutosh Narayan

http://ashutoshn.wordpress.com/
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Unable to ping VM using OpenStack and Quantum(openvswitch plugin)

2013-05-07 Thread Anil Vishnoi
One possible reason is that your VM didn't get an IP address from its DHCP
server. Can you check your VM instance log (you can check it from the
dashboard) and see whether it's sending a DHCP request for an IP and getting
any response?
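For example, something like this pulls the console log from the CLI instead of
the dashboard (the instance name is a placeholder):

nova console-log <instance-name> | grep -i -E 'dhcp|discover|lease'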


On Tue, May 7, 2013 at 12:24 PM, zengshan2008 zengshan2...@gmail.comwrote:

 Hi,
 I've installed openstack using quantum by the guide

 https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
 everything works fine, but I can't ping the VM from the outside world, nor
 from the network node. The following is some of my configuration.
 1) root@networknode:/etc# ip netns
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189
 qdhcp-e58739ff-16dc-4289-8110-242f7818d314
 2) the qrouter and qdhcp namespaces are up
 root@networknode:/etc# ip netns exec
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ifconfig
 loLink encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   inet6 addr: ::1/128 Scope:Host
   UP LOOPBACK RUNNING  MTU:16436  Metric:1
   RX packets:85 errors:0 dropped:0 overruns:0 frame:0
   TX packets:85 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:9224 (9.2 KB)  TX bytes:9224 (9.2 KB)

 qg-daf2c037-cc Link encap:Ethernet  HWaddr fa:16:3e:ea:f6:c3
   inet addr:192.168.23.102  Bcast:192.168.23.255
 Mask:255.255.255.0
   inet6 addr: 2401:de00::f816:3eff:feea:f6c3/64 Scope:Global
   inet6 addr: fe80::f816:3eff:feea:f6c3/64 Scope:Link
   inet6 addr: 2401:de00::6066:acc0:66e3:7434/64 Scope:Global
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:5392 errors:0 dropped:0 overruns:0 frame:0
   TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:406572 (406.5 KB)  TX bytes:846 (846.0 B)

 qr-d9cb6d6d-5e Link encap:Ethernet  HWaddr fa:16:3e:6d:5a:3a
   inet addr:202.122.38.1  Bcast:202.122.38.255  Mask:255.255.255.0
   inet6 addr: fe80::f816:3eff:fe6d:5a3a/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:24 errors:0 dropped:0 overruns:0 frame:0
   TX packets:108 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:2184 (2.1 KB)  TX bytes:5928 (5.9 KB)

 root@networknode:/etc# ip netns exec
 qdhcp-e58739ff-16dc-4289-8110-242f7818d314 ifconfig
 loLink encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   inet6 addr: ::1/128 Scope:Host
   UP LOOPBACK RUNNING  MTU:16436  Metric:1
   RX packets:10 errors:0 dropped:0 overruns:0 frame:0
   TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:840 (840.0 B)  TX bytes:840 (840.0 B)

 tape10a4f07-60 Link encap:Ethernet  HWaddr fa:16:3e:db:8f:23
   inet addr:202.122.38.14  Bcast:202.122.38.255  Mask:255.255.255.0
   inet6 addr: fe80::f816:3eff:fedb:8f23/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:106 errors:0 dropped:0 overruns:0 frame:0
   TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:5760 (5.7 KB)  TX bytes:2652 (2.6 KB)
 *3) qrouter can ping the dhcp server from the network node*
 root@networknode:/etc# ip netns exec
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ping 202.122.38.14
 PING 202.122.38.14 (202.122.38.14) 56(84) bytes of data.
 64 bytes from 202.122.38.14: icmp_req=1 ttl=64 time=0.325 ms
 64 bytes from 202.122.38.14: icmp_req=2 ttl=64 time=0.023 ms
 64 bytes from 202.122.38.14: icmp_req=3 ttl=64 time=0.024 ms
 ^C
 --- 202.122.38.14 ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 1998ms
 rtt min/avg/max/mdev = 0.023/0.124/0.325/0.142 ms
 *4) virtual machine is up*
 quantum floatingip-list

 +--+--+-+--+
 | id   | fixed_ip_address |
 floating_ip_address | port_id  |

 +--+--+-+--+
 | 88398dd1-7256-49c7-b1ad-719903125501 | 202.122.38.15|
 192.168.23.103  | ce7c1eff-afcb-4908-b399-0e6e07d2791e |

 +--+--+-+--+
 5) virtual machine eth0 is up
 6) ssh or ping to the VM fails
  root@networknode:/etc# ip netns exec
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ping 202.122.38.15
 PING 202.122.38.15 (202.122.38.15) 56(84) bytes of data.
 From 202.122.38.1 icmp_seq=1 Destination Host Unreachable
 From 202.122.38.1 

[Openstack] Re: Unable to ping VM using OpenStack and Quantum (openvswitch plugin)

2013-05-07 Thread zengshan2008
I ran ovs-vsctl show on the three nodes; here is the result:
root@networknode:/var/log/quantum# ovs-vsctl show
690ad327-14ad-410e-b310-2d23e4c78223
Bridge br-int
Port br-int
Interface br-int
type: internal
Port int-br-em3
Interface int-br-em3
Port qr-d9cb6d6d-5e
tag: 1
Interface qr-d9cb6d6d-5e
type: internal
Port tape10a4f07-60
tag: 1
Interface tape10a4f07-60
type: internal
Bridge br-em3
Port em3
Interface em3
Port br-em3
Interface br-em3
type: internal
Port phy-br-em3
Interface phy-br-em3
Bridge br-em1
Port em1
Interface em1
Port qg-daf2c037-cc
Interface qg-daf2c037-cc
type: internal
Port br-em1
Interface br-em1
type: internal
ovs_version: 1.4.3

Meanwhile, on the network node, dnsmasq is running:
ps -ef|grep dnsmqsq
nobody5903 1  0 May06 ?00:00:00 dnsmasq --no-hosts --no-resolv 
--strict-order --bind-interfaces --interface=tape10a4f07-60 
--except-interface=lo --domain=openstacklocal 
--pid-file=/var/lib/quantum/dhcp/e58739ff-16dc-4289-8110-242f7818d314/pid 
--dhcp-hostsfile=/var/lib/quantum/dhcp/e58739ff-16dc-4289-8110-242f7818d314/host
 
--dhcp-optsfile=/var/lib/quantum/dhcp/e58739ff-16dc-4289-8110-242f7818d314/opts 
--dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro 
--dhcp-range=set:tag0,202.122.38.0,static,120s --conf-file=
root  5904  5903  0 May06 ?00:00:00 dnsmasq --no-hosts --no-resolv 
--strict-order --bind-interfaces --interface=tape10a4f07-60 
--except-interface=lo --domain=openstacklocal 
--pid-file=/var/lib/quantum/dhcp/e58739ff-16dc-4289-8110-242f7818d314/pid 
--dhcp-hostsfile=/var/lib/quantum/dhcp/e58739ff-16dc-4289-8110-242f7818d314/host
 
--dhcp-optsfile=/var/lib/quantum/dhcp/e58739ff-16dc-4289-8110-242f7818d314/opts 
--dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro 
--dhcp-range=set:tag0,202.122.38.0,static,120s --conf-file=
root 15753  2398  0 04:36 pts/200:00:00 grep --color=auto dnsm

root@controllernode:/var/log/quantum# ovs-vsctl show
240e7bfb-3d31-4fe1-b6ea-3806b8eb21ca
ovs_version: 1.4.3

root@computenode:~# ovs-vsctl show
e59adf21-1b48-4783-b8db-67254ac18bb4
Bridge br-eth1
Port phy-br-eth1
Interface phy-br-eth1
Port br-eth1
Interface br-eth1
type: internal
Port eth1
Interface eth1
Bridge br-int
Port br-int
Interface br-int
type: internal
Port qvob99918f6-4c
tag: 1
Interface qvob99918f6-4c
Port int-br-eth1
Interface int-br-eth1
Port qvo87b7b645-b9
tag: 1
Interface qvo87b7b645-b9
Port qvo7d05c230-2b
tag: 1
Interface qvo7d05c230-2b
Port qvoce7c1eff-af
tag: 1
Interface qvoce7c1eff-af
ovs_version: 1.4.3
2013-05-07



zengshan2008



From: Anil Vishnoi
Sent: 2013-05-07 16:16
Subject: Re: [Openstack] Unable to ping VM using OpenStack and Quantum (openvswitch
plugin)
To: zengshan2008 zengshan2...@gmail.com
Cc: gong yong sheng gong...@linux.vnet.ibm.com, gongysh gong...@cn.ibm.com,
openstack openstack@lists.launchpad.net

one possible reason can be that your VM didn't get IP address from its DHCP 
server. Can you check your VM instance log (You can check it from dashboard) 
and see whether its sending the DHCP request for IP and getting any response 
from it



On Tue, May 7, 2013 at 12:24 PM, zengshan2008 zengshan2...@gmail.com wrote:

Hi,
I've installed openstack using quantum by the guide 
https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
everything works fine, but I can't ping vm from the outside world, neither from 
the network node.The following is some of my configration.
1) root@networknode:/etc# ip netns 
qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189
qdhcp-e58739ff-16dc-4289-8110-242f7818d314
2) qrouter and qdhcp server is up
root@networknode:/etc# ip netns exec 
qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ifconfig
loLink encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:85 errors:0 dropped:0 overruns:0 frame:0
  TX packets:85 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:9224 (9.2 KB)  TX bytes:9224 (9.2 KB)

qg-daf2c037-cc Link encap:Ethernet  HWaddr fa:16:3e:ea:f6:c3  
  inet addr:192.168.23.102  Bcast:192.168.23.255  Mask:255.255.255.0
  inet6 addr: 2401:de00::f816:3eff:feea:f6c3/64 Scope:Global
  inet6 addr: 

[Openstack] Re: Unable to ping VM using OpenStack and Quantum (openvswitch plugin)

2013-05-07 Thread zengshan2008
I am using three nodes for the installation, and I am using the openvswitch
plugin in VLAN mode. Do I need to do some configuration on the physical
switch?

2013-05-07



zengshan2008



From: Anil Vishnoi
Sent: 2013-05-07 16:16
Subject: Re: [Openstack] Unable to ping VM using OpenStack and Quantum (openvswitch
plugin)
To: zengshan2008 zengshan2...@gmail.com
Cc: gong yong sheng gong...@linux.vnet.ibm.com, gongysh gong...@cn.ibm.com,
openstack openstack@lists.launchpad.net

one possible reason can be that your VM didn't get IP address from its DHCP 
server. Can you check your VM instance log (You can check it from dashboard) 
and see whether its sending the DHCP request for IP and getting any response 
from it



On Tue, May 7, 2013 at 12:24 PM, zengshan2008 zengshan2...@gmail.com wrote:

Hi,
I've installed openstack using quantum by the guide 
https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
everything works fine, but I can't ping vm from the outside world, neither from 
the network node.The following is some of my configration.
1) root@networknode:/etc# ip netns 
qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189
qdhcp-e58739ff-16dc-4289-8110-242f7818d314
2) qrouter and qdhcp server is up
root@networknode:/etc# ip netns exec 
qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ifconfig
loLink encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:85 errors:0 dropped:0 overruns:0 frame:0
  TX packets:85 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:9224 (9.2 KB)  TX bytes:9224 (9.2 KB)

qg-daf2c037-cc Link encap:Ethernet  HWaddr fa:16:3e:ea:f6:c3  
  inet addr:192.168.23.102  Bcast:192.168.23.255  Mask:255.255.255.0
  inet6 addr: 2401:de00::f816:3eff:feea:f6c3/64 Scope:Global
  inet6 addr: fe80::f816:3eff:feea:f6c3/64 Scope:Link
  inet6 addr: 2401:de00::6066:acc0:66e3:7434/64 Scope:Global
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:5392 errors:0 dropped:0 overruns:0 frame:0
  TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:406572 (406.5 KB)  TX bytes:846 (846.0 B)

qr-d9cb6d6d-5e Link encap:Ethernet  HWaddr fa:16:3e:6d:5a:3a  
  inet addr:202.122.38.1  Bcast:202.122.38.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fe6d:5a3a/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:24 errors:0 dropped:0 overruns:0 frame:0
  TX packets:108 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:2184 (2.1 KB)  TX bytes:5928 (5.9 KB)

root@networknode:/etc# ip netns exec qdhcp-e58739ff-16dc-4289-8110-242f7818d314 
ifconfig
loLink encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:10 errors:0 dropped:0 overruns:0 frame:0
  TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:840 (840.0 B)  TX bytes:840 (840.0 B)

tape10a4f07-60 Link encap:Ethernet  HWaddr fa:16:3e:db:8f:23  
  inet addr:202.122.38.14  Bcast:202.122.38.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fedb:8f23/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:106 errors:0 dropped:0 overruns:0 frame:0
  TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:5760 (5.7 KB)  TX bytes:2652 (2.6 KB)
3) qrouter can ping the dhcp server from the network node
root@networknode:/etc# ip netns exec 
qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ping 202.122.38.14
PING 202.122.38.14 (202.122.38.14) 56(84) bytes of data.
64 bytes from 202.122.38.14: icmp_req=1 ttl=64 time=0.325 ms
64 bytes from 202.122.38.14: icmp_req=2 ttl=64 time=0.023 ms
64 bytes from 202.122.38.14: icmp_req=3 ttl=64 time=0.024 ms
^C
--- 202.122.38.14 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.023/0.124/0.325/0.142 ms
4) virtual machine is up
quantum floatingip-list
+--+--+-+--+
| id   | fixed_ip_address | floating_ip_address 
| port_id  |
+--+--+-+--+
| 88398dd1-7256-49c7-b1ad-719903125501 | 202.122.38.15| 192.168.23.103  
| ce7c1eff-afcb-4908-b399-0e6e07d2791e |

[Openstack] floodlight ignore subnet gateway due to PORT_DOWN and LINK_DOWN

2013-05-07 Thread Liu Wenmao
hi

I use quantum grizzly with namespaces and floodlight, but VMs cannot ping
their gateway. It seems that floodlight ignores devices whose status
is PORT_DOWN or LINK_DOWN, and somehow the subnet gateway port really
is PORT_DOWN and LINK_DOWN. Is that normal? Or how can I bring its
status back up?
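If the problem is only that these ports are administratively down, I guess
something like this might bring one back up (a sketch; <router-id> is a
placeholder, and I am not sure it is correct to force a port up when the
l3/dhcp agents left it down):

ip netns exec qrouter-<router-id> ip link set qr-8af2e01f-bb up
ovs-ofctl mod-port br-int qr-8af2e01f-bb up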

root@controller:~# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:e2ed9e9b6942
n_tables:255, n_buffers:256
features: capabilities:0xc7, actions:0xfff
 1(qr-c5496165-c7): addr:5e:67:22:5b:d5:0e
 config: PORT_DOWN
 state:  LINK_DOWN
 2(qr-8af2e01f-bb): addr:e4:00:00:00:00:00    <-- this is the gateway
 config: PORT_DOWN
 state:  LINK_DOWN
 3(qr-48c69382-4f): addr:22:64:6f:3a:9f:cd
 config: PORT_DOWN
 state:  LINK_DOWN
 4(patch-tun): addr:8e:90:4c:aa:d2:06
 config: 0
 state:  0
 5(tap5b5891ac-94): addr:6e:52:f7:c1:ef:f4
 config: PORT_DOWN
 state:  LINK_DOWN
 6(tap09a002af-66): addr:c6:cb:01:60:3f:8a
 config: PORT_DOWN
 state:  LINK_DOWN
 7(tap160480aa-84): addr:96:43:cc:05:71:d5
 config: PORT_DOWN
 state:  LINK_DOWN
 8(tapf6040ba0-b5): addr:e4:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 9(tap0ded1c0f-df): addr:12:c8:b3:5c:fb:6a
 config: PORT_DOWN
 state:  LINK_DOWN
 10(tapaebb6140-31): addr:e4:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 11(tapddc3ce63-2b): addr:e4:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 12(qr-9b9a3229-19): addr:e4:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 LOCAL(br-int): addr:e2:ed:9e:9b:69:42
 config: PORT_DOWN
 state:  LINK_DOWN
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0


floodlight code:
if (entity.hasSwitchPort() &&
        !topology.isAttachmentPointPort(entity.getSwitchDPID(),
                                        entity.getSwitchPort().shortValue())) {
    if (logger.isDebugEnabled()) {
        logger.debug("Not learning new device on internal"
                     + " link: {}", entity);
    }

public boolean portEnabled(OFPhysicalPort port) {
    if (port == null)
        return false;
    if ((port.getConfig() & OFPortConfig.OFPPC_PORT_DOWN.getValue()) > 0)
        return false;
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] openflow FLOOD data can not go through br-int to br-tun

2013-05-07 Thread Liu Wenmao
It seems OK after I set the controller for both br-tun and br-int. But the
official floodlight installation instructions only set a controller on br-int, am I correct?
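For the record, what I did is roughly this (assuming the same Floodlight
endpoint that br-int already uses):

ovs-vsctl set-controller br-tun tcp:30.0.0.1:6633
ovs-vsctl get-controller br-tun     # verify it now points at the controller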


On Tue, May 7, 2013 at 2:33 PM, Liu Wenmao marvel...@gmail.com wrote:

 hi all:

 I have set up quantum+floodlight, there are a compute node and a
 controller, so I create a VM in the compute node, but the VM(100.0.0.4) can
 not ping its gateway(100.0.0.1) in the controller node.

 When the VM send a ARP request to OVS of the compute node, a packet_in
 request is sent to the controller, then the controller send a packet_out
 response to the OVS, telling it to flood the ARP request.

 I run tcpdump at both br-int and br-tun interface, packets are captured at
 br-int, but no packets are captured at br-tun:

 root@node1:/var/log/openvswitch# tcpdump -i br-int -nn
 tcpdump: WARNING: br-int: no IPv4 address assigned
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on br-int, link-type EN10MB (Ethernet), capture size 65535 bytes
 14:26:45.485978 ARP, Request who-has 100.0.0.1 tell 100.0.0.4, length 28
 14:26:46.482442 ARP, Request who-has 100.0.0.1 tell 100.0.0.4, length 28
 14:26:47.482416 ARP, Request who-has 100.0.0.1 tell 100.0.0.4, length 28
 ^C
 3 packets captured
 3 packets received by filter
 0 packets dropped by kernel
 root@node1:/var/log/openvswitch# tcpdump -i br-tun -nn
 tcpdump: WARNING: br-tun: no IPv4 address assigned
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on br-tun, link-type EN10MB (Ethernet), capture size 65535 bytes
 ^C
 0 packets captured
 0 packets received by filter
 0 packets dropped by kernel

 root@node1:/var/log/openvswitch# ovs-ofctl snoop br-int
 OFPT_PACKET_IN (xid=0x0): total_len=42 in_port=6 data_len=42
 buffer=0x044d
 priority0:tunnel0:in_port0006:tci(0)
 macfa:16:3e:9f:5b:2c-ff:ff:ff:ff:ff:ff type0806 proto1 tos0 ttl0
 ip100.0.0.4-100.0.0.1 arp_hafa:16:3e:9f:5b:2c-00:00:00:00:00:00
 fa:16:3e:9f:5b:2c  ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42:
 Request who-has 100.0.0.1 tell 100.0.0.4, length 28
 OFPT_PACKET_OUT (xid=0x0): in_port=6 actions_len=8 actions=FLOOD
 data_len=42
 fa:16:3e:9f:5b:2c  ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42:
 Request who-has 100.0.0.1 tell 100.0.0.4, length 28

 I guess it is because the gateway is on another node, so ARP request
 should go through br-int-br-tun-eth2[compute node
 side]-- [controller side]eth2-br-tun-br-int, but the
 ARP request seems to be blocked between br-int and br-tun.

 I don't know why the ARP request is not sent to br-tun. it seems that ARP
 request is sent to normal port of the OVS because VM 100.0.0.4 can ping
 other VMs(100.0.0.2) on the same OVS.




 root@node1:/var/log/openvswitch# ovs-vsctl show
 afaf59ee-48cc-4f5b-9a1d-4311b509a6c5
 *Bridge br-int*
 Controller tcp:30.0.0.1:6633
 is_connected: true
 Port qvoe06ea8d8-d7
 tag: 1
 Interface qvoe06ea8d8-d7
 Port qvoa96762cb-f3
 tag: 4095
 Interface qvoa96762cb-f3
 Port qvo38f23ca0-59
 tag: 1
 Interface qvo38f23ca0-59
 Port qvofc3fe9ed-fb
 tag: 4095
 Interface qvofc3fe9ed-fb
 Port br-int
 Interface br-int
 type: internal
 Port patch-tun
 Interface patch-tun
 type: patch
 options: {peer=patch-int}
 Port eth3
 Interface eth3
 Port qvo1021fd99-eb
 tag: 4095
 Interface qvo1021fd99-eb
 Port qvo329db52d-81
 tag: 4095
 Interface qvo329db52d-81
 Bridge qbre06ea8d8-d7
 Port qbre06ea8d8-d7
 Interface qbre06ea8d8-d7
 type: internal
 Port qvbe06ea8d8-d7
 Interface qvbe06ea8d8-d7
 Port tape06ea8d8-d7
 Interface tape06ea8d8-d7
 Bridge qbr329db52d-81
 Port qbr329db52d-81
 Interface qbr329db52d-81
 type: internal
 Port qvb329db52d-81
 Interface qvb329db52d-81
 Bridge qbrc8ec86f4-3a
 Port qbrc8ec86f4-3a
 Interface qbrc8ec86f4-3a
 type: internal
 Port qvbc8ec86f4-3a
 Interface qvbc8ec86f4-3a
 *Bridge br-tun*
 Port patch-int
 Interface patch-int
 type: patch
 options: {peer=patch-tun}
 Port br-tun
 Interface br-tun
 type: internal
 Port gre-1
 Interface gre-1
 type: gre
 options: {in_key=flow, out_key=flow, remote_ip=30.0.0.1}
 Bridge qbr31c6e35b-81
 Port qbr31c6e35b-81
 Interface qbr31c6e35b-81
 type: internal
 Port qvb31c6e35b-81
 Interface qvb31c6e35b-81
 Bridge qbr38f23ca0-59
 Port 

Re: [Openstack] grizzly quantum with namespaces, default route for dhcp-agent

2013-05-07 Thread Molnár Mihály László
Any idea? I just restarted my nodes, and it's the same again. I know
how to add a static route at startup without namespaces, but if we
look at openstack as a dynamic system, it's not OK to add static
routes manually for every network of every tenant.
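For reference, the kind of one-off command I mean (a sketch; the namespace and
gateway are the ones shown in the quoted output below):

ip netns exec qdhcp-ff74b46a-1ab4-4a97-91c7-21c95485aa34 \
    ip route add default via 172.18.0.1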

thanks

Rusty


On Fri, May 3, 2013 at 2:09 PM, Molnár Mihály László lacik...@gmail.com wrote:
 hi all!

 I'm new to this namespace and quantum networking. So my VM-s works
 fine, got an ip, DGW and nameserver from the dhcp agent. So the
 nameserver is the dhcp agent, but if I check the routing table of the
 dhcpagent's namespace there is no default route, so dnsmasque can't
 resolv anything.
 root@network:/var/log/quantum# ip netns exec
 qdhcp-ff74b46a-1ab4-4a97-91c7-21c95485aa34 route -nv
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse Iface
 172.18.0.0  0.0.0.0 255.255.255.0   U 0  0
 0 tap287d564d-c2

 If I add a default route:
 root@network:/var/log/quantum# ip netns exec
 qdhcp-ff74b46a-1ab4-4a97-91c7-21c95485aa34 route -nv
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse Iface
 0.0.0.0 172.18.0.1  0.0.0.0 UG0  0
 0 tap287d564d-c2
 172.18.0.0  0.0.0.0 255.255.255.0   U 0  0
 0 tap287d564d-c2

 It works fine.

 What is the normal behaviour here? How should I make this permanent?

 Thanks!

 Rusty

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] keepalive can not check the haproxy is down.

2013-05-07 Thread Eric_E_Smith
What version of keepalived are you using?  I found this online: 
https://github.com/acassen/keepalived/issues/8

I would first try removing the check script and validating that failure works 
without the check script.  If that works you might need to update keepalived.

Here’s a brief introduction I did a while back on using haproxy with keepalived 
as a load balancer (FWIW): 
http://four-eyes.net/2013/01/haproxy-keepalived-the-free-ha-load-balancer/
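As a rough sketch of validating failover without the check script: stop haproxy
(or keepalived) on node1 and watch whether the VIP appears on node2, e.g.:

ip addr show eth0 | grep 192.168.0.230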


From: Lei Zhang [mailto:zhang.lei@gmail.com]
Sent: Monday, May 06, 2013 7:55 PM
To: Smith, Eric E
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] keepalive can not check the haproxy is down.

Thanks Eric,
I have solved it after breaking up the group.

1. The different netmask was a typo, and it doesn't break the failover.
2. Why is the group unnecessary? When two instances use the same check
script, as in this case, what does grouping them mean?

On Mon, May 6, 2013 at 6:37 PM, eric_e_sm...@dell.com wrote:
I see you have different netmasks for the VIP on node1 vs. node2;  I would also 
try breaking them out of the vrrp_sync_group and validating at least 1 router 
will fail independently.

From: Openstack [mailto:openstack-bounces+eric_e_smith=dell@lists.launchpad.net]
 On Behalf Of Lei Zhang
Sent: Monday, May 06, 2013 3:07 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] keepalive can not check the haproxy is down.


Hi Guys,

I am trying to use keepalived and haproxy together to improve the HA of
openstack, but I am running into the following unexpected issue.

I expect that when the haproxy process crashes on the MASTER node (checked by
chk_haproxy), the second node will take over the VIP. But when I stop the
haproxy process, nothing happens.
However, when I stop the keepalived service, the VIP is brought up on node2 as
expected.

So I think the root cause should be the chk_haproxy block, but I have no idea
why it doesn't work. Does anybody have ideas?
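A quick way to check the script body by hand (this is just the command from the
chk_haproxy block below):

killall -0 haproxy; echo $?    # 0 while haproxy is running, non-zero after it is stopped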

node1 keepalived.conf

global_defs {
    lvs_id LVS_228
}

vrrp_sync_group openstack_haproxy {
    group {
        v1
        v2
    }
}

vrrp_script chk_haproxy {
    script killall -0 haproxy
    interval 2
    debug
    weight 2
}

vrrp_instance v1 {
    interface eth0
    debug
    state MASTER
    virtual_router_id 1
    priority 101
    virtual_ipaddress {
        192.168.0.230/24
    }
    track_script {
        chk_haproxy
    }
}

vrrp_instance v2 {
    interface eth1
    state MASTER
    debug
    virtual_router_id 2
    priority 101
    virtual_ipaddress {
        10.1.0.30/16
    }
    track_script {
        chk_haproxy
    }
}

node2 keepalived.conf

global_defs {
    lvs_id LVS_229
}

vrrp_sync_group openstack_haproxy {
    group {
        v1
        v2
    }
}

vrrp_script chk_haproxy {
    script killall -0 haproxy
    interval 2
    weight 2
}

vrrp_instance v1 {
    interface eth0
    state BACKUP
    virtual_router_id 1
    priority 100
    virtual_ipaddress {
        192.168.0.230
    }
    track_script {
        chk_haproxy
    }
}

vrrp_instance v2 {
    interface eth1
    state BACKUP
    virtual_router_id 2
    priority 100
    virtual_ipaddress {
        10.1.0.30
    }
    track_script {
        chk_haproxy
    }
}
--
Lei Zhang

Blog: http://jeffrey4l.github.com
twitter/weibo: @jeffrey4l



--
Lei Zhang

Blog: http://jeffrey4l.github.io
twitter/weibo: @jeffrey4l
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Re: Unable to ping VM using OpenStack and Quantum (openvswitch plugin)

2013-05-07 Thread Anil Vishnoi
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing


So this says that you didn't get a DHCP response to your DHCP request,
and hence your VM didn't get an IP address assigned.

I think as a next step of debugging, you need to check whether your
DHCP request packet is reaching the DHCP server or not. Dump the packets
at the network node and check whether any DHCP request packets arrive.
If you are using the openvswitch plugin with VLANs, then make sure that your
switch ports are trunked and allow traffic from all the VLANs.
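For example, something like this on the network node's data interface should
show whether the requests arrive (a sketch; em3 is the interface from the
ovs-vsctl output earlier in the thread, and you may need to prefix the filter
with 'vlan and' if the frames arrive tagged):

tcpdump -i em3 -nn -e port 67 or port 68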


Thanks

Anil



On Tue, May 7, 2013 at 2:27 PM, zengshan2008 zengshan2...@gmail.com wrote:

 **
 I am using three nodes to do the installation, and I am using the
 openvswitch plugin with the vlan mode, do I need to do some configration in
 the physical switch?

 2013-05-07
  --
  zengshan2008
  --
  From: Anil Vishnoi
  Sent: 2013-05-07 16:16
  Subject: Re: [Openstack] Unable to ping VM using OpenStack and
  Quantum (openvswitch plugin)
  To: zengshan2008 zengshan2...@gmail.com
  Cc: gong yong sheng gong...@linux.vnet.ibm.com, gongysh
  gong...@cn.ibm.com, openstack openstack@lists.launchpad.net

  one possible reason can be that your VM didn't get IP address from its
 DHCP server. Can you check your VM instance log (You can check it from
 dashboard) and see whether its sending the DHCP request for IP and getting
 any response from it


 On Tue, May 7, 2013 at 12:24 PM, zengshan2008 zengshan2...@gmail.comwrote:

 **
 **
 Hi,
 I've installed openstack using quantum by the guide

 https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
 everything works fine, but I can't ping vm from the outside world,
 neither from the network node.The following is some of my configration.
 *1) **root@networknode:/etc* root@networknode:/etc*# ip netns
 *qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189
 qdhcp-e58739ff-16dc-4289-8110-242f7818d314
 *2) qrouter and qdhcp server is up*
 root@networknode:/etc# ip netns exec
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ifconfig
 loLink encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   inet6 addr: ::1/128 Scope:Host
   UP LOOPBACK RUNNING  MTU:16436  Metric:1
   RX packets:85 errors:0 dropped:0 overruns:0 frame:0
   TX packets:85 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:9224 (9.2 KB)  TX bytes:9224 (9.2 KB)

 qg-daf2c037-cc Link encap:Ethernet  HWaddr fa:16:3e:ea:f6:c3
   inet addr:192.168.23.102  Bcast:192.168.23.255
 Mask:255.255.255.0
   inet6 addr: 2401:de00::f816:3eff:feea:f6c3/64 Scope:Global
   inet6 addr: fe80::f816:3eff:feea:f6c3/64 Scope:Link
   inet6 addr: 2401:de00::6066:acc0:66e3:7434/64 Scope:Global
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:5392 errors:0 dropped:0 overruns:0 frame:0
   TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:406572 (406.5 KB)  TX bytes:846 (846.0 B)

 qr-d9cb6d6d-5e Link encap:Ethernet  HWaddr fa:16:3e:6d:5a:3a
   inet addr:202.122.38.1  Bcast:202.122.38.255  Mask:255.255.255.0
   inet6 addr: fe80::f816:3eff:fe6d:5a3a/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:24 errors:0 dropped:0 overruns:0 frame:0
   TX packets:108 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:2184 (2.1 KB)  TX bytes:5928 (5.9 KB)

 root@networknode:/etc# ip netns exec
 qdhcp-e58739ff-16dc-4289-8110-242f7818d314 ifconfig
 loLink encap:Local Loopback
   inet addr:127.0.0.1  Mask:255.0.0.0
   inet6 addr: ::1/128 Scope:Host
   UP LOOPBACK RUNNING  MTU:16436  Metric:1
   RX packets:10 errors:0 dropped:0 overruns:0 frame:0
   TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:840 (840.0 B)  TX bytes:840 (840.0 B)

 tape10a4f07-60 Link encap:Ethernet  HWaddr fa:16:3e:db:8f:23
   inet addr:202.122.38.14  Bcast:202.122.38.255
 Mask:255.255.255.0
   inet6 addr: fe80::f816:3eff:fedb:8f23/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:106 errors:0 dropped:0 overruns:0 frame:0
   TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:5760 (5.7 KB)  TX bytes:2652 (2.6 KB)
 *3) qrouter can ping the dhcp server from the network node*
 root@networknode:/etc# ip netns exec
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ping 202.122.38.14
 PING 202.122.38.14 (202.122.38.14) 56(84) bytes of data.
 64 bytes from 202.122.38.14: icmp_req=1 ttl=64 time=0.325 ms
 64 bytes from 202.122.38.14: icmp_req=2 ttl=64 

[Openstack] How does openstack acheive virtualisation

2013-05-07 Thread Jayakumar Satri
Hi,

   I am new to OpenStack. I wish to understand how OpenStack actually
achieves virtualization. I am looking for any info/doc/URL.
I would appreciate your help.



Thanking you,

Jaya Kumar Satri


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How does openstack acheive virtualisation

2013-05-07 Thread Henning Sprang
On Tue, May 7, 2013 at 12:37 PM, Jayakumar Satri
js00123...@techmahindra.com wrote:

I am new to OpenStack. I wish to  understand how actually OpenStack 
 achieves virtualization. I am
 looking for any info/doc/URL. Request your help regarding.

What exactly do you want to know?

Openstack doesn't implement virtualization on its own; it's a control
layer on top of other low-level technologies and as such supports
multiple hypervisors under the hood.

You might want to read these:

http://www.openstack.org/software/openstack-compute/
https://wiki.openstack.org/wiki/HypervisorSupportMatrix

Does that answer your question?

Cheers,
Henning

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How does openstack acheive virtualisation

2013-05-07 Thread Atul Jha
Jay,

Kindly check http://openstack.org for overall information about the OpenStack 
project.
You can also check https://docs.openstack.org which hosts the project 
documentation.

Hope it helps.


From: Openstack [openstack-bounces+atul.jha=csscorp@lists.launchpad.net] on 
behalf of Jayakumar Satri [js00123...@techmahindra.com]
Sent: Tuesday, May 07, 2013 4:07 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] How does openstack acheive virtualisation

Hi,
   I am new to OpenStack. I wish to  understand how actually OpenStack 
achieves virtualization. I am looking for any info/doc/URL. Request your help 
regarding.

Thanking you,
Jaya Kumar Satri

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceilometer Install

2013-05-07 Thread Riki Arslan
Thank you for the thorough explanation. I have no problem running the
Collector, Central and Compute agents, so I believe only the API server is
trying to use the old oslo-incubator version.

I am still weighing the options.

Just a quick question: since the only thing that does not work in my
environment is the API server, I believe that, as long as I can query MongoDB
directly, I wouldn't need it anyway. Would you say this is correct?
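For context, the kind of direct query I have in mind (just a sketch; the
database name 'ceilometer' and collection 'meter' are assumptions that depend
on the configured connection string and version):

mongo ceilometer --eval 'printjson(db.meter.findOne())'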


On Mon, May 6, 2013 at 6:08 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:

 It looks like you still have incompatible versions of things installed.

 The configuration library changed during grizzly. The old version and new
 version cannot be used together in the same program because they both try
 to modify different copies of a global variable. The exception you're
 getting is, I think, due to the fact that the API service loads the
 keystone middleware to handle authentication. You have a version of the
 middleware that uses oslo.config, and a version of ceilometer that uses the
 older oslo-incubator version of the configuration library.

 The ceilometer team is small, so we have limited capacity to support old
 versions (especially pre-incubated versions). We do intend to support
 grizzly, but can only offer moderate help with folsom. The g2 release
 tarballs *should* be compatible at the communication layer with folsom
 versions of the other components, but it looks like you can't install them
 into the same Python installation as the other services.

 You can separate ceilometer code from the other services a couple of
 different ways. The simplest would be to use a separate VM to run
 ceilometer. That would let you follow all of the normal instructions, and
 ensure that you don't have mismatched versions of libraries. The other way
 is to install ceilometer into a virtualenv. That would take more care,
 since you need to ensure that the virtualenv does not look at the globally
 installed site-packages. I haven't tried doing this, so I can't provide
 more detailed steps, and you will likely need to experiment a bit to get it
 right.
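 A very rough sketch of that virtualenv route (paths, the tarball name and the
 flags are illustrative only, not tested steps):

 virtualenv --no-site-packages /opt/ceilometer-venv
 /opt/ceilometer-venv/bin/pip install ceilometer-2013.1~g2~20130107.449.tar.gz
 /opt/ceilometer-venv/bin/ceilometer-api --config-file /etc/ceilometer/ceilometer.conf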

 The one piece of ceilometer that does *need* to be installed in the same
 location as the other services is the plugin for the nova compute agent. We
 spent a fair amount of time making sure there was a version of that plugin
 compatible with folsom, so we believe it should work. However, if you are
 just testing ceilometer, or not using it for billing instance-hours, you
 could skip deploying that piece entirely.

 Doug



 On Mon, May 6, 2013 at 9:43 AM, Riki Arslan riki.ars...@cloudturk.netwrote:

 I have also installed ceilometer-2013.1~g2~20130107.449.tar.gz from the
 tarballs list and still getting the same error:

 Traceback (most recent call last):
   File /usr/local/bin/ceilometer-api, line 5, in module
 pkg_resources.run_script('ceilometer==0.0.0', 'ceilometer-api')
   File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 499, in
 run_script
 self.require(requires)[0].run_script(script_name, ns)
   File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 1235, in
 run_script
 execfile(script_filename, namespace, namespace)
   File
 /usr/local/lib/python2.7/dist-packages/ceilometer-0.0.0-py2.7.egg/EGG-INFO/scripts/ceilometer-api,
 line 37, in module
 cfg.CONF(sys.argv[1:])
   File
 /usr/local/lib/python2.7/dist-packages/ceilometer-0.0.0-py2.7.egg/ceilometer/openstack/common/cfg.py,
 line 1024, in __call__
 self._cli_values, leftovers = self._parse_cli_opts(args)
   File
 /usr/local/lib/python2.7/dist-packages/ceilometer-0.0.0-py2.7.egg/ceilometer/openstack/common/cfg.py,
 line 1527, in _parse_cli_opts
 opt._add_to_cli(self._oparser, group)
   File
 /usr/local/lib/python2.7/dist-packages/oslo.config-1.1.0-py2.7.egg/oslo/config/cfg.py,
 line 591, in _add_to_cli
 container = self._get_argparse_container(parser, group)
   File
 /usr/local/lib/python2.7/dist-packages/oslo.config-1.1.0-py2.7.egg/oslo/config/cfg.py,
 line 633, in _get_argparse_container
 return group._get_argparse_group(parser)
 AttributeError: 'OptGroup' object has no attribute '_get_argparse_group'


 On Mon, May 6, 2013 at 3:56 PM, Riki Arslan riki.ars...@cloudturk.netwrote:

 Hi Doug,

 I actually got it from a link on your website:


 http://doughellmann.com/2013/01/ceilometer-grizzly-2-milestone-available.html

 So, do you think this one is not good?


 On Thu, May 2, 2013 at 7:33 PM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




 On Mon, Apr 29, 2013 at 6:42 PM, Riki Arslan riki.ars...@cloudturk.net
  wrote:

 I thought it might help if mentioned little more:

 /etc/ceilometer.conf file has the following parameters added:

 os_username=ceilometer
 os_password=$PASSWORD
 os_tenant_name=service
 os_auth_url=http://localhost:5000/v2.0/

 I checked CLI_OPTIONS in service.py and it looks allright:

 CLI_OPTIONS = [
 cfg.StrOpt('os-username',
default=os.environ.get('OS_USERNAME', 

[Openstack] Quantum + Ryu on Folsom

2013-05-07 Thread Guilherme Russi
Hello Guys,

 I'm here again with some doubts. Has anybody already installed the Ryu
plugin with Quantum? I'm trying to follow this page:
http://osrg.github.io/ryu/doc/using_with_openstack.html but it's not
working. Can anybody help me with some doubts about the installation?

Thank you so much.

Guilherme.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum + Ryu on Folsom

2013-05-07 Thread Kyle Mestery (kmestery)
On May 7, 2013, at 7:57 AM, Guilherme Russi luisguilherme...@gmail.com wrote:
 
 Hello Guys,
 
  I'm here again with some doubt, have anybody already installed the Ryu 
 plugin with Quantum? I'm trying to follow this page: 
 http://osrg.github.io/ryu/doc/using_with_openstack.html but it's not working. 
 Can anybody help me with some doubts about the installation?
 
 Thank you so much.
 
What exactly is happening with Ryu on Folsom here? I've run this before and had 
no issues, so I can try to provide some help here.

Kyle

 Guilherme.
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum + Ryu on Folsom

2013-05-07 Thread Guilherme Russi
Hello Kyle. My doubts are: if I install the Ryu plugin, will my currently
working openstack break? I mean, just by installing Ryu with the step python
./setup.py install. Another thing: for the part that configures nova.conf, I
don't have nova-network; must I install it?

Thanks.

Guilherme.


2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com

 On May 7, 2013, at 7:57 AM, Guilherme Russi luisguilherme...@gmail.com
 wrote:
 
  Hello Guys,
 
   I'm here again with some doubt, have anybody already installed the Ryu
 plugin with Quantum? I'm trying to follow this page:
 http://osrg.github.io/ryu/doc/using_with_openstack.html but it's not
 working. Can anybody help me with some doubts about the installation?
 
  Thank you so much.
 
 What exactly is happening with Ryu on Folsom here? I've run this before
 and had no issues, so I can try to provide some help here.

 Kyle

  Guilherme.
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] cinder malformed url error

2013-05-07 Thread Dennis Jacobfeuerborn

Hi,
I've got the cinder-api service up and running, but when I run 'cinder
list' I get the error 'ERROR: Malformed request url'.


This is the URL called from the debug output:
REQ: curl -i 
http://10.16.171.3:8776/v1/11b39f6529ea4eb6a527de82122ba6f6/volumes/detail 
-X GET


The endpoints in the database are defined as
http://10.16.171.3:8776/v1/%(tenant_id)s
which should be correct.
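For what it's worth, a hedged way to double-check what is actually registered
versus the URL the client builds:

keystone endpoint-list | grep 8776
keystone catalog --service volume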

Any ideas what about the URL is malformed?

Regards,
  Dennis

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum + Ryu on Folsom

2013-05-07 Thread Kyle Mestery (kmestery)
You don't want to run nova-network with OpenStack Networking (Quantum) at the 
same time. If you're running Folsom, I think upstream Ryu has changed such that 
the Folsom version of the Quantum plugin may not work. Running Grizzly will 
solve this, or alternatively if you want to try Folsom, look at the github page 
here for instructions on which branch of Ryu to pull to get it working with 
Folsom Quantum:

https://github.com/osrg/ryu/wiki/RYU-Openstack-Folsom-environment-HOWTO

Thanks,
Kyle

On May 7, 2013, at 8:21 AM, Guilherme Russi luisguilherme...@gmail.com wrote:

 Hello Kyle, The doubts are, If I install the ryu plugin, my current working 
 openstack will broken, I mean, only installing ryu by the step python 
 ./setup.py install, another thing, and the part to configure nova.conf, I 
 don't have nova-network, I must install it?
 
 Thanks.
 
 Guilherme.
 
 
 2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
 On May 7, 2013, at 7:57 AM, Guilherme Russi luisguilherme...@gmail.com 
 wrote:
 
  Hello Guys,
 
   I'm here again with some doubt, have anybody already installed the Ryu 
  plugin with Quantum? I'm trying to follow this page: 
  http://osrg.github.io/ryu/doc/using_with_openstack.html but it's not 
  working. Can anybody help me with some doubts about the installation?
 
  Thank you so much.
 
 What exactly is happening with Ryu on Folsom here? I've run this before and 
 had no issues, so I can try to provide some help here.
 
 Kyle
 
  Guilherme.
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum + Ryu on Folsom

2013-05-07 Thread Guilherme Russi
Hello again Kyle, thank you for answering. How do I upgrade the OpenStack
version from Folsom to Grizzly?

Regards.

Guilherme.


2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com

 You don't want to run nova-network with OpenStack Networking (Quantum) at
 the same time. If you're running Folsom, I think upstream Ryu has changed
 such that the Folsom version of the Quantum plugin may not work. Running
 Grizzly will solve this, or alternatively if you want to try Folsom, look
 at the github page here for instructions on which branch of Ryu to pull to
 get it working with Folsom Quantum:

 https://github.com/osrg/ryu/wiki/RYU-Openstack-Folsom-environment-HOWTO

 Thanks,
 Kyle

 On May 7, 2013, at 8:21 AM, Guilherme Russi luisguilherme...@gmail.com
 wrote:

  Hello Kyle, The doubts are, If I install the ryu plugin, my current
 working openstack will broken, I mean, only installing ryu by the step
 python ./setup.py install, another thing, and the part to configure
 nova.conf, I don't have nova-network, I must install it?
 
  Thanks.
 
  Guilherme.
 
 
  2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
  On May 7, 2013, at 7:57 AM, Guilherme Russi luisguilherme...@gmail.com
 wrote:
  
   Hello Guys,
  
I'm here again with some doubt, have anybody already installed the
 Ryu plugin with Quantum? I'm trying to follow this page:
 http://osrg.github.io/ryu/doc/using_with_openstack.html but it's not
 working. Can anybody help me with some doubts about the installation?
  
   Thank you so much.
  
  What exactly is happening with Ryu on Folsom here? I've run this before
 and had no issues, so I can try to provide some help here.
 
  Kyle
 
   Guilherme.
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack
   More help   : https://help.launchpad.net/ListHelp
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Installing OpenvSwitch on CentOS 6.4

2013-05-07 Thread Pádraig Brady
On 05/07/2013 05:53 AM, Ashutosh Narayan wrote:
 Hi Folks,
 
 I have installed Grizzly on CentOS 6.4 and facing some issues while
 installing OpenvSwitch in order to make Quantum service work properly.
 Can anybody point me to some link which has useful information for the same ?
 I was following this link to set up Quantum -
 https://fedoraproject.org/wiki/Packstack_to_Quantum
 
 And the below mentioned link to install OpenvSwitch -
 http://networkstatic.net/open-vswitch-red-hat-installation/
 I am not able to compile the source. It fails while running make.

That may be old and now redundant info.
A kernel >= 2.6.32-343 should facilitate OpenvSwitch.
Note also that user space rpms are available from the RDO repositories.
Please see http://openstack.redhat.com/Quickstart for setup details.
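As a rough sketch, assuming the RDO repository from the Quickstart page is
already enabled:

uname -r                    # should be >= 2.6.32-343 for the in-tree OVS support
yum install -y openvswitch  # user-space package from RDO instead of building from source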

thanks,
Pádraig.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] swauth

2013-05-07 Thread Paras pradhan
Hi,

I am getting a 500 Server Error when I create the user. The command I used:

 swauth-add-user -K swauthkey -a test testerrr testin

swauth version = 1.0.2+git2028-0ubuntu1

swift from grizzly runing on 12.04 lts.


However I can see from swauth-list

swauth-list -A http://localhost:8080/auth/ -K swauthkey | python -m
json.tool

{ accounts: [ { name: groupX }, { name: test } ] }

Anybody seeing this issue?
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swauth

2013-05-07 Thread Christian Schwede
Hi,

On 07.05.13 17:32, Paras pradhan wrote:
 I am getting 500 server Error when I create the user.  The command i used:
 
  swauth-add-user -K swauthkey -a test testerrr testin
 
 swauth version = 1.0.2+git2028-0ubuntu1

that version of swauth is quite old, current version is 1.0.8, released
13 days ago. Is it possible to test with 1.0.8?

Regards,

Christian

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swauth

2013-05-07 Thread Paras pradhan
Yes but where do I find it. I can only see 1.0.9-dev at github.

Paras.


On Tue, May 7, 2013 at 10:55 AM, Christian Schwede i...@cschwede.de wrote:

 Hi,

 On 07.05.13 17:32, Paras pradhan wrote:
  I am getting 500 server Error when I create the user.  The command i
 used:
 
   swauth-add-user -K swauthkey -a test testerrr testin
 
  swauth version = 1.0.2+git2028-0ubuntu1

 that version of swauth is quite old, current version is 1.0.8, released
 13 days ago. Is it possible to test with 1.0.8?

 Regards,

 Christian

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] nova-objectstore on grizzly

2013-05-07 Thread Molnár Mihály László
Hi all!

I want to set up juju for my openstack installation, but it requires an
objectstore. Installing swift just seems like overhead for this environment.

Does grizzly support nova-objectstore? How should I configure it? I
couldn't find any documentation about it. As far as I can see the service is
started after installation, but I can't see it in the Horizon API list, and I
don't know if there is something I should modify.
nova-manage service list doesn't show it either.

Thanks!

Rusty
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swauth

2013-05-07 Thread Christian Schwede
Oh, the 1.0.8 commit is not yet tagged.

You can download the release version using the commit hash:

https://github.com/gholt/swauth/archive/4fc009d5ad9d3309ed023d433c01ade21e0e16af.zip
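
Roughly, installing from that zip would look like this (a sketch assuming a
plain setup.py install; adjust paths to your setup):

  wget https://github.com/gholt/swauth/archive/4fc009d5ad9d3309ed023d433c01ade21e0e16af.zip -O swauth-1.0.8.zip
  unzip swauth-1.0.8.zip
  cd swauth-4fc009d5ad9d3309ed023d433c01ade21e0e16af
  sudo python setup.py install   # then restart the swift proxy so the new middleware is picked up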

Christian


On 07.05.13 18:12, Paras pradhan wrote:
 Yes but where do I find it. I can only see 1.0.9-dev at github.
 
 Paras.
 
 
 On Tue, May 7, 2013 at 10:55 AM, Christian Schwede i...@cschwede.de
 mailto:i...@cschwede.de wrote:
 
 Hi,
 
 Am 07.05.13 17:32, schrieb Paras pradhan:
  I am getting 500 server Error when I create the user.  The command
 i used:
 
   swauth-add-user -K swauthkey -a test testerrr testin
 
  swauth version = 1.0.2+git2028-0ubuntu1
 
 that version of swauth is quite old, current version is 1.0.8, released
 13 days ago. Is it possible to test with 1.0.8?
 
 Regards,
 
 Christian
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swauth

2013-05-07 Thread Paras pradhan
It worked ! Thanks a lot.

Paras.


On Tue, May 7, 2013 at 11:42 AM, Christian Schwede i...@cschwede.de wrote:

 Oh, the 1.0.8 commit is not yet tagged.

 You can download the release version using the commit hash:


 https://github.com/gholt/swauth/archive/4fc009d5ad9d3309ed023d433c01ade21e0e16af.zip

 Christian


 Am 07.05.13 18:12, schrieb Paras pradhan:
  Yes but where do I find it. I can only see 1.0.9-dev at github.
 
  Paras.
 
 
  On Tue, May 7, 2013 at 10:55 AM, Christian Schwede i...@cschwede.de
  mailto:i...@cschwede.de wrote:
 
  Hi,
 
  Am 07.05.13 17:32, schrieb Paras pradhan:
   I am getting 500 server Error when I create the user.  The command
  i used:
  
swauth-add-user -K swauthkey -a test testerrr testin
  
   swauth version = 1.0.2+git2028-0ubuntu1
 
  that version of swauth is quite old, current version is 1.0.8,
 released
  13 days ago. Is it possible to test with 1.0.8?
 
  Regards,
 
  Christian
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum + Ryu on Folsom

2013-05-07 Thread Kyle Mestery (kmestery)
If you are using devstack, it's as simple as adding a stable/grizzly for each 
component you are running, similar to this:

NOVA_BRANCH=stable/grizzly

Repeat that for the services you are running (e.g. Quantum, Glance, etc.).
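
For example, a localrc along these lines (variable names as used by
Grizzly-era devstack; treat this as a sketch and trim it to the services you
actually run):

  NOVA_BRANCH=stable/grizzly
  GLANCE_BRANCH=stable/grizzly
  KEYSTONE_BRANCH=stable/grizzly
  QUANTUM_BRANCH=stable/grizzly
  CINDER_BRANCH=stable/grizzly
  HORIZON_BRANCH=stable/grizzly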

Thanks,
Kyle

On May 7, 2013, at 10:20 AM, Guilherme Russi luisguilherme...@gmail.com wrote:

 Hello again Kyle, thank you for answering, how do I update the Openstack 
 version, from Folsom to Grizzly?
 
 Regards.
 
 Guilherme.
 
 
 2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
 You don't want to run nova-network with OpenStack Networking (Quantum) at the 
 same time. If you're running Folsom, I think upstream Ryu has changed such 
 that the Folsom version of the Quantum plugin may not work. Running Grizzly 
 will solve this, or alternatively if you want to try Folsom, look at the 
 github page here for instructions on which branch of Ryu to pull to get it 
 working with Folsom Quantum:
 
 https://github.com/osrg/ryu/wiki/RYU-Openstack-Folsom-environment-HOWTO
 
 Thanks,
 Kyle
 
 On May 7, 2013, at 8:21 AM, Guilherme Russi luisguilherme...@gmail.com 
 wrote:
 
  Hello Kyle, my doubts are: if I install the Ryu plugin, will my currently 
  working OpenStack break? I mean, by only installing Ryu with the step python 
  ./setup.py install. Another thing, about the part that configures nova.conf: I 
  don't have nova-network, must I install it?
 
  Thanks.
 
  Guilherme.
 
 
  2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
  On May 7, 2013, at 7:57 AM, Guilherme Russi luisguilherme...@gmail.com 
  wrote:
  
   Hello Guys,
  
I'm here again with some doubt, have anybody already installed the Ryu 
   plugin with Quantum? I'm trying to follow this page: 
   http://osrg.github.io/ryu/doc/using_with_openstack.html but it's not 
   working. Can anybody help me with some doubts about the installation?
  
   Thank you so much.
  
  What exactly is happening with Ryu on Folsom here? I've run this before and 
  had no issues, so I can try to provide some help here.
 
  Kyle
 
   Guilherme.
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack
   More help   : https://help.launchpad.net/ListHelp
 
 
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum + Ryu on Folsom

2013-05-07 Thread Guilherme Russi
I'm not using devstack; I have installed each component and added a
repository on my Ubuntu. Do I need to purge all of them, or just change the
repository and run an apt-get update / apt-get upgrade?

Regards.

Guilherme.


2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com

 If you are using devstack, it's as simple as adding a stable/grizzly for
 each component you are running, similar to this:

 NOVA_BRANCH=stable/grizzly

 Repeat that for the services you are running, (e.g. Quantum, Glance, etc.).

 Thanks,
 Kyle

 On May 7, 2013, at 10:20 AM, Guilherme Russi luisguilherme...@gmail.com
 wrote:

  Hello again Kyle, thank you for answering, how do I update the Openstack
 version, from Folsom to Grizzly?
 
  Regards.
 
  Guilherme.
 
 
  2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
  You don't want to run nova-network with OpenStack Networking (Quantum)
 at the same time. If you're running Folsom, I think upstream Ryu has
 changed such that the Folsom version of the Quantum plugin may not work.
 Running Grizzly will solve this, or alternatively if you want to try
 Folsom, look at the github page here for instructions on which branch of
 Ryu to pull to get it working with Folsom Quantum:
 
  https://github.com/osrg/ryu/wiki/RYU-Openstack-Folsom-environment-HOWTO
 
  Thanks,
  Kyle
 
  On May 7, 2013, at 8:21 AM, Guilherme Russi luisguilherme...@gmail.com
 wrote:
 
   Hello Kyle, The doubts are, If I install the ryu plugin, my current
 working openstack will broken, I mean, only installing ryu by the step
 python ./setup.py install, another thing, and the part to configure
 nova.conf, I don't have nova-network, I must install it?
  
   Thanks.
  
   Guilherme.
  
  
   2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
   On May 7, 2013, at 7:57 AM, Guilherme Russi 
 luisguilherme...@gmail.com wrote:
   
Hello Guys,
   
 I'm here again with some doubt, have anybody already installed the
 Ryu plugin with Quantum? I'm trying to follow this page:
 http://osrg.github.io/ryu/doc/using_with_openstack.html but it's not
 working. Can anybody help me with some doubts about the installation?
   
Thank you so much.
   
   What exactly is happening with Ryu on Folsom here? I've run this
 before and had no issues, so I can try to provide some help here.
  
   Kyle
  
Guilherme.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
  
  
  
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum + Ryu on Folsom

2013-05-07 Thread Kyle Mestery (kmestery)
I would have to defer to the normal Ubuntu OpenStack install/upgrade procedures 
here, I'm not an expert in that area.
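
For reference, the usual route on Ubuntu 12.04 is the Ubuntu Cloud Archive; a
sketch only (not verified here, so back up your configs and check the release
notes first):

  sudo apt-get install ubuntu-cloud-keyring
  echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main" | \
      sudo tee /etc/apt/sources.list.d/grizzly.list
  sudo apt-get update && sudo apt-get dist-upgrade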

On May 7, 2013, at 12:01 PM, Guilherme Russi luisguilherme...@gmail.com wrote:

 I'm not using devstack, I have installed each component and put a repository 
 in my ubuntu, I need to purge all of them or just change the repository and 
 run an apt-get update / apt-get upgrade?
 
 Regards.
 
 Guilherme.
 
 
 2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
 If you are using devstack, it's as simple as adding a stable/grizzly for 
 each component you are running, similar to this:
 
 NOVA_BRANCH=stable/grizzly
 
 Repeat that for the services you are running, (e.g. Quantum, Glance, etc.).
 
 Thanks,
 Kyle
 
 On May 7, 2013, at 10:20 AM, Guilherme Russi luisguilherme...@gmail.com 
 wrote:
 
  Hello again Kyle, thank you for answering, how do I update the Openstack 
  version, from Folsom to Grizzly?
 
  Regards.
 
  Guilherme.
 
 
  2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
  You don't want to run nova-network with OpenStack Networking (Quantum) at 
  the same time. If you're running Folsom, I think upstream Ryu has changed 
  such that the Folsom version of the Quantum plugin may not work. Running 
  Grizzly will solve this, or alternatively if you want to try Folsom, look 
  at the github page here for instructions on which branch of Ryu to pull to 
  get it working with Folsom Quantum:
 
  https://github.com/osrg/ryu/wiki/RYU-Openstack-Folsom-environment-HOWTO
 
  Thanks,
  Kyle
 
  On May 7, 2013, at 8:21 AM, Guilherme Russi luisguilherme...@gmail.com 
  wrote:
 
   Hello Kyle, The doubts are, If I install the ryu plugin, my current 
   working openstack will broken, I mean, only installing ryu by the step 
   python ./setup.py install, another thing, and the part to configure 
   nova.conf, I don't have nova-network, I must install it?
  
   Thanks.
  
   Guilherme.
  
  
   2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
   On May 7, 2013, at 7:57 AM, Guilherme Russi luisguilherme...@gmail.com 
   wrote:
   
Hello Guys,
   
 I'm here again with some doubt, have anybody already installed the Ryu 
plugin with Quantum? I'm trying to follow this page: 
http://osrg.github.io/ryu/doc/using_with_openstack.html but it's not 
working. Can anybody help me with some doubts about the installation?
   
Thank you so much.
   
   What exactly is happening with Ryu on Folsom here? I've run this before 
   and had no issues, so I can try to provide some help here.
  
   Kyle
  
Guilherme.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
  
  
  
 
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum + Ryu on Folsom

2013-05-07 Thread Guilherme Russi
Well, thank you for your support, I'll try to do this and I'll let you know
if it worked, or not.

Regards.


2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com

 I would have to defer to the normal Ubuntu OpenStack install/upgrade
 procedures here, I'm not an expert in that area.

 On May 7, 2013, at 12:01 PM, Guilherme Russi luisguilherme...@gmail.com
 wrote:

  I'm not using devstack, I have installed each component and put a
 repository in my ubuntu, I need to purge all of them or just change the
 repository and run an apt-get update / apt-get upgrade?
 
  Regards.
 
  Guilherme.
 
 
  2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
  If you are using devstack, it's as simple as adding a stable/grizzly
 for each component you are running, similar to this:
 
  NOVA_BRANCH=stable/grizzly
 
  Repeat that for the services you are running, (e.g. Quantum, Glance,
 etc.).
 
  Thanks,
  Kyle
 
  On May 7, 2013, at 10:20 AM, Guilherme Russi luisguilherme...@gmail.com
 wrote:
 
   Hello again Kyle, thank you for answering, how do I update the
 Openstack version, from Folsom to Grizzly?
  
   Regards.
  
   Guilherme.
  
  
   2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
   You don't want to run nova-network with OpenStack Networking (Quantum)
 at the same time. If you're running Folsom, I think upstream Ryu has
 changed such that the Folsom version of the Quantum plugin may not work.
 Running Grizzly will solve this, or alternatively if you want to try
 Folsom, look at the github page here for instructions on which branch of
 Ryu to pull to get it working with Folsom Quantum:
  
  
 https://github.com/osrg/ryu/wiki/RYU-Openstack-Folsom-environment-HOWTO
  
   Thanks,
   Kyle
  
   On May 7, 2013, at 8:21 AM, Guilherme Russi 
 luisguilherme...@gmail.com wrote:
  
Hello Kyle, The doubts are, If I install the ryu plugin, my current
 working openstack will broken, I mean, only installing ryu by the step
 python ./setup.py install, another thing, and the part to configure
 nova.conf, I don't have nova-network, I must install it?
   
Thanks.
   
Guilherme.
   
   
2013/5/7 Kyle Mestery (kmestery) kmest...@cisco.com
On May 7, 2013, at 7:57 AM, Guilherme Russi 
 luisguilherme...@gmail.com wrote:

 Hello Guys,

  I'm here again with some doubt, have anybody already installed
 the Ryu plugin with Quantum? I'm trying to follow this page:
 http://osrg.github.io/ryu/doc/using_with_openstack.html but it's not
 working. Can anybody help me with some doubts about the installation?

 Thank you so much.

What exactly is happening with Ryu on Folsom here? I've run this
 before and had no issues, so I can try to provide some help here.
   
Kyle
   
 Guilherme.
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
   
   
   
  
  
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Horizon - Internal Server Error when hitting /nova/instances_and_volumes/

2013-05-07 Thread Daniel Ellison
As the subject says, I'm having issues getting at Instances & Volumes (also 
Images & Snapshots) in Horizon. I'm running grizzly on precise. Everything 
else works fine; the entire Admin tab works as expected. The Overview and 
Access & Security also work fine.

Is there any way to see what calls are being made from horizon to nova? I 
debugged some previous issues by using --debug on some python client calls. But 
I don't think there's an equivalent in this case.
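
(What I mean by --debug on the client side is simply, for example:

  nova --debug list   # prints the raw HTTP requests/responses the client sends to the API
)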

The Apache log on my horizon machine (a VM under nova) shows the 500 error, 
then has a bunch of 404s when trying to retrieve media and other resources, 
e.g. /nova/images_and_snapshots/dashboard/css/style.css. For all other calls 
there are no 404 errors (and no 500 errors, needless to say). 

I'm only using Nova, Keystone and Glance for the moment. So no Cinder to 
consider, and no attached volumes. Does any of this sound familiar to anyone?

Thanks,
Daniel
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum conceptual question (bridges)

2013-05-07 Thread Lorin Hochstein
Édouard:

I didn't realize that there's a Linux software bridge involved when security
groups are enabled.

However, this doesn't really answer my original question. I asked about the
fact that there seemed to be two openvswitch bridges that packets have to
cross to get from the virtual interface (say, vnet0) to the physical
interface (say, eth2) on the host, assuming the openvswitch plugin and
using vlan for transport.

vnet0 -- br-int -- br-eth2 -- eth2.


Based on your answer,  I see that there are actually three bridges that
packets have to traverse when using security groups:

vnet0 -- qbr -- br-int -- br-eth2 -- eth2
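
(A quick way to confirm this chain on a compute node; the commands are
standard, but the bridge and port names below are only what the OVS agent
typically creates, so verify against your own host:

  brctl show      # shows the qbrXXX Linux bridge with its qvbXXX and tap/vnet ports
  ovs-vsctl show  # shows br-int and br-eth2, usually linked by an int-br-eth2/phy-br-eth2 veth pair
)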

Is this view correct? If so, is there a performance penalty (e.g.,
increased latency, reduced bandwidth) for having to cross two Open vSwitch
bridges: br-int and br-eth2?

If there is a penalty, I was curious as to whether this splitting into two
bridges was done because it isn't possible to implement the desired
functionality using a single openvswitch bridge, or if there was some other
reason why it was split out into two (e.g., to simplify the implementation).

Lorin





On Tue, May 7, 2013 at 2:38 AM, Édouard Thuleau thul...@gmail.com wrote:

 OVS is not compatible with iptables + ebtables rules that are applied
 directly on VIF ports.
 So the libvirt_vif_driver 'nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver'
 creates a Linux software bridge to be able to apply security group rules
 with iptables.

 If you don't need the security group functionalities, you can
 use libvirt_vif_driver 
 'nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver'
 or 'nova.virt.libvirt.vif.LibvirtOpenVswitchDriver' (depends on your
 libvirt version).
 http://docs.openstack.org/trunk/openstack-network/admin/content/nova_with_quantum_vifplugging_ovs.html

 I think this point must be listed in the limitations page of the OpenStack
 Networking Admin guide
 http://docs.openstack.org/grizzly/openstack-network/admin/content/ch_limitations.html

 Édouard.

 On Tue, May 7, 2013 at 2:46 AM, Lorin Hochstein 
 lo...@nimbisservices.comwrote:

 I'm trying to wrap my head around how Quantum works. If I'm understanding
 things correctly, when using the openvswitch plugin, a packet traveling
 from a guest out to the physical switch has to cross two software bridges:

 1. br-int
 2. br-ethN or br-tun (depending on whether using VLANs or GRE tunnels)

 So, I think I understand the motivation behind this: the integration
 bridge handles the rules associated with the virtual networks defined by
 OpenStack users, and the (br-ethN | br-tun) bridge handles the rules
 associated with moving the packets across the physical network.

 My question is:  Does having two software bridges in the path incur a
 larger network performance penalty than if there was only a single software
 bridge between the VIF and the physical network interface?

 If so, was Quantum implemented this way because it's simply not possible
 to achieve the desired functionality using a single openvswitch bridge, or
 was it because using the dual-bridge approach simplified the
 implementation, or was there some other reason?

 Lorin
 --
 Lorin Hochstein
 Lead Architect - Cloud Services
 Nimbis Services, Inc.
 www.nimbisservices.com

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





-- 
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Help About The Documentation

2013-05-07 Thread Razique Mahroua
+1
Well, the best option would be a RAID 1, which would ensure data safety in case one drive fails. There is not, I think, any "optimal" strategy - but since that service (Cinder / nova-volume) aims to provide customers a safe place to put their data, your best bet would be a RAID 1 if you have two disks.
Regards,
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15
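
As a minimal sketch of the two options (device names and the VG name are
assumptions; match the volume_group your nova-volume/cinder config expects):

  # option A: RAID 1 across the two disks, then LVM on top
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  pvcreate /dev/md0
  vgcreate nova-volumes /dev/md0

  # option B: one VG spanning both disks (more space, no redundancy)
  pvcreate /dev/sdb /dev/sdc
  vgcreate nova-volumes /dev/sdb /dev/sdc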

On 30 Apr 2013, at 17:44, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 Indeed. There is room for improvement: should the 2 disks be RAID1 or parts
 of the same VG, as said?

 As it is recommended hardware, it would be interesting to know which kind
 of setup with 2 SATA disks is optimal.

 Of course, it depends a lot on the usage: if you need to boot from volume,
 I would say having 2 physical PVs is better, but if you need to store
 critical data, then the RAID one is fine.

 -Sylvain

 On 30/04/2013 14:35, Alexandre De Carvalho wrote:

  Here is the link:
  http://docs.openstack.org/trunk/openstack-compute/admin/content/compute-system-requirements.html

  regards,
  Alexandre

  2013/4/30 Razique Mahroua razique.mahr...@gmail.com

   Hi,
   can you provide us the link?
   I think that means create an LVM VG made of two disks (so two PVs) that
   you will call "nova-volume".
   Regards,

   Razique Mahroua - Nuage & Co
   razique.mahr...@gmail.com
   Tel: +33 9 72 37 94 15

   On 30 Apr 2013, at 09:49, Alexandre De Carvalho
   alexandre7.decarva...@gmail.com wrote:

    Hi everyone!

    I found this in the documentation: "Volume storage: two disks with 2 TB
    (SATA) for volumes attached to the compute nodes". I don't understand
    this sentence. Can someone explain it to me, please?

    Thanks!

    Have a good day!

    --
    regards,
    Alexandre

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Disk space not available when creating small instance using OpenStack

2013-05-07 Thread Razique Mahroua
Excellent :)

On 30 Apr 2013, at 16:29, rahul singh singh.rahul.1...@gmail.com wrote:

 Thanks Razique, that works :)
 
 
 On Tue, Apr 30, 2013 at 10:22 AM, Razique Mahroua razique.mahr...@gmail.com 
 wrote:
 if it's an ephemeral one, just run $ fdisk -l and you will see it :)
 it's neither mounted nor formatted in the first place
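 e.g. something like this inside the guest (device name assumed; check the
 fdisk output first):

   fdisk -l              # the ephemeral disk usually shows up as a second device such as /dev/vdb
   mkfs.ext4 /dev/vdb
   mount /dev/vdb /mnt
   df -h /mnt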
 regards,
 
 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15
 
 On 30 Apr 2013, at 16:16, rahul singh singh.rahul.1...@gmail.com wrote:
 
 Hi,
 I installed OpenStack using devstack on a Ubuntu 12.04 VM created using 
 VirtualBox. I create an m1.small instance which should have 20GB disk. But I 
 log into the VM created by OpenStack I do not see 20GB disk space. Here is 
 my df -h output:
 
 FilesystemSize  Used Available Use% Mounted on
 /dev998.2M 0998.2M   0% /dev
 /dev/vda 54.2M  9.7M 41.7M  19% /
 tmpfs  1001.8M 0   1001.8M   0% /dev/shm
 tmpfs   200.0K 72.0K128.0K  36% /run
 
 Where is the 20GB space?
 
 Thanking you,
 Rahul.
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Keystone] Need feedback on how to fix keystone ldap domain support for Grizzly; are you using keystone ldap with multiple domains?

2013-05-07 Thread Brad Topol
Hi Folks,

The current implementation of Keystone's domain support when using LDAP as 
a backend is broken in the read-only case for Grizzly.  This is because 
Keystone in Grizzly assumes it can create a default domain which is not 
possible for many read-only LDAPs.  We are trying to backport a fix for 
this.  Basically  we have two options:

1.  Completely refrain from trying to store domains in LDAP.  If we run 
with the assumption that most LDAPs don't have the concept of a domain, 
then we just assume that there is one default domain per LDAP backend for 
Grizzly.

2.  Patch the current implementation so that if the default domain does 
not exist, we essentially emulate having one.  This will work, but it will leave 
us storing the domain_id in an LDAP attribute such as businessCategory (or 
an equivalent attribute; it's mappable).  This design has been seen by many 
as undesirable, so we would like to avoid having to leave it in if we can, 
and then start fresh for Havana.

Ideally we would like to go with Option 1.  We need to know if there are 
any early adopters of Grizzly that are using keystone with an LDAP backend 
and using it to store multiple domains in the LDAP.   Because if we 
backport option 1 we will most certainly break anyone who is using 
keystone with an LDAP backend and using it to store multiple domains in 
the LDAP.

Please provide us input on this if you are using keystone ldap domain 
support!

Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Cindy Willman (919) 268-5296___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Failure to arp by quantum router

2013-05-07 Thread Greg Chavez
Problem:

VMs launch, get local IPs from DHCP, get floating ips, and are
available from the external network, but seem to time out after a
minute of inactivity.  After that time, the VM cannot be reached.  I
can sometimes get network access back by pinging the external gateway from
the VM console, but that eventually times out as well.

Background:

I have Grizzly up and running on Ubuntu 13.04 with gre-tunneling and
per-tenant routers.  I have a network node with three interfaces,
public, management, and vm config.  The compute nodes and quantum node
interact on the vm config network.  Everything else runs from the
controller node.

Here's a diagram of my setup:
http://chavezy.files.wordpress.com/2013/03/ostack-log-net_iscsi.png

I can trace the packet to the external port of the tenant router on
the network node.  So the network node arps for the floating IP with
its interface, no problem.  But when I sniff the tenant network side
of the router, I see unanswered arp requests for the VM's local IP.
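
For what it's worth, this is roughly how I capture on both sides of the tenant
router (the namespace and interface names below are placeholders):

  ip netns                                                   # find the qrouter-<uuid> namespace
  ip netns exec qrouter-<uuid> tcpdump -n -e -i qg-<id> arp  # external side
  ip netns exec qrouter-<uuid> tcpdump -n -e -i qr-<id> arp  # tenant side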

What's failing here? How would you troubleshoot this?  Thanks.

--
\*..+.-
--Greg Chavez
+//..;};

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Keystone] Need feedback on how to fix keystone ldap domain support for Grizzly; are you using keystone ldap with multiple domains?

2013-05-07 Thread Aaron Knister
Hi Brad,

FWIW-- I'm using AD as the LDAP backend and was using the msSFU30NisDomain
attribute for the domain_id mapping. I'm now leveraging some OpenLDAP
overlay magic instead, but I digress. I could see value for us in being
able to leverage a domain_id stored in LDAP although admittedly we aren't
yet using it. Could option 1 be implemented but be configurable? Something
like domains_enabled? If disabled then the behavior in the default domain
is used for all operations, if enabled then the current behavior is used?

-Aaron


On Tue, May 7, 2013 at 3:56 PM, Brad Topol bto...@us.ibm.com wrote:

 Hi Folks,

 The current implementation of Keystone's domain support when using LDAP as
 a backend is broken in the read-only case for Grizzly.  This is because
 Keystone in Grizzly assumes it can create a default domain which is not
 possible for many read-only LDAPs.  We are trying to backport a fix for
 this.  Basically  we have two options:

 1.  Completely refrain from trying to store domains in LDAP.  If we run
 with the assumptions that most LDAPs don't have the concept of a domain
 than we just assume that there is one default domain per LDAP backend for
 Grizzly.

 2.  Patch the current implementation so that if the default domain does
 not exist essentially emulate having one.  This will work but will leave in
 storing the domain_id in an LDAP attribute such as businessCategory (or
 equivalent attribute, its mappable).   This design has been seen by many as
 not desirable so we would like to avoid having to leave it in if we can and
 then  start fresh for Havana.

 Ideally we would like to go with Option 1.  We need to know if there are
 any early adopters of Grizzly that are using keystone with an LDAP backend
 and using it to store multiple domains in the LDAP.   Because if we
 backport option 1 we will most certainly break anyone who is using
  keystone with an LDAP backend and using it to store multiple domains in
 the LDAP.

 Please provide us input on this if you are using keystone ldap domain
 support!

 Thanks,

 Brad

 Brad Topol, Ph.D.
 IBM Distinguished Engineer
 OpenStack
 (919) 543-0646
 Internet:  bto...@us.ibm.com
 Assistant: Cindy Willman (919) 268-5296
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] cinder malformed url error

2013-05-07 Thread Dennis Jacobfeuerborn

On 07.05.2013 16:05, Dennis Jacobfeuerborn wrote:

Hi,
I've got the cinder-api service up and running but when I run cinder
list I get an error ERROR: Malformed request url.

This is the URL called from the debug output:
REQ: curl -i
http://10.16.171.3:8776/v1/11b39f6529ea4eb6a527de82122ba6f6/volumes/detail
-X GET

The endpoints in the database are defined as
http://10.16.171.3:8776/v1/%(tenant_id)s
which should be correct.

Any ideas what about the URL is malformed?


Found the reason. Apparently this happens when auth_strategy = 
keystone is missing from the config.
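
i.e. something like this in /etc/cinder/cinder.conf (sketch of just the
relevant bit):

  [DEFAULT]
  auth_strategy = keystone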


The issue I now have is that a cinder create 1 leads to a volume in an 
error state with the following error in scheduler.log:


2013-05-08 02:13:17  WARNING [cinder.scheduler.host_manager] service is 
down or disabled.
2013-05-08 02:13:17ERROR [cinder.scheduler.manager] Failed to 
schedule_create_volume: No valid host was found.


I'm running the api and scheduler services on the controller node and 
the volume service on a compute node. Any ideas why cinder doesn't find 
a valid host?


Regards,
  Dennis

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Unable to ping VM using OpenStack and Quantum(openvswitch plugin)

2013-05-07 Thread Daniels Cai
Are the physical NICs used for VM communication able to ping each other?
Check your GRE tunnel if you are in GRE mode, or check your VLAN
settings on the physical switch if you are in VLAN mode.
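
For example (addresses are placeholders):

  # from the compute node, the data-network IP of the network node must be reachable
  ping -c 3 <network-node-data-ip>
  # and the GRE port on br-tun should list the right remote_ip
  ovs-vsctl show | grep -A 2 'Port "gre'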


Daniels Cai

http://dnscai.com

On 2013-5-7, at 14:56, zengshan2008 zengshan2...@gmail.com wrote:

 Hi,
 I've installed openstack using quantum by the guide
 https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
 everything works fine, but I can't ping the VM from the outside world, nor 
 from the network node. The following is some of my configuration.
 1) root@networknode:/etc# ip netns
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189
 qdhcp-e58739ff-16dc-4289-8110-242f7818d314
 2) qrouter and qdhcp server is up
 root@networknode:/etc# ip netns exec 
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ifconfig
 loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:85 errors:0 dropped:0 overruns:0 frame:0
  TX packets:85 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:9224 (9.2 KB)  TX bytes:9224 (9.2 KB)

 qg-daf2c037-cc Link encap:Ethernet  HWaddr fa:16:3e:ea:f6:c3
  inet addr:192.168.23.102  Bcast:192.168.23.255  Mask:255.255.255.0
  inet6 addr: 2401:de00::f816:3eff:feea:f6c3/64 Scope:Global
  inet6 addr: fe80::f816:3eff:feea:f6c3/64 Scope:Link
  inet6 addr: 2401:de00::6066:acc0:66e3:7434/64 Scope:Global
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:5392 errors:0 dropped:0 overruns:0 frame:0
  TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:406572 (406.5 KB)  TX bytes:846 (846.0 B)

 qr-d9cb6d6d-5e Link encap:Ethernet  HWaddr fa:16:3e:6d:5a:3a
  inet addr:202.122.38.1  Bcast:202.122.38.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fe6d:5a3a/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:24 errors:0 dropped:0 overruns:0 frame:0
  TX packets:108 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:2184 (2.1 KB)  TX bytes:5928 (5.9 KB)

 root@networknode:/etc# ip netns exec 
 qdhcp-e58739ff-16dc-4289-8110-242f7818d314 ifconfig
 loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:10 errors:0 dropped:0 overruns:0 frame:0
  TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:840 (840.0 B)  TX bytes:840 (840.0 B)

 tape10a4f07-60 Link encap:Ethernet  HWaddr fa:16:3e:db:8f:23
  inet addr:202.122.38.14  Bcast:202.122.38.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fedb:8f23/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:106 errors:0 dropped:0 overruns:0 frame:0
  TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:5760 (5.7 KB)  TX bytes:2652 (2.6 KB)
 3) qrouter can ping the dhcp server from the network node
 root@networknode:/etc# ip netns exec 
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ping 202.122.38.14
 PING 202.122.38.14 (202.122.38.14) 56(84) bytes of data.
 64 bytes from 202.122.38.14: icmp_req=1 ttl=64 time=0.325 ms
 64 bytes from 202.122.38.14: icmp_req=2 ttl=64 time=0.023 ms
 64 bytes from 202.122.38.14: icmp_req=3 ttl=64 time=0.024 ms
 ^C
 --- 202.122.38.14 ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 1998ms
 rtt min/avg/max/mdev = 0.023/0.124/0.325/0.142 ms
 4) virtual machine is up
 quantum floatingip-list
 +--+--+-+--+
 | id   | fixed_ip_address | 
 floating_ip_address | port_id  |
 +--+--+-+--+
 | 88398dd1-7256-49c7-b1ad-719903125501 | 202.122.38.15| 192.168.23.103
   | ce7c1eff-afcb-4908-b399-0e6e07d2791e |
 +--+--+-+--+
   5) virtual machine eth0 is up

 6)ssh or ping to vm is failed
 root@networknode:/etc# ip netns exec 
 qrouter-8f5f3c17-a00e-4382-a403-181dfbb9d189 ping 202.122.38.15
 PING 202.122.38.15 (202.122.38.15) 56(84) bytes of data.
 From 202.122.38.1 icmp_seq=1 Destination Host Unreachable
 From 202.122.38.1 icmp_seq=2 Destination Host Unreachable
 From 202.122.38.1 icmp_seq=3 Destination Host Unreachable
 From 202.122.38.1 icmp_seq=4 

Re: [Openstack] cinder malformed url error

2013-05-07 Thread Steve Heistand

 2013-05-08 02:13:17  WARNING [cinder.scheduler.host_manager] service is 
 down or disabled.
 2013-05-08 02:13:17ERROR [cinder.scheduler.manager] Failed to 
 schedule_create_volume: No valid host was found.
 
 I'm running the api and scheduler services on the controller node and 
 the volume service on a compute node. Any ideas why cinder doesn't find 
 a valid host?

I've found that even small time differences between nodes can cause issues
like this.
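
A quick way to compare the clocks, for example:

  # run on each node; an offset of more than a few seconds is enough to trip the scheduler
  ntpq -p
  date -u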

s


-- 

 Steve Heistand   NASA Ames Research Center
 SciCon Group Mail Stop 258-6
 steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000

 Any opinions expressed are those of our alien overlords, not my own.

# For Remedy#
#Action: Resolve#
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack][DevStack] How to contribute?

2013-05-07 Thread Gareth
Hi all

In the OpenStack 'how-to-contribute' document, the first step is joining the team
in Launchpad. ~swift, ~nova, and ~glance are all fine, but ~devstack is a
restricted team here[1].

Is 'joining the team' a necessary step for contributing?


[1] https://launchpad.net/~devstack

-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] cinder malformed url error

2013-05-07 Thread Dennis Jacobfeuerborn

On 08.05.2013 02:34, Steve Heistand wrote:



2013-05-08 02:13:17  WARNING [cinder.scheduler.host_manager] service is
down or disabled.
2013-05-08 02:13:17ERROR [cinder.scheduler.manager] Failed to
schedule_create_volume: No valid host was found.

I'm running the api and scheduler services on the controller node and
the volume service on a compute node. Any ideas why cinder doesn't find
a valid host?


Ive found that even small time differences between nodes can cause issues
like this.


Yep, that was it. Ntpd was running on the systems but apparently it 
couldn't reach any timeservers. I set up another system as the time 
source to synchronize the two nodes to and now the volume is created 
properly.


Thanks!

Regards,
  Dennis

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][DevStack] How to contribute?

2013-05-07 Thread Dean Troyer
On Tue, May 7, 2013 at 9:16 PM, Gareth academicgar...@gmail.com wrote:
 In OpenStack 'how-to-contribute' document, the first step is join the team in 
 Launchpad. All of ~swift, ~nova, ~glance are ok. But ~devstack is a 
 restricted team here[1].

 Does that 'joining the team' is a necessary step for contributing?

Not for DevStack, the only team we have set up is the core team.
IIRC, if you are not a member of any other teams you would need to
join the OpenStack team[1] that is linked in the grey box under the
team list.

Otherwise you just need to have completed the CLA-related bits that
allows you to send reviews to Gerrit.

dt

[1] https://launchpad.net/~openstack

--

Dean Troyer
dtro...@gmail.com

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] floodlight ignore subnet gateway due to PORT_DOWN and LINK_DOWN

2013-05-07 Thread Liu Wenmao
I just commented out some of the floodlight code and the VMs can ping the gateway.

But I do not know why the gateway port and link are down; the interface is up in
the namespace view:

 root@controller
:/usr/src/eclipse# ip netns exec
qrouter-7bde1209-e8ed-4ae6-a627-efaa148c743c ip link
14: qr-8af2e01f-bb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UNKNOWN
link/ether fa:16:3e:f7:3d:5e brd ff:ff:ff:ff:ff:ff

root@controller
:/usr/src/eclipse# ip netns exec
qrouter-7bde1209-e8ed-4ae6-a627-efaa148c743c ip addr
14: qr-8af2e01f-bb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UNKNOWN
link/ether fa:16:3e:f7:3d:5e brd ff:ff:ff:ff:ff:ff
inet 100.0.0.1/24 brd 100.0.0.255 scope global qr-8af2e01f-bb
inet6 fe80::f816:3eff:fef7:3d5e/64 scope link
   valid_lft forever preferred_lft forever
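
One thing that might be worth trying (I have not confirmed it changes the
floodlight behaviour) is clearing the OpenFlow PORT_DOWN flag on that port by
hand:

  ovs-ofctl mod-port br-int qr-8af2e01f-bb up
  ovs-ofctl show br-int | grep -A 2 qr-8af2e01f-bb   # re-check config/state afterwards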


On Tue, May 7, 2013 at 5:01 PM, Liu Wenmao marvel...@gmail.com wrote:

 hi

 I use quantum grizzly with namespaces and floodlight, but the VMs cannot ping
 their gateway. It seems that floodlight ignores devices whose status
 is PORT_DOWN or LINK_DOWN, and somehow the subnet gateway really is
 PORT_DOWN and LINK_DOWN. Is that normal? Or how can I change its
 status to normal?

 root@controller:~# ovs-ofctl show br-int
 OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:e2ed9e9b6942
 n_tables:255, n_buffers:256
 features: capabilities:0xc7, actions:0xfff
  1(qr-c5496165-c7): addr:5e:67:22:5b:d5:0e
  config: PORT_DOWN
  state:  LINK_DOWN
  2(qr-8af2e01f-bb): addr:e4:00:00:00:00:00    <-- this is the gateway
  config: PORT_DOWN
  state:  LINK_DOWN
  3(qr-48c69382-4f): addr:22:64:6f:3a:9f:cd
  config: PORT_DOWN
  state:  LINK_DOWN
  4(patch-tun): addr:8e:90:4c:aa:d2:06
  config: 0
  state:  0
  5(tap5b5891ac-94): addr:6e:52:f7:c1:ef:f4
  config: PORT_DOWN
  state:  LINK_DOWN
  6(tap09a002af-66): addr:c6:cb:01:60:3f:8a
  config: PORT_DOWN
  state:  LINK_DOWN
  7(tap160480aa-84): addr:96:43:cc:05:71:d5
  config: PORT_DOWN
  state:  LINK_DOWN
  8(tapf6040ba0-b5): addr:e4:00:00:00:00:00
  config: PORT_DOWN
  state:  LINK_DOWN
  9(tap0ded1c0f-df): addr:12:c8:b3:5c:fb:6a
  config: PORT_DOWN
  state:  LINK_DOWN
  10(tapaebb6140-31): addr:e4:00:00:00:00:00
  config: PORT_DOWN
  state:  LINK_DOWN
  11(tapddc3ce63-2b): addr:e4:00:00:00:00:00
  config: PORT_DOWN
  state:  LINK_DOWN
  12(qr-9b9a3229-19): addr:e4:00:00:00:00:00
  config: PORT_DOWN
  state:  LINK_DOWN
  LOCAL(br-int): addr:e2:ed:9e:9b:69:42
  config: PORT_DOWN
  state:  LINK_DOWN
 OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0


 floodlight code:

 if (entity.hasSwitchPort() &&
         !topology.isAttachmentPointPort(entity.getSwitchDPID(),
                                         entity.getSwitchPort().shortValue())) {
     if (logger.isDebugEnabled()) {
         logger.debug("Not learning new device on internal"
                      + " link: {}", entity);
     }

 public boolean portEnabled(OFPhysicalPort port) {
     if (port == null)
         return false;
     if ((port.getConfig() & OFPortConfig.OFPPC_PORT_DOWN.getValue()) > 0)
         return false;

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][DevStack] How to contribute?

2013-05-07 Thread Gareth
Great thanks!


On Wed, May 8, 2013 at 10:33 AM, Dean Troyer dtro...@gmail.com wrote:

 On Tue, May 7, 2013 at 9:16 PM, Gareth academicgar...@gmail.com wrote:
  In OpenStack 'how-to-contribute' document, the first step is join the
 team in Launchpad. All of ~swift, ~nova, ~glance are ok. But ~devstack is a
 restricted team here[1].
 
  Does that 'joining the team' is a necessary step for contributing?

 Not for DevStack, the only team we have set up is the core team.
 IIRC, if you are not a member of any other teams you would need to
 join the OpenStack team[1] that is linked in the grey box under the
 team list.

 Otherwise you just need to have completed the CLA-related bits that
 allows you to send reviews to Gerrit.

 dt

 [1] https://launchpad.net/~openstack

 --

 Dean Troyer
 dtro...@gmail.com




-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] keepalive can not check the haproxy is down.

2013-05-07 Thread Lei Zhang
I am using the default version from Ubuntu 12.04 (keepalived
1:1.2.2-3ubuntu1).
Maybe this issue is the root cause. I will try the latest keepalived.
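
(For what it's worth, the check keepalived runs can also be exercised by hand
to rule out the script itself:

  killall -0 haproxy; echo $?   # prints 0 while haproxy is running, non-zero once it is stopped
)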


On Tue, May 7, 2013 at 6:11 PM, eric_e_sm...@dell.com wrote:

 What version of keepalived are you using?  I found this online:
 https://github.com/acassen/keepalived/issues/8


 I would first try removing the check script and validating that failure
 works without the check script.  If that works you might need to update
 keepalived.


 Here’s a brief introduction I did a while back on using haproxy with
 keepalived as a load balancer (FWIW):
 http://four-eyes.net/2013/01/haproxy-keepalived-the-free-ha-load-balancer/
 


 *From:* Lei Zhang [mailto:zhang.lei@gmail.com]
 *Sent:* Monday, May 06, 2013 7:55 PM
 *To:* Smith, Eric E
 *Cc:* openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] keepalive can not check the haproxy is down.

 Thanks Eric,

 I have solved it after breaking up the group.

 1. The different netmask is a typo, and it doesn't break the failover.

 2. Why is the group unnecessary? When there are two instances using the
 same check script, as in this case, what does grouping them mean?

 On Mon, May 6, 2013 at 6:37 PM, eric_e_sm...@dell.com wrote:

 I see you have different netmasks for the VIP on node1 vs. node2;  I would
 also try breaking them out of the vrrp_sync_group and validating at least 1
 router will fail independently.  

  

 *From:* Openstack [mailto:openstack-bounces+eric_e_smith=
 dell@lists.launchpad.net] *On Behalf Of *Lei Zhang
 *Sent:* Monday, May 06, 2013 3:07 AM
 *To:* openstack@lists.launchpad.net
 *Subject:* [Openstack] keepalive can not check the haproxy is down.

 Hi Guys,

 I am trying to use keepalived and haproxy together to improve the HA of
 OpenStack, but I have run into the following unexpected issue.

 I expect that when the haproxy process crashes on the MASTER node
 (checked by chk_haproxy), the second node will take over the VIP. But when
 I stop the haproxy process, nothing happens.
 However, when I stop the keepalived service, the VIP is brought up on node2
 as expected.

 So I think the root cause should be the chk_haproxy block, but I have no
 idea why it doesn't work. Does anybody have ideas?

 *node1 keepalived.conf*

 global_defs {
     lvs_id LVS_228
 }

 vrrp_sync_group openstack_haproxy {
     group {
         v1
         v2
     }
 }
 vrrp_script chk_haproxy {
     script "killall -0 haproxy"
     interval 2
     debug
     weight 2
 }
 vrrp_instance v1 {
     interface eth0
     debug
     state MASTER
     virtual_router_id 1
     priority 101
     virtual_ipaddress {
         192.168.0.230/24
     }
     track_script {
         chk_haproxy
     }
 }
 vrrp_instance v2 {
     interface eth1
     state MASTER
     debug
     virtual_router_id 2
     priority 101
     virtual_ipaddress {
         10.1.0.30/16
     }
     track_script {
         chk_haproxy
     }
 }

 *node2 keepalived.conf*

 global_defs {
     lvs_id LVS_229
 }

 vrrp_sync_group openstack_haproxy {
     group {
         v1
         v2
     }
 }
 vrrp_script chk_haproxy {
     script "killall -0 haproxy"
     interval 2
     weight 2
 }
 vrrp_instance v1 {
     interface eth0
     state BACKUP
     virtual_router_id 1
     priority 100
     virtual_ipaddress {
         192.168.0.230
     }
     track_script {
         chk_haproxy
     }
 }
 vrrp_instance v2 {
     interface eth1
     state BACKUP
     virtual_router_id 2
     priority 100
     virtual_ipaddress {
         10.1.0.30
     }
     track_script {
         chk_haproxy
     }
 }

 --
 Lei Zhang

 Blog: http://jeffrey4l.github.com
 twitter/weibo: @jeffrey4l


 --
 Lei Zhang

 Blog: http://jeffrey4l.github.io
 twitter/weibo: @jeffrey4l




-- 
Lei Zhang

Blog: http://jeffrey4l.github.io
twitter/weibo: @jeffrey4l
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_glance_trunk #41

2013-05-07 Thread openstack-testing-bot
Title: precise_havana_glance_trunk

General Information
  Result:         BUILD FAILURE
  Build URL:      https://jenkins.qa.ubuntu.com/job/precise_havana_glance_trunk/41/
  Project:        precise_havana_glance_trunk
  Date of build:  Tue, 07 May 2013 03:01:36 -0400
  Build duration: 5 min 23 sec
  Build cause:    Started by an SCM change
  Built on:       pkg-builder
  Health report:  Build stability: all recent builds failed (score 0).

Changes
  Implement registry API v2 (by flaper87): adds glance/registry/api/v2/ (rpc.py,
  __init__.py), glance/common/rpc.py and related unit tests; edits
  glance/common/exception.py, glance/common/wsgi.py, glance/db/sqlalchemy/models.py
  and glance/tests/unit/v1/test_api.py.

Console Output [...truncated 7400 lines...]
  [... dch -a changelog entries for the imported upstream commits truncated ...]
  debcommit
  bzr builddeb -S -- -sa -us -uc
  debsign -k9935ACDC glance_2013.2+git201305070301~precise-0ubuntu1_source.changes
  sbuild -d precise-havana -n -A glance_2013.2+git201305070301~precise-0ubuntu1.dsc
  subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A',
  'glance_2013.2+git201305070301~precise-0ubuntu1.dsc']' returned non-zero exit status 2
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
  Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_ceilometer_trunk #37

2013-05-07 Thread openstack-testing-bot
Title: precise_havana_ceilometer_trunk

General Information
  Result:         BUILD FAILURE
  Build URL:      https://jenkins.qa.ubuntu.com/job/precise_havana_ceilometer_trunk/37/
  Project:        precise_havana_ceilometer_trunk
  Date of build:  Tue, 07 May 2013 09:31:37 -0400
  Build duration: 1 min 3 sec
  Build cause:    Started by an SCM change
  Built on:       pkg-builder
  Health report:  Build stability: all recent builds failed (score 0).

Changes
  No changes.

Console Output [...truncated 12 lines...]
  Command "git clone --progress -o origin https://github.com/openstack/ceilometer.git
  /var/lib/jenkins/slave/workspace/precise_havana_ceilometer_trunk/ceilometer"
  returned status code 128:
    error: RPC failed; result=7, HTTP code = 0
    fatal: The remote end hung up unexpectedly
  Trying next repository. ERROR: Could not clone repository.
  FATAL: Could not clone (hudson.plugins.git.GitException: Could not clone)
  [... Java stack traces truncated ...]
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #94

2013-05-07 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/94/Project:precise_havana_quantum_trunkDate of build:Tue, 07 May 2013 09:31:36 -0400Build duration:2 min 41 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesFix 500 raised on disassociate_floatingips when out of syncby aroseneditquantum/plugins/nicira/QuantumPlugin.pyConsole Output[...truncated 3255 lines...]dch -a [62017cd] Imported Translations from Transifexdch -a [26b98b7] lbaas: check object state before update for pools, members, health monitorsdch -a [49c1c98] Metadata agent: reuse authentication info across eventlet threadsdch -a [11639a2] Imported Translations from Transifexdch -a [35988f1] Make the 'admin' role configurabledch -a [ee50162] Simplify delete_health_monitor() using cascadesdch -a [765baf8] Imported Translations from Transifexdch -a [15a1445] Update latest OSLO codedch -a [343ca18] Imported Translations from Transifexdch -a [c117074] Remove locals() from strings substitutionsdch -a [fb66e24] Imported Translations from Transifexdch -a [e001a8d] Add string 'quantum'/ version to scope/tag in NVPdch -a [5896322] Changed DHCPV6_PORT from 467 to 547, the correct port for DHCPv6.dch -a [80ffdde] Imported Translations from Transifexdch -a [929cbab] Imported Translations from Transifexdch -a [2a24058] Imported Translations from Transifexdch -a [b6f0f68] Imported Translations from Transifexdch -a [1e1cINFO:root:Destroying schroot.513] Imported Translations from Transifexdch -a [6bbcc38] Imported Translations from Transifexdch -a [bd702cb] Imported Translations from Transifexdch -a [a13295b] Enable automatic validation of many HACKING rules.dch -a [91bed75] Ensure unit tests work with all interface typesdch -a [0446eac] Shorten the path of the nicira nvp plugin.dch -a [8354133] Implement LB plugin delete_pool_health_monitor().dch -a [147038a] Parallelize quantum unit testing:debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC quantum_2013.2+git201305070931~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A quantum_2013.2+git201305070931~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201305070931~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201305070931~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 


[Openstack-ubuntu-testing-notifications] Build Still Failing: saucy_havana_quantum_trunk #24

2013-05-07 Thread openstack-testing-bot
Title: saucy_havana_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/saucy_havana_quantum_trunk/24/
Project: saucy_havana_quantum_trunk
Date of build: Tue, 07 May 2013 09:32:38 -0400
Build duration: 6 min 24 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Fix 500 raised on disassociate_floatingips when out of sync (by arosen):
  edit quantum/plugins/nicira/QuantumPlugin.py

Console Output
[...truncated 14493 lines...]
Job: quantum_2013.2+git201305070932~saucy-0ubuntu1.dsc
Machine Architecture: amd64
Package: quantum
Package-Time: 245
Source-Version: 1:2013.2+git201305070932~saucy-0ubuntu1
Space: 74004
Status: attempted
Version: 1:2013.2+git201305070932~saucy-0ubuntu1
Finished at 20130507-0939
Build needed 00:04:05, 74004k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'saucy-havana', '-n', '-A', 'quantum_2013.2+git201305070932~saucy-0ubuntu1.dsc']' returned non-zero exit status 2
ERROR:root:Command '['sbuild', '-d', 'saucy-havana', '-n', '-A', 'quantum_2013.2+git201305070932~saucy-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/havana /tmp/tmpgF61B7/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpgF61B7/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log ffa04d3e83ad83a51b366f1a3579ed47328c1b43..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D saucy --newversion 1:2013.2+git201305070932~saucy-0ubuntu1 Automated Ubuntu testing build:
dch -a [17336b9] Fix 500 raised on disassociate_floatingips when out of sync
dch -a [2db451d] Imported Translations from Transifex
dch -a [87d0f81] Duplicate line in Brocade plugin
dch -a [9051f68] Imported Translations from Transifex
dch -a [6f01194] Perform a joined query for ports and security group associations
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC quantum_2013.2+git201305070932~saucy-0ubuntu1_source.changes
sbuild -d saucy-havana -n -A quantum_2013.2+git201305070932~saucy-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'saucy-havana', '-n', '-A', 'quantum_2013.2+git201305070932~saucy-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'saucy-havana', '-n', '-A', 'quantum_2013.2+git201305070932~saucy-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
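
The secondary "Error in sys.excepthook" above is not the build failure itself: apport's hook reaches os.getcwd(), which most likely raises OSError [Errno 2] because the temporary directory the script was started from has already been cleaned up. A small, self-contained illustration of that effect (the directory here is made up):

    # Demonstrates why os.getcwd() fails after the working directory is removed.
    import os, tempfile

    d = tempfile.mkdtemp()   # stands in for the temporary build tree
    os.chdir(d)
    os.rmdir(d)              # teardown removes the directory we are still "in"
    try:
        os.getcwd()          # what apport_python_hook reaches indirectly
    except OSError as err:
        print("getcwd() failed as in the log: %s" % err)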


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_ceilometer_trunk #38

2013-05-07 Thread openstack-testing-bot
Title: precise_havana_ceilometer_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_ceilometer_trunk/38/
Project: precise_havana_ceilometer_trunk
Date of build: Tue, 07 May 2013 10:01:33 -0400
Build duration: 1 min 15 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Update oslo before bringing in exceptions (by asalkeld):
  add ceilometer/openstack/common/uuidutils.py, edit ceilometer/openstack/common/loopingcall.py, edit ceilometer/openstack/common/log.py, edit ceilometer/openstack/common/rpc/impl_qpid.py, edit ceilometer/openstack/common/rpc/impl_fake.py, add ceilometer/openstack/common/rpc/zmq_receiver.py, edit ceilometer/openstack/common/jsonutils.py, edit ceilometer/openstack/common/policy.py, edit ceilometer/openstack/common/rpc/common.py, edit ceilometer/openstack/common/rpc/impl_kombu.py, edit ceilometer/openstack/common/rpc/proxy.py, edit ceilometer/openstack/common/rpc/impl_zmq.py, edit ceilometer/openstack/common/network_utils.py, add ceilometer/openstack/common/processutils.py, edit ceilometer/openstack/common/rpc/amqp.py, edit ceilometer/openstack/common/context.py, edit ceilometer/openstack/common/rpc/dispatcher.py, edit ceilometer/openstack/common/rpc/__init__.py
- Update WSME dependency (by julien):
  edit tools/pip-requires

Console Output
[...truncated 1293 lines...]
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/ceilometer/havana /tmp/tmpaH2Ha8/ceilometer
mk-build-deps -i -r -t apt-get -y /tmp/tmpaH2Ha8/ceilometer/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 6f7efd1efedbc4baa7c2263d1536a985f0895fc0..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201305071001~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [8629e09] Connect the Alarm API to the db
dch -a [896015c] Add the mongo implementation of alarms collection
dch -a [bcb8236] Update WSME dependency
dch -a [0c45387] Imported Translations from Transifex
dch -a [c1b7161] Add Alarm DB API and models
dch -a [9518813] Imported Translations from Transifex
dch -a [d764f8c] Remove "extras" again
dch -a [89ab2f8] add links to return values from API methods
dch -a [82ad299] Modify limitation on request version
dch -a [f90b36d] Doc improvements
dch -a [92905c9] Rename EventFilter to SampleFilter.
dch -a [39d9ca7] Fixes AttributeError of FloatingIPPollster
dch -a [0d5c271] Add just the most minimal alarm API
dch -a [5cb2f9c] Update oslo before bringing in exceptions
dch -a [4fb7650] Enumerate the meter type in the API Meter class
dch -a [ca971ff] Remove "extras" as it is not used
dch -a [6979b16] Adds examples of CLI and API queries to the V2 documentation.
dch -a [1828143] update the ceilometer.conf.sample
dch -a [8bcc377] Set hbase table_prefix default to None
dch -a [af2704e] glance/cinder/quantum counter units are not accurate/consistent
dch -a [6cb0eb9] Add some recommendations about database
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-8c894d19-a93d-426a-8e59-f2673268652a', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-8c894d19-a93d-426a-8e59-f2673268652a', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: saucy_havana_quantum_trunk #25

2013-05-07 Thread openstack-testing-bot
Title: saucy_havana_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/saucy_havana_quantum_trunk/25/
Project: saucy_havana_quantum_trunk
Date of build: Tue, 07 May 2013 11:31:38 -0400
Build duration: 19 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- blueprint cisco-plugin-exception-handling (by leblancd):
  edit quantum/tests/unit/cisco/test_network_plugin.py, edit quantum/plugins/cisco/common/config.py, edit quantum/plugins/cisco/common/cisco_exceptions.py, edit quantum/plugins/cisco/network_plugin.py, edit quantum/plugins/cisco/db/nexus_models_v2.py, edit quantum/plugins/cisco/models/virt_phy_sw_v2.py, edit quantum/plugins/cisco/nexus/cisco_nexus_network_driver_v2.py, edit quantum/plugins/cisco/nexus/cisco_nexus_plugin_v2.py
- update-port error if port does not exist in nvp (by arosen):
  edit quantum/plugins/nicira/QuantumPlugin.py
- Log msg for load policy file only if the file is actually loaded (by salv.orlando):
  edit quantum/policy.py
- Calculate nicira plugin NAT rules order according to CIDR prefix (by salv.orlando):
  edit quantum/plugins/nicira/QuantumPlugin.py
- Update import of oslo's processutils. (by mikal):
  edit quantum/openstack/common/processutils.py
- Validate that netaddr does not receive a string with whitespace (by gkotton):
  edit quantum/tests/unit/test_attributes.py, edit quantum/api/v2/attributes.py
- Imported Translations from Transifex (by Jenkins):
  edit quantum/locale/ja/LC_MESSAGES/quantum.po, edit quantum/locale/quantum.pot, edit quantum/locale/ka_GE/LC_MESSAGES/quantum.po, edit quantum/locale/ko_KR/LC_MESSAGES/quantum.po

Console Output
[...truncated 55 lines...]
Get:9 http://archive.ubuntu.com saucy/multiverse Sources [173 kB]
Get:10 http://archive.ubuntu.com saucy/main amd64 Packages [1183 kB]
Get:11 http://archive.ubuntu.com saucy/restricted amd64 Packages [9636 B]
Get:12 http://archive.ubuntu.com saucy/universe amd64 Packages [5503 kB]
Get:13 http://archive.ubuntu.com saucy/multiverse amd64 Packages [131 kB]
Hit http://archive.ubuntu.com saucy/main Translation-en
Hit http://archive.ubuntu.com saucy/multiverse Translation-en
Hit http://archive.ubuntu.com saucy/restricted Translation-en
Get:14 http://archive.ubuntu.com saucy/universe Translation-en [3809 kB]
Hit http://archive.ubuntu.com saucy-updates/main Sources
Hit http://archive.ubuntu.com saucy-updates/restricted Sources
Hit http://archive.ubuntu.com saucy-updates/universe Sources
Hit http://archive.ubuntu.com saucy-updates/multiverse Sources
Hit http://archive.ubuntu.com saucy-updates/main amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/restricted amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/universe amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/multiverse amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/main Translation-en
Hit http://archive.ubuntu.com saucy-updates/multiverse Translation-en
Hit http://archive.ubuntu.com saucy-updates/restricted Translation-en
Hit http://archive.ubuntu.com saucy-updates/universe Translation-en
Fetched 17.8 MB in 10s (1767 kB/s)
W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_saucy_main_binary-amd64_Packages  Hash Sum mismatch
W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_saucy_universe_binary-amd64_Packages  Hash Sum mismatch
E: Some index files failed to download. They have been ignored, or old ones used instead.
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-33a3fb12-239f-42a2-9c5e-00d2a6c29e51', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-33a3fb12-239f-42a2-9c5e-00d2a6c29e51', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
INFO:root:Complete command log:
INFO:root:Destroying schroot.
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-33a3fb12-239f-42a2-9c5e-00d2a6c29e51', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-33a3fb12-239f-42a2-9c5e-00d2a6c29e51', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
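
The real failure in this build is apt-get update inside the schroot: the bzip2 index files fetched from archive.ubuntu.com do not match their checksums ("Hash Sum mismatch"), so the session exits with status 100 before packaging even starts. A hedged sketch of a commonly suggested workaround, clearing the cached index files and retrying; the session name below is illustrative, not taken from this log:

    # Clear stale apt index files inside the schroot session and retry the update.
    import subprocess

    def schroot(session, user, *cmd):
        subprocess.check_call(['schroot', '-p', '-r', '-c', session,
                               '-u', user, '--'] + list(cmd))

    session = 'saucy-amd64-example-session'      # hypothetical session id
    schroot(session, 'root', 'rm', '-rf', '/var/lib/apt/lists/partial')
    schroot(session, 'root', 'apt-get', 'clean')
    schroot(session, 'root', 'apt-get', 'update')  # the step that failed above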

[Openstack-ubuntu-testing-notifications] Build Failure: saucy_havana_nova_trunk #38

2013-05-07 Thread openstack-testing-bot
Title: saucy_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/saucy_havana_nova_trunk/38/
Project: saucy_havana_nova_trunk
Date of build: Tue, 07 May 2013 11:31:58 -0400
Build duration: 1 min 8 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
- Nova evacuate failed when VM is in SHUTOFF status (by yuyangbj):
  edit nova/tests/compute/test_compute.py, edit nova/compute/manager.py

Console Output
[...truncated 90 lines...]
Get:9 http://archive.ubuntu.com saucy/multiverse Sources [173 kB]
Get:10 http://archive.ubuntu.com saucy/main amd64 Packages [1183 kB]
Get:11 http://archive.ubuntu.com saucy/restricted amd64 Packages [9636 B]
Get:12 http://archive.ubuntu.com saucy/universe amd64 Packages [5503 kB]
Get:13 http://archive.ubuntu.com saucy/multiverse amd64 Packages [131 kB]
Hit http://archive.ubuntu.com saucy/main Translation-en
Hit http://archive.ubuntu.com saucy/multiverse Translation-en
Hit http://archive.ubuntu.com saucy/restricted Translation-en
Get:14 http://archive.ubuntu.com saucy/universe Translation-en [3809 kB]
Hit http://archive.ubuntu.com saucy-updates/main Sources
Hit http://archive.ubuntu.com saucy-updates/restricted Sources
Hit http://archive.ubuntu.com saucy-updates/universe Sources
Hit http://archive.ubuntu.com saucy-updates/multiverse Sources
Hit http://archive.ubuntu.com saucy-updates/main amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/restricted amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/universe amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/multiverse amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/main Translation-en
Hit http://archive.ubuntu.com saucy-updates/multiverse Translation-en
Hit http://archive.ubuntu.com saucy-updates/restricted Translation-en
Hit http://archive.ubuntu.com saucy-updates/universe Translation-en
Fetched 17.8 MB in 9s (1827 kB/s)
W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_saucy_main_binary-amd64_Packages  Hash Sum mismatch
W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_saucy_universe_binary-amd64_Packages  Hash Sum mismatch
E: Some index files failed to download. They have been ignored, or old ones used instead.
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-d33a837b-48d1-43dc-886d-465a7127620c', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-d33a837b-48d1-43dc-886d-465a7127620c', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
INFO:root:Complete command log:
INFO:root:Destroying schroot.
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-d33a837b-48d1-43dc-886d-465a7127620c', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-d33a837b-48d1-43dc-886d-465a7127620c', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #95

2013-05-07 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/95/
Project: precise_havana_quantum_trunk
Date of build: Tue, 07 May 2013 11:31:37 -0400
Build duration: 2 min 36 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- blueprint cisco-plugin-exception-handling (by leblancd):
  edit quantum/plugins/cisco/nexus/cisco_nexus_plugin_v2.py, edit quantum/plugins/cisco/nexus/cisco_nexus_network_driver_v2.py, edit quantum/plugins/cisco/network_plugin.py, edit quantum/plugins/cisco/db/nexus_models_v2.py, edit quantum/plugins/cisco/models/virt_phy_sw_v2.py, edit quantum/plugins/cisco/common/config.py, edit quantum/plugins/cisco/common/cisco_exceptions.py, edit quantum/tests/unit/cisco/test_network_plugin.py
- update-port error if port does not exist in nvp (by arosen):
  edit quantum/plugins/nicira/QuantumPlugin.py
- Log msg for load policy file only if the file is actually loaded (by salv.orlando):
  edit quantum/policy.py
- Calculate nicira plugin NAT rules order according to CIDR prefix (by salv.orlando):
  edit quantum/plugins/nicira/QuantumPlugin.py
- Update import of oslo's processutils. (by mikal):
  edit quantum/openstack/common/processutils.py
- Validate that netaddr does not receive a string with whitespace (by gkotton):
  edit quantum/api/v2/attributes.py, edit quantum/tests/unit/test_attributes.py
- Imported Translations from Transifex (by Jenkins):
  edit quantum/locale/ka_GE/LC_MESSAGES/quantum.po, edit quantum/locale/ko_KR/LC_MESSAGES/quantum.po, edit quantum/locale/quantum.pot, edit quantum/locale/ja/LC_MESSAGES/quantum.po

Console Output
[...truncated 3279 lines...]
dch -a [62017cd] Imported Translations from Transifex
dch -a [26b98b7] lbaas: check object state before update for pools, members, health monitors
dch -a [49c1c98] Metadata agent: reuse authentication info across eventlet threads
dch -a [11639a2] Imported Translations from Transifex
dch -a [35988f1] Make the 'admin' role configurable
dch -a [ee50162] Simplify delete_health_monitor() using cascades
dch -a [765baf8] Imported Translations from Transifex
dch -a [15a1445] Update latest OSLO code
dch -a [343ca18] Imported Translations from Transifex
dch -a [c117074] Remove locals() from strings substitutions
INFO:root:Destroying schroot.
dch -a [fb66e24] Imported Translations from Transifex
dch -a [e001a8d] Add string 'quantum'/ version to scope/tag in NVP
dch -a [5896322] Changed DHCPV6_PORT from 467 to 547, the correct port for DHCPv6.
dch -a [80ffdde] Imported Translations from Transifex
dch -a [929cbab] Imported Translations from Transifex
dch -a [2a24058] Imported Translations from Transifex
dch -a [b6f0f68] Imported Translations from Transifex
dch -a [1e1c513] Imported Translations from Transifex
dch -a [6bbcc38] Imported Translations from Transifex
dch -a [bd702cb] Imported Translations from Transifex
dch -a [a13295b] Enable automatic validation of many HACKING rules.
dch -a [91bed75] Ensure unit tests work with all interface types
dch -a [0446eac] Shorten the path of the nicira nvp plugin.
dch -a [8354133] Implement LB plugin delete_pool_health_monitor().
dch -a [147038a] Parallelize quantum unit testing:
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC quantum_2013.2+git201305071131~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A quantum_2013.2+git201305071131~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201305071131~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201305071131~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_cinder_trunk #42

2013-05-07 Thread openstack-testing-bot
Title: precise_havana_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_cinder_trunk/42/
Project: precise_havana_cinder_trunk
Date of build: Tue, 07 May 2013 12:31:36 -0400
Build duration: 1 min 16 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed. Score: 40

Changes
- Fix ability to add custom volume_backend_name (by walter.boring):
  edit cinder/tests/test_rbd.py, edit cinder/volume/drivers/rbd.py, edit cinder/volume/drivers/san/hp/hp_3par_common.py, edit cinder/volume/drivers/san/hp/hp_3par_iscsi.py, edit cinder/volume/drivers/xenapi/sm.py, edit cinder/volume/drivers/san/hp/hp_3par_fc.py, edit cinder/volume/drivers/nexenta/volume.py, edit cinder/volume/drivers/sheepdog.py, edit cinder/tests/test_hp3par.py, edit cinder/volume/drivers/coraid.py, edit cinder/volume/drivers/scality.py, edit cinder/volume/drivers/huawei/huawei_iscsi.py, edit cinder/volume/drivers/emc/emc_smis_iscsi.py

Console Output
[...truncated 1383 lines...]
Building using working tree
Building package in merge mode
Looking for a way to retrieve the upstream tarball
Using the upstream tarball that is present in /tmp/tmp7dV63h
bzr: ERROR: An error (1) occurred running quilt: Applying patch fix_cinder_dependencies.patch
patching file tools/pip-requires
Hunk #1 FAILED at 17.
1 out of 1 hunk FAILED -- rejects in file tools/pip-requires
Patch fix_cinder_dependencies.patch can be reverse-applied
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-75ce6004-0b60-49b9-bbd1-e2a6540c3858', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-75ce6004-0b60-49b9-bbd1-e2a6540c3858', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/cinder/havana /tmp/tmp7dV63h/cinder
mk-build-deps -i -r -t apt-get -y /tmp/tmp7dV63h/cinder/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log d0b5b93fe7611b35a8c6a9b09f46641f4fb859f4..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201305071231~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [7996aaa] Update import of oslo's processutils.
dch -a [54a2ee4] Fix ability to add custom volume_backend_name
dch -a [db991e6] Remove old_name from kwargs when using IET helper.
dch -a [7546682] Remove setuptools-git as run time dependency
dch -a [006d673] Fix LHN driver to allow backend name configuration
dch -a [0ee20a0] Fixes 3par driver methods that were double locking
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-75ce6004-0b60-49b9-bbd1-e2a6540c3858', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-75ce6004-0b60-49b9-bbd1-e2a6540c3858', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: saucy_havana_cinder_trunk #10

2013-05-07 Thread openstack-testing-bot
Title: saucy_havana_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/saucy_havana_cinder_trunk/10/
Project: saucy_havana_cinder_trunk
Date of build: Tue, 07 May 2013 12:31:37 -0400
Build duration: 1 min 44 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed. Score: 40

Changes
- Fix ability to add custom volume_backend_name (by walter.boring):
  edit cinder/volume/drivers/san/hp/hp_3par_common.py, edit cinder/volume/drivers/scality.py, edit cinder/volume/drivers/coraid.py, edit cinder/volume/drivers/xenapi/sm.py, edit cinder/volume/drivers/huawei/huawei_iscsi.py, edit cinder/volume/drivers/emc/emc_smis_iscsi.py, edit cinder/volume/drivers/nexenta/volume.py, edit cinder/volume/drivers/rbd.py, edit cinder/volume/drivers/san/hp/hp_3par_fc.py, edit cinder/tests/test_hp3par.py, edit cinder/tests/test_rbd.py, edit cinder/volume/drivers/san/hp/hp_3par_iscsi.py, edit cinder/volume/drivers/sheepdog.py

Console Output
[...truncated 2070 lines...]
Building using working tree
Building package in merge mode
Looking for a way to retrieve the upstream tarball
Using the upstream tarball that is present in /tmp/tmpryKEn7
bzr: ERROR: An error (1) occurred running quilt: Applying patch fix_cinder_dependencies.patch
patching file tools/pip-requires
Hunk #1 FAILED at 17.
1 out of 1 hunk FAILED -- rejects in file tools/pip-requires
Patch fix_cinder_dependencies.patch can be reverse-applied
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-0c51bac8-7805-4603-b4ec-afa724debaf9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-0c51bac8-7805-4603-b4ec-afa724debaf9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/cinder/havana /tmp/tmpryKEn7/cinder
mk-build-deps -i -r -t apt-get -y /tmp/tmpryKEn7/cinder/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log d0b5b93fe7611b35a8c6a9b09f46641f4fb859f4..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D saucy --newversion 1:2013.2+git201305071231~saucy-0ubuntu1 Automated Ubuntu testing build:
dch -a [7996aaa] Update import of oslo's processutils.
dch -a [54a2ee4] Fix ability to add custom volume_backend_name
dch -a [db991e6] Remove old_name from kwargs when using IET helper.
dch -a [7546682] Remove setuptools-git as run time dependency
dch -a [006d673] Fix LHN driver to allow backend name configuration
dch -a [0ee20a0] Fixes 3par driver methods that were double locking
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-0c51bac8-7805-4603-b4ec-afa724debaf9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-0c51bac8-7805-4603-b4ec-afa724debaf9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_horizon_trunk #30

2013-05-07 Thread openstack-testing-bot
Title: precise_havana_horizon_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_horizon_trunk/30/
Project: precise_havana_horizon_trunk
Date of build: Tue, 07 May 2013 16:31:37 -0400
Build duration: 2 min 41 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- Updated translations from Transifex (by jpichon):
  edit horizon/locale/es/LC_MESSAGES/django.po, edit openstack_dashboard/locale/en/LC_MESSAGES/django.po, edit horizon/locale/en/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/zh_TW/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/fr/LC_MESSAGES/django.mo, edit horizon/locale/pt_BR/LC_MESSAGES/django.po, edit openstack_dashboard/locale/zh_CN/LC_MESSAGES/django.po, edit horizon/locale/ja/LC_MESSAGES/django.po, edit openstack_dashboard/locale/bg_BG/LC_MESSAGES/django.mo, add horizon/locale/fi_FI/LC_MESSAGES/django.po, edit horizon/locale/nl_NL/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/ko_KR/LC_MESSAGES/django.po, edit horizon/locale/ja/LC_MESSAGES/django.mo, add openstack_dashboard/locale/ca/LC_MESSAGES/django.po, edit horizon/locale/zh_TW/LC_MESSAGES/django.mo, edit horizon/locale/ko_KR/LC_MESSAGES/django.po, edit horizon/locale/bg_BG/LC_MESSAGES/django.mo, add horizon/locale/cs/LC_MESSAGES/django.po, edit horizon/locale/ru/LC_MESSAGES/django.po, add openstack_dashboard/locale/ka_GE/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/ja/LC_MESSAGES/django.mo, add openstack_dashboard/locale/hu/LC_MESSAGES/django.po, add horizon/locale/cs/LC_MESSAGES/django.mo, edit horizon/locale/ru/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/ru/LC_MESSAGES/django.mo, add openstack_dashboard/locale/en_GB/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/es/LC_MESSAGES/django.po, add openstack_dashboard/locale/ca/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/fr/LC_MESSAGES/django.po, edit horizon/locale/nl_NL/LC_MESSAGES/django.po, edit horizon/locale/pt/LC_MESSAGES/django.po, add horizon/locale/zh_HK/LC_MESSAGES/django.mo, add openstack_dashboard/locale/fi_FI/LC_MESSAGES/django.po, add openstack_dashboard/locale/ka_GE/LC_MESSAGES/django.po, edit horizon/locale/en/LC_MESSAGES/django.po, add horizon/locale/hu/LC_MESSAGES/django.mo, add horizon/locale/hu/LC_MESSAGES/django.po, edit horizon/locale/zh_CN/LC_MESSAGES/django.po, edit openstack_dashboard/locale/pt_BR/LC_MESSAGES/django.po, add openstack_dashboard/locale/hu/LC_MESSAGES/django.mo, edit horizon/locale/pt/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/ko_KR/LC_MESSAGES/django.mo, add horizon/locale/ka_GE/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/pt_BR/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/ru/LC_MESSAGES/django.po, add horizon/locale/zh_HK/LC_MESSAGES/django.po, edit horizon/locale/es/LC_MESSAGES/django.mo, edit horizon/locale/pt_BR/LC_MESSAGES/django.mo, add openstack_dashboard/locale/en_GB/LC_MESSAGES/django.po, edit horizon/locale/bg_BG/LC_MESSAGES/django.po, edit openstack_dashboard/locale/zh_TW/LC_MESSAGES/django.po, edit openstack_dashboard/locale/pt/LC_MESSAGES/django.po, edit openstack_dashboard/locale/es/LC_MESSAGES/django.mo, edit horizon/locale/it/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/en/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/cs/LC_MESSAGES/django.mo, add horizon/locale/ca/LC_MESSAGES/django.mo, edit horizon/locale/ko_KR/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/zh_CN/LC_MESSAGES/django.mo, add horizon/locale/en_GB/LC_MESSAGES/django.po, edit horizon/locale/it/LC_MESSAGES/django.po, edit openstack_dashboard/locale/ja/LC_MESSAGES/django.po, add horizon/locale/fi_FI/LC_MESSAGES/django.mo, edit horizon/locale/zh_TW/LC_MESSAGES/django.po, add openstack_dashboard/locale/fi_FI/LC_MESSAGES/django.mo, add horizon/locale/ca/LC_MESSAGES/django.po, edit openstack_dashboard/locale/nl_NL/LC_MESSAGES/django.mo, edit horizon/locale/zh_CN/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/cs/LC_MESSAGES/django.po, add horizon/locale/ka_GE/LC_MESSAGES/django.po, edit openstack_dashboard/locale/nl_NL/LC_MESSAGES/django.po, edit openstack_dashboard/locale/bg_BG/LC_MESSAGES/django.po, add horizon/locale/en_GB/LC_MESSAGES/django.mo, edit openstack_dashboard/locale/pt/LC_MESSAGES/django.mo
- Fix cosmetic bug when displaying unnamed volumes (by matt.wagner):
  edit openstack_dashboard/dashboards/project/instances/workflows/create_instance.py

Console Output
[...truncated 1031 lines...]
Download error on http://pypi.python.org/simple/pbr/: timed out -- Some packages may not be found!
Couldn't find index page for 'pbr' (maybe misspelled?)
Download error on http://pypi.python.org/simple/: timed out -- Some packages may not be found!
No local packages or download links found for pbr
Traceback (most recent call last):
  File "setup.py", line 28, in
    d2to1=True)
  File "/usr/lib/python2.7/distutils/core.py", line 112, in setup
    _setup_distribution = dist = klass(attrs)
  File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 221, in __init__
    self.fetch_build_eggs(attrs.pop('setup_requires'))
  File 

[Openstack-ubuntu-testing-notifications] Build Still Failing: saucy_havana_ceilometer_trunk #25

2013-05-07 Thread openstack-testing-bot
Title: saucy_havana_ceilometer_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/saucy_havana_ceilometer_trunk/25/
Project: saucy_havana_ceilometer_trunk
Date of build: Tue, 07 May 2013 20:01:37 -0400
Build duration: 18 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- alarm: fix MongoDB alarm id (by julien):
  edit ceilometer/storage/impl_mongodb.py

Console Output
[...truncated 57 lines...]
Get:11 http://archive.ubuntu.com saucy/restricted amd64 Packages [9636 B]
Get:12 http://archive.ubuntu.com saucy/universe amd64 Packages [5503 kB]
Get:13 http://archive.ubuntu.com saucy/multiverse amd64 Packages [131 kB]
Get:14 http://archive.ubuntu.com saucy/main Translation-en [681 kB]
Get:15 http://archive.ubuntu.com saucy/multiverse Translation-en [99.9 kB]
Hit http://archive.ubuntu.com saucy/restricted Translation-en
Get:16 http://archive.ubuntu.com saucy/universe Translation-en [3809 kB]
Hit http://archive.ubuntu.com saucy-updates/main Sources
Hit http://archive.ubuntu.com saucy-updates/restricted Sources
Hit http://archive.ubuntu.com saucy-updates/universe Sources
Hit http://archive.ubuntu.com saucy-updates/multiverse Sources
Hit http://archive.ubuntu.com saucy-updates/main amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/restricted amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/universe amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/multiverse amd64 Packages
Hit http://archive.ubuntu.com saucy-updates/main Translation-en
Hit http://archive.ubuntu.com saucy-updates/multiverse Translation-en
Hit http://archive.ubuntu.com saucy-updates/restricted Translation-en
Hit http://archive.ubuntu.com saucy-updates/universe Translation-en
Fetched 18.6 MB in 13s (1430 kB/s)
W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_saucy_universe_source_Sources  Hash Sum mismatch
W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_saucy_main_binary-amd64_Packages  Hash Sum mismatch
W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_saucy_universe_binary-amd64_Packages  Hash Sum mismatch
E: Some index files failed to download. They have been ignored, or old ones used instead.
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-2b9ce241-4c42-43b8-b979-cc400601c8d8', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-2b9ce241-4c42-43b8-b979-cc400601c8d8', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
INFO:root:Complete command log:
INFO:root:Destroying schroot.
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-2b9ce241-4c42-43b8-b979-cc400601c8d8', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'saucy-amd64-2b9ce241-4c42-43b8-b979-cc400601c8d8', '-u', 'root', '--', 'apt-get', 'update']' returned non-zero exit status 100
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_ceilometer_trunk #39

2013-05-07 Thread openstack-testing-bot
Title: precise_havana_ceilometer_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_ceilometer_trunk/39/
Project: precise_havana_ceilometer_trunk
Date of build: Tue, 07 May 2013 20:01:36 -0400
Build duration: 1 min 14 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
- alarm: fix MongoDB alarm id (by julien):
  edit ceilometer/storage/impl_mongodb.py

Console Output
[...truncated 1296 lines...]
bzr branch lp:~openstack-ubuntu-testing/ceilometer/havana /tmp/tmpPcjRbg/ceilometer
mk-build-deps -i -r -t apt-get -y /tmp/tmpPcjRbg/ceilometer/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 6f7efd1efedbc4baa7c2263d1536a985f0895fc0..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201305072001~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [0fdf53d] alarm: fix MongoDB alarm id
dch -a [8629e09] Connect the Alarm API to the db
dch -a [896015c] Add the mongo implementation of alarms collection
dch -a [bcb8236] Update WSME dependency
dch -a [0c45387] Imported Translations from Transifex
dch -a [c1b7161] Add Alarm DB API and models
dch -a [9518813] Imported Translations from Transifex
dch -a [d764f8c] Remove "extras" again
dch -a [89ab2f8] add links to return values from API methods
dch -a [82ad299] Modify limitation on request version
dch -a [f90b36d] Doc improvements
dch -a [92905c9] Rename EventFilter to SampleFilter.
dch -a [39d9ca7] Fixes AttributeError of FloatingIPPollster
dch -a [0d5c271] Add just the most minimal alarm API
dch -a [5cb2f9c] Update oslo before bringing in exceptions
dch -a [4fb7650] Enumerate the meter type in the API Meter class
dch -a [ca971ff] Remove "extras" as it is not used
dch -a [6979b16] Adds examples of CLI and API queries to the V2 documentation.
dch -a [1828143] update the ceilometer.conf.sample
dch -a [8bcc377] Set hbase table_prefix default to None
dch -a [af2704e] glance/cinder/quantum counter units are not accurate/consistent
dch -a [6cb0eb9] Add some recommendations about database
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-10e6168d-464f-4e9a-87bd-8251feba31f1', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-10e6168d-464f-4e9a-87bd-8251feba31f1', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Failure: precise_havana_nova_trunk #122

2013-05-07 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/122/
Project: precise_havana_nova_trunk
Date of build: Tue, 07 May 2013 23:01:39 -0400
Build duration: 2 min 32 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
- Add an index to compute_node_stats (by bdelliott):
  add nova/db/sqlalchemy/migrate_repo/versions/178_add_index_to_compute_node_stats.py
- libvirt: ignore NOSTATE in resume_state_on_host_boot() method. (by yufang521247):
  edit nova/virt/libvirt/driver.py
- Transition from openstack.common.setup to pbr. (by msherborne):
  edit run_tests.sh, edit openstack-common.conf, edit tools/test-requires, edit setup.py, edit setup.cfg, delete nova/openstack/common/setup.py, edit tools/pip-requires
- Imported Translations from Transifex (by Jenkins):
  edit nova/locale/ru/LC_MESSAGES/nova.po, edit nova/locale/zh_CN/LC_MESSAGES/nova.po, edit nova/locale/de/LC_MESSAGES/nova.po, edit nova/locale/ja/LC_MESSAGES/nova.po, edit nova/locale/bs/LC_MESSAGES/nova.po, edit nova/locale/es/LC_MESSAGES/nova.po, edit nova/locale/tr/LC_MESSAGES/nova.po, edit nova/locale/nb/LC_MESSAGES/nova.po, edit nova/locale/tr_TR/LC_MESSAGES/nova.po, edit nova/locale/it/LC_MESSAGES/nova.po, edit nova/locale/cs/LC_MESSAGES/nova.po, edit nova/locale/fr/LC_MESSAGES/nova.po, edit nova/locale/ko/LC_MESSAGES/nova.po, edit nova/locale/pt_BR/LC_MESSAGES/nova.po, edit nova/locale/ko_KR/LC_MESSAGES/nova.po, edit nova/locale/da/LC_MESSAGES/nova.po, edit nova/locale/uk/LC_MESSAGES/nova.po, edit nova/locale/en_US/LC_MESSAGES/nova.po, edit nova/locale/tl/LC_MESSAGES/nova.po, edit nova/locale/en_GB/LC_MESSAGES/nova.po, edit nova/locale/en_AU/LC_MESSAGES/nova.po, edit nova/locale/zh_TW/LC_MESSAGES/nova.po, edit nova/locale/nova.pot

Console Output
[...truncated 793 lines...]
Download error on http://pypi.python.org/simple/pbr/: timed out -- Some packages may not be found!
Couldn't find index page for 'pbr' (maybe misspelled?)
Download error on http://pypi.python.org/simple/: timed out -- Some packages may not be found!
No local packages or download links found for pbr
Traceback (most recent call last):
  File "setup.py", line 21, in
    d2to1=True)
  File "/usr/lib/python2.7/distutils/core.py", line 112, in setup
    _setup_distribution = dist = klass(attrs)
  File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 221, in __init__
    self.fetch_build_eggs(attrs.pop('setup_requires'))
  File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 245, in fetch_build_eggs
    parse_requirements(requires), installer=self.fetch_build_egg
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 576, in resolve
    dist = best[req.key] = env.best_match(req, self, installer)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 821, in best_match
    return self.obtain(req, installer) # try and download/install
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 833, in obtain
    return installer(requirement)
  File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 294, in fetch_build_egg
    return cmd.easy_install(req)
  File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 602, in easy_install
    raise DistutilsError(msg)
distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('pbr')
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-5380e62c-a356-4459-a7dd-9b08ada64071', '-u', 'jenkins', '--', 'python', 'setup.py', 'sdist']' returned non-zero exit status 1
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-5380e62c-a356-4459-a7dd-9b08ada64071', '-u', 'jenkins', '--', 'python', 'setup.py', 'sdist']' returned non-zero exit status 1
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/nova/havana /tmp/tmpniGqWO/nova
mk-build-deps -i -r -t apt-get -y /tmp/tmpniGqWO/nova/debian/control
python setup.py sdist
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-5380e62c-a356-4459-a7dd-9b08ada64071', '-u', 'jenkins', '--', 'python', 'setup.py', 'sdist']' returned non-zero exit status 1
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-5380e62c-a356-4459-a7dd-9b08ada64071', '-u', 'jenkins', '--', 'python', 'setup.py', 'sdist']' returned non-zero exit status 1
Build 
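
This failure happens during "python setup.py sdist": the change list above includes "Transition from openstack.common.setup to pbr.", so setuptools now tries to fetch pbr at build time via easy_install, and the downloads from pypi.python.org time out on the builder. A minimal sketch of the setup.py pattern implied by the traceback; pre-installing python-pbr and python-d2to1 in the schroot (or pointing at a local mirror) avoids the download step, since setup_requires entries that are already importable are not fetched:

    # Sketch of the setup_requires pattern that triggers the PyPI download.
    from setuptools import setup

    setup(
        setup_requires=['pbr', 'd2to1'],  # fetched with easy_install if absent
        d2to1=True,                       # defers the remaining metadata to setup.cfg
    )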

[Openstack-ubuntu-testing-notifications] Jenkins build is back to normal : cloud-archive_grizzly_version-drift #18

2013-05-07 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/18/

