Re: [Openstack] Initial quantum network state broken

2013-02-17 Thread Greg Chavez
I'm replying to my own message because I'm desperate.  My network situation
is a mess.  I need to add this as well: my bridge interfaces are all down.
 On my compute node:

root@kvm-cs-sn-10i:/var/lib/nova/instances/instance-0005# ip addr show | grep ^[0-9]
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
9: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
10: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
13: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
14: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
15: qbre56c5d9e-b6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
16: qvoe56c5d9e-b6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
17: qvbe56c5d9e-b6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbre56c5d9e-b6 state UP qlen 1000
19: qbrb805a9c9-11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
20: qvob805a9c9-11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
21: qvbb805a9c9-11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrb805a9c9-11 state UP qlen 1000
34: qbr2b23c51f-02: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
35: qvo2b23c51f-02: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
36: qvb2b23c51f-02: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr2b23c51f-02 state UP qlen 1000
37: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr2b23c51f-02 state UNKNOWN qlen 500

And on my network node:

root@knet-cs-gen-01i:~# ip addr show | grep ^[0-9]
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
7: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
8: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
22: phy-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
23: int-br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

I gave br-ex an IP and brought it up manually.  I assume this is correct, but I
honestly don't know.
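
For reference, a minimal sketch (not from the original message) of how the OVS
bridges might be brought up and checked on this Folsom layout; bridge and NIC
names are taken from the output above, and whether eth1 actually belongs on
br-eth1 depends on the install guide's wiring:

    ovs-vsctl show                      # confirm br-int and br-eth1 exist and see what ports they carry
    ip link set br-int up               # the integration bridge carries no IP but still needs to be UP
    ip link set br-eth1 up
    ovs-vsctl list-ports br-eth1
    ovs-vsctl --may-exist add-port br-eth1 eth1   # only if attaching the data NIC was missed (assumption)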

Thanks.




On Fri, Feb 15, 2013 at 6:54 PM, Greg Chavez greg.cha...@gmail.com wrote:


 Sigh.  So I abandoned RHEL 6.3, rekicked my systems and set up the
 scale-ready installation described in these instructions:


 https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst

 Basically:

 (o) controller node on a mgmt and public net
 (o) network node (quantum and openvs) on a mgmt, net-config, and public net
 (o) compute node is on a mgmt and net-config net

 Took me just over an hour and ran into only a few easily-fixed speed
 bumps.  But the VM networks are totally non-functioning.  VMs launch but no
 network traffic can go in or out.

 I'm particularly befuddled by these problems:

 ( 1 ) This error in nova-compute:

 ERROR nova.network.quantumv2 [-] _get_auth_token() failed
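
 That error usually points at the quantum_* authentication settings in nova.conf
 on the compute node. A sketch of the block the Folsom guide expects (all values
 below are placeholders, not taken from this thread):

     network_api_class=nova.network.quantumv2.api.API
     quantum_url=http://<controller-mgmt-ip>:9696
     quantum_auth_strategy=keystone
     quantum_admin_tenant_name=service
     quantum_admin_username=quantum
     quantum_admin_password=<service-password>
     quantum_admin_auth_url=http://<controller-mgmt-ip>:35357/v2.0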

 ( 2 ) No NAT rules on the compute node, which probably explains why the
 VMs complain about not finding a network or not being able to get metadata
 from 169.254.169.254.

 root@kvm-cs-sn-10i:~# iptables -t nat -S
 -P PREROUTING ACCEPT
 -P INPUT ACCEPT
 -P OUTPUT ACCEPT
 -P POSTROUTING ACCEPT
 -N nova-api-metadat-OUTPUT
 -N nova-api-metadat-POSTROUTING
 -N nova-api-metadat-PREROUTING
 -N nova-api-metadat-float-snat
 -N nova-api-metadat-snat
 -N nova-compute-OUTPUT
 -N nova-compute-POSTROUTING
 -N nova-compute-PREROUTING
 -N nova-compute-float-snat
 -N nova-compute-snat
 -N nova-postrouting-bottom
 -A PREROUTING -j nova-api-metadat-PREROUTING
 -A PREROUTING -j nova-compute-PREROUTING
 -A OUTPUT -j nova-api-metadat-OUTPUT
 -A OUTPUT -j nova-compute-OUTPUT
 -A POSTROUTING -j nova-api-metadat-POSTROUTING
 -A POSTROUTING -j nova-compute-POSTROUTING
 -A POSTROUTING -j nova-postrouting-bottom
 -A nova-api-metadat-snat -j nova-api-metadat-float-snat
 -A nova-compute-snat -j nova-compute-float-snat
 -A nova-postrouting-bottom -j nova-api-metadat-snat
 -A nova-postrouting-bottom -j nova-compute-snat

 ( 3 ) And lastly, no default secgroup rules, whose function governs... what
 exactly?  Connections 
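
 As a side note (not part of the original message), the default security group
 ships with no rules, so ICMP and SSH to instances are blocked until rules such
 as the following are added under the tenant that owns the instances:

     nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
     nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
     nova secgroup-list-rules default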

Re: [Openstack] Can't ping private or floating IP

2013-02-17 Thread Jean-Baptiste RANSY
ping

Are you on IRC ?

JB


On 02/17/2013 04:07 AM, Jean-Baptiste RANSY wrote:
 Add Cirros Image to Glance :)

 Username: cirros
 Password: cubswin:)

 http://docs.openstack.org/trunk/openstack-compute/install/apt/content/uploading-to-glance.html
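
 A possible upload sequence for that CirrOS image (the image version, file name
 and download URL are assumptions, adjust to whatever was actually fetched):

     wget http://download.cirros-cloud.net/0.3.0/cirros-0.3.0-x86_64-disk.img
     glance image-create --name cirros-0.3.0-x86_64 --disk-format qcow2 \
       --container-format bare --is-public True < cirros-0.3.0-x86_64-disk.img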

 To reach your VM, it's a bit dirty but you can:
 - put your computer in the same subnet as your controller (192.168.2.0/24)
 - then add a static route to the subnet of your VM (ip route add
 10.5.5.0/24 via 192.168.2.151)
 (192.168.2.151 is the quantum gateway)

 I'm going to sleep, we will continue tomorrow.

 JB

 PS : You also should get some sleep :)


 On 02/17/2013 03:53 AM, Chathura M. Sarathchandra Magurawalage wrote:
 Oh, that's weird.

 I still get this error. Couldn't this be because I cannot ping the VM
 in the first place? As far as I know, metadata takes care of
 SSH keys. But what if you can't reach the VM in the first place?

 no instance data found in start-local

 ci-info: lo: 1 127.0.0.1   255.0.0.0   .

 ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

 ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0   UG

 ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0   U

 cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 seconds

 2013-02-17 02:48:25,840 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
 url error [timed out]

 2013-02-17 02:49:16,893 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: 
 url error [timed out]

 2013-02-17 02:49:34,912 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: 
 url error [timed out]

 2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md after 
 120 seconds



 no instance data found in start

 Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

  * Starting AppArmor profiles   [ OK ]
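
 One way to check the metadata path on the network node, assuming the
 namespace-based l3-agent from the Folsom guide is in use (the qrouter ID below
 is a placeholder):

     ip netns list
     ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep 169.254.169.254

 There should be a NAT rule redirecting 169.254.169.254:80 to the nova metadata
 API; if it is missing, the l3-agent metadata settings and the nova-api metadata
 service are the first things to look at.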


 On 17 February 2013 02:41, Jean-Baptiste RANSY
 jean-baptiste.ra...@alyseo.com wrote:

 For me, it's normal that you are not able to curl 169.254.169.254
 from your compute and controller nodes: same thing on my side,
 but my VMs get their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:
 root@computernode:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254... 

 root@controller:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254... 


 root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target                    prot opt in  out  source     destination
 59493   22M quantum-l3-agent-INPUT    all  --  *   *    0.0.0.0/0  0.0.0.0/0
 59493   22M nova-api-INPUT            all  --  *   *    0.0.0.0/0  0.0.0.0/0
   484 73533 ACCEPT                    47   --  *   *    0.0.0.0/0  0.0.0.0/0

 Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target                    prot opt in  out  source     destination
   707 47819 quantum-filter-top        all  --  *   *    0.0.0.0/0  0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *   *    0.0.0.0/0  0.0.0.0/0
   707 47819 nova-filter-top           all  --  *   *    0.0.0.0/0  0.0.0.0/0
   707 47819 nova-api-FORWARD          all  --  *   *    0.0.0.0/0  0.0.0.0/0

 Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target                    prot opt in  out  source     destination
 56022   22M quantum-filter-top        all  --  *   *    0.0.0.0/0  0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT   all  --  *   *    0.0.0.0/0  0.0.0.0/0
 56022   22M nova-filter-top           all  --  *   *    0.0.0.0/0  0.0.0.0/0
 56022   22M nova-api-OUTPUT           all  --  *   *    0.0.0.0/0  0.0.0.0/0

 Chain nova-api-FORWARD (1 references)
  pkts bytes target prot opt in out source destination

 

[Openstack] Is there a command to list all instances by all tenants in nova Essex installation?

2013-02-17 Thread Prakashan Korambath

Hi,

Is there a command for system admins to list all instances running on an
Essex nova installation from the control node? Equivalent of nova list,
without giving any os_username and os_password.


Thanks.

Prakashan

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Is there a command to list all instances by all tenants in nova Essex installation?

2013-02-17 Thread Scott Devoid
Hi Prakashan,

If you are on a machine running the nova-api, nova-manage vm list
will show you all of the instances (and what nova-compute host they
are placed on).

From outside you can use nova list --all_tenants or nova list
--all-tenants depending on what exact python-novaclient version you
have installed. For this you will need to set your OS_PASSWORD,
OS_USERNAME and OS_TENANT_NAME to an account that has the admin role
in each tenant.
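
Concretely, a sketch of what that looks like (credentials and endpoint are
placeholders):

    export OS_USERNAME=admin
    export OS_PASSWORD=secret
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://<keystone-host>:5000/v2.0/
    nova list --all-tenants          # or --all_tenants on older python-novaclient
    # or, on the nova-api host itself, without any credentials:
    nova-manage vm list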

~ Scott

On Sun, Feb 17, 2013 at 5:18 PM, Prakashan Korambath p...@ats.ucla.edu wrote:
 Hi,

 Is there a command for system admins to list all instances running on an Essex
 nova installation from the control node? Equivalent of nova list, without
 giving any os_username and os_password.

 Thanks.

 Prakashan

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Horizon Keystone Nova miscommunication

2013-02-17 Thread Gabriel Hurley
That particular "endpoint not found" log message is a red herring. It's been
removed in keystoneclient trunk because it was logging an *expected* error.
There isn't supposed to be a service catalog available at the point where it
logged that message, and it led to confusion just like this.

However, as for your actual problem, I've got a couple broad ideas:

Since you're able to log in, that means Keystone is working. And since you're
not seeing any error messages indicating that the data couldn't be retrieved
from Nova, that means Nova is working and truly believes that the tenant
you're requesting data for has no instances, etc.

What that sounds like to me is that you're creating things in Nova with one 
tenant, and then looking for them in Horizon with a different tenant. The 
easiest way to check for that would be to log into horizon with a user who has 
the admin role on a project, navigate to the Instances panel in the Admin 
dashboard, and see if you can see the missing instances there. The admin 
instances panel shows *all* running instances across all tenants, so if the 
instances exist and Nova is returning data then they'll show up there.
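
A quick way to check the same thing from the command line (not from the
original reply; credentials and tenant names are placeholders, and the account
needs the admin role):

    nova --os-username admin --os-password secret --os-tenant-name tenantA list
    nova --os-username admin --os-password secret --os-tenant-name tenantB list
    nova list --all-tenants    # admin-only: every instance regardless of tenant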

The other (much less likely) possibility is that you somehow have two Nova 
services running which are unaware of each other, and you're managing to talk 
to different ones via the client vs. Horizon. I have to think you'd know if you 
were running two Novas, however.

The last option would be that Keystone's service catalog is misconfigured and 
you're not actually communicating with Nova, but if that were the case you 
should be seeing errors all over the place, so I find that highly unlikely.

Hope something there helps.


-  Gabriel

From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
Behalf Of Greg Chavez
Sent: Saturday, February 16, 2013 11:54 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] Horizon Keystone Nova miscommunication


It seems that nova and horizon are not communicating on my controller node.
Access and security objects created with nova are not seen by Horizon and vice
versa. This includes key pairs and secgroup rules. For example, if I create a
keypair with the nova client, it isn't visible in Horizon, and if I create one
in Horizon it is not visible via the nova client.

Possibly related: VMs that I create, whether via the nova client or Horizon,
are not shown when I run nova list. The nova-api.log shows a successful
servers-detail query, but it comes back empty.

Also possibly related: Although I have all my services and endpoints configured 
correctly, I can't get individual endpoint detail with endpoint-get.  What's 
more, I see this error in Horizon's error log:

[Sun Feb 17 07:02:50 2013] [error] EndpointNotFound: Endpoint not found.
[Sun Feb 17 07:06:55 2013] [error] unable to retrieve service catalog with token

This matches what I get when I run:

$ keystone endpoint-get --service nova
Endpoint not found.

But that can't be right, because endpoint-list shows all six endpoints I
created and all the information seems correct in the database:


mysql> select * from endpoint where service_id = '9e40d355b49342f8ac6947c497df76d2'\G
*** 1. row ***
        id: 922baafde75f4cffa7dbe7f57cddb951
    region: RegionOne
service_id: 9e40d355b49342f8ac6947c497df76d2
     extra: {"adminurl": "http://192.168.241.100:35357/v2.0", "internalurl": "http://192.168.241.100:5000/v2.0", "publicurl": "http://10.21.164.75:5000/v2.0"}
1 row in set (0.00 sec)

mysql> select * from service where id = '9e40d355b49342f8ac6947c497df76d2'\G
*** 1. row ***
   id: 9e40d355b49342f8ac6947c497df76d2
 type: identity
extra: {"description": "OpenStack Identity", "name": "keystone"}
1 row in set (0.00 sec)
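
One hedged way to see exactly what service catalog Keystone hands back for
these endpoints (credentials are placeholders; the URL is the Keystone
internalurl from the row above):

    curl -s -X POST http://192.168.241.100:5000/v2.0/tokens \
      -H 'Content-Type: application/json' \
      -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "secret"}}}' \
      | python -m json.tool | grep -A3 '"type": "compute"'

If no compute entry comes back in the catalog, Horizon and the nova client have
nothing to talk to even though endpoint-list looks right.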

Please please please help me.  My boss is giving my project the ax on Monday if 
I can't get this to work.

--
\*..+.-
--Greg Chavez
+//..;};
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Is there a command to list all instances by all tenants in nova Essex installation?

2013-02-17 Thread Prakashan Korambath

Hi Scott,

Thank you very much.  Yes, that was the command I was 
looking for.  It works!


Prakashan


On 02/17/2013 04:05 PM, Scott Devoid wrote:

Hi Prakashan,

If you are on a machine running the nova-api, nova-manage vm list
will show you all of the instances (and what nova-compute host they
are placed on).

 From outside you can use nova list --all_tenants or nova list
--all-tenants depending on what exact python-novaclient version you
have installed. For this you will need to set your OS_PASSWORD,
OS_USERNAME and OS_TENANT_NAME to an account that has the admin role
in each tenant.

~ Scott

On Sun, Feb 17, 2013 at 5:18 PM, Prakashan Korambath p...@ats.ucla.edu wrote:

Hi,

Is there a command for system admins to list all instances running on an Essex
nova installation from the control node? Equivalent of nova list, without
giving any os_username and os_password.

Thanks.

Prakashan

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] cannot restart the instance

2013-02-17 Thread gtt116
Hi, 小盆儿

Which version of OpenStack are you using? I think you need this patch
(https://review.openstack.org/#/c/14496/) to get the real exception.

On 2012-12-31 16:39, 小盆儿 wrote:
 Hey guys~

 I just shut off one of my instances, but when I try to start it,
 I get the errors below:

 2012-12-31 16:31:12 ERROR nova.openstack.common.rpc.amqp [-] Exception
 during message handling
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp Traceback
 (most recent call last):
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py,
 line 275, in _process_data
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp rval =
 self.proxy.dispatch(ctxt, version, method, **args)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py,
 line 145, in dispatch
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp return
 getattr(proxyobj, method)(ctxt, **kwargs)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/exception.py, line 117, in wrapped
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 temp_level, payload)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/exception.py, line 92, in wrapped
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp return
 f(*args, **kw)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 181,
 in decorated_function
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp pass
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 167,
 in decorated_function
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp return
 function(self, context, *args, **kwargs)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 202,
 in decorated_function
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 kwargs['instance']['uuid'], e, sys.exc_info())
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 196,
 in decorated_function
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp return
 function(self, context, *args, **kwargs)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 953,
 in start_instance
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 self.power_on_instance(context, instance)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/exception.py, line 117, in wrapped
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 temp_level, payload)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/exception.py, line 92, in wrapped
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp return
 f(*args, **kw)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 181,
 in decorated_function
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp pass
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 167,
 in decorated_function
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp return
 function(self, context, *args, **kwargs)
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 202,
 in decorated_function
 2012-12-31 16:31:12 TRACE nova.openstack.common.rpc.amqp
 kwargs['instance']['uuid'], e, sys.exc_info())
 2012-12-31 16:31:12 TRACE 

[Openstack] How to ping/ssh instance outside openstack server?

2013-02-17 Thread jeffty
Hello,

I installed OpenStack on my PC. The IP of my router is 192.168.1.1 and
the PC's NIC has a static IP, 192.168.1.2.

Then I created images and instances and assigned private IPs as the manual
on the website states, e.g. 192.168.4.40.

I can ping/ssh the instance from the PC (192.168.1.10), but fail to do so
from my laptop (IP: 192.168.1.3).

How can I access the instance then? Do I need to configure something for that?

And I found that if I run

sudo kvm -m xxx file=linux.img ... -vnc :0

the instance can access the Internet. But if it is started automatically by
Nova, I cannot connect to the Internet when I SSH/VNC onto it.
Thanks a lot.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to ping/ssh instance outside openstack server?

2013-02-17 Thread Aaron Rosen
The issue is your laptop doesn't have a route to that network.  Try running
this on your laptop to see if this makes it work:

sudo ip route add 192.168.4.0/24  dev wlan0 # replace wlan0 with the
correct interface

If that doesn't work perhaps give this one a shot:

route add -net 192.168.4.0 netmask 255.255.255.0 gw 192.168.1.10 dev wlan0
# replace wlan0 with correct interface

The correct solution though is to add a static route for 192.168.4.0/24 to
192.168.4.1 on your router.

Aaron

On Sun, Feb 17, 2013 at 9:25 PM, jeffty wantwater...@gmail.com wrote:

 Hello,

 I installed openstack in my PC. The IP of my router is 192.168.1.1 and
 the PC's nic IP is static - 192.168.1.2.

 Then I created images and instances, assigned private IP as manual
 states in website. e.g. 192.168.4.40.

 I can ping/ssh instance in PC 192.168.1.10. But fail to do that in my
 laptop(IP: 192.168.1.3).

 How to access instance then? Do I need to configure sth for that?

 And I found that if I run

 sudo kvm -m xxx file=linux.img ... -vnc :0

 The instance can access Internet. But if it's running automatically by
 Nova, I cannot connect to Internet when SSH/VNC onto it.

 Thanks a lot.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] question regarding nova-compute

2013-02-17 Thread Aru s
Hi Tam,

I am new to OpenStack. I have set up a two-node OpenStack installation in my
lab. I have some questions on the networking part, mentioned below. Please
help.

I am using Ubuntu Server 12.10.
I am using the FlatManager option, as both of my nodes have only a single NIC.
The first node is running all the services except nova-compute.
Only nova-compute is running on the second node.
I have created br100 on both nodes and bridged it to em1.
I have an external DHCP server running on the same network.
All went well.

The problem I can see is that my VMs are getting IPs from the external DHCP
server, but the Horizon UI is showing a different IP. Please let me know if
any more info is required.
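
With FlatManager, Nova allocates the fixed IP itself from the range given to
nova-manage network create, and that allocation is what Horizon displays; an
external DHCP server hands out leases Nova never records, which would explain
the mismatch. A sketch of the relevant nova.conf lines, assuming the
single-NIC flat setup described above:

    network_manager=nova.network.manager.FlatManager   # or FlatDHCPManager to let Nova run its own dnsmasq
    flat_network_bridge=br100
    # The IP Horizon shows comes from Nova's own fixed-IP range, not from the external DHCP server.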

Regards,
Arumon
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-17 Thread Sebastien HAN
Hi,

Good to hear that you finally managed to get it working. Usually the
postrouting rule is more for clients that need to be routed.
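
For the record, the usual LVS-NAT masquerade rule on the director, scoped to
the real-server subnet used in this thread (the outbound interface name is an
assumption):

    iptables -t nat -A POSTROUTING -s 10.21.0.0/16 -o eth1 -j MASQUERADE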

Cheers! 

On 16 févr. 2013, at 03:06, Samuel Winchenbach swinc...@gmail.com wrote:

 Well I got it to work.  I was being stupid, and forgot to change over the 
 endpoints in keystone.
 
 One thing I find interesting is that if I call keystone user-list from 
 test1 it _always_ sends the request to test2 and vice versa.
 
 Also I did not need to add the POSTROUTING rule... I am not sure why.
 
 
 On Fri, Feb 15, 2013 at 3:44 PM, Samuel Winchenbach swinc...@gmail.com 
 wrote:
 Hrmmm it isn't going so well:
 
 root@test1# ip a s dev eth0
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:25:90:10:00:78 brd ff:ff:ff:ff:ff:ff
     inet 10.21.0.1/16 brd 10.21.255.255 scope global eth0
     inet 10.21.1.1/16 brd 10.21.255.255 scope global secondary eth0
     inet 10.21.21.1/16 scope global secondary eth0
     inet6 fe80::225:90ff:fe10:78/64 scope link
        valid_lft forever preferred_lft forever
 
 
 root@test1# ipvsadm -L -n
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   - RemoteAddress:Port   Forward Weight ActiveConn InActConn
 TCP  10.21.21.1:5000 wlc persistent 600
   - 10.21.0.1:5000   Masq1000  1 
   - 10.21.0.2:5000   Masq1000  0 
 TCP  10.21.21.1:35357 wlc persistent 600
   - 10.21.0.1:35357  Masq1000  0 
   - 10.21.0.2:35357  Masq1000  0
 
 root@test1# iptables -L -v -tnat
 Chain PREROUTING (policy ACCEPT 283 packets, 24902 bytes)
  pkts bytes target prot opt in out source   
 destination 
 
 Chain INPUT (policy ACCEPT 253 packets, 15256 bytes)
  pkts bytes target prot opt in out source   
 destination 
 
 Chain OUTPUT (policy ACCEPT 509 packets, 37182 bytes)
  pkts bytes target prot opt in out source   
 destination 
 
 Chain POSTROUTING (policy ACCEPT 196 packets, 12010 bytes)
  pkts bytes target prot opt in out source   
 destination 
   277 16700 MASQUERADE  all  --  anyeth0anywhere anywhere
 
 root@test1:~# export OS_AUTH_URL=http://10.21.21.1:5000/v2.0/;
 root@test1:~# keystone user-list
 No handlers could be found for logger keystoneclient.client
 Unable to communicate with identity service: [Errno 113] No route to host. 
 (HTTP 400)
 
 
 I still have some debugging to do with tcpdump, but I thought I would post 
 my initial results.
 
 
 On Fri, Feb 15, 2013 at 2:56 PM, Sébastien Han han.sebast...@gmail.com 
 wrote:
 Well if you follow my article, you will get LVS-NAT running. It's fairly 
 easy, no funky stuff. Yes you will probably need the postrouting rule, as 
 usual :). Let me know how it goes ;)
 
 --
 Regards,
 Sébastien Han.
 
 
 On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach swinc...@gmail.com 
 wrote:
 I didn't give NAT a shot because it didn't seem as well documented.
 
 I will give NAT a shot.  Will I need to enable iptables and add a rule
 to the nat table?  None of the documentation mentioned that, but every
 time I have ever done NAT I had to set up a rule like... iptables -t nat -A
 POSTROUTING -o eth0 -j MASQUERADE
 
 Thanks for helping me with this.
 
 
 On Fri, Feb 15, 2013 at 2:07 PM, Sébastien Han han.sebast...@gmail.com 
 wrote:
 Ok but why direct routing instead of NAT? If the public IPs are _only_
 on LVS there is no point in using LVS-DR.
 
 LVS has the public IPs and redirects to the private IPs, this _must_ work.
 
 Did you try NAT? Or at least can you give it a shot?
 --
 Regards,
 Sébastien Han.
 
 
 On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach swinc...@gmail.com 
 wrote:
  Sure...  I have undone these settings but I saved a copy:
 
  two hosts:
  test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
  test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24
 
  VIP: 10.21.21.1  (just for testing; later I would add a 130.x.x.x/24 VIP for
  public APIs)
 
  keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2
 
 
 
  in /etc/sysctl.conf:
 net.ipv4.conf.all.arp_ignore = 1
 net.ipv4.conf.eth0.arp_ignore = 1
 net.ipv4.conf.all.arp_announce = 2
 net.ipv4.conf.eth0.arp_announce = 2
 
  root# sysctl -p
 
  in the ldirectord configuration:

  checktimeout=3
  checkinterval=5
  autoreload=yes
  logfile=/var/log/ldirectord.log
  quiescent=no

  virtual=10.21.21.1:5000
      real=10.21.0.1:5000 gate
      real=10.21.0.2:5000 gate
      scheduler=wrr
      protocol=tcp
      checktype=connect
      checkport=5000

  virtual=10.21.21.1:35357
      real=10.21.0.1:35357 gate
      real=10.21.0.2:35357 gate
      scheduler=wrr
      protocol=tcp
      checktype=connect

[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_keystone_trunk #139

2013-02-17 Thread openstack-testing-bot
Title: precise_grizzly_keystone_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/139/Project:precise_grizzly_keystone_trunkDate of build:Sun, 17 Feb 2013 03:31:08 -0500Build duration:5 min 0 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 3 out of the last 5 builds failed.40ChangesUpdate to oslo version code.by mordrededitdoc/source/conf.pyeditsetup.pyeditkeystone/openstack/common/setup.pyaddkeystone/openstack/common/version.pyeditopenstack-common.confConsole Output[...truncated 8711 lines...]Distribution: precise-grizzlyFail-Stage: buildHost Architecture: amd64Install-Time: 42Job: keystone_2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1.dscMachine Architecture: amd64Package: keystonePackage-Time: 202Source-Version: 2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1Space: 13900Status: attemptedVersion: 2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1Finished at 20130217-0336Build needed 00:03:22, 13900k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/keystone/grizzly /tmp/tmpJzhj1f/keystonemk-build-deps -i -r -t apt-get -y /tmp/tmpJzhj1f/keystone/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/keystone/precise-grizzly --forcedch -b -D precise --newversion 2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC keystone_2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A keystone_2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1.a138.g5a8682d+git201302170331~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_keystone_trunk #149

2013-02-17 Thread openstack-testing-bot
Title: raring_grizzly_keystone_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_keystone_trunk/149/Project:raring_grizzly_keystone_trunkDate of build:Sun, 17 Feb 2013 03:31:09 -0500Build duration:6 min 15 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 4 out of the last 5 builds failed.20ChangesUpdate to oslo version code.by mordrededitkeystone/openstack/common/setup.pyeditdoc/source/conf.pyaddkeystone/openstack/common/version.pyeditopenstack-common.confeditsetup.pyConsole Output[...truncated 9874 lines...]Distribution: raring-grizzlyFail-Stage: buildHost Architecture: amd64Install-Time: 32Job: keystone_2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1.dscMachine Architecture: amd64Package: keystonePackage-Time: 198Source-Version: 2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1Space: 13916Status: attemptedVersion: 2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1Finished at 20130217-0337Build needed 00:03:18, 13916k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'keystone_2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'keystone_2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/keystone/grizzly /tmp/tmp9JCvSI/keystonemk-build-deps -i -r -t apt-get -y /tmp/tmp9JCvSI/keystone/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/keystone/raring-grizzly --forcedch -b -D raring --newversion 2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC keystone_2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1_source.changessbuild -d raring-grizzly -n -A keystone_2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'keystone_2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'keystone_2013.1.a138.g5a8682d+git201302170331~raring-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_cinder_trunk #152

2013-02-17 Thread openstack-testing-bot
Title: precise_grizzly_cinder_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/152/Project:precise_grizzly_cinder_trunkDate of build:Sun, 17 Feb 2013 12:31:08 -0500Build duration:3 min 8 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesAdd get_cluster_stats to SolidFire driverby john.griffitheditcinder/tests/test_drivers_compatibility.pyeditcinder/volume/drivers/solidfire.pyeditcinder/tests/test_solidfire.pyConsole Output[...truncated 5398 lines...]Status: attemptedVersion: 2013.1.a146.g71576bf+git201302171231~precise-0ubuntu1Finished at 20130217-1234Build needed 00:01:34, 24284k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a146.g71576bf+git201302171231~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a146.g71576bf+git201302171231~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmpytrUap/cindermk-build-deps -i -r -t apt-get -y /tmp/tmpytrUap/cinder/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 90971cd1026728d3061e13843d117e549c0be67c..HEAD --no-merges --pretty=format:[%h] %sbzr merge lp:~openstack-ubuntu-testing/cinder/precise-grizzly --forcedch -b -D precise --newversion 2013.1.a146.g71576bf+git201302171231~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [bb923d5] Add get_cluster_stats to SolidFire driverdch -a [695e3a8] Adding support for Coraid AoE SANs Appliances.dch -a [f06f5e1] Add an update option to run_tests.shdch -a [edbfa6a] Update EMC SMI-S Driverdch -a [1fc5575] Add LIO iSCSI backend support using python-rtslibdch -a [06b26a8] Add GlusterFS volume driverdch -a [abd3475] Create a RemoteFsDriver classdch -a [9627e6d] Add an ID to temporary volume snapshot objectdch -a [d17cc23] Allow create_volume() to retry when exception happeneddch -a [029435c] rbd: update volume<->image copyingdebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC cinder_2013.1.a146.g71576bf+git201302171231~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A cinder_2013.1.a146.g71576bf+git201302171231~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a146.g71576bf+git201302171231~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a146.g71576bf+git201302171231~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_cinder_trunk #154

2013-02-17 Thread openstack-testing-bot
Title: raring_grizzly_cinder_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/154/Project:raring_grizzly_cinder_trunkDate of build:Sun, 17 Feb 2013 12:31:08 -0500Build duration:4 min 28 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesAdd get_cluster_stats to SolidFire driverby john.griffitheditcinder/tests/test_solidfire.pyeditcinder/tests/test_drivers_compatibility.pyeditcinder/volume/drivers/solidfire.pyConsole Output[...truncated 6315 lines...]Status: attemptedVersion: 2013.1.a146.g71576bf+git201302171231~raring-0ubuntu1Finished at 20130217-1235Build needed 00:01:39, 24272k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'cinder_2013.1.a146.g71576bf+git201302171231~raring-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'cinder_2013.1.a146.g71576bf+git201302171231~raring-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmpqCgt7q/cindermk-build-deps -i -r -t apt-get -y /tmp/tmpqCgt7q/cinder/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 90971cd1026728d3061e13843d117e549c0be67c..HEAD --no-merges --pretty=format:[%h] %sbzr merge lp:~openstack-ubuntu-testing/cinder/raring-grizzly --forcedch -b -D raring --newversion 2013.1.a146.g71576bf+git201302171231~raring-0ubuntu1 Automated Ubuntu testing build:dch -a [bb923d5] Add get_cluster_stats to SolidFire driverdch -a [695e3a8] Adding support for Coraid AoE SANs Appliances.dch -a [f06f5e1] Add an update option to run_tests.shdch -a [edbfa6a] Update EMC SMI-S Driverdch -a [1fc5575] Add LIO iSCSI backend support using python-rtslibdch -a [06b26a8] Add GlusterFS volume driverdch -a [abd3475] Create a RemoteFsDriver classdch -a [9627e6d] Add an ID to temporary volume snapshot objectdch -a [d17cc23] Allow create_volume() to retry when exception happeneddch -a [029435c] rbd: update volume<->image copyingdebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC cinder_2013.1.a146.g71576bf+git201302171231~raring-0ubuntu1_source.changessbuild -d raring-grizzly -n -A cinder_2013.1.a146.g71576bf+git201302171231~raring-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'cinder_2013.1.a146.g71576bf+git201302171231~raring-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'cinder_2013.1.a146.g71576bf+git201302171231~raring-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_quantum_trunk #322

2013-02-17 Thread openstack-testing-bot
/unit/_test_extension_portbindings.pyeditquantum/plugins/openvswitch/common/config.pyeditquantum/tests/unit/test_quantum_manager.pyeditquantum/debug/debug_agent.pyeditquantum/tests/unit/test_agent_rpc.pyeditquantum/api/v2/router.pyeditquantum/api/extensions.pyeditquantum/tests/unit/test_extension_extended_attribute.pyeditquantum/tests/unit/test_servicetype.pyeditquantum/tests/unit/ryu/test_ryu_db.pyConsole Output[...truncated 5680 lines...]Host Architecture: amd64Install-Time: 31Job: quantum_2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1.dscMachine Architecture: amd64Package: quantumPackage-Time: 69Source-Version: 2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1Space: 13320Status: attemptedVersion: 2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1Finished at 20130217-1404Build needed 00:01:09, 13320k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'quantum_2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'quantum_2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmp3htv6W/quantummk-build-deps -i -r -t apt-get -y /tmp/tmp3htv6W/quantum/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 2d1762ced08883467f4106ccfe26fc21c0350315..HEAD --no-merges --pretty=format:[%h] %sbzr merge lp:~openstack-ubuntu-testing/quantum/raring-grizzly --forcedch -b -D raring --newversion 2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1 Automated Ubuntu testing build:dch -a [cfda6bc] Use oslo-config-2013.1b3dch -a [cc78724] Shorten the DHCP default resync_intervaldebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC quantum_2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1_source.changessbuild -d raring-grizzly -n -A quantum_2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'quantum_2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'quantum_2013.1.a562.g40296e3+git201302171401~raring-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_quantum_trunk #323

2013-02-17 Thread openstack-testing-bot
Title: raring_grizzly_quantum_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/323/Project:raring_grizzly_quantum_trunkDate of build:Sun, 17 Feb 2013 18:01:08 -0500Build duration:4 min 12 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 2 out of the last 5 builds failed.60ChangesAdd Migration for nvp-qos extensionby arosenaddquantum/db/migration/alembic_migrations/versions/45680af419f9_nvp_qos.pyConsole Output[...truncated 5685 lines...]Install-Time: 31Job: quantum_2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1.dscMachine Architecture: amd64Package: quantumPackage-Time: 90Source-Version: 2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1Space: 13328Status: attemptedVersion: 2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1Finished at 20130217-1805Build needed 00:01:30, 13328k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'quantum_2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'quantum_2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpaj6N1G/quantummk-build-deps -i -r -t apt-get -y /tmp/tmpaj6N1G/quantum/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 2d1762ced08883467f4106ccfe26fc21c0350315..HEAD --no-merges --pretty=format:[%h] %sbzr merge lp:~openstack-ubuntu-testing/quantum/raring-grizzly --forcedch -b -D raring --newversion 2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1 Automated Ubuntu testing build:dch -a [1f37342] Add Migration for nvp-qos extensiondch -a [cfda6bc] Use oslo-config-2013.1b3dch -a [cc78724] Shorten the DHCP default resync_intervaldebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC quantum_2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1_source.changessbuild -d raring-grizzly -n -A quantum_2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'quantum_2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'quantum_2013.1.a563.g1f37342+git201302171801~raring-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_deploy #17

2013-02-17 Thread openstack-testing-bot
Title: raring_grizzly_deploy
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/17/Project:raring_grizzly_deployDate of build:Sun, 17 Feb 2013 18:02:49 -0500Build duration:19 minBuild cause:Started by command line by jenkinsBuilt on:masterHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesNo ChangesConsole Output[...truncated 5200 lines...]INFO:paramiko.transport.sftp:[chan 1] Opened sftp connection (server version 3)INFO:root:Setting up connection to test-06.os.magners.qa.lexingtonERROR:root:Could not setup SSH connection to test-06.os.magners.qa.lexingtonINFO:root:Setting up connection to test-08.os.magners.qa.lexingtonINFO:paramiko.transport:Connected (version 2.0, client OpenSSH_6.1p1)INFO:paramiko.transport:Authentication (publickey) successful!INFO:paramiko.transport:Secsh channel 1 opened.INFO:paramiko.transport.sftp:[chan 1] Opened sftp connection (server version 3)INFO:root:Archiving logs on test-07.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-12.os.magners.qa.lexingtonERROR:root:Coult not create tarball of logs on test-12.os.magners.qa.lexingtonINFO:root:Archiving logs on test-08.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-09.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-04.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-05.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-11.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-03.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-06.os.magners.qa.lexingtonERROR:root:Coult not create tarball of logs on test-06.os.magners.qa.lexingtonINFO:root:Archiving logs on test-02.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Grabbing information from test-07.os.magners.qa.lexingtonINFO:root:Grabbing information from test-12.os.magners.qa.lexingtonERROR:root:Unable to get information from test-12.os.magners.qa.lexingtonINFO:root:Grabbing information from test-08.os.magners.qa.lexingtonINFO:root:Grabbing information from test-09.os.magners.qa.lexingtonINFO:root:Grabbing information from test-04.os.magners.qa.lexingtonINFO:root:Grabbing information from test-05.os.magners.qa.lexingtonINFO:root:Grabbing information from test-11.os.magners.qa.lexingtonINFO:root:Grabbing information from test-03.os.magners.qa.lexingtonINFO:root:Grabbing information from test-06.os.magners.qa.lexingtonERROR:root:Unable to get information from test-06.os.magners.qa.lexingtonINFO:root:Grabbing information from test-02.os.magners.qa.lexingtonINFO:paramiko.transport.sftp:[chan 1] sftp session closed.Traceback (most recent call last):  File "/var/lib/jenkins/tools/jenkins-scripts/collate-test-logs.py", line 88, in connections[host]["sftp"].close()KeyError: 'sftp'+ exit 1Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_cinder_trunk #153

2013-02-17 Thread openstack-testing-bot
Title: precise_grizzly_cinder_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/153/Project:precise_grizzly_cinder_trunkDate of build:Sun, 17 Feb 2013 21:31:09 -0500Build duration:3 min 10 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesUpdate snapshot rest api to be consistent with volumesby thingeeeditcinder/api/v2/snapshots.pyeditcinder/tests/api/v2/test_snapshots.pyConsole Output[...truncated 5401 lines...]Version: 2013.1.a148.g3b7cd95+git201302172131~precise-0ubuntu1Finished at 20130217-2134Build needed 00:01:38, 24288k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a148.g3b7cd95+git201302172131~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a148.g3b7cd95+git201302172131~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmpJR0PPO/cindermk-build-deps -i -r -t apt-get -y /tmp/tmpJR0PPO/cinder/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 90971cd1026728d3061e13843d117e549c0be67c..HEAD --no-merges --pretty=format:[%h] %sbzr merge lp:~openstack-ubuntu-testing/cinder/precise-grizzly --forcedch -b -D precise --newversion 2013.1.a148.g3b7cd95+git201302172131~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [bb923d5] Add get_cluster_stats to SolidFire driverdch -a [695e3a8] Adding support for Coraid AoE SANs Appliances.dch -a [f06f5e1] Add an update option to run_tests.shdch -a [edbfa6a] Update EMC SMI-S Driverdch -a [1fc5575] Add LIO iSCSI backend support using python-rtslibdch -a [06b26a8] Add GlusterFS volume driverdch -a [abd3475] Create a RemoteFsDriver classdch -a [9627e6d] Add an ID to temporary volume snapshot objectdch -a [d17cc23] Allow create_volume() to retry when exception happeneddch -a [029435c] rbd: update volume<->image copyingdch -a [4ca3b53] Update snapshot rest api to be consistent with volumesdebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC cinder_2013.1.a148.g3b7cd95+git201302172131~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A cinder_2013.1.a148.g3b7cd95+git201302172131~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a148.g3b7cd95+git201302172131~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a148.g3b7cd95+git201302172131~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp