Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Michaël Van de Borne

No I'm running just one iscsi target service:

root@leonard:~# netstat -antp | grep 3260
tcp        0      0 0.0.0.0:3260            0.0.0.0:*               LISTEN      8927/tgtd
tcp6       0      0 :::3260                 :::*                    LISTEN      8927/tgtd



Here is my cinder.conf:
 root@leonard:~# cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
sql_connection = mysql://cinder:grizzly@leonard/cinder
rabbit_password = grizzly
iscsi_ip_address=192.168.203.103

So how can the compute node try to reach the public interface of the 
controller?  How can it possibly even know this IP?
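One way to answer that empirically (an editor's sketch, not from the original thread): run a sendtargets discovery from the compute node against the portal configured in cinder.conf and see which IP the target actually advertises. The portal below is the iscsi_ip_address from the config above plus the default iSCSI port.

```shell
# From the compute node: ask the target which portal(s) it advertises.
# PORTAL is the iscsi_ip_address from cinder.conf plus the default iSCSI port.
PORTAL="192.168.203.103:3260"
iscsiadm -m discovery -t sendtargets -p "$PORTAL"
# If the output lists a different IP (e.g. the controller's public one),
# that IP was baked into the target when it was created.
```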




Michaël Van de Borne
RD Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi

On 26/04/2013 20:22, K, Shanthakumar wrote:


I believe you are using both iscsitarget and tgt in your 
configuration, that's why you are getting this issue.


Stop one of the targets and set the one you keep in cinder.conf: 
iscsi_helper=<targetname>


Example :  iscsi_helper=tgtadm

Then restart all the services; hopefully this will work.
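A minimal sketch of that advice on Ubuntu 12.04 (service names are assumptions; adjust if your packages differ): keep tgt, which matches iscsi_helper = tgtadm, and stop the other target daemon so only one service owns port 3260.

```shell
# Stop and disable the ietd-based target if it is installed; keep tgtd.
service iscsitarget stop 2>/dev/null || true
sed -i 's/^ISCSITARGET_ENABLE=.*/ISCSITARGET_ENABLE=false/' /etc/default/iscsitarget 2>/dev/null || true
service tgt restart
# Confirm cinder is pointed at the target you kept.
grep '^iscsi_helper' /etc/cinder/cinder.conf   # expect: iscsi_helper = tgtadm
```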

Thanks

Shanthakumar K

*From:*Openstack 
[mailto:openstack-bounces+sk13=hp@lists.launchpad.net] *On Behalf 
Of *Michaël Van de Borne

*Sent:* Friday, April 26, 2013 8:11 PM
*To:* openstack@lists.launchpad.net
*Subject:* [Openstack] [Grizzly] cannot attach volume to instance due 
to wrong iscsi target


Hi all,

I installed three nodes like this topology:
http://docs.openstack.org/trunk/openstack-network/admin/content/connectivity.html

Here are my subnets:
management: 192.168.203.X/24
data: 192.168.201.X/24
external  API: 192.168.202.X/24

I'm running ubuntu 12.04.

When I try to attach a volume to a VM, I get the following error in 
nova-compute.log:
2013-04-26 16:27:59.439 WARNING nova.virt.libvirt.volume 
[req-f5b4e121-a3ac-456d-b5cb-ba389c7fb409 
6d72da42f39648c48d3dfb4cd190107d 93a48de7ef674f07a96e169383c34399] 
ISCSI volume not yet found at: vdr. Will rescan & retry.  Try number: 0
2013-04-26 16:27:59.490 ERROR nova.compute.manager 
[req-f5b4e121-a3ac-456d-b5cb-ba389c7fb409 
6d72da42f39648c48d3dfb4cd190107d 93a48de7ef674f07a96e169383c34399] 
[instance: 05141f81-04cc-4493-86da-d2c05fd8a2f9] Failed to attach 
volume d9424219-33f6-40c8-88d9-ecba4c8aa6be at /dev/vdr
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] Traceback (most recent call last):
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2859, 
in _attach_volume
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] mountpoint)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
957, in attach_volume
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] disk_info)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
943, in volume_driver_method
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] return 
method(connection_info, *args, **kwargs)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, 
line 242, in inner
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] retval = f(*args, **kwargs)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py, line 
245, in connect_volume
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] 
self._run_iscsiadm(iscsi_properties, ("--rescan",))
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py, line 
179, in _run_iscsiadm
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9] check_exit_code=check_exit_code)
2013-04-26 16:27:59.490 30092 TRACE nova.compute.manager [instance: 
05141f81-04cc-4493-86da-d2c05fd8a2f9]   File 
/usr/lib/python2.7/dist-packages/nova/utils.py, line 239, in execute

Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Michaël Van de Borne

I set that key on both the controller and the compute node.

Still no luck.



On 26/04/2013 18:26, Darragh O'Reilly wrote:

it's not really obvious, but I believe the iscsi_ip_address needs to be set in
the nova.conf on the **controller** - just want to check you did it there.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp






Re: [Openstack] [Grizzly] VMs not authorized by metadata server

2013-04-27 Thread Michaël Van de Borne
Does anybody have an idea why the nova metadata server rejects the VM 
requests?




On 26/04/2013 15:58, Michaël Van de Borne wrote:

Hi there,

I've installed Grizzly on 3 servers:
compute (howard)
controller (leonard)
network (rajesh).

Namespaces are ON
Overlapping IPs are ON

When booting, my VMs can reach the metadata server (on the controller 
node), but it responds with a 500 Internal Server Error.


*Here is the error from the log of nova-api:*
2013-04-26 15:35:28.149 19902 INFO nova.metadata.wsgi.server [-] 
(19902) accepted ('192.168.202.105', 54871)


2013-04-26 15:35:28.346 ERROR nova.network.quantumv2 
[req-52ffc3ae-a15e-4bf4-813c-6596618eb430 None None] _get_auth_token() 
failed
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 Traceback 
(most recent call last):
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2   File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py, 
line 40, in _get_auth_token
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 
httpclient.authenticate()
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2   File 
/usr/lib/python2.7/dist-packages/quantumclient/client.py, line 193, 
in authenticate
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 
content_type="application/json")
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2   File 
/usr/lib/python2.7/dist-packages/quantumclient/client.py, line 131, 
in _cs_request
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 raise 
exceptions.Unauthorized(message=body)
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 
Unauthorized: {"error": {"message": "The request you have made 
requires authentication.", "code": 401, "title": "Not Authorized"}}

2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2
2013-04-26 15:35:28.347 ERROR nova.api.metadata.handler 
[req-52ffc3ae-a15e-4bf4-813c-6596618eb430 None None] Failed to get 
metadata for instance id: 05141f81-04cc-4493-86da-d2c05fd8a2f9
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
Traceback (most recent call last):
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/api/metadata/handler.py, line 
179, in _handle_instance_id_request
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
remote_address)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/api/metadata/handler.py, line 
90, in get_metadata_by_instance_id
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
instance_id, address)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/api/metadata/base.py, line 
417, in get_metadata_by_instance_id
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler return 
InstanceMetadata(instance, address)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/api/metadata/base.py, line 
143, in __init__
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
conductor_api=capi)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py, line 
359, in get_instance_nw_info
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler result = 
self._get_instance_nw_info(context, instance, networks)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py, line 
367, in _get_instance_nw_info
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler nw_info 
= self._build_network_info_model(context, instance, networks)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py, line 
777, in _build_network_info_model
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler client = 
quantumv2.get_client(context, admin=True)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py, 
line 67, in get_client
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler return 
_get_client(token=token)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py, 
line 49, in _get_client
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler token = 
_get_auth_token()
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py, 
line 43, in _get_auth_token
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
LOG.exception(_("_get_auth_token() failed"))
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler File 
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
self.gen.next()
2013-04-26 15:35:28.347 19902 TRACE 

Re: [Openstack] Swift container's rwx permissions

2013-04-27 Thread Shashank Sahni
Thanks David, Clay and Vaidy for your suggestions.

I'm curious about what version of that Hadoop-Swift-integration project
 you're trying to run.  You shouldn't have been able to create containers
 with it in any of the more recent versions.


Yes, my bad. Container creation isn't supported. As part of Swift's
implementation of the Hadoop FileSystem class, we need to explicitly name a
container, which is then treated as the DFS root.


  You might want to try the converged branch of
 https://github.com/hortonworks/Hadoop-and-Swift-integration.  This is the
 branch that's getting submitted back to Apache for inclusion in Hadoop.


Yes, we have already tried it. Same result.

FYI we are using Hadoop 1.1.2

--
Shashank Sahni


Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Darragh O'Reilly
is nova configured to use cinder?

in nova.conf
volume_api_class=nova.volume.cinder.API
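A quick way to verify that setting (an editor's sketch; the path assumes the stock Ubuntu layout):

```shell
# Check that nova on the compute node is configured to use cinder for volumes.
grep '^volume_api_class' /etc/nova/nova.conf
# expected: volume_api_class=nova.volume.cinder.API
```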



Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Michaël Van de Borne

Yes, it was already.



On 27/04/2013 14:08, Darragh O'Reilly wrote:

is nova configured to use cinder?

in nova.conf
volume_api_class=nova.volume.cinder.API






Re: [Openstack] Bridging question

2013-04-27 Thread Daniel Ellison
On 2013-04-26, at 7:53 PM, David Wittman dwitt...@gmail.com wrote:
 This is the expected behavior. With nova-network, FLIPs are assigned as a 
 secondary address on the host interface, and traffic is routed to your 
 instances via NAT rules. I'd recommend reading the following blog post from 
 Mirantis for more information:
 
 http://www.mirantis.com/blog/configuring-floating-ip-addresses-networking-openstack-public-private-clouds/

Excellent! Thanks for that information. It was just what I was looking for. 
Mystery solved!

Daniel


Re: [Openstack] [Grizzly] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Darragh O'Reilly


it seems the IPs for the targets are set at the time they were created:

$ mysql
mysql> use cinder;
mysql> select provider_location from volumes;

Try creating a new volume - does it get the new iscsi_ip_address ?
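A sketch of that check from the shell (credentials taken from the cinder.conf posted earlier in the thread; adjust to yours). Each volume's provider_location embeds the portal IP that was current when the volume was created, so old rows keep the old IP:

```shell
# List the iSCSI portal stored per volume; stale volumes will show the old IP.
mysql -u cinder -pgrizzly -h leonard cinder \
  -e "SELECT id, provider_location FROM volumes;"
```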



Re: [Openstack] [Grizzly] [SOLVED] cannot attach volume to instance due to wrong iscsi target

2013-04-27 Thread Michaël Van de Borne

Man, this is awesome!
That did it!

thank you very much.


On 27/04/2013 17:13, Darragh O'Reilly wrote:


it seems the IPs for the targets are set at the time they were created:

$ mysql
mysql> use cinder;
mysql> select provider_location from volumes;

Try creating a new volume - does it get the new iscsi_ip_address ?






[Openstack] Having problem getting external internet access working

2013-04-27 Thread Drew Weaver
Greetings, and a lovely Saturday to all of you openstackers.

I've had a fairly successful day of getting openstack working on Ubuntu 13.04 
but I'm running into a problem fairly late in the game.

No matter what I do I appear to be unable to get the virtual machines to 
connect to the external Internet.

Here are my configurations for linuxbridge, and me trying to create a provider 
network and subsequent failure.

root@ce:/etc/quantum/plugins/linuxbridge# quantum net-create thenet --tenant_id 
3f51b75eb5274b079d0bd44dc51af7e8 --provider:network_type vlan 
--provider:physical_network physnet1 --provider:segmentation_id 1005
Invalid input for operation: Unknown provider:physical_network physnet1.

(quantum) agent-show c7099d63-1034-48db-8a48-7e39a6d232b2
+-+--+
| Field   | Value|
+-+--+
| admin_state_up  | True |
| agent_type  | Linux bridge agent   |
| alive   | True |
| binary  | quantum-linuxbridge-agent|
| configurations  | {|
| |  "physnet1": "em1",  |
| |  "devices": 4|
| | }|
| created_at  | 2013-04-27 18:30:56.309230   |
| description |  |
| heartbeat_timestamp | 2013-04-27 21:50:03.360738   |
| host| testhost.com  |
| id  | c7099d63-1034-48db-8a48-7e39a6d232b2 |
| started_at  | 2013-04-27 21:44:31.181148   |
| topic   | N/A  |
+-+--+

linuxbridge_conf.ini

[LINUX_BRIDGE]
physical_interface_mappings = physnet1:em1
[VLANS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[DATABASE]
sql_connection = mysql://quantumUser:dbtestpass@10.10.100.51/quantum
reconnect_interval = 2
[AGENT]
polling_interval = 2
[SECURITYGROUP]
# Firewall driver for realizing quantum security group function
firewall_driver = quantum.agent.linux.iptables_firewall.IptablesFirewallDriver
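An editor's hedged guess at a first diagnostic: "Unknown provider:physical_network physnet1" usually means the quantum-server process is not reading the ini that defines the physnet1 mapping. These checks assume the stock Ubuntu paths:

```shell
# Is the mapping actually in the file the server should load?
grep -H 'network_vlan_ranges' /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
# Does the running quantum-server point at that file via --config-file?
ps -ef | grep '[q]uantum-server'
# After fixing the --config-file arguments, restart the server.
service quantum-server restart
```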

If anyone has any ideas, I would be horrendously grateful.

Thanks,
-Drew



[Openstack] SNAT and Floating IPs in quantum

2013-04-27 Thread Kannan, Hari
Is there a way to have SNAT (where multiple private IPs could go out using a 
single public IP) or is Floating IPs (1:1 NAT aka static NAT) the only choice 
in Openstack/Quantum?

Hari


Re: [Openstack] SNAT and Floating IPs in quantum

2013-04-27 Thread Drew Weaver
Hi,

If the gateway for the external net is a firewall or something that does NAT, 
then what would stop you from doing that?



From: Openstack 
[mailto:openstack-bounces+drew.weaver=thenap@lists.launchpad.net] On Behalf 
Of Kannan, Hari
Sent: Saturday, April 27, 2013 7:59 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] SNAT and Floating IPs in quantum

Is there a way to have SNAT (where multiple private IPs could go out using a 
single public IP) or is Floating IPs (1:1 NAT aka static NAT) the only choice 
in Openstack/Quantum?

Hari


Re: [Openstack] SNAT and Floating IPs in quantum

2013-04-27 Thread Jian Wen
On 2013-04-28 07:59, Kannan, Hari wrote:

 Is there a way to have SNAT (where multiple private IPs could go out
 using a single public IP) or is Floating IPs (1:1 NAT aka static NAT)
 the only choice in Openstack/Quantum?

 Hari




Hello,
   You can do SNAT and DNAT inside an instance that has a public IP, on
behalf of the instances that have no public IP addresses. This is not
related to OpenStack/Quantum.
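A minimal sketch of that NAT-instance approach (interface names and subnet are hypothetical: eth0 carries the public/floating IP, eth1 faces the private network 10.0.0.0/24):

```shell
# Inside the gateway instance that owns the public IP:
sysctl -w net.ipv4.ip_forward=1   # allow forwarding between the two interfaces
# Rewrite outbound traffic from the private subnet to the public address.
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
# Point the other instances' default route at this instance's private address.
```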

-- 
Jian



Re: [Openstack] [Grizzly] VMs not authorized by metadata server

2013-04-27 Thread Gary Kotton

On 04/27/2013 12:44 PM, Michaël Van de Borne wrote:
Does anybody have an idea why the nova metadata server rejects the VM 
requests?


Hi,
Just a few questions:
1. Can you please check /etc/quantum/metadata_agent.ini to see that you 
have the correct quantum keystone credentials configured?
2. Can you please make sure that you are running the quantum metadata proxy.
3. In nova.conf, can you please check that service_quantum_metadata_proxy 
= True is set.
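A sketch of what those checks look like on disk (all values hypothetical; substitute your own keystone credentials):

```shell
# 1) keystone credentials the metadata agent uses (network node):
cat /etc/quantum/metadata_agent.ini
#    auth_url = http://leonard:35357/v2.0      <- hypothetical values
#    admin_tenant_name = service
#    admin_user = quantum
#    admin_password = <password>
# 2) the metadata proxy agent must be running:
ps -ef | grep '[m]etadata'
# 3) nova-api side (controller):
grep 'service_quantum_metadata_proxy' /etc/nova/nova.conf   # expect: = True
```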

Thanks
Gary





On 26/04/2013 15:58, Michaël Van de Borne wrote:

Hi there,

I've installed Grizzly on 3 servers:
compute (howard)
controller (leonard)
network (rajesh).

Namespaces are ON
Overlapping IPs are ON

When booting, my VMs can reach the metadata server (on the controller 
node), but it responds with a 500 Internal Server Error.


*Here is the error from the log of nova-api:*
2013-04-26 15:35:28.149 19902 INFO nova.metadata.wsgi.server [-] 
(19902) accepted ('192.168.202.105', 54871)


2013-04-26 15:35:28.346 ERROR nova.network.quantumv2 
[req-52ffc3ae-a15e-4bf4-813c-6596618eb430 None None] 
_get_auth_token() failed
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 Traceback 
(most recent call last):
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2   File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py, line 
40, in _get_auth_token
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 
httpclient.authenticate()
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2   File 
/usr/lib/python2.7/dist-packages/quantumclient/client.py, line 193, 
in authenticate
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 
content_type="application/json")
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2   File 
/usr/lib/python2.7/dist-packages/quantumclient/client.py, line 131, 
in _cs_request
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 raise 
exceptions.Unauthorized(message=body)
2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2 
Unauthorized: {"error": {"message": "The request you have made 
requires authentication.", "code": 401, "title": "Not Authorized"}}

2013-04-26 15:35:28.346 19902 TRACE nova.network.quantumv2
2013-04-26 15:35:28.347 ERROR nova.api.metadata.handler 
[req-52ffc3ae-a15e-4bf4-813c-6596618eb430 None None] Failed to get 
metadata for instance id: 05141f81-04cc-4493-86da-d2c05fd8a2f9
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
Traceback (most recent call last):
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   File 
/usr/lib/python2.7/dist-packages/nova/api/metadata/handler.py, line 
179, in _handle_instance_id_request
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
remote_address)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   File 
/usr/lib/python2.7/dist-packages/nova/api/metadata/handler.py, line 
90, in get_metadata_by_instance_id
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
instance_id, address)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   File 
/usr/lib/python2.7/dist-packages/nova/api/metadata/base.py, line 
417, in get_metadata_by_instance_id
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
return InstanceMetadata(instance, address)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   File 
/usr/lib/python2.7/dist-packages/nova/api/metadata/base.py, line 
143, in __init__
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
conductor_api=capi)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py, 
line 359, in get_instance_nw_info
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
result = self._get_instance_nw_info(context, instance, networks)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py, 
line 367, in _get_instance_nw_info
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
nw_info = self._build_network_info_model(context, instance, networks)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py, 
line 777, in _build_network_info_model
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
client = quantumv2.get_client(context, admin=True)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py, line 
67, in get_client
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
return _get_client(token=token)
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler   File 
/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py, line 
49, in _get_client
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler 
token = _get_auth_token()
2013-04-26 15:35:28.347 19902 TRACE nova.api.metadata.handler  

[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_grizzly_version-drift #6

2013-04-27 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/6/

--
Started by timer
Building remotely on pkg-builder in workspace 
http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/ws/
[cloud-archive_grizzly_version-drift] $ /bin/bash -xe 
/tmp/hudson5761372999226179564.sh
+ OS_RELEASE=grizzly
+ 
/var/lib/jenkins/tools/ubuntu-reports/server/cloud-archive/version-tracker/gather-versions.py
 grizzly
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ 
/var/lib/jenkins/tools/ubuntu-reports/server/cloud-archive/version-tracker/ca-versions.py
 -c -r grizzly
---
The following Cloud Archive packages for grizzly
have been superseded by newer versions in Ubuntu!

nova:
Ubuntu: 1:2013.1-0ubuntu2
Cloud Archive staging: 1:2013.1-0ubuntu2~cloud1

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #63

2013-04-27 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General Information
Build result: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/63/
Project: precise_havana_quantum_trunk
Date of build: Sun, 28 Apr 2013 01:31:36 -0400
Build duration: 1 min 16 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: All recent builds failed. (score: 0)
Changes: Fix logic issue in OVSQuantumAgent.port_unbound method (by enikanorov)
  edit: quantum/plugins/openvswitch/agent/ovs_quantum_agent.py
Console Output [...truncated 1734 lines...]
dch -a [152f3cf] Create veth peer in namespace.
dch -a [9c21592] Imported Translations from Transifex
dch -a [01a977b] Send 400 error if device specification contains unexpected attributes
dch -a [62017cd] Imported Translations from Transifex
dch -a [26b98b7] lbaas: check object state before update for pools, members, health monitors
dch -a [49c1c98] Metadata agent: reuse authentication info across eventlet threads
dch -a [11639a2] Imported Translations from Transifex
dch -a [35988f1] Make the 'admin' role configurable
dch -a [ee50162] Simplify delete_health_monitor() using cascades
dch -a [765baf8] Imported Translations from Transifex
dch -a [15a1445] Update latest OSLO code
dch -a [343ca18] Imported Translations from Transifex
dch -a [c117074] Remove locals() from strings substitutions
dch -a [fb66e24] Imported Translations from Transifex
dch -a [e001a8d] Add string 'quantum'/ version to scope/tag in NVP
dch -a [5896322] Changed DHCPV6_PORT from 467 to 547, the correct port for DHCPv6.
dch -a [80ffdde] Imported Translations from Transifex
dch -a [929cbab] Imported Translations from Transifex
dch -a [2a24058] Imported Translations from Transifex
dch -a [b6f0f68] Imported Translations from Transifex
dch -a [1e1c513] Imported Translations from Transifex
dch -a [6bbcc38] Imported Translations from Transifex
dch -a [bd702cb] Imported Translations from Transifex
dch -a [a13295b] Enable automatic validation of many HACKING rules.
dch -a [91bed75] Ensure unit tests work with all interface types
dch -a [0446eac] Shorten the path of the nicira nvp plugin.
dch -a [8354133] Implement LB plugin delete_pool_health_monitor().
dch -a [147038a] Parallelize quantum unit testing:
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-bb17b673-f5df-42e9-89c2-1da47829a077', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-bb17b673-f5df-42e9-89c2-1da47829a077', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure