Re: [Openstack] [Keystone]Question: Assignment of default role

2013-02-25 Thread Leo Toyoda
Hi Adam

Thanks a lot for your answer.

My understanding is as follows. Is that correct?
Case 1: Create a user *with* a tenant specified.
* The default role (_member_) is assigned automatically.
* I still need to assign the required role with keystone user-role-add.
* The user ends up with two roles.

Case 2: Create a user *without* specifying a tenant.
* I need to assign both the required role and the tenant with keystone 
user-role-add.
* The user ends up with one role.

Thanks,
Leo Toyoda


 -Original Message-
 From: 
 openstack-bounces+toyoda-reo=cnt.mxw.nes.nec.co.jp@lists.launc
 hpad.net 
 [mailto:openstack-bounces+toyoda-reo=cnt.mxw.nes.nec.co.jp@lis
 ts.launchpad.net] On Behalf Of Adam Young
 Sent: Saturday, February 23, 2013 5:31 AM
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] [Keystone]Question: Assignment of 
 default role
 
 Yes, this is new.  We are removing the direct association 
 between users and projects (Project members) and replacing it 
 with a Role (_member_).
 
 The _ is there to ensure it does not conflict with existing roles.
 
 The two different ways of associating users with projects were 
 causing problems.  With RBAC, we can now enforce policy about 
 project membership that we could not enforce before.
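 
 For example (illustrative only; the exact rule syntax differs between 
 releases), a service's policy.json can now key rules off that role:
 
   {
       "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
       "default": "rule:admin_or_owner",
       "compute:create": "role:_member_"
   }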
 
 
 
 
 
 On 02/21/2013 09:39 PM, Leo Toyoda wrote:
  Hi, everyone
 
  I'm using the master branch devstack.
  I have a question about the assignment of the default role (Keystone).
 
  When I create a user and specify the tenant, '_member_' is 
 assigned as one of its roles.
  $ keystone user-create --name test --tenant-id e61..7f6 --pass test 
  --email t...@example.com
  +--+---+
  | Property |  Value|
  +--+---+
  |  email   | te...@example.com |
  | enabled  |   True|
  |id| af1..8d2  |
  |   name   |   test|
  | tenantId | e61..7f6  |
  +--+---+
  $ keystone user-role-list --user test --tenant e61..7f6
  +--+--+--+---+
  |id|   name   | user_id  | tenant_id |
  +--+--+--+---+
  | 9fe..bab | _member_ | af1..8d2 | e61..7f6  |
  +--+--+--+---+
 
  Then I assign the Member role to the user.
  Listing the roles now shows two roles assigned: 'Member' and '_member_'.
  $ keystone user-role-add --user af1..8d2 --role 57d..d1f --tenant e61..7f6
  $ keystone user-role-list --user af1..8d2 --tenant e61..7f6
   +----------+----------+----------+-----------+
   |    id    |   name   | user_id  | tenant_id |
   +----------+----------+----------+-----------+
   | 57d..d1f |  Member  | af1..8d2 | e61..7f6  |
   | 9fe..bab | _member_ | af1..8d2 | e61..7f6  |
   +----------+----------+----------+-----------+
 
  When I create a user without specifying a tenant, I assign the 
 'Member' role myself.
  In this case, only one role is assigned.
  $ keystone user-create --name test2 --pass test --email 
  te...@example.com
  +--+---+
  | Property |  Value|
  +--+---+
  |  email   | te...@example.com |
  | enabled  |  True |
  |id|c22..a6d   |
  |   name   |  test2|
  | tenantId |   |
  +--+---+
  $ keystone user-role-add --user c22..a6d --role 57d..d1f --tenant e61..7f6
  $ keystone user-role-list --user c22..a6d --tenant e61..7f6
  +--+--+--+---+
  |id|   name   | user_id  | tenant_id |
  +--+--+--+---+
  | 57d..d1f |  Member  | c22..a6d | e61..7f6  |
  +--+--+--+---+
 
  Is it expected behavior that two roles are assigned?
 
  Thanks
  Leo Toyoda
 
 
 
 
 




[Openstack] Regarding Role Management

2013-02-25 Thread Aru s
Hi,

I am trying to understand the default roles that are available and their
privileges, but I have not been able to find any documentation on this.
I am also looking for a document that describes how to create roles with
custom privileges. Please help.

Regards,
Arumon


[Openstack] Server increasing load due increasing processes in D state

2013-02-25 Thread Alessandro Tagliapietra
Hello guys, 

at work our OpenStack controller has, for some months now, started to 
increase its load after a few days of uptime.

I've seen that the cause is that processes sometimes hang and remain in D 
state.

I've used some combination of ps args to get these outputs:

http://pastebin.com/raw.php?i=LGGzGrWu
http://pastie.org/pastes/6332964/text
http://pastie.org/pastes/6332979/text

The storage is a software RAID-1 over 2 disks, whose SMART values are fine.

Commands like lsof or strace on a D-state process don't return.

Any idea on what could be the cause?
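
For reference, a quick way to list the D-state processes together with the
kernel function they are blocked in (assuming procps, and root for the SysRq
part) is something like:

  ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
  echo w | sudo tee /proc/sysrq-trigger   # dumps blocked tasks to the kernel log, if SysRq is enabled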

Thanks in advance

--

Alessandro Tagliapietra 
alexfu.it (http://www.alexfu.it)


Re: [Openstack] Server increasing load due increasing processes in D state

2013-02-25 Thread Alessandro Tagliapietra
After running strace on lsof I've seen that it hangs on:

stat("/proc/1227/", {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0
open("/proc/1227/stat", O_RDONLY) = 4
read(4, "1227 (nova-dhcpbridge) D 1224 25"..., 4096) = 242
close(4) = 0
readlink("/proc/1227/cwd", "/"..., 4096) = 1
stat("/proc/1227/cwd", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
readlink("/proc/1227/root", "/", 4096) = 1
stat("/proc/1227/root", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
readlink("/proc/1227/exe", "/usr/bin/python2.7"..., 4096) = 18
stat("/proc/1227/exe", {st_mode=S_IFREG|0755, st_size=2989480, ...}) = 0
open("/proc/1227/maps", O_RDONLY) = 4
read(4,
Could it be a memory issue?
I cannot run a memory test right now, maybe tomorrow. I just want to know if someone 
else has had the same issue.
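
In the meantime, something that might narrow it down without a reboot
(assuming the kernel exposes /proc/<pid>/stack) is to look at where the
process is stuck on the kernel side, e.g. for the nova-dhcpbridge PID from
the trace above:

  sudo cat /proc/1227/stack
  dmesg | tail -n 50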
Thanks in advance

--

Alessandro Tagliapietra  
alexfu.it (http://www.alexfu.it)  

On Monday, 25 February 2013 at 12:29, Alessandro Tagliapietra 
wrote:

 Hello guys,  
  
 at work we've the openstack controller that since some months started to 
 increase its load after some days of uptime.
  
 I've seen that the cause is that processes sometimes hangs and remain in D 
 state.
  
 I've used some combination of ps args to get these outputs:
  
 http://pastebin.com/raw.php?i=LGGzGrWu
 http://pastie.org/pastes/6332964/text
 http://pastie.org/pastes/6332979/text
  
 The hdd is a soft-raid1 over 2 disks, which SMART values are fine.
  
 Commands like lsof, strace on a D process doesn't return.
  
 Any idea on what could be the cause?
  
 Thanks in advance
  
 --
  
 Alessandro Tagliapietra  
 alexfu.it (http://www.alexfu.it)  



Re: [Openstack] VM doesnt get IP

2013-02-25 Thread Guilherme Russi
Hello Aaron, I can ping to 192.168.3.3

ping 192.168.3.3
PING 192.168.3.3 (192.168.3.3) 56(84) bytes of data.
64 bytes from 192.168.3.3: icmp_req=1 ttl=64 time=0.237 ms
64 bytes from 192.168.3.3: icmp_req=2 ttl=64 time=0.193 ms

What am I missing?

Regards.

Guilherme.


2013/2/23 Rahul Sharma rahulsharma...@gmail.com

 In one config you have specified local_ip as 10.10.10.1 and in the other as
 192.168.3.3. Shouldn't they belong to the same network? As per the doc,
 it should be 10.10.10.3? Also, both of these belong to the data network, which
 is used for compute-node communication, not controller-network communication.

 -Regards
 Rahul


 On Sat, Feb 23, 2013 at 12:53 AM, Aaron Rosen aro...@nicira.com wrote:

 From the network+controller node can you ping to 192.168.3.3 (just to
 confirm there is ip connectivity between those).

 Your configs look fine to me. The issue you are having is that your
 network+controller node doesn't have a tunnel to your HV node. I'd suggest
 restarting  the quantum-plugin-openvswitch-agent service on both nodes and
 see if that does the trick in order to get the agent to add the tunnel for
 you. Perhaps you edited this file and didn't restart the agent?
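 
 Something along these lines (exact service name/commands may vary per
 distribution):
 
   sudo service quantum-plugin-openvswitch-agent restart   # on both nodes
   sudo ovs-vsctl show
 
 After the restart, br-tun on each node should show a gre port whose
 remote_ip points at the other node's local_ip.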

 Aaron

 On Fri, Feb 22, 2013 at 10:55 AM, Guilherme Russi 
 luisguilherme...@gmail.com wrote:

 Here is my controller + network node:

 cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
 [DATABASE]
 # This line MUST be changed to actually run the plugin.
 # Example:
 # sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
 # Replace 127.0.0.1 above with the IP address of the database used by the
 # main quantum server. (Leave it as is if the database runs on this
 host.)
 sql_connection = mysql://quantum:password@localhost:3306/quantum
 # Database reconnection retry times - in event connectivity is lost
 # set to -1 implies an infinite retry count
 # sql_max_retries = 10
 # Database reconnection interval in seconds - in event connectivity is
 lost
 reconnect_interval = 2

 [OVS]
 # (StrOpt) Type of network to allocate for tenant networks. The
 # default value 'local' is useful only for single-box testing and
 # provides no connectivity between hosts. You MUST either change this
 # to 'vlan' and configure network_vlan_ranges below or change this to
 # 'gre' and configure tunnel_id_ranges below in order for tenant
 # networks to provide connectivity between hosts. Set to 'none' to
 # disable creation of tenant networks.
 #
 # Default: tenant_network_type = local
 # Example: tenant_network_type = gre
 tenant_network_type = gre

 # (ListOpt) Comma-separated list of
 # physical_network[:vlan_min:vlan_max] tuples enumerating ranges
 # of VLAN IDs on named physical networks that are available for
 # allocation. All physical networks listed are available for flat and
 # VLAN provider network creation. Specified ranges of VLAN IDs are
 # available for tenant network allocation if tenant_network_type is
 # 'vlan'. If empty, only gre and local networks may be created.
 #
 # Default: network_vlan_ranges =
 # Example: network_vlan_ranges = physnet1:1000:2999

 # (BoolOpt) Set to True in the server and the agents to enable support
 # for GRE networks. Requires kernel support for OVS patch ports and
 # GRE tunneling.
 #
 # Default: enable_tunneling = False
 enable_tunneling = True

 # (ListOpt) Comma-separated list of tun_min:tun_max tuples
 # enumerating ranges of GRE tunnel IDs that are available for tenant
 # network allocation if tenant_network_type is 'gre'.
 #
 # Default: tunnel_id_ranges =
 # Example: tunnel_id_ranges = 1:1000
 tunnel_id_ranges = 1:1000

 # Do not change this parameter unless you have a good reason to.
 # This is the name of the OVS integration bridge. There is one per
 hypervisor.
 # The integration bridge acts as a virtual patch bay. All VM VIFs are
 # attached to this bridge and then patched according to their network
 # connectivity.
 #
 # Default: integration_bridge = br-int
 integration_bridge = br-int

 # Only used for the agent if tunnel_id_ranges (above) is not empty for
 # the server.  In most cases, the default value should be fine.
 #
 # Default: tunnel_bridge = br-tun
 tunnel_bridge = br-tun

 # Uncomment this line for the agent if tunnel_id_ranges (above) is not
 # empty for the server. Set local-ip to be the local IP address of
 # this hypervisor.
 #
 # Default: local_ip =
 local_ip = 10.10.10.1


 And here is my compute node:

 cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
 [DATABASE]
 # This line MUST be changed to actually run the plugin.
 # Example:
 # sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
 # Replace 127.0.0.1 above with the IP address of the database used by the
 # main quantum server. (Leave it as is if the database runs on this
 host.)
 sql_connection = mysql://quantum:password@192.168.3.1:3306/quantum
 # Database reconnection retry times - in event connectivity is lost
 # set to -1 implies an infinite retry count
 # sql_max_retries = 10
 # 

[Openstack] Role of administrators

2013-02-25 Thread Ganesh Hariharan
Hi,

I have a generic question related to the paradigm shift towards cloud
computing: will there be the same level of requirements for performing
administrative jobs in the cloud compared to traditional computing?

Thanks,


[Openstack] horizon/keystone

2013-02-25 Thread Mballo Cherif
Hello everybody !
I need to understand how Horizon uses Keystone to authenticate users in the 
dashboard.
What exactly is the interaction between Horizon and Keystone?
I am looking at the Horizon code but it's not easy to understand.

Thank you for your help!

Sherif.



Re: [Openstack] horizon/keystone

2013-02-25 Thread Julie Pichon
Mballo Cherif cherif.mba...@gemalto.com wrote:
 Hello everybody !
 I need to understand how does horizon use keystone to authenticate
 user in dashboard?
 What is really the interaction with horizon and keystone?
 I am looking the horizon code but it's not easy to understand.
 
 Thanks you for your help!
 
 Sherif.

Hi Sherif. Horizon uses a separate plug-in for authentication that you can find 
on GitHub: https://github.com/gabrielhurley/django_openstack_auth/ . The code 
you're looking for probably is in the authenticate() method at 
openstack_auth/backend.py.

https://github.com/gabrielhurley/django_openstack_auth/blob/master/openstack_auth/backend.py#L56
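
Very roughly, what the backend does boils down to something like this
(simplified sketch, not the actual plugin code):

  # Simplified sketch of the login flow in django_openstack_auth
  from keystoneclient.v2_0 import client as keystone_client

  def authenticate_sketch(username, password, auth_url, tenant_name=None):
      # Authenticate against Keystone with the credentials from the
      # Horizon login form.
      ks = keystone_client.Client(username=username,
                                  password=password,
                                  tenant_name=tenant_name,
                                  auth_url=auth_url)
      # Horizon keeps the token and service catalog in the Django session
      # and reuses them for later API calls.
      return ks.auth_token, ks.service_catalog

The real code also handles unscoped tokens, tenant selection and error
handling, so do look at backend.py for the details.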

Hope this helps,

Julie



Re: [Openstack] Comparing OpenStack to OpenNebula

2013-02-25 Thread Sylvain Bauza

Hi Shawn,

On 25/02/2013 06:20, Shawn Starr wrote:

Hello folks,

I am starting to look at OpenStack and noticed there are some things it
doesn't seem to be able to do right now?

1) Managing the nova-compute (hypervisor) nodes - I see no options to control
which nova-compute nodes can be 'provisioned' into an OpenStack cloud; I'd
consider it a (potential) security risk if any computer could simply register
itself as a nova-compute node.
There are various ways to implement security around nova-compute. One 
would be to grant MySQL access for the keystone and nova databases to only 
certain IPs; that would be enough to prevent nova-compute from starting (and 
consequently prevent that hypervisor from being elected for new instances).
I do admit this is a very basic measure which doesn't prevent the host itself 
from being compromised, of course.
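
For example, something like this on the MySQL server (illustrative only;
adapt user names, passwords and addresses to your deployment):

  GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'192.168.0.10' IDENTIFIED BY 'secret';
  GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'192.168.0.11' IDENTIFIED BY 'secret';
  -- and make sure there is no wildcard 'nova'@'%' account left around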



The reason I ask this question is: how do we handle hardware failures? How can
we manually move an instance/VM off a nova-compute node? I see instructions on
setting up the hypervisor to move VM instances but no actual commands to issue
a move manually.

2) Can we build a diskless nova-compute? just one kernel/initramfs with the
various configurations, libvirt, file storage network mounts, openvswitch setup
etc inside it?


These two questions can be answered by implementing a shared storage 
system for Nova instances, such as GlusterFS, and allowing libvirt to 
perform live migrations.

http://docs.openstack.org/trunk/openstack-compute/admin/content/live-migration-usage.html
http://gluster.org/community/documentation//index.php/OSConnect
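
Once shared storage and libvirt are configured as described there, moving an
instance off a compute node is roughly a single (admin) command, e.g.:

  nova live-migration <instance-uuid> <target-compute-host>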


3) keystone seems a lot of work to setup with all the various URLs, we plan to
streamline this somehow?
I don't get the point. There is only an initial setup to do for creating 
endpoints and services, but that's it.

Even this step can be automated thanks to some 3rd-party tools, like Puppet.
http://docs.openstack.org/trunk/openstack-compute/admin/content/ch_openstack-compute-automated-installations.html




When I used OpenNebula I found the installation similar but simpler (a
clear distinction between hypervisors themselves and managing them and
managing the VM instances overall). While OpenStack is new I would expect it
to be missing functionality currently.


Could you please explain what your need is?

Hope it helps,
-Sylvain


Thanks,
Shawn






Re: [Openstack] Mirror in Online the VM

2013-02-25 Thread Razique Mahroua
I mean, is every box running all the OpenStack services (nova-compute, network, volumes, API, and so forth)? :)
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 20 Feb 2013, at 22:25, Frans Thamura fr...@meruvian.org wrote:

 On Thu, Feb 21, 2013 at 4:17 AM, Razique Mahroua razique.mahr...@gmail.com wrote:

  Hi Frans,
  so basically, what you are looking for is a mirroring solution for your
  OpenStack deployment that has been made on two servers?
  Are both all-in-one (i.e. do they both provide all the OpenStack services
  and a configured ISO)?
  thanks

 yes, we can call it online mirroring

 what do you mean by all-in-one?

 F


Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15




On 20 Feb 2013, at 22:07, Frans Thamura fr...@meruvian.org wrote:

 hi all
 i have a question

 i am working to make a mirror or live replication between 2 PCs for
 OpenStack, so if 1 server goes down, the other will take over, and i hope
 the user doesn't notice it.

 i also got vMotion in VMware,

 can we do it in openstack?

 thx for the help
 F




Re: [Openstack] Mirror in Online the VM

2013-02-25 Thread Frans Thamura
Just got vmotion

Yes like that.

Frans Thamura
Meruvian
Integrated Hypermedia Solution Provider
On Feb 25, 2013 10:51 PM, Razique Mahroua razique.mahr...@gmail.com
wrote:

 I mean, is every box running all the OpenStack services (nova-compute,
 network, volumes, API, so forth and so so on)
 :)

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 On 20 Feb 2013, at 22:25, Frans Thamura fr...@meruvian.org wrote:



 On Thu, Feb 21, 2013 at 4:17 AM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:

 Hi Frans,
 so basically, what you are looking for is a mirroring solution for your
 OpenStack deployment that has been made on two servers?
 Are both All-in-One (eg they both provide all the OpenStack services and
 configured ISO?)
 thanks


 yes, we can call it online mirror-ing

 what do u mean of all in one?

 F

  Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


  On 20 Feb 2013, at 22:07, Frans Thamura fr...@meruvian.org wrote:

 hi all
 i have a question

 i am working to make a mirror or live replication bretween 2 PC for
 OpenSTack, so if 1 server down, the other will take over, and i hope
 the user dont know it.

 i got also vmotion in vmware,

 can we do it in openstack?

 thx for the help

 F








Re: [Openstack] [Quantum] Metadata service route from a VM

2013-02-25 Thread Sylvain Bauza

No reply yet?

I applied the hack and removed the 169.254.0.0/16 route from my images, but 
this is quite an ugly hack.
Could someone with an Open vSwitch/GRE setup please confirm that there is no 
route to create for the metadata service?
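
For the record, the workaround I'm applying in the images is roughly the
following (Debian/Ubuntu-style /etc/network/interfaces; adapt to your distro):

  auto eth0
  iface eth0 inet dhcp
      post-up route del -net 169.254.0.0/16 || true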


Thanks,
-Sylvain

On 21/02/2013 11:33, Sylvain Bauza wrote:

Anyone?
I found the reason why a 'quantum-dhcp-agent restart' fixes the 
route: the lease is DHCPNACK'd at the next client refresh 
and the VM gets a fresh new configuration excluding the 
169.254.0.0/16 route.


Community, I beg you to confirm that the 169.254.0.0/16 route should *not* 
be pushed to VMs, and that 169.254.169.254/32 should be sent through the 
default route (i.e. the provider router internal IP).
If that's the case, I'll update all my images to remove that route. If 
not, something is wrong with my Quantum setup that I should fix.


Thanks,
-Sylvain

On 20/02/2013 15:55, Sylvain Bauza wrote:

Hi,

Previously, using nova-network, all my VMs had:
 # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0

Now, this setup seems incorrect with Quantum, as the ARP query goes 
directly from the network node trying to resolve 169.254.169.254 :

[root@toto ~]# curl http://169.254.169.254/
curl: (7) couldn't connect to host

sylvain@folsom02:~$ sudo tcpdump -i qr-f76e4668-fa -nn not ip6 and 
not udp and host 169.254.169.254 -e
tcpdump: verbose output suppressed, use -v or -vv for full protocol 
decode
listening on qr-f76e4668-fa, link-type EN10MB (Ethernet), capture 
size 65535 bytes
15:47:46.009548 fa:16:3e:bf:0b:f6 > ff:ff:ff:ff:ff:ff, ethertype ARP 
(0x0806), length 42: Request who-has 169.254.169.254 tell 10.0.0.5, 
length 28
15:47:47.009076 fa:16:3e:bf:0b:f6 > ff:ff:ff:ff:ff:ff, ethertype ARP 
(0x0806), length 42: Request who-has 169.254.169.254 tell 10.0.0.5, 
length 28


The only way for me to fix it is to remove the 169.254.0.0/16 route 
on the VM (or, for some reason I don't understand, by restarting 
quantum-dhcp-agent on the network node) and then L3 routing works 
correctly:


[root@toto ~]# route del -net 169.254.0.0/16
[root@toto ~]# curl http://169.254.169.254/
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04

sylvain@folsom02:~$ sudo tcpdump -i qg-f2397006-20 -nn not ip6 and 
not udp and host 10.0.0.5 and not port 22 -e
tcpdump: verbose output suppressed, use -v or -vv for full protocol 
decode
listening on qg-f2397006-20, link-type EN10MB (Ethernet), capture 
size 65535 bytes
15:52:58.479234 fa:16:3e:e1:95:20 > e0:46:9a:2c:f4:7d, ethertype IPv4 
(0x0800), length 74: 10.0.0.5.55428 > 192.168.1.71.8775: Flags [S], 
seq 3032859044, win 14600, options [mss 1460,sackOK,TS val 2548891 
ecr 0,nop,wscale 5], length 0
15:52:58.480987 e0:46:9a:2c:f4:7d > fa:16:3e:e1:95:20, ethertype IPv4 
(0x0800), length 74: 192.168.1.71.8775 > 10.0.0.5.55428: Flags [S.], 
seq 3888257357, ack 3032859045, win 14480, options [mss 
1460,sackOK,TS val 16404712 ecr 2548891,nop,wscale 7], length 0
15:52:58.482211 fa:16:3e:e1:95:20 > e0:46:9a:2c:f4:7d, ethertype IPv4 
(0x0800), length 66: 10.0.0.5.55428 > 192.168.1.71.8775: Flags [.], 
ack 1, win 457, options [nop,nop,TS val 2548895 ecr 16404712], length 0



I can't understand what's wrong with my setup. Could you help me? Otherwise I 
would have to add a post-up statement to all my images... :(


Thanks,
-Sylvain







[Openstack] Usage of New Keystone Domains with OpenStack

2013-02-25 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello,

We are moving from OpenStack Essex to Grizzly and I am trying to find out how 
the new domain security collection will be used by the OpenStack services 
like Nova or Glance. I would greatly appreciate any information or 
documentation pointers.

Regards,

Mark Miller



[Openstack] VM creation failure

2013-02-25 Thread Javier Alvarez

Hello,

I'm trying to set up OpenStack Essex on Debian and I'm getting an error 
when creating new VMs. The problem arises when trying to allocate the 
network for the VM, but I'm not sure why it's happening. I'm using 
nova-network, and this is the log output:


2013-02-25 18:20:28 AUDIT nova.compute.manager 
[req-4d211944-bd5c-4e88-a8df-c4a9167676e2 
36a7e60b4d134307b61e06949d33735e 9558f53959a04cc992c3f8b6d91bfb9f] 
[instance: 8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60] Starting instance...
2013-02-25 18:20:40 INFO nova.virt.libvirt.connection [-] 
Compute_service record updated for bscgrid21

2013-02-25 18:20:40 INFO nova.compute.manager [-] Updating host status
2013-02-25 18:21:28 ERROR nova.rpc.common 
[req-4d211944-bd5c-4e88-a8df-c4a9167676e2 
36a7e60b4d134307b61e06949d33735e 9558f53959a04cc992c3f8b6d91bfb9f] Timed 
out waiting for RPC response: timed out

2013-02-25 18:21:28 TRACE nova.rpc.common Traceback (most recent call last):
2013-02-25 18:21:28 TRACE nova.rpc.common   File 
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 490, in 
ensure

2013-02-25 18:21:28 TRACE nova.rpc.common return method(*args, **kwargs)
2013-02-25 18:21:28 TRACE nova.rpc.common   File 
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 567, in 
_consume
2013-02-25 18:21:28 TRACE nova.rpc.common return 
self.connection.drain_events(timeout=timeout)
2013-02-25 18:21:28 TRACE nova.rpc.common   File 
/usr/lib/python2.7/dist-packages/kombu/connection.py, line 167, in 
drain_events
2013-02-25 18:21:28 TRACE nova.rpc.common return 
self.transport.drain_events(self.connection, **kwargs)
2013-02-25 18:21:28 TRACE nova.rpc.common   File 
/usr/lib/python2.7/dist-packages/kombu/transport/amqplib.py, line 262, 
in drain_events
2013-02-25 18:21:28 TRACE nova.rpc.common return 
connection.drain_events(**kwargs)
2013-02-25 18:21:28 TRACE nova.rpc.common   File 
/usr/lib/python2.7/dist-packages/kombu/transport/amqplib.py, line 94, 
in drain_events
2013-02-25 18:21:28 TRACE nova.rpc.common return 
self.wait_multi(self.channels.values(), timeout=timeout)
2013-02-25 18:21:28 TRACE nova.rpc.common   File 
/usr/lib/python2.7/dist-packages/kombu/transport/amqplib.py, line 100, 
in wait_multi
2013-02-25 18:21:28 TRACE nova.rpc.common chanmap.keys(), 
allowed_methods, timeout=timeout)
2013-02-25 18:21:28 TRACE nova.rpc.common   File 
/usr/lib/python2.7/dist-packages/kombu/transport/amqplib.py, line 159, 
in _wait_multiple
2013-02-25 18:21:28 TRACE nova.rpc.common channel, method_sig, args, 
content = read_timeout(timeout)
2013-02-25 18:21:28 TRACE nova.rpc.common   File 
/usr/lib/python2.7/dist-packages/kombu/transport/amqplib.py, line 132, 
in read_timeout
2013-02-25 18:21:28 TRACE nova.rpc.common return 
self.method_reader.read_method()
2013-02-25 18:21:28 TRACE nova.rpc.common   File 
/usr/lib/python2.7/dist-packages/amqplib/client_0_8/method_framing.py, 
line 221, in read_method

2013-02-25 18:21:28 TRACE nova.rpc.common raise m
2013-02-25 18:21:28 TRACE nova.rpc.common timeout: timed out
2013-02-25 18:21:28 TRACE nova.rpc.common
2013-02-25 18:21:28 ERROR nova.compute.manager 
[req-4d211944-bd5c-4e88-a8df-c4a9167676e2 
36a7e60b4d134307b61e06949d33735e 9558f53959a04cc992c3f8b6d91bfb9f] 
[instance: 8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60] Instance failed network 
setup
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60] Traceback (most recent call last):
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 571, in 
_allocate_network
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60] requested_networks=requested_networks)
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60]   File 
/usr/lib/python2.7/dist-packages/nova/network/api.py, line 178, in 
allocate_for_instance
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60] 'args': args})
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60]   File 
/usr/lib/python2.7/dist-packages/nova/rpc/__init__.py, line 68, in call
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60] return 
_get_impl().call(context, topic, msg, timeout)
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60]   File 
/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py, line 674, in call
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60] return rpc_amqp.call(context, 
topic, msg, timeout, Connection.pool)
2013-02-25 18:21:28 TRACE nova.compute.manager [instance: 
8c5b2d4a-c715-4f4d-8d74-a6bdccbe1b60]   File 
/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py, line 343, in call
2013-02-25 18:21:28 

[Openstack] [Kesytone] Logging output modification

2013-02-25 Thread Alejandro Comisario
Hi guys, we are using keystone 2012.1.4 in production, and we are streaming
the keystone logs into Kafka. We wanted to modify the log content to add the
tenant id to each line.

We saw in the code that keystone leaves that duty to the Python logging
module.
We tried several approaches to modify the log content, but we didn't manage it.

Does anyone know how to do it in Essex?
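
For context, what we are after is roughly the standard Python logging
pattern below (illustrative sketch); the part we can't figure out is how to
get the tenant id onto the record from keystone's side:

  import logging

  class TenantContextFilter(logging.Filter):
      """Inject a 'tenant_id' attribute into every record."""
      def filter(self, record):
          # Stub: fill this in from wherever the request context lives.
          record.tenant_id = getattr(record, 'tenant_id', 'unknown')
          return True

  handler = logging.StreamHandler()
  handler.setFormatter(logging.Formatter(
      '%(asctime)s %(levelname)s %(name)s [%(tenant_id)s] %(message)s'))
  handler.addFilter(TenantContextFilter())
  logging.getLogger().addHandler(handler)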

Cheers.
Alejandrito


[Openstack] [Keystone] which config works?

2013-02-25 Thread Kun Huang
Hi all,

In auth_token (at keystoneclient middleware), if we config like below:

auth_uri = http://localhost:5000/
(without the auth_protocol setting, which defaults to https)

or like below

auth_uri = http://localhost:5000/
auth_protocol = http

Which one SHOULD work?
The first one is more concise but doesn't work, because of the current implementation.
As a result, we have to set the protocol explicitly if we need http, no matter how
the auth_uri is set.
Could we improve the relevant code?


Re: [Openstack] Grizzly-3 development milestone available (Keystone, Glance, Nova, Horizon, Quantum, Cinder)

2013-02-25 Thread Martinx - ジェームズ
Cool! Grizzly G3 is on Raring Ringtail!

I'm seeing lots of AWESOME new packages here! Like Ajax Console, SPICE
proxy, Ceilometer, Nova Cells and Baremetal, and so on...

But, no Heat package available... Is Heat a requirement to run Grizzly G3?
Or can I start testing it now with those packages?

Will Grizzly be available for Ubuntu 12.04 with all those features
(Ceilometer, Ajax Console, SPICE and etc)?

Tks!
Thiago


On 22 February 2013 10:49, Chuck Short chuck.sh...@canonical.com wrote:

 Hi.


 On 13-02-22 06:16 AM, Martinx - ジェームズ wrote:

 Hi!

 What is the status of Openstack Grizzly-3 Ubuntu packages?


 They will be uploaded today for raring; precise should be uploaded a
 couple of hours later.


  Can we already set it up using apt-get / aptitude? With packaged Heat,
 Ceilometer and etc?


 Yes


  Which version is recommended to test Grizzly-3, Precise (via testing
 UCA), Raring?


 Raring is the easiest and fastest.


  Is Grizzly planned to be the default OpenStack for Raring?

  Yes

 Thanks for the AWESOME work on Openstack!

 Regards,
 Thiago


 On 22 February 2013 05:47, Thierry Carrez thie...@openstack.org wrote:

 Hi everyone,

 The last milestone of the Grizzly development cycle, grizzly-3 is
 now available for testing. This milestone contains almost all of the
 features that will be shipped in the final 2013.1 (Grizzly)
 release on
 April 4, 2013.

 This was an extremely busy milestone, with 100 blueprints implemented
 and more than 450 bugfixes overall. You can find the full list of new
 features and fixed bugs, as well as tarball downloads, at:

 
  https://launchpad.net/keystone/grizzly/grizzly-3
  https://launchpad.net/glance/grizzly/grizzly-3
  https://launchpad.net/nova/grizzly/grizzly-3
  https://launchpad.net/horizon/grizzly/grizzly-3
  https://launchpad.net/quantum/grizzly/grizzly-3
  https://launchpad.net/cinder/grizzly/grizzly-3

 Those projects are now temporarily feature-frozen (apart from
 already-granted exceptions) as we switch to testing and bugfixing mode
 in preparation for our first release candidates. Please test, try the
 new features, report bugs and help fix them !

 Regards,

 --
 Thierry Carrez (ttx)
 Release Manager, OpenStack












Re: [Openstack] Grizzly-3 development milestone available (Keystone, Glance, Nova, Horizon, Quantum, Cinder)

2013-02-25 Thread Chuck Short
Hi

On 13-02-25 01:34 PM, Martinx - ジェームズ wrote:
 Cool! Grizzly G3 is on Raring Ringtail!

 I'm seeing lots of AWESOME new packages here! Like Ajax Console, SPICE
 proxy, Ceilometer, Nova Cells and Baremetal and go on...

 But, no Heat package available... Is Heat a requirement to run Grizzly
 G3? Or can I start testing it now with those packages?

Heat is not a requirement to run Grizzly G3


 Will Grizzly be available for Ubuntu 12.04 with all those features
 (Ceilometer, Ajax Console, SPICE and etc)?.

Most likely

 Tks!
 Thiago


  On 22 February 2013 10:49, Chuck Short chuck.sh...@canonical.com wrote:

 Hi.


 On 13-02-22 06:16 AM, Martinx - ジェームズ wrote:

 Hi!

 What is the status of Openstack Grizzly-3 Ubuntu packages?


 They will be uploaded to today for raring, precise should be
 uploaded a couple of hours later


 Can we already set it up using apt-get / aptitude? With
 packaged Heat, Ceilometer and etc?


 Yes


 Which version is recommended to test Grizzly-3, Precise (via
 testing UCA), Raring?


 Raring is the most easiest and fastest


 Is Grizzly planed to be the default Openstack for Raring?

 Yes

 Thanks for the AWESOME work on Openstack!

 Regards,
 Thiago


 On 22 February 2013 05:47, Thierry Carrez
 thie...@openstack.org mailto:thie...@openstack.org
 mailto:thie...@openstack.org mailto:thie...@openstack.org
 wrote:

 Hi everyone,

 The last milestone of the Grizzly development cycle,
 grizzly-3 is
 now available for testing. This milestone contains almost all
 of the
 features that will be shipped in the final 2013.1 (Grizzly)
 release on
 April 4, 2013.

 This was an extremely busy milestone, with 100 blueprints
 implemented
 and more than 450 bugfixes overall. You can find the full list
 of new
 features and fixed bugs, as well as tarball downloads, at:

 https://launchpad.net/keystone/grizzly/grizzly-3
 https://launchpad.net/glance/grizzly/grizzly-3
 https://launchpad.net/nova/grizzly/grizzly-3
 https://launchpad.net/horizon/grizzly/grizzly-3
 https://launchpad.net/quantum/grizzly/grizzly-3
 https://launchpad.net/cinder/grizzly/grizzly-3

 Those projects are now temporarily feature-frozen (apart from
 already-granted exceptions) as we switch to testing and
 bugfixing mode
 in preparation for our first release candidates. Please test,
 try the
 new features, report bugs and help fix them !

 Regards,

 --
 Thierry Carrez (ttx)
 Release Manager, OpenStack















Re: [Openstack] [OpenStack][Glance] Failure in Glance install: 'NoneType' object has no attribute 'method'

2013-02-25 Thread Brian Waldon
What version of webob do you have installed? stable/essex explicitly depends on 
v1.0.8, which ensures that the 'request' attribute is not None on a Response 
object. As of v1.1.1, the 'request' attribute was no longer a supported part of 
the API. We handle this in Folsom and Grizzly by explicitly setting the 
'request' attribute in glance/common/wsgi.py.
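
A quick way to check which version is installed and, if needed, pin the one
stable/essex expects (assuming a pip-managed environment):

  pip freeze | grep -i webob
  pip install "webob==1.0.8"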

On Feb 24, 2013, at 6:32 AM, Zhiqiang Zhao wrote:

 Hi,
 
 I'm a beginner and I'm trying to learn the devstack installation step by step. I 
 set the stable/essex branches, but the glance install always fails. The log 
 file shows the following:
 
 File /usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py, line 383, in 
 handle_one_response
 result = self.application(self.environ, start_response)
   File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
 __call__
 resp = self.call_func(req, *args, **self.kwargs)
   File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in 
 call_func
 return self.func(req, *args, **kwargs)
   File /opt/stack/glance/glance/common/wsgi.py, line 284, in __call__
 response = req.get_response(self.application)
   File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1296, 
 in send
 application, catch_exc_info=False)
   File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1260, 
 in call_application
 app_iter = application(self.environ, start_response)
   File /opt/stack/keystone/keystone/middleware/auth_token.py, line 176, in 
 __call__
 return self.app(env, start_response)
   File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
 __call__
 resp = self.call_func(req, *args, **self.kwargs)
   File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in 
 call_func
 return self.func(req, *args, **kwargs)
   File /opt/stack/glance/glance/common/wsgi.py, line 284, in __call__
 response = req.get_response(self.application)
   File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1296, 
 in send
 application, catch_exc_info=False)
   File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1260, 
 in call_application
 app_iter = application(self.environ, start_response)
   File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
 __call__
 resp = self.call_func(req, *args, **self.kwargs)
   File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in 
 call_func
 return self.func(req, *args, **kwargs)
   File /opt/stack/glance/glance/common/wsgi.py, line 285, in __call__
 return self.process_response(response)
   File /opt/stack/glance/glance/api/middleware/cache.py, line 108, in 
 process_response
 if request.method not in ('GET', 'DELETE'):
 AttributeError: 'NoneType' object has no attribute 'method'
 
 If I use command 'glance index', the error is the same.
 Attachment is config files for glance.
 Did I miss something?
 
 



[Openstack] VM guest can't access outside world.

2013-02-25 Thread Barrow Kwan
Hi,
I just installed Folsom on CentOS 6.3 (single host with two NICs). I can 
launch a VM fine and the VM gets an IP address. I associate a floating IP, create 
a security group to allow ssh, and I can ssh to the VM with the floating IP (from 
a non-OpenStack node on the same physical network). What could be wrong with my 
setup? I am using Quantum with linuxbridge.

thanks



Re: [Openstack] VM guest can't access outside world.

2013-02-25 Thread Razique Mahroua
Hi, is IP forwarding enabled on the server?
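A quick way to check/enable it, for reference:

  sysctl net.ipv4.ip_forward          # should print 1
  sudo sysctl -w net.ipv4.ip_forward=1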
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 25 Feb 2013, at 22:15, Barrow Kwan barrowk...@yahoo.com wrote:

 Hi,
 I just installed Folsom on CentOS 6.3 (single host with two NICs). I can
 launch a VM fine and the VM gets an IP address. I associate a floating IP,
 create a security group to allow ssh and I can ssh to the VM with the
 floating IP (from a non-OpenStack node on the same physical network). What
 could be wrong with my setup? I am using Quantum with linuxbridge.

 thanks


Re: [Openstack] Usage of New Keystone Domains with OpenStack

2013-02-25 Thread Dolph Mathews
As of Grizzly, the introduction of domains into OpenStack won't have any
impact on the rest of the deployment (AFAIK). Rather, the impact
is currently isolated to keystone and their use is effectively optional
(out of the box, keystone creates a single domain for you to work with --
the 'default' domain). There are a few projects that have expressed
interest in consuming domain data, but I'm not aware of anything landing in
time for Grizzly.


-Dolph


On Mon, Feb 25, 2013 at 11:14 AM, Miller, Mark M (EB SW Cloud - RD -
Corvallis) mark.m.mil...@hp.com wrote:

 Hello,

 We are moving from OpenStack Essex to Grizzly and I am trying to find out
 how the new domain security collection will be used by the OpenStack
 services like Nova or Glance. I would greatly appreciate any information or
 documentation pointers.

 Regards,

 Mark Miller




Re: [Openstack] Comparing OpenStack to OpenNebula

2013-02-25 Thread Shawn Starr
On Monday, February 25, 2013 10:34:11 PM Jeremy Stanley wrote:
 On 2013-02-25 06:20 -0500 (-0500), Shawn Starr wrote:
 [...]
 
  I see no options on how to control what nova-compute nodes can be
  'provisioned' into an OpenStack cloud, I'd consider that a
  security risk (potentially) if any computer could just register to
  become a nova-compute?
 
 [...]
 
 On 2013-02-25 11:42:47 -0500 (-0500), Shawn Starr wrote:
  I was hoping in future we could have a mechanism via mac address
  to restrict which hypervisor/nova-computes are able to join the
  cluster.
 
 [...]
 
 It bears mention that restricting by MAC is fairly pointless as
 security protections go. There are a number of tricks an adversary
 can play to rewrite the system's MAC address or otherwise
 impersonate other systems at layer 2. Even filtering by IP address
 doesn't provide you much protection if there are malicious actors
 within your local broadcast domain, but at least there disabling
 learning on switches or implementing 802.1x can buy some relief.
 
 Extending the use of MAC address references from the local broadcast
 domain where they're intended to be relevant up into the application
 layer (possibly across multiple routed hops well away from their
 original domain of control) makes them even less effective of a
 system identifier from a security perspective.

Hi Jeremy,

Of course, one can modify/spoof the MAC address and or assign themselves an 
IP. It is more so that new machines aren't immediately added to the cluster 
and start launching VM instances without explicitly being enabled to do so. In 
this case, I am not concerned about impersonators on the network trying to 
join the cluster.

Thanks,
Shawn



Re: [Openstack] Usage of New Keystone Domains with OpenStack

2013-02-25 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
So, one clarifying question: when an image is marked public in Glance, does 
that mean public for all projects in the domain to which the image belongs, OR 
does it mean public for all projects in all domains?

The answer will tell me how nova and glance implicitly support 
domains without having any knowledge of them.

Mark


From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Monday, February 25, 2013 2:01 PM
To: Miller, Mark M (EB SW Cloud - RD - Corvallis)
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Usage of New Keystone Domains with OpenStack

As of Grizzly, the introduction of domains into OpenStack won't have any impact 
on the rest of the deployment (AFAIK). Rather, the impact is currently isolated 
to keystone and their use is effectively optional (out of the box, keystone 
creates a single domain for you to work with -- the 'default' domain). There 
are a few projects that have expressed interest in consuming domain data, but 
I'm not aware of anything landing in time for Grizzly.


-Dolph

On Mon, Feb 25, 2013 at 11:14 AM, Miller, Mark M (EB SW Cloud - RD - 
Corvallis) mark.m.mil...@hp.com wrote:
Hello,

We are moving from OpenStack Essex to Grizzly and I am trying to find out how 
the new domain security collection will be used by the OpenStack services 
like Nova or Glance. I would greatly appreciate any information or 
documentation pointers.

Regards,

Mark Miller




Re: [Openstack] Comparing OpenStack to OpenNebula

2013-02-25 Thread andi abes
On Mon, Feb 25, 2013 at 5:46 PM, Shawn Starr shawn.st...@rogers.com wrote:

 On Monday, February 25, 2013 10:34:11 PM Jeremy Stanley wrote:
  On 2013-02-25 06:20 -0500 (-0500), Shawn Starr wrote:
  [...]
 
   I see no options on how to control what nova-compute nodes can be
   'provisioned' into an OpenStack cloud, I'd consider that a
   security risk (potentially) if any computer could just register to
   become a nova-compute?
 
  [...]
 
  On 2013-02-25 11:42:47 -0500 (-0500), Shawn Starr wrote:
   I was hoping in future we could have a mechanism via mac address
   to restrict which hypervisor/nova-computes are able to join the
   cluster.
 
  [...]
 
  It bears mention that restricting by MAC is fairly pointless as
  security protections go. There are a number of tricks an adversary
  can play to rewrite the system's MAC address or otherwise
  impersonate other systems at layer 2. Even filtering by IP address
  doesn't provide you much protection if there are malicious actors
  within your local broadcast domain, but at least there disabling
  learning on switches or implementing 802.1x can buy some relief.
 
  Extending the use of MAC address references from the local broadcast
  domain where they're intended to be relevant up into the application
  layer (possibly across multiple routed hops well away from their
  original domain of control) makes them even less effective of a
  system identifier from a security perspective.

 Hi Jeremy,

 Of course, one can modify/spoof the MAC address and or assign themselves an
 IP. It is more so that new machines aren't immediately added to the cluster
 and start launching VM instances without explicitly being enabled to do
 so. In
 this case, I am not concerned about impersonators on the network trying to
 join the cluster.

 Thanks,
 Shawn

 If you're deploying multiple clusters, are you using different passwords
for each? Different mysql connection strings? Different IP addresses for the
controller and MQ?

Assuming the answer to any of those is yes, a nova-compute won't just
connect to an arbitrary cluster.
If you look at the nova.conf file, you'll see that there are lots of
cluster-specific bits of info in it that should completely assure you that
compute nodes won't just connect to the wrong cluster.
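
For example, a minimal illustrative excerpt (option names as in Folsom;
values are made up):

  [DEFAULT]
  sql_connection = mysql://nova:secret@192.168.0.1/nova
  rabbit_host = 192.168.0.1
  glance_api_servers = 192.168.0.1:9292

A compute node with the wrong values here simply won't register against your
controller.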




Re: [Openstack] Comparing OpenStack to OpenNebula

2013-02-25 Thread Shawn Starr
On Monday, February 25, 2013 05:59:02 PM andi abes wrote:
 On Mon, Feb 25, 2013 at 5:46 PM, Shawn Starr shawn.st...@rogers.com wrote:
  On Monday, February 25, 2013 10:34:11 PM Jeremy Stanley wrote:
   On 2013-02-25 06:20 -0500 (-0500), Shawn Starr wrote:
   [...]
   
I see no options on how to control what nova-compute nodes can be
'provisioned' into an OpenStack cloud, I'd consider that a
security risk (potentially) if any computer could just register to
become a nova-compute?
   
   [...]
   
   On 2013-02-25 11:42:47 -0500 (-0500), Shawn Starr wrote:
I was hoping in future we could have a mechanism via mac address
to restrict which hypervisor/nova-computes are able to join the
cluster.
   
   [...]
   
   It bears mention that restricting by MAC is fairly pointless as
   security protections go. There are a number of tricks an adversary
   can play to rewrite the system's MAC address or otherwise
   impersonate other systems at layer 2. Even filtering by IP address
   doesn't provide you much protection if there are malicious actors
   within your local broadcast domain, but at least there disabling
   learning on switches or implementing 802.1x can buy some relief.
   
   Extending the use of MAC address references from the local broadcast
   domain where they're intended to be relevant up into the application
   layer (possibly across multiple routed hops well away from their
   original domain of control) makes them even less effective of a
   system identifier from a security perspective.
  
  Hi Jeremy,
  
  Of course, one can modify/spoof the MAC address and or assign themselves
  an
  IP. It is more so that new machines aren't immediately added to the
  cluster
  and start launching VM instances without explicitly being enabled to do
  so. In
  this case, I am not concerned about impersonators on the network trying to
  join the cluster.
  
  Thanks,
  Shawn
  
  if you're deploying multiple clusters, are you using different passwords
 
 for each? different mysql connection strings? different IP address for the
 controller and MQ?
 
 Assuming the answer to any of those is yes, the a nova compute won't just
 connect to the cluster.
 If you look at the nova.conf file, you'll see that there are lots of
 cluster specifics bits of info in it that should completely assure you that
 compute nodes won't just connect to the wrong cluster.

Single cluster, assuming I get an initramfs built for a nova-compute node to PXE 
boot. It will have its nova.conf configured to join the cluster.

If I had multiple clusters, I'd use dhcp.conf to choose which initramfs image
(based on the MAC if I'm testing something, or on the network range it's on) to PXE
boot the nova-compute node with, and it would join the correct cluster.
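
i.e. something along these lines in the DHCP server config (sketch only; the MAC,
addresses and file names are made up):

  host compute01 {
      hardware ethernet 52:54:00:12:34:56;
      next-server 10.0.0.1;
      filename "pxelinux-cluster-a.0";
  }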

Thanks,
Shawn

 



Re: [Openstack] [Keystone] my token generated by curling http://localhost:35357/v2.0/tokens is too long...

2013-02-25 Thread Dolph Mathews
+1

However, I'm curious as to what makes it too long, or what's not working.
Can you provide an example?

-Dolph


On Sat, Feb 23, 2013 at 12:33 PM, Anne Gentle a...@openstack.org wrote:

 I believe this is due to a change in default for grizzly-- token_format
 defaults to PKI instead of UUID in the [signing] section of the
 keystone.conf configuration file.


 http://docs.openstack.org/trunk/openstack-compute/admin/content/certificates-for-pki.html
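 
 For reference, switching back to the old behaviour should just be a matter
 of something like this in keystone.conf:
 
   [signing]
   token_format = UUID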

 Anne


 On Sat, Feb 23, 2013 at 12:27 PM, Kun Huang academicgar...@gmail.com wrote:

 Hi, all
 My token generated by curling http://localhost:35357/v2.0/tokens is too
 long...

  Nearly 4000 chars in this token. I have changed the max_token_size in
  keystone.conf, but it doesn't work.

  What's more, the token is not only long, it is also not working...


 swift@storage-hk:~$ curl -s -d '{auth: {tenantName: service,
 passwordCredentials: {username: swift, password:swiftpass}}}' -H
 'Content-type: application/json' http://localhost:35357/v2.0/tokens |
 python -mjson.tool
 {
 access: {
 metadata: {
 is_admin: 0,
 roles: [
 9fe2ff9ee4384b1894a90878d3e92bab,
 acb42117c39945079ecf56bad7f441a1
 ]
 },
 serviceCatalog: [
 {
 endpoints: [
 {
 adminURL: 
 http://localhost:8774/v1.1/a20b1ea54bb54614956697b5484c9de1;,
 id: eeaec0b2f9af4ecbae58225f05f58a13,
 internalURL: 
 http://localhost:8774/v1.1/a20b1ea54bb54614956697b5484c9de1;,
 publicURL: 
 http://localhost:8774/v1.1/a20b1ea54bb54614956697b5484c9de1;,
 region: RegionOne
 }
 ],
 endpoints_links: [],
 name: nova,
 type: compute
 },
 {
 endpoints: [
 {
 adminURL: http://localhost:9292;,
 id: 7fc9afcca89f44e68875b88e05681dea,
 internalURL: http://localhost:9292;,
 publicURL: http://localhost:9292;,
 region: RegionOne
 }
 ],
 endpoints_links: [],
 name: glance,
 type: image
 },
 {
 endpoints: [
 {
 adminURL: 
 http://localhost:8776/v1/a20b1ea54bb54614956697b5484c9de1;,
 id: d826fa55273d4ed8bed3b744628629cf,
 internalURL: 
 http://localhost:8776/v1/a20b1ea54bb54614956697b5484c9de1;,
 publicURL: 
 http://localhost:8776/v1/a20b1ea54bb54614956697b5484c9de1;,
 region: RegionOne
 }
 ],
 endpoints_links: [],
 name: volume,
 type: volume
 },
 {
 endpoints: [
 {
 adminURL: http://localhost:8773/services/Admin
 ,
 id: 83dd85b5af21449ba5d5b9f530602f87,
 internalURL: 
 http://localhost:8773/services/Cloud;,
 publicURL: 
 http://localhost:8773/services/Cloud;,
 region: RegionOne
 }
 ],
 endpoints_links: [],
 name: ec2,
 type: ec2
 },
 {
 endpoints: [
 {
 adminURL: http://localhost:/v1;,
 id: 674b345c9a9345978a74ed157f7f646d,
 internalURL: 
 http://localhost:/v1/AUTH_a20b1ea54bb54614956697b5484c9de1;,
 publicURL: 
 http://localhost:/v1/AUTH_a20b1ea54bb54614956697b5484c9de1;,
 region: RegionOne
 }
 ],
 endpoints_links: [],
 name: swift,
 type: object-store
 },
 {
 endpoints: [
 {
 adminURL: http://localhost:35357/v2.0;,
 id: c7384f41b3b1487b9dda90577f46dda6,
 internalURL: http://localhost:5000/v2.0;,
 publicURL: http://localhost:5000/v2.0;,
 region: RegionOne
 }
 ],
 endpoints_links: [],
 name: keystone,
 type: identity
 }
 ],
 token: {
 expires: 2013-02-24T18:19:09Z,
 id:
 

Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Joe Gordon
On Sun, Feb 24, 2013 at 3:31 PM, Sam Morrison sorri...@gmail.com wrote:

 I have been playing with the AggregateInstanceExtraSpecs filter and can't
 get it to work.

 In our staging environment it works fine with 4 compute nodes, I have 2
 aggregates to split them into 2.

 When I try to do the same in our production environment which has 80
 compute nodes (splitting them again into 2 aggregates) it doesn't work.

 nova-scheduler starts to go very slow,  I scheduled an instance and gave
 up after 5 minutes, it seemed to be taking ages and the host was at 100%
 cpu. Also got about 500 messages in rabbit that were unacknowledged.


what does the nova-scheduler log say?  Where are the unacknowledged rabbitmq
messages sent from?


 We are running stable/folsom. Does anyone else have this issue or know if
 there have been any fixes in Grizzly relating to this? I couldn't see any
 bugs about it.

 Thanks,
 Sam
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Announcing superglance (convenience wrapper for glanceclient)

2013-02-25 Thread Richard Goodwin
Most of you are probably already pretty familiar with Major Hayden's 
supernova utility, which is indispensable for working with even moderately 
complex Nova deployments.   As I spend most of my days with Glance here @ 
Rackspace, I felt quite envious of these productive Nova folk,  and decided to 
forge ahead and make superglance.

Code can be found here: https://github.com/rtgoodwin/superglance

It is heavily and thankfully based on supernova, so it should be pretty easy to 
work with out of the box (and thanks to the excellent docs I borrowed from 
Major, with his blessing!). Clone it, install it, set up your config file, and 
if you're into the whole brevity thing, maybe alias it to sg.
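
If it follows supernova's model, the config file is one ini section per environment,
each holding the environment variables you want exported before glance runs. A
hypothetical ~/.superglance (variable names are illustrative; check the README for
what your glanceclient actually expects) could look like:

    [staging]
    OS_AUTH_URL = https://identity.staging.example.com/v2.0
    OS_USERNAME = rgoodwin
    OS_PASSWORD = better-kept-in-a-keyring
    OS_TENANT_NAME = images

    [production]
    OS_AUTH_URL = https://identity.example.com/v2.0
    OS_USERNAME = rgoodwin
    OS_PASSWORD = better-kept-in-a-keyring
    OS_TENANT_NAME = images

and then something like "superglance staging image-list" runs glance against that
environment.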

Enjoy my little contribution to the community; it's a small thing, but I use it 
probably 40 times a day and couldn't imagine going a day without it!

Richard Goodwin
Product Manager: Imaging/Glance
Ideation | Intellection | Activator | Relator | Responsibility
Phone: (512) 788 5403 – Cell: (512) 736-7897 (Austin)
Skype: rtgoodwin - Yahoo: richardtgoodwin
AIM: dellovision - IRC: goody / rgoodwin
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Sam Morrison
Hi Joe,

On 26/02/2013, at 11:19 AM, Joe Gordon j...@cloudscaling.com wrote:

 On Sun, Feb 24, 2013 at 3:31 PM, Sam Morrison sorri...@gmail.com wrote:
 I have been playing with the AggregateInstanceExtraSpecs filter and can't get 
 it to work.
 
 In our staging environment it works fine with 4 compute nodes, I have 2 
 aggregates to split them into 2.
 
 When I try to do the same in our production environment which has 80 compute 
 nodes (splitting them again into 2 aggregates) it doesn't work.
 
 nova-scheduler starts to go very slow,  I scheduled an instance and gave up 
 after 5 minutes, it seemed to be taking ages and the host was at 100% cpu. 
 Also got about 500 messages in rabbit that were unacknowledged.
 
 
 what does the nova-scheduler log say?  Where is the unacknowledged rabbitmq 
 messages sent from?

Logs are below. Note the large time gap between each host being filtered; this is 
pretty much instantaneous without this filter.

I can't figure out how to see an unacknowledged message in rabbit, but my guess is 
they are the compute service updates from all the compute nodes. These aren't 
being processed, and I think that's why the attempts to schedule further 
down are rejected with "is disabled or has not been heard from in a while".

Do you see anything that could be an issue? The flags we use for the scheduler are 
below as well:

Thanks for your help,
Sam


# Scheduler Flags
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
ram_allocation_ratio=1.0
cpu_allocation_ratio=0.92
reserved_host_memory_mb=1024
reserved_host_disk_mb=0
scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,RamFilter,CoreFilter,ComputeFilter
compute_fill_first_cost_fn_weight=1.0



2013-02-25 10:01:35 DEBUG nova.scheduler.filter_scheduler 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Attempting to build 1 
instance(s) schedule_run_instance /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:66
2013-02-25 10:01:35 DEBUG nova.scheduler.filters.retry_filter 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts: [].  
(host=qh2-rcc27) host_passes /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
2013-02-25 10:02:13 DEBUG nova.scheduler.host_manager 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter passes for 
qh2-rcc27 passes_filters 
/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:178
2013-02-25 10:02:13 DEBUG nova.scheduler.filters.retry_filter 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts: [].  
(host=qh2-rcc26) host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
2013-02-25 10:02:51 DEBUG nova.scheduler.host_manager 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter function bound 
method CoreFilter.host_passes of nova.scheduler.filters.core_filter.CoreFilter 
object at 0x43f7a50 failed for qh2-rcc26 passes_filters 
/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:175
2013-02-25 10:02:51 DEBUG nova.scheduler.filters.retry_filter 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts: [].  
(host=qh2-rcc25) host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
2013-02-25 10:03:28 DEBUG nova.scheduler.filters.compute_filter 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] host 'qh2-rcc25': 
free_ram_mb:71086 free_disk_mb:3035136 is disabled or has not been heard from 
in a while host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/compute_filter.py:37
2013-02-25 10:03:28 DEBUG nova.scheduler.host_manager 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter function bound 
method ComputeFilter.host_passes of 
nova.scheduler.filters.compute_filter.ComputeFilter object at 0x43f7210 
failed for qh2-rcc25 passes_filters 
/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:175
2013-02-25 10:03:28 DEBUG nova.scheduler.filters.retry_filter 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts: [].  
(host=qh2-rcc24) host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
2013-02-25 10:04:05 DEBUG nova.scheduler.filters.compute_filter 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] host 'qh2-rcc24': 
free_ram_mb:99758 free_disk_mb:3296256 is disabled or has not been heard from 
in a while host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/compute_filter.py:37
2013-02-25 10:04:05 DEBUG nova.scheduler.host_manager 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter function bound 
method ComputeFilter.host_passes of 
nova.scheduler.filters.compute_filter.ComputeFilter object at 0x43f7210 
failed for qh2-rcc24 passes_filters 
/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:175
2013-02-25 10:04:05 DEBUG nova.scheduler.filters.retry_filter 
[req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts: [].  
(host=qh2-rcc23) host_passes 

Re: [Openstack] [Keystone] my token generated by curling http://localhost:35357/v2.0/tokens is too long...

2013-02-25 Thread Kun Huang
{
access: {
metadata: {
is_admin: 0,
roles: [
acb42117c39945079ecf56bad7f441a1
]
},
serviceCatalog: [
{
endpoints: [
{
adminURL: 
http://localhost:8774/v1.1/718f6769f52442829cf1c57c1227d2d1;,
id: eeaec0b2f9af4ecbae58225f05f58a13,
internalURL: 
http://localhost:8774/v1.1/718f6769f52442829cf1c57c1227d2d1;,
publicURL: 
http://localhost:8774/v1.1/718f6769f52442829cf1c57c1227d2d1;,
region: RegionOne
}
],
endpoints_links: [],
name: nova,
type: compute
},
{
endpoints: [
{
adminURL: http://localhost:9292;,
id: 7fc9afcca89f44e68875b88e05681dea,
internalURL: http://localhost:9292;,
publicURL: http://localhost:9292;,
region: RegionOne
}
],
endpoints_links: [],
name: glance,
type: image
},
{
endpoints: [
{
adminURL: 
http://localhost:8776/v1/718f6769f52442829cf1c57c1227d2d1;,
id: d826fa55273d4ed8bed3b744628629cf,
internalURL: 
http://localhost:8776/v1/718f6769f52442829cf1c57c1227d2d1;,
publicURL: 
http://localhost:8776/v1/718f6769f52442829cf1c57c1227d2d1;,
region: RegionOne
}
],
endpoints_links: [],
name: volume,
type: volume
},
{
endpoints: [
{
adminURL: http://localhost:8773/services/Admin;,
id: 83dd85b5af21449ba5d5b9f530602f87,
internalURL: http://localhost:8773/services/Cloud
,
publicURL: http://localhost:8773/services/Cloud
,
region: RegionOne
}
],
endpoints_links: [],
name: ec2,
type: ec2
},
{
endpoints: [
{
adminURL: http://localhost:/v1;,
id: 674b345c9a9345978a74ed157f7f646d,
internalURL: 
http://localhost:/v1/AUTH_718f6769f52442829cf1c57c1227d2d1;,
publicURL: 
http://localhost:/v1/AUTH_718f6769f52442829cf1c57c1227d2d1;,
region: RegionOne
}
],
endpoints_links: [],
name: swift,
type: object-store
},
{
endpoints: [
{
adminURL: http://localhost:35357/v2.0;,
id: c7384f41b3b1487b9dda90577f46dda6,
internalURL: http://localhost:5000/v2.0;,
publicURL: http://localhost:5000/v2.0;,
region: RegionOne
}
],
endpoints_links: [],
name: keystone,
type: identity
}
],
token: {
expires: 2013-02-25T17:37:05Z,
id: e5aaef7b6bac4c908b8e299703e1747b,
issued_at: 2013-02-24T17:37:05.823113,
tenant: {
description: Default Tenant,
enabled: true,
id: 718f6769f52442829cf1c57c1227d2d1,
name: demo
}
},
user: {
id: b100d6f3371b4e7c952eaa019a216a93,
name: admin,
roles: [
{
name: admin
}
],
roles_links: [],
username: admin
}
}
}



The first one is in PKI format.
This one is in UUID format.
I only changed the token_format in keystone.conf.
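In other words, the only thing I touch is the [signing] section of keystone.conf
(the Grizzly default being PKI):

    [signing]
    token_format = UUID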



On Tue, Feb 26, 2013 at 6:21 AM, Dolph Mathews dolph.math...@gmail.com wrote:

 +1

 However, I'm curious as to what makes it too long, or what's not
 working. Can you provide an example?

 -Dolph


 On Sat, Feb 23, 2013 at 12:33 PM, Anne Gentle a...@openstack.org wrote:

 I believe this is due to a change in default for grizzly-- token_format
 defaults to PKI instead of UUID in the [signing] section of the
 keystone.conf configuration file.


 http://docs.openstack.org/trunk/openstack-compute/admin/content/certificates-for-pki.html

 Anne


 On Sat, Feb 23, 2013 at 12:27 PM, Kun Huang academicgar...@gmail.com wrote:

 Hi, all
 My token generated by curling 

Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Joe Gordon
On Mon, Feb 25, 2013 at 6:14 PM, Sam Morrison sorri...@gmail.com wrote:

 Hi Joe,

 On 26/02/2013, at 11:19 AM, Joe Gordon j...@cloudscaling.com wrote:

 On Sun, Feb 24, 2013 at 3:31 PM, Sam Morrison sorri...@gmail.com wrote:

 I have been playing with the AggregateInstanceExtraSpecs filter and can't
 get it to work.

 In our staging environment it works fine with 4 compute nodes, I have 2
 aggregates to split them into 2.

 When I try to do the same in our production environment which has 80
 compute nodes (splitting them again into 2 aggregates) it doesn't work.

 nova-scheduler starts to go very slow,  I scheduled an instance and gave
 up after 5 minutes, it seemed to be taking ages and the host was at 100%
 cpu. Also got about 500 messages in rabbit that were unacknowledged.


 what does the nova-scheduler log say?  Where is the unacknowledged
 rabbitmq messages sent from?


 Logs are below. Note the large time gap between selecting a host, this is
 pretty much instantaneous without this filter.

 Can't figure out how to see an unacknowledged message in rabbit but my
 guess is it is the compute service updates from all the compute nodes.
 These aren't happening and I think this is the reason that the attempts to
 schedule further down are rejected with is disabled or has not been heard
 from in a while

 Do you see anything that could be an issue? Flags we use for scheduler are
 below also:

 Thanks for your help,
 Sam


 # Scheduler Flags
 compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
 ram_allocation_ratio=1.0
 cpu_allocation_ratio=0.92
 reserved_host_memory_mb=1024
 reserved_host_disk_mb=0

 scheduler_default_filters=RetryFilter,AggregateInstanceExtraSpecsFilter,RamFilter,CoreFilter,ComputeFilter
 compute_fill_first_cost_fn_weight=1.0



 2013-02-25 10:01:35 DEBUG nova.scheduler.filter_scheduler
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Attempting to build 1
 instance(s) schedule_run_instance /usr/lib/python2.7/dist-packages/nova/sc
 heduler/filter_scheduler.py:66
 2013-02-25 10:01:35 DEBUG nova.scheduler.filters.retry_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts:
 [].  (host=qh2-rcc27) host_passes /usr/lib/python2.7/dist-packages/n
 ova/scheduler/filters/retry_filter.py:39
 2013-02-25 10:02:13 DEBUG nova.scheduler.host_manager
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter passes for
 qh2-rcc27 passes_filters
 /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:178
 2013-02-25 10:02:13 DEBUG nova.scheduler.filters.retry_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts:
 [].  (host=qh2-rcc26) host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
 2013-02-25 10:02:51 DEBUG nova.scheduler.host_manager
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter function
 bound method CoreFilter.host_passes of
 nova.scheduler.filters.core_filter.CoreFilter object at 0x43f7a50 failed
 for qh2-rcc26 passes_filters
 /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:175
 2013-02-25 10:02:51 DEBUG nova.scheduler.filters.retry_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts:
 [].  (host=qh2-rcc25) host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
 2013-02-25 10:03:28 DEBUG nova.scheduler.filters.compute_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] host 'qh2-rcc25':
 free_ram_mb:71086 free_disk_mb:3035136 is disabled or has not been heard
 from in a while host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/compute_filter.py:37
 2013-02-25 10:03:28 DEBUG nova.scheduler.host_manager
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter function
 bound method ComputeFilter.host_passes of
 nova.scheduler.filters.compute_filter.ComputeFilter object at 0x43f7210
 failed for qh2-rcc25 passes_filters
 /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:175
 2013-02-25 10:03:28 DEBUG nova.scheduler.filters.retry_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Previously tried hosts:
 [].  (host=qh2-rcc24) host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
 2013-02-25 10:04:05 DEBUG nova.scheduler.filters.compute_filter
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] host 'qh2-rcc24':
 free_ram_mb:99758 free_disk_mb:3296256 is disabled or has not been heard
 from in a while host_passes
 /usr/lib/python2.7/dist-packages/nova/scheduler/filters/compute_filter.py:37
 2013-02-25 10:04:05 DEBUG nova.scheduler.host_manager
 [req-d7c77ff6-353a-409a-b32c-68627c1d1bb0 25 23] Host filter function
 bound method ComputeFilter.host_passes of
 nova.scheduler.filters.compute_filter.ComputeFilter object at 0x43f7210
 failed for qh2-rcc24 passes_filters
 /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:175
 2013-02-25 10:04:05 DEBUG nova.scheduler.filters.retry_filter
 

Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Sam Morrison
Hi Joe,

On 26/02/2013, at 1:39 PM, Joe Gordon j...@cloudscaling.com wrote:

 
 
 On Mon, Feb 25, 2013 at 6:14 PM, Sam Morrison sorri...@gmail.com wrote:
 Hi Joe,
 
 On 26/02/2013, at 11:19 AM, Joe Gordon j...@cloudscaling.com wrote:
 
 On Sun, Feb 24, 2013 at 3:31 PM, Sam Morrison sorri...@gmail.com wrote:
 I have been playing with the AggregateInstanceExtraSpecs filter and can't 
 get it to work.
 
 In our staging environment it works fine with 4 compute nodes, I have 2 
 aggregates to split them into 2.
 
 When I try to do the same in our production environment which has 80 compute 
 nodes (splitting them again into 2 aggregates) it doesn't work.
 
 nova-scheduler starts to go very slow,  I scheduled an instance and gave up 
 after 5 minutes, it seemed to be taking ages and the host was at 100% cpu. 
 Also got about 500 messages in rabbit that were unacknowledged.
 
 
 what does the nova-scheduler log say?  Where is the unacknowledged rabbitmq 
 messages sent from?
 
 Logs are below. Note the large time gap between selecting a host, this is 
 pretty much instantaneous without this filter.
 
 Can't figure out how to see an unacknowledged message in rabbit but my guess 
 is it is the compute service updates from all the compute nodes. These aren't 
 happening and I think this is the reason that the attempts to schedule 
 further down are rejected with is disabled or has not been heard from in a 
 while
 
 Do you see anything that could be an issue? Flags we use for scheduler are 
 below also:
 
 Thanks for your help,
 Sam
 
 
 It looks like the scheduler issues are related to the rabbitmq issues.   
 host 'qh2-rcc77' ... is disabled or has not been heard from in a while
 
 What does 'nova host-list' say?   the clocks must all be synced up?
  

Yeah, all the clocks are synced up fine. Doing a nova-manage service list gives 
me all ':-)' and the updated_at times are correct.

We only have one nova-scheduler. It gets locked up and runs at 100% CPU. 
nova-scheduler seems to take the compute service updates off the queue while 
this is happening, but it doesn't ack them and, going by the logs, doesn't process 
them. This is why I suspect the hosts are eventually being rejected with a "not 
been heard from in a while" message.
I believe this is just a symptom, though; the real issue is nova-scheduler locking 
up. It seems to take 30-60 seconds to process each host to determine whether 
it passes the filters.

Does that make sense? Any other ideas on how to debug?

Cheers,
Sam








___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Chris Behrens

On Feb 25, 2013, at 6:39 PM, Joe Gordon j...@cloudscaling.com wrote:

 
 It looks like the scheduler issues are related to the rabbitmq issues.   
 host 'qh2-rcc77' ... is disabled or has not been heard from in a while
 
 What does 'nova host-list' say?   the clocks must all be synced up?

Good things to check.  It feels like something is spinning way too much within 
this filter, though.  This can also cause the above message.  The scheduler 
pulls all of the records before it starts filtering… and if there's a huge 
delay somewhere, it can start seeing a bunch of hosts as disabled.

The filter doesn't look like a problem.. unless there's a large amount of 
aggregate metadata… and/or a large amount of key/values for the instance_type's 
extra specs.   There *is* a DB call in the filter.  If that's blocking for an 
extended period of time, the whole process is blocked…  But I suspect by the 
'100% cpu' comment, that this is not the case…  So the only thing I can think 
of is that it returns a tremendous amount of metadata.

Adding some extra logging in the filter could be useful.
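
Even something as crude as timing that DB call would narrow it down. A rough,
untested sketch against the Folsom-era filter layout (adjust imports and the
surrounding code to whatever your tree actually has):

    # nova/scheduler/filters/aggregate_instance_extra_specs.py (sketch only,
    # host_passes is a method of AggregateInstanceExtraSpecsFilter)
    import time

    from nova import db
    from nova.openstack.common import log as logging

    LOG = logging.getLogger(__name__)

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type')
        if 'extra_specs' not in instance_type:
            return True
        context = filter_properties['context'].elevated()

        start = time.time()
        metadata = db.aggregate_metadata_get_by_host(context, host_state.host)
        LOG.debug("aggregate_metadata_get_by_host(%s): %d keys in %.1fs",
                  host_state.host, len(metadata), time.time() - start)

        # ... the existing extra_specs comparison carries on from here ...

If your build has the sql_connection_debug option in nova.conf, turning that up
would also show the raw SQL that call emits.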

- Chris



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Keystone]Question: Assignment of default role

2013-02-25 Thread Leo Toyoda
Hi Dolph
 
Thanks a lot for the reply.
Now I understand it very well.
 
Regards,
Leo Toyoda
 



  _  

From: Dolph Mathews [mailto:dolph.math...@gmail.com] 
Sent: Tuesday, February 26, 2013 7:11 AM
To: Leo Toyoda
Cc: Adam Young; openstack
Subject: Re: [Openstack] [Keystone]Question: Assignment of default role


Yes, those are the two use cases we're supporting, although I'd encourage Case 
2, as it's generally much more intuitive.


-Dolph


On Mon, Feb 25, 2013 at 1:54 AM, Leo Toyoda toyoda-...@cnt.mxw.nes.nec.co.jp 
wrote:


Hi Adam

Thanks a lot for your answer.

It is my understanding follows. Would that be OK with you?
Case1: Create a user *with* specifying the tenant.
* Default role is assigned.
* I need to assign the required roles in keystone user-role-add.
* The user has two roles.

Case2: Create a user *without* specifying the tenant.
* I need to assign the required roles and the tenant in keystone 
user-role-add.
* The user has one role.

Thanks,
Leo Toyoda



 -Original Message-
 From:
 openstack-bounces+toyoda-reo=cnt.mxw.nes.nec.co.jp@lists.launc
 hpad.net
 [mailto:openstack-bounces+toyoda-reo mailto:openstack-bounces%2Btoyoda-reo 
 =cnt.mxw.nes.nec.co.jp@lis
 ts.launchpad.net] On Behalf Of Adam Young
 Sent: Saturday, February 23, 2013 5:31 AM
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] [Keystone]Question: Assignment of
 default role

 Yes, this is new.  We are removing the direct associtation
 between users and projects (Project members) and replacing it
 with a Role (_member_)

 The _ is there to ensure it does not conflict with existing roles.

 The two different ways of associating users to projects was
 causing problems.  With RBAC, we can now enforce policy about
 project membership that we could not do before.





 On 02/21/2013 09:39 PM, Leo Toyoda wrote:
  Hi, everyone
 
  I'm using the master branch devstack.
  I hava a question about assignment of default role (Keystone).
 
  When I create a user to specify the tenant, '_member_' is
 assigned to the roles.
  $ keystone user-create --name test --tenant-id e61..7f6 --pass test
  --email t...@example.com
  +--+---+
  | Property |  Value|
  +--+---+
  |  email   | te...@example.com |
  | enabled  |   True|
  |id| af1..8d2  |
  |   name   |   test|
  | tenantId | e61..7f6  |
  +--+---+
  $ keystone user-role-list --user test --tenant e61..7f6
  +--+--+--+---+
  |id|   name   | user_id  | tenant_id |
  +--+--+--+---+
  | 9fe..bab | _member_ | af1..8d2 | e61..7f6  |
  +--+--+--+---+
 
  Then, assign the Member role to the user.
  Hitting assigned two roles of 'Member' and '_member_'.
  $ keystone user-role-add --user af1..8d2 --role 57d..d1f --tenant
  e61..7f6 $ keystone user-role-list --user af1..8d2 --tenant e61..7f6
  +--+--+--+---+
  |id|   name   | user_id  | tenant_id |
  +--+--+--+---+
  | 57d..d1f |  Member  | af1..8d2 | e61..7f6  | 9fe..bab |
 _member_  |
  | af1..8d2 | e61..7f6  |
  +--+--+--+---+
 
  When I create a user without specifying a tenant, I assign
 'Member' role.
  In this case, Only one role is assigned.
  $ keystone user-create --name test2 --pass test --email
  te...@example.com
  +--+---+
  | Property |  Value|
  +--+---+
  |  email   | te...@example.com |
  | enabled  |  True |
  |id|c22..a6d   |
  |   name   |  test2|
  | tenantId |   |
  +--+---+
  $ keystone user-role-add --user c22..a6d --role 57d..d1f  --tenant
  e61..7f6 $ keystone user-role-list --user c22..a6d --tenant e61..7f6
  +--+--+--+---+
  |id|   name   | user_id  | tenant_id |
  +--+--+--+---+
  | 57d..d1f |  Member  | c22..a6d | e61..7f6  |
  +--+--+--+---+
 
  Is it expected behavior that two rolls are assigned?
 
  Thanks
  Leo Toyoda
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : 

[Openstack] Huge memory consumption of qpid server

2013-02-25 Thread Yufang Zhang
Hi all,

I use the qpid server as the message queue in OpenStack. After the cluster had been
running for a month, I found that the qpid server had consumed 400 MB of memory.
Considering the cluster has only 10 nodes, things will get worse as more nodes are
added to the cluster. No memory leaks were found when I used valgrind
to check the qpid server.

So, is it reasonable that the qpid server consumes so much memory working with
the nova services? Is there any suggestion or workaround for this issue?
Thanks.

Yufang
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Sam Morrison

On 26/02/2013, at 2:15 PM, Chris Behrens cbehr...@codestud.com wrote:

 
 On Feb 25, 2013, at 6:39 PM, Joe Gordon j...@cloudscaling.com wrote:
 
 
 It looks like the scheduler issues are related to the rabbitmq issues.   
 host 'qh2-rcc77' ... is disabled or has not been heard from in a while
 
 What does 'nova host-list' say?   the clocks must all be synced up?
 
 Good things to check.  It feels like something is spinning way too much 
 within this filter, though.  This can also cause the above message.  The 
 scheduler pulls all of the records before it starts filtering… and if there's 
 a huge delay somewhere, it can start seeing a bunch of hosts as disabled.
 
 The filter doesn't look like a problem.. unless there's a large amount of 
 aggregate metadata… and/or a large amount of key/values for the 
 instance_type's extra specs.   There *is* a DB call in the filter.  If that's 
 blocking for an extended period of time, the whole process is blocked…  But I 
 suspect by the '100% cpu' comment, that this is not the case…  So the only 
 thing I can think of is that it returns a tremendous amount of metadata.
 
 Adding some extra logging in the filter could be useful.
 
 - Chris

Thanks Chris. I have 2 aggregates and 2 keys defined, and each of the 80 hosts 
has one or the other. At the moment every flavour has one or the 
other too, so I don't think it's too much data. 

I've tracked it down to this call:

metadata = db.aggregate_metadata_get_by_host(context, host_state.host)

It's taking forever to complete. Having a look into that code to see why, 
there is a nested for loop in there, so my guess is it's something to do with that, 
although there is hardly any data in our aggregates tables so I can't see it 
taking that long.

Cheers,
Sam



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Huge memory consumption of qpid server

2013-02-25 Thread Victor Palma
Can you provide more context around your startup commands? What is your xms 
set to, and how much memory have you allocated? What is your buffer size on the 
broker and client?

Additionally, what makes you think it's consuming so much memory?
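
If you're not sure where to find those, on a stock RHEL/CentOS packaging (paths and
package names assumed; adjust to your distro) something like this will gather the basics:

    cat /etc/qpidd.conf            # broker options, if any were changed from the defaults
    ps -o rss,vsz,cmd -C qpidd     # resident vs. virtual memory of the broker process
    qpid-stat -q localhost:5672    # per-queue depths/counts (from the qpid-tools package)
    qpid-stat -c localhost:5672    # open connections; roughly one per nova service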

Regards,
Victor Palma


On Feb 25, 2013, at 10:01 PM, Yufang Zhang yufang521...@gmail.com wrote:

 Hi all,
 
 I use qpid server as message queue in openstack. After the cluster running 
 for a month, I find the qpid server has consumed 400M memory. Considering the 
 cluster has only 10 nodes, things would be worse as more nodes are being 
 added into cluster. No memory leaks were found when I used valgrind to check 
 the qpid server. 
 
 So is this reasonable that qpid server comsumes so much memory working with 
 nova services? Is there any suggestion or workaround for this issue?  Thanks.
 
 Yufang 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Comparing OpenStack to OpenNebula

2013-02-25 Thread Vishvananda Ishaya
If you set:

enable_new_services=False

in your nova.conf, all new services will be disabled by default and the
scheduler won't start scheduling instances until you explicitly enable them.
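
A newly PXE-booted compute node will still register itself, but it stays disabled
until an operator turns it on; something along these lines (the exact nova-manage
syntax varies a little between releases, so check the help output, and the host
name here is hypothetical):

    # nova.conf on the controller/scheduler hosts
    enable_new_services=False

    # once the new node has been vetted:
    nova-manage service list
    nova-manage service enable --host=compute-01 --service=nova-compute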

Vish

On Feb 25, 2013, at 2:46 PM, Shawn Starr shawn.st...@rogers.com wrote:

 On Monday, February 25, 2013 10:34:11 PM Jeremy Stanley wrote:
 On 2013-02-25 06:20 -0500 (-0500), Shawn Starr wrote:
 [...]
 
 I see no options on how to control what nova-compute nodes can be
 'provisioned' into an OpenStack cloud, I'd consider that a
 security risk (potentially) if any computer could just register to
 become a nova-compute?
 
 [...]
 
 On 2013-02-25 11:42:47 -0500 (-0500), Shawn Starr wrote:
 I was hoping in future we could have a mechanism via mac address
 to restrict which hypervisor/nova-computes are able to join the
 cluster.
 
 [...]
 
 It bears mention that restricting by MAC is fairly pointless as
 security protections go. There are a number of tricks an adversary
 can play to rewrite the system's MAC address or otherwise
 impersonate other systems at layer 2. Even filtering by IP address
 doesn't provide you much protection if there are malicious actors
 within your local broadcast domain, but at least there disabling
 learning on switches or implementing 802.1x can buy some relief.
 
 Extending the use of MAC address references from the local broadcast
 domain where they're intended to be relevant up into the application
 layer (possibly across multiple routed hops well away from their
 original domain of control) makes them even less effective of a
 system identifier from a security perspective.
 
 Hi Jeremy,
 
 Of course, one can modify/spoof the MAC address and or assign themselves an 
 IP. It is more so that new machines aren't immediately added to the cluster 
 and start launching VM instances without explicitly being enabled to do so. 
 In 
 this case, I am not concerned about impersonators on the network trying to 
 join the cluster.
 
 Thanks,
 Shawn
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Chris Behrens
After thinking more, it does seem like we're doing something wrong if the query 
itself is returning 300k rows. :)  I can take a better look at it in front of 
the computer later if no one beats me to it.
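
The duplicated joins in the query Sam posted below look like the smoking gun:
aggregate_hosts and aggregate_metadata each get joined twice and then outer-joined
again, so the copies multiply against each other, and 80 hosts plus a couple of
metadata rows can easily balloon into ~300k result rows. What the filter actually
needs is closer to this (illustrative SQL only, not what the SQLAlchemy code
currently emits):

    SELECT a.id, am.`key`, am.value
    FROM aggregates a
    JOIN aggregate_hosts ah ON ah.aggregate_id = a.id AND ah.deleted = 0
    JOIN aggregate_metadata am ON am.aggregate_id = a.id AND am.deleted = 0
    WHERE a.deleted = 0 AND ah.host = 'qh2-rcc34';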

On Feb 25, 2013, at 9:28 PM, Chris Behrens cbehr...@codestud.com wrote:

 Replying from my phone, so I can't look, but I wonder if we have an index 
 missing.
 
 On Feb 25, 2013, at 8:54 PM, Sam Morrison sorri...@gmail.com wrote:
 
 On Tue, Feb 26, 2013 at 3:15 PM, Sam Morrison sorri...@gmail.com wrote:
 
 On 26/02/2013, at 2:15 PM, Chris Behrens cbehr...@codestud.com wrote:
 
 
 On Feb 25, 2013, at 6:39 PM, Joe Gordon j...@cloudscaling.com wrote:
 
 
 It looks like the scheduler issues are related to the rabbitmq issues.   
 host 'qh2-rcc77' ... is disabled or has not been heard from in a while
 
 What does 'nova host-list' say?   the clocks must all be synced up?
 
 Good things to check.  It feels like something is spinning way too much 
 within this filter, though.  This can also cause the above message.  The 
 scheduler pulls all of the records before it starts filtering… and if 
 there's a huge delay somewhere, it can start seeing a bunch of hosts as 
 disabled.
 
 The filter doesn't look like a problem.. unless there's a large amount of 
 aggregate metadata… and/or a large amount of key/values for the 
 instance_type's extra specs.   There *is* a DB call in the filter.  If 
 that's blocking for an extended period of time, the whole process is 
 blocked…  But I suspect by the '100% cpu' comment, that this is not the 
 case…  So the only thing I can think of is that it returns a tremendous 
 amount of metadata.
 
 Adding some extra logging in the filter could be useful.
 
 - Chris
 
 Thanks Chris, I have 2 aggregates and 2 keys defined and each of the 80 
 hosts has either one or the other. At the moment every flavour has either 
 one or the other too so I don't think it's too much data.
 
 I've tracked it down to this call:
 
 metadata = db.aggregate_metadata_get_by_host(context, host_state.host)
 
 More debugging has got it down to a query
 
 In db.api.aggregate_metadata_get_by_host:
 
    query = model_query(context, models.Aggregate).join(
    "_hosts").filter(models.AggregateHost.host == host).join(
    "_metadata")
  ..
  rows = query.all()
 
 With query debug on this resolves to:
 
 SELECT aggregates.created_at AS aggregates_created_at,
 aggregates.updated_at AS aggregates_updated_at, aggregates.deleted_at
 AS aggregates_deleted_at, aggregates.deleted AS aggregates_deleted,
 aggregates.id AS aggregates_id, aggregates.name AS aggregates_name,
 aggregates.availability_zone AS aggregates_availability_zone,
 aggregate_hosts_1.created_at AS aggregate_hosts_1_created_at,
 aggregate_hosts_1.updated_at AS aggregate_hosts_1_updated_at,
 aggregate_hosts_1.deleted_at AS aggregate_hosts_1_deleted_at,
 aggregate_hosts_1.deleted AS aggregate_hosts_1_deleted,
 aggregate_hosts_1.id AS aggregate_hosts_1_id, aggregate_hosts_1.host
 AS aggregate_hosts_1_host, aggregate_hosts_1.aggregate_id AS
 aggregate_hosts_1_aggregate_id FROM aggregates INNER JOIN
 aggregate_hosts AS aggregate_hosts_2 ON aggregates.id =
 aggregate_hosts_2.aggregate_id AND aggregate_hosts_2.deleted = 0 AND
 aggregates.deleted = 0 INNER JOIN aggregate_hosts ON
 aggregate_hosts.aggregate_id = aggregates.id AND
 aggregate_hosts.deleted = 0 AND aggregates.deleted = 0 INNER JOIN
 aggregate_metadata AS aggregate_metadata_1 ON aggregates.id =
 aggregate_metadata_1.aggregate_id AND aggregate_metadata_1.deleted = 0
 AND aggregates.deleted = 0 INNER JOIN aggregate_metadata ON
 aggregate_metadata.aggregate_id = aggregates.id AND
 aggregate_metadata.deleted = 0 AND aggregates.deleted = 0 LEFT OUTER
 JOIN aggregate_hosts AS aggregate_hosts_3 ON aggregates.id =
 aggregate_hosts_3.aggregate_id AND aggregate_hosts_3.deleted = 0 AND
 aggregates.deleted = 0 LEFT OUTER JOIN aggregate_hosts AS
 aggregate_hosts_1 ON aggregate_hosts_1.aggregate_id = aggregates.id
 AND aggregate_hosts_1.deleted = 0 AND aggregates.deleted = 0 WHERE
 aggregates.deleted = 0 AND aggregate_hosts.host = 'qh2-rcc34';
 
 Which in our case returns 328509 rows in set (25.97 sec)
 
 Seems a bit off considering there are 80 rows in aggregate_hosts, 2
 rows in aggregates and 2 rows in aggregate_metadata
 
  In the code, rows ends up with only 1 element, so sqlalchemy seems to be collapsing
  the duplicates internally? I don't know too much about how sqlalchemy works.
 
 Seems like a bug to me? or maybe our database has something wrong in it?
 
 Cheers,
 Sam

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum] Metadata service route from a VM

2013-02-25 Thread Dan Wendlandt
Hi Sylvain,

The answer here is that it depends.

If you are using Folsom + Quantum, the only supported mechanism for reaching
the metadata server is via your default gateway, so VMs should not have
specific routes to the metadata subnet (I believe this is also the
case for nova-network, so I'm a bit surprised by your original comments in
this thread about using the direct route with nova-network).

In Grizzly, Quantum will support two different mechanisms for reaching
metadata: one via the router (as before) and another via the DHCP server
IP (with a route for 169.254.169.254/32 injected into the VM via DHCP).
The latter supports metadata on networks that do not have a router
provided by Quantum.
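
One more data point: the 169.254.0.0/16 route itself usually comes from the guest
distro's zeroconf logic rather than from OpenStack. On RHEL-family images (which is
what your route output looks like) I believe you can turn it off once in the image
instead of deleting the route at boot:

    # /etc/sysconfig/network inside the guest image
    NOZEROCONF=yes

Debian/Ubuntu cloud images generally don't add that route unless a zeroconf/avahi
package does.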

Dan

On Mon, Feb 25, 2013 at 8:36 AM, Sylvain Bauza
sylvain.ba...@digimind.com wrote:

 Yet no reply ?

 I did the hack and removed the 169.254.0.0/16 route from my images, but
 it is quite an ugly hack.
 Could someone with OpenVswitch/GRE setup please confirm that there is no
 route to create for metadata ?

 Thanks,
 -Sylvain

 On 21/02/2013 11:33, Sylvain Bauza wrote:

  Anyone ?
 I found the reason why a 'quantum-dhcp-agent restart' is fixing the
 route, this is because the lease is DHCPNACK'd at next client refresh and
 the VM gets a fresh new configuration excluding the 169.254.0.0/16 route.

 Community, I beg you to confirm the 169.254.0.0/16 route should *not* be
 pushed to VMs, and 169.254.169.254/32 should be sent thru the default
 route (ie. provider router internal IP).
 If it's the case, I'll update all my images to remove that route. If not,
 something is wrong with my Quantum setup that I should fix.

 Thanks,
 -Sylvain

 On 20/02/2013 15:55, Sylvain Bauza wrote:

 Hi,

 Previously using nova-network, all my VMs were having :
  # route -n
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
 169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
 0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 eth0

 Now, this setup seems incorrect with Quantum, as the ARP query goes
 directly from the network node trying to resolve 169.254.169.254 :
 [root@toto ~]# curl http://169.254.169.254/
 curl: (7) couldn't connect to host

 sylvain@folsom02:~$ sudo tcpdump -i qr-f76e4668-fa -nn not ip6 and not
 udp and host 169.254.169.254 -e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-f76e4668-fa, link-type EN10MB (Ethernet), capture size
 65535 bytes
 15:47:46.009548 fa:16:3e:bf:0b:f6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 169.254.169.254 tell 10.0.0.5, length 28
 15:47:47.009076 fa:16:3e:bf:0b:f6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 169.254.169.254 tell 10.0.0.5, length 28

 The only way for me to fix it is to remove the 169.254.0.0/16 route on
 the VM (or for some reason I doesn't understand, by restarting
 quantum-dhcp-agent on the network node) and then L3 routing is working
 correctly :

 [root@toto ~]# route del -net 169.254.0.0/16
 [root@toto ~]# curl http://169.254.169.254/
 1.0
 2007-01-19
 2007-03-01
 2007-08-29
 2007-10-10
 2007-12-15
 2008-02-01
 2008-09-01
 2009-04-04

 sylvain@folsom02:~$ sudo tcpdump -i qg-f2397006-20 -nn not ip6 and not
 udp and host 10.0.0.5 and not port 22 -e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qg-f2397006-20, link-type EN10MB (Ethernet), capture size
 65535 bytes
 15:52:58.479234 fa:16:3e:e1:95:20 > e0:46:9a:2c:f4:7d, ethertype IPv4 (0x0800), length 74: 10.0.0.5.55428 > 192.168.1.71.8775: Flags [S], seq 3032859044, win 14600, options [mss 1460,sackOK,TS val 2548891 ecr 0,nop,wscale 5], length 0
 15:52:58.480987 e0:46:9a:2c:f4:7d > fa:16:3e:e1:95:20, ethertype IPv4 (0x0800), length 74: 192.168.1.71.8775 > 10.0.0.5.55428: Flags [S.], seq 3888257357, ack 3032859045, win 14480, options [mss 1460,sackOK,TS val 16404712 ecr 2548891,nop,wscale 7], length 0
 15:52:58.482211 fa:16:3e:e1:95:20 > e0:46:9a:2c:f4:7d, ethertype IPv4 (0x0800), length 66: 10.0.0.5.55428 > 192.168.1.71.8775: Flags [.], ack 1, win 457, options [nop,nop,TS val 2548895 ecr 16404712], length 0


 I can't understand what's wrong with my setup. Could you help me? Otherwise I
 would have to add a post-up statement to all my images... :(

 Thanks,
 -Sylvain




 _______________________________________________
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
Mailing list: 

Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-25 Thread Sam Morrison

On 26/02/2013, at 4:31 PM, Chris Behrens cbehr...@codestud.com wrote:

 After thinking more, it does seem like we're doing something wrong if the 
 query itself is returning 300k rows. :)  I can take a better look at it in 
 front of the computer later if no one beats me to it.

Yeah I think it's more than a missing index :-)

The query does 2 INNER JOINS on aggregate_hosts then 2 INNER JOINS on 
aggregate_metadata then does a further 2 LEFT OUTER JOINS on aggregate_hosts.
Thanks for the help,
Sam
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] LBaas :: Service Agent :: Drivers

2013-02-25 Thread Dan Wendlandt
Hi Trinath,

This review is no longer the active review for LBaaS within Quantum for
Grizzly.  Instead, we are going with a simplified approach, here:
https://review.openstack.org/#/c/22794/3

dan

On Sun, Feb 24, 2013 at 10:17 PM, Trinath Somanchi 
trinath.soman...@gmail.com wrote:

 Hi Stackers-

 While going through the code base at

 http://review.openstack.org/#/c/20579/

 I have a doubt with respect to the understanding of Drivers

 Can any one kindly help me understand the Concept of Drivers in the
 Service Agent functionality. What is the role of Drivers? Where do these
 drivers run, in Controller or the Compute node ?

 Thanks in advance, Kindly help me understand the same.,

 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] LBaas :: Service Agent :: Drivers

2013-02-25 Thread Trinath Somanchi
Hi Dan-

Thanks a lot for the update.

As I understand it, let me summarize the LBaaS driver and agent in simple
terms:

[1] The service agent is a generic implementation supporting many drivers.
[2] Drivers are the backend implementations of the services, e.g. HAProxy for
the load balancer service.

What more functionality can a Service_Plugin-Agent-Driver architecture
deliver? (I think I'm asking a very basic question.)

Am I on the right path? Kindly help me understand the same.
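
The mental model I have is roughly the following (purely illustrative pseudo-code,
not the actual Quantum LBaaS interface):

    # Illustrative only: the generic plugin -> agent -> driver split.
    class LoadBalancerDriver(object):
        """What the agent calls into; one subclass per backend technology."""

        def create_pool(self, pool):
            raise NotImplementedError

        def create_vip(self, vip):
            raise NotImplementedError

    class HaproxyDriver(LoadBalancerDriver):
        """Turns the generic API objects into haproxy configuration."""

        def create_pool(self, pool):
            # write an haproxy backend stanza and reload haproxy
            pass

        def create_vip(self, vip):
            # write an haproxy frontend stanza bound to the VIP address
            pass

    class LbaasAgent(object):
        """Generic agent: receives calls from the service plugin over RPC
        and hands them to whichever driver is configured."""

        def __init__(self, driver):
            self.driver = driver

        def create_pool(self, context, pool):
            self.driver.create_pool(pool)

So the plugin only knows about the agent, the agent only knows the driver
interface, and the driver is the only piece that knows about HAProxy (or any
other backend). Is that the right way to think about it?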

Thanking you once again...


On Tue, Feb 26, 2013 at 11:09 AM, Dan Wendlandt d...@nicira.com wrote:

 Hi Trinath,

 This review is no longer the active review for LBaaS within Quantum for
 Grizzly.  Instead, we are going with a simplified approach, here:
 https://review.openstack.org/#/c/22794/3

 dan

 On Sun, Feb 24, 2013 at 10:17 PM, Trinath Somanchi 
 trinath.soman...@gmail.com wrote:

 Hi Stackers-

 While going through the code base at

 http://review.openstack.org/#/c/20579/

 I have a doubt with respect to the understanding of Drivers

 Can any one kindly help me understand the Concept of Drivers in the
 Service Agent functionality. What is the role of Drivers? Where do these
 drivers run, in Controller or the Compute node ?

 Thanks in advance, Kindly help me understand the same.,

 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




 --
 ~~~
 Dan Wendlandt
 Nicira, Inc: www.nicira.com
 twitter: danwendlandt
 ~~~




-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Huge memory consumption of qpid server

2013-02-25 Thread Yufang Zhang
2013/2/26 Victor Palma palma.vic...@gmail.com

 Can you provide more context around your start up commands? What is your
 xms set you, how much memory have you allocated? What is your buffer size
 on the broker and client?


Sorry, I have limited knowledge about qpid. I just installed the qpid server via
yum and started the qpid server service. Could you please point out how I can
get these configs, so that I can paste them to the mailing list?



 Additionally what makes you think it's consuming so much memory?


In my deployment I leave just 2 GB of memory for the host; all the remaining memory
is given to instances, so 400 MB for the qpid server is a bit dangerous.
I have to restart the qpid server every week to avoid the OOM killer. And I find that
the memory consumption of the qpid server in OpenStack depends on the number of nodes,
so it would become a serious problem as more nodes are added to the cluster.


 Regards,
 Victor Palma


 On Feb 25, 2013, at 10:01 PM, Yufang Zhang yufang521...@gmail.com wrote:

  Hi all,
 
  I use qpid server as message queue in openstack. After the cluster
 running for a month, I find the qpid server has consumed 400M memory.
 Considering the cluster has only 10 nodes, things would be worse as more
 nodes are being added into cluster. No memory leaks were found when I used
 valgrind to check the qpid server.
 
  So is this reasonable that qpid server comsumes so much memory working
 with nova services? Is there any suggestion or workaround for this issue?
  Thanks.
 
  Yufang
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_ceilometer_trunk #103

2013-02-25 Thread openstack-testing-bot
Title: precise_grizzly_ceilometer_trunk
General Information
Result: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_ceilometer_trunk/103/
Project: precise_grizzly_ceilometer_trunk
Date of build: Mon, 25 Feb 2013 10:01:18 -0500
Build duration: 2 min 43 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
- Imported Translations from Transifex (by Jenkins): edit ceilometer/locale/ceilometer.pot
- Make sure that the period is returned as an int as the api expects an int. (by asalkeld): edit tests/storage/base.py, edit ceilometer/storage/impl_sqlalchemy.py, edit ceilometer/storage/impl_mongodb.py

Console Output
[...truncated 2458 lines...]
Install-Time: 0
Job: ceilometer_2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1.dsc
Machine Architecture: amd64
Package: ceilometer
Package-Time: 0
Source-Version: 2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1
Space: 0
Status: failed
Version: 2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1
Finished at 20130225-1003
Build needed 00:00:00, 0k disc space
E: Package build dependencies not satisfied; skipping
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1.dsc']' returned non-zero exit status 3
ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1.dsc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/ceilometer/grizzly /tmp/tmpPjoFu4/ceilometer
mk-build-deps -i -r -t apt-get -y /tmp/tmpPjoFu4/ceilometer/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 9335d81316d2f136ac6cd9aa0be5a45887abbf2c..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly --force
dch -b -D precise --newversion 2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [0cdd947] Make sure that the period is returned as an int as the api expects an int.
dch -a [70003c9] Imported Translations from Transifex
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC ceilometer_2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A ceilometer_2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a4.g2d8f7c1+git201302251001~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_ceilometer_trunk #104

2013-02-25 Thread openstack-testing-bot
Title: precise_grizzly_ceilometer_trunk
General Information
Result: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_ceilometer_trunk/104/
Project: precise_grizzly_ceilometer_trunk
Date of build: Mon, 25 Feb 2013 10:31:09 -0500
Build duration: 2 min 46 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 2 out of the last 5 builds failed. Score: 60

Changes
- Remove compat cfg wrapper (by markmc): edit tools/pip-requires, edit ceilometer/storage/sqlalchemy/session.py, edit ceilometer/storage/sqlalchemy/models.py, delete ceilometer/openstack/common/cfg.py

Console Output
[...truncated 2460 lines...]
Job: ceilometer_2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1.dsc
Machine Architecture: amd64
Package: ceilometer
Package-Time: 0
Source-Version: 2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1
Space: 0
Status: failed
Version: 2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1
Finished at 20130225-1033
Build needed 00:00:00, 0k disc space
E: Package build dependencies not satisfied; skipping
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1.dsc']' returned non-zero exit status 3
ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1.dsc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/ceilometer/grizzly /tmp/tmp4BCnHa/ceilometer
mk-build-deps -i -r -t apt-get -y /tmp/tmp4BCnHa/ceilometer/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 9335d81316d2f136ac6cd9aa0be5a45887abbf2c..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly --force
dch -b -D precise --newversion 2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [0cdd947] Make sure that the period is returned as an int as the api expects an int.
dch -a [70003c9] Imported Translations from Transifex
dch -a [df5ac5b] Remove compat cfg wrapper
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC ceilometer_2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A ceilometer_2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a6.g0ccc81e+git201302251031~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_nova_trunk #769

2013-02-25 Thread openstack-testing-bot
Title: raring_grizzly_nova_trunk
General Information
Result: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/769/
Project: raring_grizzly_nova_trunk
Date of build: Mon, 25 Feb 2013 11:31:42 -0500
Build duration: 12 min
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
- Clean unused kernels and ramdisks from image cache (by markmc): edit nova/tests/test_imagecache.py, edit nova/virt/libvirt/driver.py, edit nova/virt/libvirt/imagecache.py

Console Output
[...truncated 12687 lines...]
Finished at 20130225-1144
Build needed 00:08:30, 139704k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1.a4667.g6053ca1+git201302251132~raring-0ubuntu1.dsc']' returned non-zero exit status 2
ERROR:root:Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1.a4667.g6053ca1+git201302251132~raring-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/nova/grizzly /tmp/tmpNhfMlZ/nova
mk-build-deps -i -r -t apt-get -y /tmp/tmpNhfMlZ/nova/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 5997c4e21773bf44a6033bb43f1628696324213f..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/nova/raring-grizzly --force
dch -b -D raring --newversion 2013.1.a4667.g6053ca1+git201302251132~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [38997fc] Clean unused kernels and ramdisks from image cache
dch -a [f4f6464] Ensure macs can be serialized.
dch -a [b01923c] Prevent default security group deletion.
dch -a [2763749] libvirt: lxml behavior breaks version check.
dch -a [9553d5e] Add missing import_opt for flat_injected
dch -a [9e6ba90] Add processutils from oslo.
dch -a [20fb97d] Updates to OSAPI sizelimit middleware.
dch -a [014499a] Make guestfs use same libvirt URI as Nova.
dch -a [4f0c2dd] Make LibvirtDriver.uri() a staticmethod.
dch -a [2ca65ae] Don't set filter name if we use Noop driver
dch -a [edf15fd] Removes unnecessary qemu-img dependency on powervm driver
dch -a [faf8dce] Adding ability to specify the libvirt cache mode for disk devices
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1.a4667.g6053ca1+git201302251132~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A nova_2013.1.a4667.g6053ca1+git201302251132~raring-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1.a4667.g6053ca1+git201302251132~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'nova_2013.1.a4667.g6053ca1+git201302251132~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: raring_grizzly_nova_trunk #770

2013-02-25 Thread openstack-testing-bot
Title: raring_grizzly_nova_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/770/
Project: raring_grizzly_nova_trunk
Date of build: Mon, 25 Feb 2013 13:01:14 -0500
Build duration: 14 min
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
Imported Translations from Transifex (by Jenkins)
  edit nova/locale/nova.pot

Console Output
[...truncated 21634 lines...]
deleting and forgetting pool/main/n/nova/nova-network_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-novncproxy_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-objectstore_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-scheduler_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-spiceproxy_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-volume_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-xcp-network_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-xcp-plugins_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-xvpvncproxy_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/python-nova_2013.1.a4665.g42d058b+git201302241432~raring-0ubuntu1_all.deb
INFO:root:Pushing changes back to bzr testing branch
DEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/nova/raring-grizzly']
Pushed up to revision 560.
INFO:root:Storing current commit for next build: 4fbb245d35ea1da68027b5cbbb75204b3484bfe9
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/nova/grizzly /tmp/tmp_9jsbe/nova
mk-build-deps -i -r -t apt-get -y /tmp/tmp_9jsbe/nova/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 5997c4e21773bf44a6033bb43f1628696324213f..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/nova/raring-grizzly --force
dch -b -D raring --newversion 2013.1.a4669.g2f90562+git201302251302~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [4fbb245] Imported Translations from Transifex
dch -a [38997fc] Clean unused kernels and ramdisks from image cache
dch -a [f4f6464] Ensure macs can be serialized.
dch -a [b01923c] Prevent default security group deletion.
dch -a [2763749] libvirt: lxml behavior breaks version check.
dch -a [9553d5e] Add missing import_opt for flat_injected
dch -a [9e6ba90] Add processutils from oslo.
dch -a [20fb97d] Updates to OSAPI sizelimit middleware.
dch -a [014499a] Make guestfs use same libvirt URI as Nova.
dch -a [4f0c2dd] Make LibvirtDriver.uri() a staticmethod.
dch -a [2ca65ae] Don't set filter name if we use Noop driver
dch -a [edf15fd] Removes unnecessary qemu-img dependency on powervm driver
dch -a [faf8dce] Adding ability to specify the libvirt cache mode for disk devices
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.1.a4669.g2f90562+git201302251302~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A nova_2013.1.a4669.g2f90562+git201302251302~raring-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing nova_2013.1.a4669.g2f90562+git201302251302~raring-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include raring-grizzly nova_2013.1.a4669.g2f90562+git201302251302~raring-0ubuntu1_amd64.changes
bzr push lp:~openstack-ubuntu-testing/nova/raring-grizzly
+ [ ! 0 ]
+ jenkins-cli build raring_grizzly_deploy
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_deploy #63

2013-02-25 Thread openstack-testing-bot
Title: raring_grizzly_deploy
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/63/
Project: raring_grizzly_deploy
Date of build: Mon, 25 Feb 2013 14:02:32 -0500
Build duration: 46 min
Build cause: Started by command line by jenkins
Built on: master

Health Report
Build stability: All recent builds failed. Score: 0

Changes
No Changes

Console Output
[...truncated 13395 lines...]
INFO:root:Setting up connection to test-05.os.magners.qa.lexington
ERROR:root:Could not setup SSH connection to test-05.os.magners.qa.lexington
INFO:root:Archiving logs on test-07.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-07.os.magners.qa.lexington
INFO:root:Archiving logs on test-08.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-08.os.magners.qa.lexington
INFO:root:Archiving logs on test-09.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-09.os.magners.qa.lexington
INFO:root:Archiving logs on test-04.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-04.os.magners.qa.lexington
INFO:root:Archiving logs on test-05.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-05.os.magners.qa.lexington
INFO:root:Archiving logs on test-11.os.magners.qa.lexington
INFO:paramiko.transport:Secsh channel 2 opened.
INFO:root:Archiving logs on test-03.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-03.os.magners.qa.lexington
INFO:root:Archiving logs on test-06.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-06.os.magners.qa.lexington
INFO:root:Archiving logs on test-10.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-10.os.magners.qa.lexington
INFO:root:Archiving logs on test-02.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-02.os.magners.qa.lexington
INFO:root:Grabbing information from test-07.os.magners.qa.lexington
ERROR:root:Unable to get information from test-07.os.magners.qa.lexington
INFO:root:Grabbing information from test-08.os.magners.qa.lexington
ERROR:root:Unable to get information from test-08.os.magners.qa.lexington
INFO:root:Grabbing information from test-09.os.magners.qa.lexington
ERROR:root:Unable to get information from test-09.os.magners.qa.lexington
INFO:root:Grabbing information from test-04.os.magners.qa.lexington
ERROR:root:Unable to get information from test-04.os.magners.qa.lexington
INFO:root:Grabbing information from test-05.os.magners.qa.lexington
ERROR:root:Unable to get information from test-05.os.magners.qa.lexington
INFO:root:Grabbing information from test-11.os.magners.qa.lexington
INFO:root:Grabbing information from test-03.os.magners.qa.lexington
ERROR:root:Unable to get information from test-03.os.magners.qa.lexington
INFO:root:Grabbing information from test-06.os.magners.qa.lexington
ERROR:root:Unable to get information from test-06.os.magners.qa.lexington
INFO:root:Grabbing information from test-10.os.magners.qa.lexington
ERROR:root:Unable to get information from test-10.os.magners.qa.lexington
INFO:root:Grabbing information from test-02.os.magners.qa.lexington
ERROR:root:Unable to get information from test-02.os.magners.qa.lexington
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/jenkins-scripts/collate-test-logs.py", line 88, in 
    connections[host]["sftp"].close()
KeyError: 'sftp'
+ exit 1
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
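[Editor's note] The KeyError comes from the log-collection cleanup: SSH setup to one host failed earlier ("Could not setup SSH connection to test-05..."), so no SFTP handle was ever stored for it, and the unconditional close() in collate-test-logs.py then fails. A minimal reconstruction of that failure mode and a defensive guard is sketched below; the shape of the connections dict is assumed, not taken from the real script.

```python
# Minimal sketch of the KeyError: 'sftp' seen above, with an assumed layout
# for the per-host connection bookkeeping. Hosts whose SSH setup failed have
# no "sftp" entry, so closing handles unconditionally raises KeyError.
class _DummySFTP(object):
    def close(self):
        print("sftp channel closed")

connections = {
    "test-05.os.magners.qa.lexington": {},                      # SSH setup failed: no "sftp" key
    "test-11.os.magners.qa.lexington": {"sftp": _DummySFTP()},  # healthy host
}

for host, conn in connections.items():
    # Defensive variant: only close handles that were actually opened.
    sftp = conn.get("sftp")
    if sftp is not None:
        sftp.close()
```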
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_cinder_trunk #189

2013-02-25 Thread openstack-testing-bot
Title: raring_grizzly_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/189/
Project: raring_grizzly_cinder_trunk
Date of build: Mon, 25 Feb 2013 15:29:39 -0500
Build duration: 4 min 54 sec
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
No Changes

Console Output
[...truncated 7459 lines...]
ERROR:root:Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'cinder_2013.1.a12.gb3aa798+git201302251529~raring-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmpSdYiA8/cinder
mk-build-deps -i -r -t apt-get -y /tmp/tmpSdYiA8/cinder/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 716b67f4790774858ccc9d4b696ac12f7e793b0b..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/cinder/raring-grizzly --force
dch -b -D raring --newversion 2013.1.a12.gb3aa798+git201302251529~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [614a23a] update install_venv_common to handle bootstrapping
dch -a [cde01d5] allow run_tests.sh to report why it failed
dch -a [778141a] Remove compat cfg wrapper
dch -a [edbfe4c] XenAPINFS: Fix Volume always uploaded as vhd/ovf
dch -a [b138481] Fixed cinder-backup start errors seen with devstack
dch -a [762f2e1] Cinder devref doc cleanups
dch -a [c3c31fc] Fix various exception paths
dch -a [6670314] Implement metadata options for snapshots
dch -a [8169eb6] Skip timestamp check if 'capabilities' is none
dch -a [f7bcf95] Fix stale volume list for NetApp 7-mode ISCSI driver
dch -a [f50c8cb] Implement a basic backup-volume-to-swift service
dch -a [217e194] Moved cinder_emc_config.xml.sample to emc folder
dch -a [d2742e1] Uses tempdir module to create/delete xml file
dch -a [74d1add] Add HUAWEI volume driver in Cinder
dch -a [f2ce698] XenAPINFS: Create volume from image (generic)
dch -a [ea2c405] Bump the oslo-config version to address issues.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC cinder_2013.1.a12.gb3aa798+git201302251529~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A cinder_2013.1.a12.gb3aa798+git201302251529~raring-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'cinder_2013.1.a12.gb3aa798+git201302251529~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'raring-grizzly', '-n', '-A', 'cinder_2013.1.a12.gb3aa798+git201302251529~raring-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_ceilometer_trunk #105

2013-02-25 Thread openstack-testing-bot
Title: precise_grizzly_ceilometer_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_ceilometer_trunk/105/
Project: precise_grizzly_ceilometer_trunk
Date of build: Mon, 25 Feb 2013 17:01:09 -0500
Build duration: 4 min 11 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed. Score: 40

Changes
Fix count type in MongoDB (by julien)
  edit ceilometer/storage/impl_mongodb.py

Console Output
[...truncated 2463 lines...]
Machine Architecture: amd64
Package: ceilometer
Package-Time: 0
Source-Version: 2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1
Space: 0
Status: failed
Version: 2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1
Finished at 20130225-1703
Build needed 00:00:00, 0k disc space
E: Package build dependencies not satisfied; skipping
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1.dsc']' returned non-zero exit status 3
ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1.dsc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/ceilometer/grizzly /tmp/tmpMQC6Ze/ceilometer
mk-build-deps -i -r -t apt-get -y /tmp/tmpMQC6Ze/ceilometer/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 9335d81316d2f136ac6cd9aa0be5a45887abbf2c..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly --force
dch -b -D precise --newversion 2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [f8337b5] Fix count type in MongoDB
dch -a [0cdd947] Make sure that the period is returned as an int as the api expects an int.
dch -a [70003c9] Imported Translations from Transifex
dch -a [df5ac5b] Remove compat cfg wrapper
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC ceilometer_2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A ceilometer_2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
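[Editor's note] Unlike the sbuild exit-status-2 failures elsewhere in this thread, this build never compiled anything: sbuild reports "Package build dependencies not satisfied; skipping", zero build time, and exits with status 3. A small hedged helper for pulling the Status field out of such a summary block is sketched below; the summary layout is inferred from this log only, not from sbuild documentation.

```python
# Hypothetical helper: extract the Status field from an sbuild summary block
# like the one above, so dependency problems (Status: failed with no build
# time) can be reported separately from real compile/test failures.
def sbuild_status(summary_text):
    for line in summary_text.splitlines():
        if line.startswith("Status:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

summary = """Package: ceilometer
Package-Time: 0
Status: failed
Version: 2013.1.a7.gf8337b5+git201302251701~precise-0ubuntu1"""

print(sbuild_status(summary))  # -> failed
```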
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_cinder_trunk #185

2013-02-25 Thread openstack-testing-bot
Title: precise_grizzly_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/185/
Project: precise_grizzly_cinder_trunk
Date of build: Mon, 25 Feb 2013 17:49:36 -0500
Build duration: 4 min 34 sec
Build cause: Started by user James Page
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
No Changes

Console Output
[...truncated 6538 lines...]
ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a12.gb3aa798+git201302251749~precise-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmp4_FQOu/cinder
mk-build-deps -i -r -t apt-get -y /tmp/tmp4_FQOu/cinder/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 716b67f4790774858ccc9d4b696ac12f7e793b0b..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/cinder/precise-grizzly --force
dch -b -D precise --newversion 2013.1.a12.gb3aa798+git201302251749~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [614a23a] update install_venv_common to handle bootstrapping
dch -a [cde01d5] allow run_tests.sh to report why it failed
dch -a [778141a] Remove compat cfg wrapper
dch -a [edbfe4c] XenAPINFS: Fix Volume always uploaded as vhd/ovf
dch -a [b138481] Fixed cinder-backup start errors seen with devstack
dch -a [762f2e1] Cinder devref doc cleanups
dch -a [c3c31fc] Fix various exception paths
dch -a [6670314] Implement metadata options for snapshots
dch -a [8169eb6] Skip timestamp check if 'capabilities' is none
dch -a [f7bcf95] Fix stale volume list for NetApp 7-mode ISCSI driver
dch -a [f50c8cb] Implement a basic backup-volume-to-swift service
dch -a [217e194] Moved cinder_emc_config.xml.sample to emc folder
dch -a [d2742e1] Uses tempdir module to create/delete xml file
dch -a [74d1add] Add HUAWEI volume driver in Cinder
dch -a [f2ce698] XenAPINFS: Create volume from image (generic)
dch -a [ea2c405] Bump the oslo-config version to address issues.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC cinder_2013.1.a12.gb3aa798+git201302251749~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A cinder_2013.1.a12.gb3aa798+git201302251749~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a12.gb3aa798+git201302251749~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'cinder_2013.1.a12.gb3aa798+git201302251749~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_grizzly_ceilometer_trunk #106

2013-02-25 Thread openstack-testing-bot
Title: precise_grizzly_ceilometer_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_ceilometer_trunk/106/
Project: precise_grizzly_ceilometer_trunk
Date of build: Mon, 25 Feb 2013 17:58:40 -0500
Build duration: 4 min 5 sec
Build cause: Started by user James Page
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed. Score: 40

Changes
Allow empty dict as metaquery param for sqlalchemy. (by lianhao.lu)
  edit tests/storage/base.py
  edit ceilometer/storage/impl_sqlalchemy.py

Console Output
[...truncated 5779 lines...]
  Uploading ceilometer_2013.1.a9.gf96476d+git201302251758~precise.orig.tar.gz: done.
  Uploading ceilometer_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1.debian.tar.gz: done.
  Uploading ceilometer_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_source.changes: done.
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'ceilometer_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_amd64.changes']
Skipping inclusion of 'ceilometer-agent-central' '2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1' in 'precise-grizzly|main|amd64', as it has already '2013.1.a336.g9335d81+git201302212101~precise-0ubuntu1'.
Skipping inclusion of 'ceilometer-agent-compute' '2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1' in 'precise-grizzly|main|amd64', as it has already '2013.1.a336.g9335d81+git201302212101~precise-0ubuntu1'.
Skipping inclusion of 'ceilometer-api' '2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1' in 'precise-grizzly|main|amd64', as it has already '2013.1.a336.g9335d81+git201302212101~precise-0ubuntu1'.
Skipping inclusion of 'ceilometer-collector' '2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1' in 'precise-grizzly|main|amd64', as it has already '2013.1.a336.g9335d81+git201302212101~precise-0ubuntu1'.
Skipping inclusion of 'ceilometer-common' '2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1' in 'precise-grizzly|main|amd64', as it has already '2013.1.a336.g9335d81+git201302212101~precise-0ubuntu1'.
Skipping inclusion of 'python-ceilometer' '2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1' in 'precise-grizzly|main|amd64', as it has already '2013.1.a336.g9335d81+git201302212101~precise-0ubuntu1'.
Deleting files just added to the pool but not used.
(to avoid use --keepunusednewfiles next time)
deleting and forgetting pool/main/c/ceilometer/ceilometer-agent-central_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/c/ceilometer/ceilometer-agent-compute_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/c/ceilometer/ceilometer-api_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/c/ceilometer/ceilometer-collector_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/c/ceilometer/ceilometer-common_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/c/ceilometer/python-ceilometer_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_all.deb
INFO:root:Pushing changes back to bzr testing branch
DEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly']
Pushed up to revision 17.
INFO:root:Storing current commit for next build: f8337b52fc0b6f5800223143b85c443350bca05f
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/ceilometer/grizzly /tmp/tmp3LX1F1/ceilometer
mk-build-deps -i -r -t apt-get -y /tmp/tmp3LX1F1/ceilometer/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 9335d81316d2f136ac6cd9aa0be5a45887abbf2c..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly --force
dch -b -D precise --newversion 2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [f8337b5] Fix count type in MongoDB
dch -a [0cdd947] Make sure that the period is returned as an int as the api expects an int.
dch -a [70003c9] Imported Translations from Transifex
dch -a [df5ac5b] Remove compat cfg wrapper
dch -a [84f5e63] Allow empty dict as metaquery param for sqlalchemy.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC ceilometer_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A ceilometer_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing ceilometer_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly ceilometer_2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1_amd64.changes
bzr push lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly
Email was triggered for: Fixed
Trigger Success
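[Editor's note] The "Skipping inclusion" and "deleting and forgetting" messages above happen because reprepro will not replace a package in the archive with one carrying a lower Debian version, and the freshly built 2013.1.a9.* versions sort below the 2013.1.a336.* versions already published (in Debian version ordering the numeric run after "a" is compared numerically, and 9 < 336). The snippet below just demonstrates that ordering with dpkg; it assumes a system where dpkg is installed, and the version strings are copied from the log.

```python
# Check the Debian version ordering behind the "Skipping inclusion" messages.
# dpkg --compare-versions exits 0 when the stated relation holds.
import subprocess

new = "2013.1.a9.gf96476d+git201302251758~precise-0ubuntu1"
old = "2013.1.a336.g9335d81+git201302212101~precise-0ubuntu1"

rc = subprocess.call(["dpkg", "--compare-versions", new, "lt", old])
print("new build sorts before the published package:", rc == 0)  # expected: True
```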

[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_deploy #64

2013-02-25 Thread openstack-testing-bot
Title: raring_grizzly_deploy
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/64/
Project: raring_grizzly_deploy
Date of build: Mon, 25 Feb 2013 17:23:05 -0500
Build duration: 46 min
Build cause: Started by command line by jenkins
Built on: master

Health Report
Build stability: All recent builds failed. Score: 0

Changes
No Changes

Console Output
[...truncated 13083 lines...]
INFO:root:Setting up connection to test-04.os.magners.qa.lexington
ERROR:root:Could not setup SSH connection to test-04.os.magners.qa.lexington
INFO:root:Archiving logs on test-07.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-07.os.magners.qa.lexington
INFO:root:Archiving logs on test-08.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-08.os.magners.qa.lexington
INFO:root:Archiving logs on test-09.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-09.os.magners.qa.lexington
INFO:root:Archiving logs on test-04.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-04.os.magners.qa.lexington
INFO:root:Archiving logs on test-05.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-05.os.magners.qa.lexington
INFO:root:Archiving logs on test-11.os.magners.qa.lexington
INFO:paramiko.transport:Secsh channel 2 opened.
INFO:root:Archiving logs on test-03.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-03.os.magners.qa.lexington
INFO:root:Archiving logs on test-06.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-06.os.magners.qa.lexington
INFO:root:Archiving logs on test-10.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-10.os.magners.qa.lexington
INFO:root:Archiving logs on test-02.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-02.os.magners.qa.lexington
INFO:root:Grabbing information from test-07.os.magners.qa.lexington
ERROR:root:Unable to get information from test-07.os.magners.qa.lexington
INFO:root:Grabbing information from test-08.os.magners.qa.lexington
ERROR:root:Unable to get information from test-08.os.magners.qa.lexington
INFO:root:Grabbing information from test-09.os.magners.qa.lexington
ERROR:root:Unable to get information from test-09.os.magners.qa.lexington
INFO:root:Grabbing information from test-04.os.magners.qa.lexington
ERROR:root:Unable to get information from test-04.os.magners.qa.lexington
INFO:root:Grabbing information from test-05.os.magners.qa.lexington
ERROR:root:Unable to get information from test-05.os.magners.qa.lexington
INFO:root:Grabbing information from test-11.os.magners.qa.lexington
INFO:root:Grabbing information from test-03.os.magners.qa.lexington
ERROR:root:Unable to get information from test-03.os.magners.qa.lexington
INFO:root:Grabbing information from test-06.os.magners.qa.lexington
ERROR:root:Unable to get information from test-06.os.magners.qa.lexington
INFO:root:Grabbing information from test-10.os.magners.qa.lexington
ERROR:root:Unable to get information from test-10.os.magners.qa.lexington
INFO:root:Grabbing information from test-02.os.magners.qa.lexington
ERROR:root:Unable to get information from test-02.os.magners.qa.lexington
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/jenkins-scripts/collate-test-logs.py", line 88, in 
    connections[host]["sftp"].close()
KeyError: 'sftp'
+ exit 1
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_quantum_trunk #356

2013-02-25 Thread openstack-testing-bot
Title: precise_grizzly_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/356/
Project: precise_grizzly_quantum_trunk
Date of build: Mon, 25 Feb 2013 21:01:16 -0500
Build duration: 1 min 22 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
Add default state_path to quantum.conf (by gkotton)
  edit etc/l3_agent.ini
  edit etc/dhcp_agent.ini
  edit etc/quantum.conf
  edit etc/metadata_agent.ini

Console Output
[...truncated 3435 lines...]
Applying patch fix-quantum-configuration.patch
patching file etc/dhcp_agent.ini
Hunk #1 FAILED at 4.
1 out of 1 hunk FAILED -- rejects in file etc/dhcp_agent.ini
patching file etc/quantum/plugins/bigswitch/restproxy.ini
patching file etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
patching file etc/quantum/plugins/nec/nec.ini
patching file etc/quantum/plugins/nicira/nvp.ini
patching file etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
patching file etc/quantum/plugins/ryu/ryu.ini
patching file etc/quantum.conf
Hunk #1 succeeded at 43 (offset 4 lines).
Hunk #2 succeeded at 227 (offset 14 lines).
patching file etc/quantum/plugins/plumgrid/plumgrid.ini
patching file etc/quantum/plugins/brocade/brocade.ini
Patch fix-quantum-configuration.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-c2dd1d35-6370-4c42-9391-ef9906b2aa1c', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-c2dd1d35-6370-4c42-9391-ef9906b2aa1c', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmp9XBBfD/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmp9XBBfD/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
bzr merge lp:~openstack-ubuntu-testing/quantum/precise-grizzly --force
dch -b -D precise --newversion 2013.1.a31.g0f4050d+git201302252101~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a No change rebuild.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-c2dd1d35-6370-4c42-9391-ef9906b2aa1c', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-c2dd1d35-6370-4c42-9391-ef9906b2aa1c', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
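[Editor's note] This failure (and the matching raring and later quantum failures below) happens before any compilation: the packaging branch carries fix-quantum-configuration.patch, whose only hunk for etc/dhcp_agent.ini no longer applies after the upstream commit "Add default state_path to quantum.conf" changed that file, so bzr builddeb aborts with exit status 3. One way out is to refresh the patch against the new upstream tree with quilt; the sketch below only illustrates that idea, assumes a debian/patches series in the unpacked source, and keeps Python as the driver for consistency with the other examples. Rejected hunks still need to be resolved by hand before refreshing.

```python
# Hedged sketch: refresh the failing packaging patch with quilt in an
# unpacked quantum source tree. Paths and the patch name come from the log;
# the series layout and this workflow are assumptions, not the bot's code.
import subprocess

def refresh_patch(source_dir, patch_name):
    # Push patches up to and including the broken one; -f force-applies it,
    # leaving .rej files that must be folded in manually before refreshing.
    subprocess.check_call(["quilt", "push", "-f", patch_name], cwd=source_dir)
    # Rewrite the patch so its hunks match the current etc/dhcp_agent.ini.
    subprocess.check_call(["quilt", "refresh"], cwd=source_dir)
    # Unapply everything again so the tree is left clean for the build.
    subprocess.check_call(["quilt", "pop", "-a"], cwd=source_dir)

if __name__ == "__main__":
    refresh_patch("/tmp/tmp9XBBfD/quantum", "fix-quantum-configuration.patch")
```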
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_quantum_trunk #368

2013-02-25 Thread openstack-testing-bot
Title: raring_grizzly_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/368/
Project: raring_grizzly_quantum_trunk
Date of build: Mon, 25 Feb 2013 21:01:13 -0500
Build duration: 2 min 28 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
Add default state_path to quantum.conf (by gkotton)
  edit etc/quantum.conf
  edit etc/l3_agent.ini
  edit etc/metadata_agent.ini
  edit etc/dhcp_agent.ini

Console Output
[...truncated 4049 lines...]
Applying patch fix-quantum-configuration.patch
patching file etc/dhcp_agent.ini
Hunk #1 FAILED at 4.
1 out of 1 hunk FAILED -- rejects in file etc/dhcp_agent.ini
patching file etc/quantum/plugins/bigswitch/restproxy.ini
patching file etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
patching file etc/quantum/plugins/nec/nec.ini
patching file etc/quantum/plugins/nicira/nvp.ini
patching file etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
patching file etc/quantum/plugins/ryu/ryu.ini
patching file etc/quantum.conf
Hunk #1 succeeded at 43 (offset 4 lines).
Hunk #2 succeeded at 227 (offset 14 lines).
patching file etc/quantum/plugins/plumgrid/plumgrid.ini
patching file etc/quantum/plugins/brocade/brocade.ini
Patch fix-quantum-configuration.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-fe5a26bd-4d16-40f7-be44-4e913f0c6e46', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-fe5a26bd-4d16-40f7-be44-4e913f0c6e46', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmp1MfcuB/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmp1MfcuB/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
bzr merge lp:~openstack-ubuntu-testing/quantum/raring-grizzly --force
dch -b -D raring --newversion 2013.1.a31.g0f4050d+git201302252101~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a No change rebuild.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-fe5a26bd-4d16-40f7-be44-4e913f0c6e46', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-fe5a26bd-4d16-40f7-be44-4e913f0c6e46', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_deploy #65

2013-02-25 Thread openstack-testing-bot
Title: raring_grizzly_deploy
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/65/
Project: raring_grizzly_deploy
Date of build: Mon, 25 Feb 2013 20:45:26 -0500
Build duration: 46 min
Build cause: Started by command line by jenkins
Built on: master

Health Report
Build stability: All recent builds failed. Score: 0

Changes
No Changes

Console Output
[...truncated 13019 lines...]
INFO:root:Setting up connection to test-09.os.magners.qa.lexington
ERROR:root:Could not setup SSH connection to test-09.os.magners.qa.lexington
INFO:root:Archiving logs on test-07.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-07.os.magners.qa.lexington
INFO:root:Archiving logs on test-08.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-08.os.magners.qa.lexington
INFO:root:Archiving logs on test-09.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-09.os.magners.qa.lexington
INFO:root:Archiving logs on test-04.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-04.os.magners.qa.lexington
INFO:root:Archiving logs on test-05.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-05.os.magners.qa.lexington
INFO:root:Archiving logs on test-11.os.magners.qa.lexington
INFO:paramiko.transport:Secsh channel 2 opened.
INFO:root:Archiving logs on test-03.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-03.os.magners.qa.lexington
INFO:root:Archiving logs on test-06.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-06.os.magners.qa.lexington
INFO:root:Archiving logs on test-10.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-10.os.magners.qa.lexington
INFO:root:Archiving logs on test-02.os.magners.qa.lexington
ERROR:root:Coult not create tarball of logs on test-02.os.magners.qa.lexington
INFO:root:Grabbing information from test-07.os.magners.qa.lexington
ERROR:root:Unable to get information from test-07.os.magners.qa.lexington
INFO:root:Grabbing information from test-08.os.magners.qa.lexington
ERROR:root:Unable to get information from test-08.os.magners.qa.lexington
INFO:root:Grabbing information from test-09.os.magners.qa.lexington
ERROR:root:Unable to get information from test-09.os.magners.qa.lexington
INFO:root:Grabbing information from test-04.os.magners.qa.lexington
ERROR:root:Unable to get information from test-04.os.magners.qa.lexington
INFO:root:Grabbing information from test-05.os.magners.qa.lexington
ERROR:root:Unable to get information from test-05.os.magners.qa.lexington
INFO:root:Grabbing information from test-11.os.magners.qa.lexington
INFO:root:Grabbing information from test-03.os.magners.qa.lexington
ERROR:root:Unable to get information from test-03.os.magners.qa.lexington
INFO:root:Grabbing information from test-06.os.magners.qa.lexington
ERROR:root:Unable to get information from test-06.os.magners.qa.lexington
INFO:root:Grabbing information from test-10.os.magners.qa.lexington
ERROR:root:Unable to get information from test-10.os.magners.qa.lexington
INFO:root:Grabbing information from test-02.os.magners.qa.lexington
ERROR:root:Unable to get information from test-02.os.magners.qa.lexington
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/jenkins-scripts/collate-test-logs.py", line 88, in 
    connections[host]["sftp"].close()
KeyError: 'sftp'
+ exit 1
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_quantum_trunk #369

2013-02-25 Thread openstack-testing-bot
Title: raring_grizzly_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/369/
Project: raring_grizzly_quantum_trunk
Date of build: Tue, 26 Feb 2013 00:35:49 -0500
Build duration: 2 min 53 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 2 out of the last 5 builds failed. Score: 60

Changes
Fixes import reorder nits (by zhongyue.nah)
  edit quantum/plugins/openvswitch/agent/ovs_quantum_agent.py
  edit quantum/agent/ovs_cleanup_util.py
  edit quantum/tests/unit/test_routerserviceinsertion.py
  edit quantum/agent/netns_cleanup_util.py
  edit quantum/plugins/nicira/nicira_nvp_plugin/nicira_db.py
  edit quantum/tests/unit/nicira/test_nicira_plugin.py
  edit quantum/plugins/plumgrid/plumgrid_nos_plugin/plumgrid_plugin.py
  edit quantum/plugins/nec/drivers/pfc.py
  edit quantum/tests/unit/extensions/extensionattribute.py
  edit quantum/plugins/nicira/nicira_nvp_plugin/nicira_networkgw_db.py
  edit quantum/db/routerservicetype_db.py
  edit quantum/tests/unit/nicira/test_nvplib.py
  edit quantum/tests/unit/nec/test_db.py
  edit quantum/tests/unit/test_agent_ext_plugin.py
  edit quantum/tests/unit/nicira/test_networkgw.py
  edit quantum/plugins/ryu/agent/ryu_quantum_agent.py
  edit quantum/tests/unit/test_l3_agent.py
  edit quantum/tests/unit/test_api_v2.py
  edit quantum/plugins/nec/drivers/trema.py

Console Output
[...truncated 4054 lines...]
Hunk #1 FAILED at 4.
1 out of 1 hunk FAILED -- rejects in file etc/dhcp_agent.ini
patching file etc/quantum/plugins/bigswitch/restproxy.ini
patching file etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
patching file etc/quantum/plugins/nec/nec.ini
patching file etc/quantum/plugins/nicira/nvp.ini
patching file etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
patching file etc/quantum/plugins/ryu/ryu.ini
patching file etc/quantum.conf
Hunk #1 succeeded at 43 (offset 4 lines).
Hunk #2 succeeded at 227 (offset 14 lines).
patching file etc/quantum/plugins/plumgrid/plumgrid.ini
patching file etc/quantum/plugins/brocade/brocade.ini
Patch fix-quantum-configuration.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-80091b3c-11a9-4ef6-9513-cda5191aa496', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-80091b3c-11a9-4ef6-9513-cda5191aa496', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpSQKJXy/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpSQKJXy/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 919976ab48c563c7e868acf9c6499386b79e88df..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/raring-grizzly --force
dch -b -D raring --newversion 2013.1.a33.gb32d83b+git201302260035~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [e3bc93f] Fixes import reorder nits
dch -a [5483199] Add default state_path to quantum.conf
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-80091b3c-11a9-4ef6-9513-cda5191aa496', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-80091b3c-11a9-4ef6-9513-cda5191aa496', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_quantum_trunk #357

2013-02-25 Thread openstack-testing-bot
Title: precise_grizzly_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/357/
Project: precise_grizzly_quantum_trunk
Date of build: Tue, 26 Feb 2013 00:38:25 -0500
Build duration: 1 min 20 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 2 out of the last 5 builds failed. Score: 60

Changes
Fixes import reorder nits (by zhongyue.nah)
  edit quantum/tests/unit/test_routerserviceinsertion.py
  edit quantum/tests/unit/extensions/extensionattribute.py
  edit quantum/agent/ovs_cleanup_util.py
  edit quantum/tests/unit/test_l3_agent.py
  edit quantum/db/routerservicetype_db.py
  edit quantum/tests/unit/nicira/test_networkgw.py
  edit quantum/tests/unit/nicira/test_nicira_plugin.py
  edit quantum/tests/unit/test_api_v2.py
  edit quantum/plugins/plumgrid/plumgrid_nos_plugin/plumgrid_plugin.py
  edit quantum/plugins/nicira/nicira_nvp_plugin/nicira_networkgw_db.py
  edit quantum/plugins/ryu/agent/ryu_quantum_agent.py
  edit quantum/tests/unit/test_agent_ext_plugin.py
  edit quantum/plugins/openvswitch/agent/ovs_quantum_agent.py
  edit quantum/agent/netns_cleanup_util.py
  edit quantum/tests/unit/nicira/test_nvplib.py
  edit quantum/plugins/nicira/nicira_nvp_plugin/nicira_db.py
  edit quantum/tests/unit/nec/test_db.py
  edit quantum/plugins/nec/drivers/trema.py
  edit quantum/plugins/nec/drivers/pfc.py

Console Output
[...truncated 3440 lines...]
Hunk #1 FAILED at 4.
1 out of 1 hunk FAILED -- rejects in file etc/dhcp_agent.ini
patching file etc/quantum/plugins/bigswitch/restproxy.ini
patching file etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
patching file etc/quantum/plugins/nec/nec.ini
patching file etc/quantum/plugins/nicira/nvp.ini
patching file etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
patching file etc/quantum/plugins/ryu/ryu.ini
patching file etc/quantum.conf
Hunk #1 succeeded at 43 (offset 4 lines).
Hunk #2 succeeded at 227 (offset 14 lines).
patching file etc/quantum/plugins/plumgrid/plumgrid.ini
patching file etc/quantum/plugins/brocade/brocade.ini
Patch fix-quantum-configuration.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-7eafce07-be6a-4dd6-a5a6-8f42e19aa0b9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-7eafce07-be6a-4dd6-a5a6-8f42e19aa0b9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpVeHBmB/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpVeHBmB/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 919976ab48c563c7e868acf9c6499386b79e88df..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/precise-grizzly --force
dch -b -D precise --newversion 2013.1.a33.gb32d83b+git201302260038~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [e3bc93f] Fixes import reorder nits
dch -a [5483199] Add default state_path to quantum.conf
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-7eafce07-be6a-4dd6-a5a6-8f42e19aa0b9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-7eafce07-be6a-4dd6-a5a6-8f42e19aa0b9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp