[openstack-dev] [Keystone] Memcache user token index and PKI tokens

2013-07-15 Thread Kieran Spear
Hi all,

I want to backport the fix for the "Token List in Memcache can consume
an entire memcache page" bug[1] to Grizzly, but I had a couple of
questions:

1. Why do we need to store the entire token data in the
usertoken-userid key? This data always seems to be hashed before
indexing into the 'token-tokenid' keys anyway. The size of the
memcache data for a user's token list currently grows by 4k every time
a new PKI token is created. It doesn't take long to hit 1MB at this
rate even with the above fix.

2. Every time it creates a new token, Keystone loads each token from
the user's token list with a separate memcache call so it can throw it
away if it's expired. This seems excessive. Is it anything to worry
about? If it just checked the first two tokens you'd get the same
effect on a longer time scale.
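To illustrate the point in question 1: since lookups go through hashed 'token-<id>' keys anyway, the per-user index only needs those short hashed keys, not the full ~4k PKI token payloads. A rough sketch of that idea (the helper names are mine, not Keystone's):

```python
import hashlib

def token_key(token_id):
    # PKI tokens are long, so hash them before use as a cache key.
    return 'token-' + hashlib.md5(token_id.encode()).hexdigest()

def index_token(user_index, token_id):
    # Append only the short hashed key to the user's token list, so the
    # index grows by ~38 bytes per token regardless of token size.
    key = token_key(token_id)
    if key not in user_index:
        user_index.append(key)
    return user_index

index = []
index_token(index, 'x' * 4096)  # a large PKI token
```

With entries this small, a user would need tens of thousands of live tokens before the index approached memcached's 1MB item limit.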

I guess part of the answer is to decrease our token expiry time, which
should mitigate both issues. Failing that we'd consider moving to the
SQL backend.

Cheers,
Kieran

[1] https://bugs.launchpad.net/keystone/+bug/1171985

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-15 Thread Stephen Gran

On 15/07/13 09:26, Thomas Goirand wrote:

Dolph,

If you do that, then you will be breaking Debian packages, as they
expect Sqlite as the default, for example when using
DEBIAN_FRONTEND=noninteractive apt-get install keystone (if you choose
MySQL, then you need to enter admin credentials to setup the db). I will
receive tons of piuparts failure reports if we can't upgrade with SQLite.

I would really be disappointed if this happens, and get into situations
where I have RC bugs which I can't realistically close by myself.

So really, if it is possible, continue to support it, at least from one
release to the next.


Why not just change the default for Debian?  SQLite isn't particularly
useful for actual deployments anyway.


Cheers,
--
Stephen Gran
Senior Systems Integrator - guardian.co.uk




Re: [openstack-dev] Program Description for OpenStack QA

2013-07-15 Thread Thierry Carrez
Sean Dague wrote:
 Official Program Name: OpenStack Quality Assurance
 PTL: Sean Dague
 Mission Statement: Develop, maintain, and initiate tools and plans to
 ensure the upstream stability and quality of OpenStack, and its release
 readiness at any point during the release cycle.
 
 The OpenStack QA program starts with 2 git trees
  * tempest - https://github.com/openstack/tempest
  * grenade - https://github.com/openstack-dev/grenade

Sounds good, please fill out wiki landing page at:
https://wiki.openstack.org/wiki/QA

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Bug #1194026

2013-07-15 Thread Thierry Carrez
Nachi Ueno wrote:
 Since this is a critical bug which stops gating, I hope it is merged soon.
 I'll fix this code ASAP if I get review comments.

Great work Nachi. I certainly hope it gets the second Neutron +2 very
soon so we can make the Neutron tests voting again ASAP.

Thanks,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-15 Thread Thomas Goirand
On 07/15/2013 04:32 PM, Stephen Gran wrote:
 On 15/07/13 09:26, Thomas Goirand wrote:
 Dolph,

 If you do that, then you will be breaking Debian packages, as they
 expect Sqlite as the default, for example when using
 DEBIAN_FRONTEND=noninteractive apt-get install keystone (if you choose
 MySQL, then you need to enter admin credentials to setup the db). I will
 receive tons of piuparts failure reports if we can't upgrade with SQLite.

 I would really be disappointed if this happens, and get into situations
 where I have RC bugs which I can't realistically close by myself.

 So really, if it is possible, continue to support it, at least from one
 release to the next.
 
 Why not just change the default for Debian?  Sqlite isn't particularly
 useful for actual deployments anyway.

Because that is the only backend that works without credentials being
entered at the keyboard, so it is the only one that works in a
non-interactive apt-get session (which is used for all automated
tests in Debian, including piuparts).

Thomas




Re: [openstack-dev] [Savanna] Merge Fedora and Ubuntu DIB elements

2013-07-15 Thread Matthew Farrellee

On 07/15/2013 07:34 AM, Ivan Berezovskiy wrote:

Matt,

I've sent a comment at https://review.openstack.org/#/c/36690/ . So if


I believe the issue is a hadoop.rpm that is out of spec w/ fedora. For 
instance, it claims to own things like /usr.


It also doesn't have a proper post-install to handle the library files.

I've not seen the issue. Please file a bug for it.



we decided to merge elements, I suggest you do it in the following way:
1. subdirectory root.d doesn't change.
2. subdirectory install.d should be used to install java on Ubuntu and
Fedora
3. subdirectory post-install.d should be used to install hadoop and
configure ssh on Ubuntu and Fedora.
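For reference, the split proposed above would give a merged element laid out roughly like this (the element name and script names are illustrative, not the actual Savanna element):

```shell
# Hypothetical layout for a merged Fedora/Ubuntu DIB element following
# the proposed split; names are illustrative only.
mkdir -p hadoop-element/root.d
mkdir -p hadoop-element/install.d
mkdir -p hadoop-element/post-install.d
mkdir -p hadoop-element/first-boot.d
touch hadoop-element/install.d/50-java          # install java on both distros
touch hadoop-element/post-install.d/60-hadoop   # install hadoop, configure ssh
touch hadoop-element/first-boot.d/99-setup      # per-distro default user setup
```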


What's the motivation for this split?



4. your changes in file first-boot.d/99-setup are OK, we only need to
change them for Fedora 19, because the default user in Fedora 19 is 'fedora'.


Agreed. AFAIK we don't have F19 yet, so please file a bug on this. 
Whoever gets it should start tracking DIB (or help DIB get F19).



Best,


matt



[openstack-dev] [openstack][neutron] Reserved fixed IPs

2013-07-15 Thread Cristian Tomoiaga
Hello everyone,

I am working on implementing fixed IP reservation for tenants. My goal is
to be able to reserve fixed IPs for a tenant and avoid as much as possible
the ephemeral state of an IP.

A basic workflow would be like this:

Tenant or admin reserves one or more fixed IPs. He will then be able to use
one or more of those reserved IPs on his instances (assign them to ports,
support multiple IPs per port).
If no (or not enough) fixed IPs are reserved, use the current IPAM
implementation; otherwise allow the tenant to select from his reserved IPs
and then go through the current IPAM.
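A minimal sketch of that allocation workflow (the class and method names are hypothetical, not actual Neutron or IPAM APIs):

```python
# Hypothetical sketch of the reservation workflow described above:
# reserved IPs are preferred, with fallback to the normal IPAM pool.

class ReservedIpam:
    def __init__(self, pool):
        self.pool = list(pool)          # addresses handled by normal IPAM
        self.reserved = {}              # tenant_id -> set of reserved IPs

    def reserve(self, tenant_id, ip):
        # Move an address out of the general pool into the tenant's set.
        self.pool.remove(ip)
        self.reserved.setdefault(tenant_id, set()).add(ip)

    def allocate(self, tenant_id, requested=None):
        # Prefer an explicitly requested reserved IP, then any reserved
        # IP, then fall back to the normal IPAM pool.
        tenant_ips = self.reserved.get(tenant_id, set())
        if requested and requested in tenant_ips:
            tenant_ips.remove(requested)
            return requested
        if tenant_ips:
            return tenant_ips.pop()
        return self.pool.pop(0)

ipam = ReservedIpam(['10.0.0.2', '10.0.0.3', '10.0.0.4'])
ipam.reserve('tenant-a', '10.0.0.3')
```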

I am using fixed routable and non-routable IPs for public and private
networks (provider network, no NAT and no tagging). I will also use
floating IPs for LB, DNS and so on.

I have a few questions regarding the development of this since the
documentation is still being worked on and I have to dig through the code a
lot to understand a few things:

1. nova reserve-fixed-ip: this belongs to nova-network, which is now
obsolete, right?
2. I thought of creating a new model (mainly a db table) to hold the IPs and
the tenant IDs in order to keep the association. I've done this for the
openvswitch plugin in ovs_models_v2 by adding a new model. I can probably
do this globally in /db directly, right (especially if I plan on supporting
multiple plugins)?
3. I was planning on adding to Neutron the API calls nova has for fixed IPs
(ex: fixed-ip-get, reserve, unreserve). Does this seem right? I am asking
because I believe there is some work towards a new IPAM implementation and
I would like to get some thoughts. I am also asking because to me it seems
a little bit confusing that nova can also manage IPs, and I am not sure
which functions are obsolete there.
4. This should go as an extension first (as far as I understand from the
docs). Add the extension to extend the Neutron API and modify the current
IPAM, right?


-- 
Regards,
Cristian Tomoiaga


Re: [openstack-dev] [Nova][Horizon] Is there precedent for validating user input on data types to APIs?

2013-07-15 Thread Sean Dague
This looks like a good place to add a test to tempest to tickle the same 
behavior that horizon is driving.


I expect this is another issue where we are expecting MySQL type 
coercion for the db, and something that will be exposed on the 
PostgreSQL Tempest run upstream. We have a standard pattern of fixing 
those in nova once we've got a test to demonstrate it.


Longer term we really need to be doing more front-side validation; 
perhaps the new v3 framework will let us get there more easily.


-Sean

On 07/14/2013 11:27 PM, Gabriel Hurley wrote:

I responded on the ticket as well, but here’s my take:

An error like this should absolutely be caught before it raises a
database error. A useful, human-friendly error message should be
returned via the API. Any uncaught exception is a bug. On the other side
of the equation, anything using the API (such as Horizon) should do its
best to pre-validate the input, but if invalid input **is** sent it
should be handled well. The best way to let Horizon devs know what the
problem is is for the API to return an intelligent failure.
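A hedged sketch of what such front-side validation might look like (the names here are illustrative, not Nova's actual validation framework):

```python
# Illustrative sketch: reject bad input with a friendly 400-style error
# instead of letting a database error escape as an uncaught 500.

class ApiError(Exception):
    def __init__(self, status, message):
        super().__init__(message)
        self.status = status
        self.message = message

def validate_flavor_id(value):
    # Check the type up front so the client gets a clear message
    # rather than an uncaught exception from the database layer.
    try:
        return int(value)
    except (TypeError, ValueError):
        raise ApiError(400, "flavor id must be an integer, got %r" % (value,))

try:
    validate_flavor_id("not-a-number")
except ApiError as e:
    pass  # e.status == 400, with a human-friendly message
```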

All the best,

-Gabriel

*From:*Dirk Müller [mailto:d...@dmllr.de]
*Sent:* Sunday, July 14, 2013 5:20 PM
*To:* OpenStack Development Mailing List
*Subject:* Re: [openstack-dev] [Nova][Horizon] Is there precedent for
validating user input on data types to APIs?

Hi Matt,

Given that the Nova API is public, this needs to be validated in the
API, otherwise the security guys are unhappy.

Of course the API shouldn't get bad data in the first place. That's a
bug in nova client. I have sent reviews for both code fixes but I've not
seen any serious reaction or approval on those for two weeks. Eventually
somebody is going to look at it, I guess.

Greetings,
Dirk







--
Sean Dague
http://dague.net



Re: [openstack-dev] Program Description for OpenStack QA

2013-07-15 Thread Sean Dague

On 07/15/2013 07:46 AM, Thierry Carrez wrote:

Chmouel Boudjnah wrote:

The OpenStack QA program starts with 2 git trees
  * tempest - https://github.com/openstack/tempest
  * grenade - https://github.com/openstack-dev/grenade


I haven't read the full discussion on this so apologies if I am
missing something, but why is devstack not part of this?


Devstack falls somewhere between QA and Infrastructure... We raised
briefly the subject of where we should attach it during the initial
discussion on programs, then punted for later discussion.

It falls in the same bucket as other central repositories like
openstack/requirements -- everyone ends up contributing to those so it's
difficult to attach them to any given program/team.


Right, devstack's primary mission is still providing development 
environments to developers. We reuse it in QA and Infra, but it's kind 
of a different beast.


So for now it just remains what it is, which I think is fine. I think 
it's good to be pragmatic about Programs and only fit the git trees that 
naturally fit into them, and not be too concerned that every git 
tree we carry has to be owned by a program.


Sometimes we have useful code just because people are useful and doing 
good things for the community. I'd much rather let that bloom as is than 
spend a lot of time ensuring it fits into an existing program.


-Sean

--
Sean Dague
http://dague.net



[openstack-dev] [savanna]error while accessing Savanna UI

2013-07-15 Thread Arindam Choudhury
Hi,

I did:

git clone https://github.com/stackforge/savanna-dashboard.git

cd savanna-dashboard

python setup.py install

pip show savannadashboard
---
Name: savannadashboard
Version: 0.2.rc2
Location: /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg
Requires: 

then in  /usr/share/openstack-dashboard/openstack_dashboard/settings.py

HORIZON_CONFIG = {
'dashboards': ('project', 'admin', 'settings', 'savanna',),


INSTALLED_APPS = (
'openstack_dashboard',
'savannadashboard',

and in 
/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py

SAVANNA_URL = 'http://localhost:8386/v1.0'

But whenever I try to access savanna dashboard I get the following error in 
httpd error_access log:

[Mon Jul 15 07:44:35 2013] [error] ERROR:django.request:Internal Server Error: 
/dashboard/savanna/
[Mon Jul 15 07:44:35 2013] [error] Traceback (most recent call last):
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 111, in 
get_response
[Mon Jul 15 07:44:35 2013] [error] response = callback(request, 
*callback_args, **callback_kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/horizon/decorators.py, line 38, in dec
[Mon Jul 15 07:44:35 2013] [error] return view_func(request, *args, 
**kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/horizon/decorators.py, line 54, in dec
[Mon Jul 15 07:44:35 2013] [error] return view_func(request, *args, 
**kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/horizon/decorators.py, line 38, in dec
[Mon Jul 15 07:44:35 2013] [error] return view_func(request, *args, 
**kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/views/generic/base.py, line 48, in 
view
[Mon Jul 15 07:44:35 2013] [error] return self.dispatch(request, *args, 
**kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/django/views/generic/base.py, line 69, in 
dispatch
[Mon Jul 15 07:44:35 2013] [error] return handler(request, *args, **kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/horizon/tables/views.py, line 155, in get
[Mon Jul 15 07:44:35 2013] [error] handled = self.construct_tables()
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/horizon/tables/views.py, line 146, in 
construct_tables
[Mon Jul 15 07:44:35 2013] [error] handled = self.handle_table(table)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/horizon/tables/views.py, line 118, in 
handle_table
[Mon Jul 15 07:44:35 2013] [error] data = self._get_data_dict()
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/horizon/tables/views.py, line 182, in 
_get_data_dict
[Mon Jul 15 07:44:35 2013] [error] self._data = 
{self.table_class._meta.name: self.get_data()}
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/clusters/views.py,
 line 40, in get_data
[Mon Jul 15 07:44:35 2013] [error] clusters = savanna.clusters.list()
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/api/clusters.py,
 line 74, in list
[Mon Jul 15 07:44:35 2013] [error] return self._list('/clusters', 
'clusters')
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/api/base.py,
 line 84, in _list
[Mon Jul 15 07:44:35 2013] [error] resp = self.api.client.get(url)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/api/httpclient.py,
 line 28, in get
[Mon Jul 15 07:44:35 2013] [error] headers={'x-auth-token': self.token})
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/requests/api.py, line 55, in get
[Mon Jul 15 07:44:35 2013] [error] return request('get', url, **kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/requests/api.py, line 44, in request
[Mon Jul 15 07:44:35 2013] [error] return session.request(method=method, 
url=url, **kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/requests/sessions.py, line 335, in request
[Mon Jul 15 07:44:35 2013] [error] resp = self.send(prep, **send_kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/requests/sessions.py, line 438, in send
[Mon Jul 15 07:44:35 2013] [error] r = adapter.send(request, **kwargs)
[Mon Jul 15 07:44:35 2013] [error]   File 
/usr/lib/python2.6/site-packages/requests/adapters.py, line 327, in send
[Mon Jul 15 07:44:35 2013] [error] raise ConnectionError(e)
[Mon Jul 15 07:44:35 2013] [error] ConnectionError: 
HTTPConnectionPool(host='localhost', 

Re: [openstack-dev] cells checks on patches

2013-07-15 Thread Andrew Laski
I will also be working to help get cells passing tests.  I just set up a 
blueprint on the Nova side for this, 
https://blueprints.launchpad.net/nova/+spec/cells-gating.


On 07/13/13 at 05:00pm, Chris Behrens wrote:

I can make a commitment to help getting cells passing.  Basically, I'd like to 
do whatever I can to make sure we can have a useful gate on cells.  
Unfortunately I'm going to be mostly offline for the next 10 days or so, 
however. :)

I thought there was a sec group patch up for cells, but I've not fully reviewed 
it.

The generic "cannot communicate with cell 'child'" error almost sounds like some 
other basic issue. I'll see if I can take a peek during my layovers tonight.

On Jul 13, 2013, at 8:28 AM, Sean Dague s...@dague.net wrote:


On 07/13/2013 10:50 AM, Dan Smith wrote:

Currently cells can't even get past the devstack exercises, which are very
minor sanity checks for the environment (nothing tricky).


I thought that the plan was to deprecate the devstack exercises and
just use tempest. Is that not the case? I'd bet that the devstack
exercises are just not even on anyone's radar. Since the excellent work
you QA folks did to harden those tests before grizzly, I expect most
people take them for granted now :)

Digging into the logs just a bit, I see what looks like early failures
related to missing security group issues in the cells manager log. I
know there are some specific requirements in how things have to be set
up for cells, so I think it's likely that we'll need to do some
tweaking of configs to get all of this right.

We enabled the test knowing that it wasn't going to pass for a while,
and it's only been running for less than 24 hours. In the same way that
the grenade job had (until recently) been failing on everything, the
point of enabling the cells test now is so that we can start iterating
on fixes so that we can hopefully have some amount of regular test
coverage before havana.


Like I said, as long as someone is going to work on it, I'm happy. :) I just 
don't want this to be an "enable the tests and hope magic fairies come to 
fix them" issue. That's what we did on full neutron tests, and it's been 
bouncing around like that for a while.

We are planning on disabling the devstack exercises; it wasn't so much that, 
it's that it looks like there is a fundamental lack of functioning nova on 
devstack for cells right now. The security groups stack trace is just a side 
effect of cells falling over in a really low-level way (this is what's before 
and after the trace).

2013-07-13 00:12:18.605 ERROR nova.cells.scheduler 
[req-dcbb868c-98a7-4d65-94b3-e1234c50e623 demo demo] Couldn't communicate with 
cell 'child'

2013-07-13 00:12:18.606 ERROR nova.cells.scheduler 
[req-dcbb868c-98a7-4d65-94b3-e1234c50e623 demo demo] Couldn't communicate with 
any cells

Again, mostly I want to know that we've got a blueprint or bug that's high 
priority and someone's working on it. It did take a while to get grenade there 
(we're 2 bugs away from being able to do it repeatably in the gate), but during 
that time we did have people working on it. It just takes a while to get to the 
bottom of these issues sometimes, so I want people to have a realistic 
expectation on how quickly we'll go from running upstream to gating.

   -Sean

--
Sean Dague
http://dague.net







Re: [openstack-dev] Tempest testing for optional middlewares

2013-07-15 Thread Joe Hakim Rahme
Thank you Sean.

In case someone checks this in the future, it's worth mentioning that any
new field added to the conf file has to be declared in tempest/config.py 
first.
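Sean's pattern (quoted below) might look roughly like this; CONF here is a stand-in for Tempest's real config object, and the flag name is illustrative:

```python
import unittest

# Hedged sketch of the skip pattern: a config flag defaulting to False,
# plus a skip decorator, so optional-middleware tests are skipped
# rather than failed. In real Tempest the flag would be declared in
# tempest/config.py.

class FakeConf:
    account_quota_available = False  # operators flip this to True

CONF = FakeConf()

class AccountQuotaTest(unittest.TestCase):
    @unittest.skipUnless(CONF.account_quota_available,
                         "account_quota middleware not enabled")
    def test_quota_enforced(self):
        # Would exercise the optional middleware here.
        pass
```

Run under a test runner, the test is reported as skipped (not failed) whenever the flag is left at its default.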

Joe

On Jul 12, 2013, at 8:26 PM, Sean Dague s...@dague.net wrote:

 On 07/12/2013 02:15 PM, Joe Hakim Rahme wrote:
 Hello everyone,
 
 I'm addressing this email to the dev list because I couldn't find a way
 to get in touch with the testing team. Hopefully someone here will have
 the answer to my question or can point me to the correct people to ask.
 
 I am writing Tempest tests that cover the behavior of some optional
 Swift middlewares (precisely account_quota and container_quota).
 It occurs to me that these middlewares are optional and may not be
  present in every Swift installation. In this case, I'd like Tempest to skip
 this test rather than fail it.
 
 What's the correct way of detecting the presence of the middleware
 before launching the test?
 
 In the tempest.conf you should create a variable for foo_available, 
 defaulting to false. Then if someone wants to test it we set it to true. Then 
 you can decorate your tests (or class) to skip if that variable is false.
 
 This is an example of it in the code - 
 https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_server_actions.py#L150
 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net




Re: [openstack-dev] Program Description for OpenStack QA

2013-07-15 Thread Thierry Carrez
Sean Dague wrote:
 On 07/15/2013 07:46 AM, Thierry Carrez wrote:
 Chmouel Boudjnah wrote:
 The OpenStack QA program starts with 2 git trees
   * tempest - https://github.com/openstack/tempest
   * grenade - https://github.com/openstack-dev/grenade

 I haven't read the full discussion on this so apologies if I am
 missing something, but why is devstack not part of this?

 Devstack falls somewhere between QA and Infrastructure... We raised
 briefly the subject of where we should attach it during the initial
 discussion on programs, then punted for later discussion.

 It falls in the same bucket as other central repositories like
 openstack/requirements -- everyone ends up contributing to those so it's
 difficult to attach them to any given program/team.
 
 Right, devstack's primary mission is still providing development
 environments to developers. We reuse it in QA and Infra, but it's kind
 of a different beast.
 
 So for now it just remains what it is, which I think is fine. I think
 it's good to be pragmatic about Programs and only fit the git trees that
 naturally fit into them, and not be too concerned that every git
 tree we carry has to be owned by a program.
 
 Sometimes we have useful code just because people are useful and doing
 good things for the community. I'd much rather let that bloom as is than
 spend a lot of time ensuring it fits into an existing program.

I'd generally agree with that. The only issue with devstack is that
under the old project-based taxonomy it was classified as a gating
project and therefore granted ATC status to its contributors. Under the
new program-based taxonomy, if it's not adopted by a program then it
would fall off the official ATC scope.

That said, if nobody specific wants to own it it could also be
co-adopted by multiple programs (I'd say QA and Infra). That would close
that taxonomy change loophole as far as I am concerned...

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [savanna]error while accessing Savanna UI

2013-07-15 Thread Matthew Farrellee

On 07/15/2013 08:45 AM, Arindam Choudhury wrote:

[quoted original message and traceback snipped]

Re: [openstack-dev] [savanna]error while accessing Savanna UI

2013-07-15 Thread Arindam Choudhury
It's solved.

I started a cluster and now it's working.

 Date: Mon, 15 Jul 2013 09:57:20 -0400
 From: m...@redhat.com
 To: openstack-dev@lists.openstack.org
 CC: arin...@live.com; savanna-...@lists.launchpad.net
 Subject: Re: [openstack-dev] [savanna]error while accessing Savanna UI
 
 On 07/15/2013 08:45 AM, Arindam Choudhury wrote:
  [quoted original message and traceback snipped]
  [Mon Jul 15 07:44:35 2013] [error] data = self._get_data_dict()
  [Mon Jul 15 07:44:35 2013] [error]   File
  /usr/lib/python2.6/site-packages/horizon/tables/views.py, line 182, in
  _get_data_dict
  [Mon Jul 15 07:44:35 2013] [error] self._data =
  {self.table_class._meta.name: self.get_data()}
  [Mon Jul 15 07:44:35 2013] [error]   File
  /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/clusters/views.py,
  line 40, in get_data
  [Mon Jul 15 07:44:35 2013] [error] clusters = savanna.clusters.list()
  [Mon Jul 15 07:44:35 2013] [error]   File
  /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/api/clusters.py,
  line 74, in list
  [Mon Jul 15 07:44:35 2013] [error] return self._list('/clusters',
  'clusters')
  [Mon Jul 15 07:44:35 2013] [error]   File
  /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/api/base.py,
  line 84, in _list
  [Mon Jul 15 07:44:35 2013] [error] resp = self.api.client.get(url)
  [Mon Jul 15 07:44:35 2013] [error]   File
  /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg/savannadashboard/api/httpclient.py,
  line 28, in get
  [Mon Jul 15 07:44:35 2013] [error] headers={'x-auth-token': self.token})
  [Mon Jul 15 07:44:35 2013] [error]   File
  /usr/lib/python2.6/site-packages/requests/api.py, line 55, in get
  [Mon Jul 15 07:44:35 2013] [error] return request('get', url, **kwargs)
  [Mon Jul 15 07:44:35 2013] [error]   File
  /usr/lib/python2.6/site-packages/requests/api.py, line 44, in request
  [Mon Jul 15 07:44:35 2013] [error] return
  session.request(method=method, url=url, **kwargs)
  [Mon Jul 15 07:44:35 2013] [error]   File
  /usr/lib/python2.6/site-packages/requests/sessions.py, line 335, in
  request
  [Mon Jul 15 07:44:35 2013] 

Re: [openstack-dev] Proposal for new Program: OpenStack Deployment

2013-07-15 Thread Anne Gentle
Any thoughts on these questions?

Thanks,
Anne


On Wed, Jul 10, 2013 at 9:47 AM, Anne Gentle
annegen...@justwriteclick.com wrote:

 Hi Robert,

 What's your plan for documenting the efforts so that others can do this in
 their environments? Is there any documentation currently for which you can
 send links?

 The Doc team is especially interested in configuration docs and
 installation docs as those are the toughest to produce in a timely,
 accurate manner now. We have a blueprint for automatically generating docs
 from configuration options in the code. We are trying to determine a good
 path for install docs for meeting the release deliverable -- much
 discussion at
 http://lists.openstack.org/pipermail/openstack-docs/2013-July/002114.html.
 Your input welcomed.

 Thanks,
 Anne


 On Wed, Jul 10, 2013 at 3:40 AM, Robert Collins robe...@robertcollins.net
  wrote:

 On 10 July 2013 20:01, Thierry Carrez thie...@openstack.org wrote:
 
  Robert Collins wrote:
   Official Title: OpenStack Deployment
   PTL: Robert Collins robe...@robertcollins.net
   Mission Statement:
 Develop and maintain tooling and infrastructure able to
 deploy OpenStack in production, using OpenStack itself wherever
 possible.
  
   I believe everyone is familiar with us, but just in case, here is some
   background: we're working on deploying OpenStack to bare metal using
   OpenStack components and cloud deployment strategies - such as Heat
 for
   service orchestration, Nova for machine provisioning Neutron for
 network
   configuration, golden images for rapid deployment... etc etc. So far
 we
   have straightforward deployment of bare metal clouds both without
 Heat
   (so that we can bootstrap from nothing), and with Heat (for the
   bootstrapped layer), and are working on the KVM cloud layer at the
 moment.
 
  Could you provide the other pieces of information mentioned at:
  https://wiki.openstack.org/wiki/Governance/NewPrograms

 ack:

 * Detailed mission statement (including why their effort is essential
 to the completion of the OpenStack mission)

 I think this is covered. In case it's not obvious: if you can't install
 OpenStack easily, it becomes a lot harder to deliver to users. So
 deployment is essential (and at the moment the market is assessing the
 cost of deploying OpenStack at ~ 60K - so we need to make it a lot
 cheaper).

 * Expected deliverables and repositories

 We'll deliver and maintain working instructions and templates for
 deploying OpenStack.
 Repositories that are 'owned' by Deployment today
 diskimage-builder
 tripleo-image-elements
 tripleo-heat-templates
 os-apply-config
 os-collect-config
 os-refresh-config
 toci [triple-o-CI][this is something we're discussing with infra about
 where it should live... and mordred and jeblair disagree with each
 other :)].
 tripleo-incubator [we're still deciding if we'll have an actual CLI
 tool or just point folk at the other bits, and in the interim stuff
 lives here].


 * How 'contribution' is measured within the program (by default,
 commits to the repositories associated to the program)

 Same as rest of OpenStack : commits to any of these repositories, and
 we need some way of recognising non-code contributions like extensive
 docs/bug management etc, but we don't have a canned answer for the
 non-code aspects.

 * Main team members
 Is this the initial review team, or something else? If it's the review
 team, then me/Clint/Chris Jones/Devananda.
 If something else then I propose we start with those who have commits
 in the last 6 months, namely [from a quick git check, this may be
 imperfect]:
 Me
 Clint Byrum
 Chris Jones
 Ghe Rivero
 Chris Krelle
 Devananda van der Veen
 Derek Higgins
 Cody Somerville
 Arata Notsu
 Dan Prince
 Elizabeth Krumbach
 Joe Gordon
 Lucas Alvares Gomes
 Steve Baker
 Tim Miller

 Proposed initial program lead (PTL)
 I think yours truly makes as much sense as anything :).

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Cloud Services

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Anne Gentle
 annegen...@justwriteclick.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-15 Thread Morgan Fainberg
On 2013/07/15, Stephen Gran wrote:

 On 15/07/13 10:46, Thomas Goirand wrote:

 On 07/15/2013 04:32 PM, Stephen Gran wrote:

 On 15/07/13 09:26, Thomas Goirand wrote:

 Dolph,

 If you do that, then you will be breaking Debian packages, as they
 expect Sqlite as the default, for example when using
 DEBIAN_FRONTEND=noninteractive apt-get install keystone (if you choose
 MySQL, then you need to enter admin credentials to setup the db). I will
 receive tons of piupart failures reports if we can't upgrade with
 SQLite.

 I would really be disappointed if this happens, and get into situations
 where I have RC bugs which I can't realistically close by myself.

 So really, if it is possible, continue to support it, at least from one
 release to the next.


 Why not just change the default for Debian?  Sqlite isn't particularly
 useful for actual deployments anyway.


 Because that is the only backend that will work without providing
 credentials on the keyboard, so it is the only one that will work in a
 non-interactive session of apt-get (which is used for all automated
 tests in Debian, including piuparts).


 It strikes me that making the least useful option for users the default in
 order to pass a test suite is suboptimal.  I'm sure this conversation would
 be better continued off list if you're interested.

Cheers,
 --
 Stephen Gran
 Senior Systems Integrator - guardian.co.uk


I would have to agree here.  If the case for using a suboptimal solution
here is to pass tests another approach should be taken.

If this is a legitimate issue, maybe we should look at what Neutron is
doing, as they are using alembic for migrations already.

Cheers,
Morgan Fainberg


-- 
Sent from my iPhone (please excuse the brevity and any typos)


Re: [openstack-dev] [savanna]error while accessing Savanna UI

2013-07-15 Thread Ruslan Kamaldinov
Congratulations!

On Jul 15, 2013, at 6:13 PM, Arindam Choudhury arin...@live.com wrote:

 Its solved. 
 
 I started a cluster and then its working.
 
 Date: Mon, 15 Jul 2013 09:57:20 -0400
 From: m...@redhat.com
 To: openstack-dev@lists.openstack.org
 CC: arin...@live.com; savanna-...@lists.launchpad.net
 Subject: Re: [openstack-dev] [savanna]error while accessing Savanna UI
 
 On 07/15/2013 08:45 AM, Arindam Choudhury wrote:
 Hi,
 
 I did:
 
 git clone https://github.com/stackforge/savanna-dashboard.git
 
 cd savanna-dashboard
 
 python setup.py install
 
 pip show savannadashboard
 ---
 Name: savannadashboard
 Version: 0.2.rc2
 Location:
 /usr/lib/python2.6/site-packages/savannadashboard-0.2.rc2-py2.6.egg
 Requires:
 
 then in /usr/share/openstack-dashboard/openstack_dashboard/settings.py
 
 HORIZON_CONFIG = {
 'dashboards': ('project', 'admin', 'settings', 'savanna',),
 
 
 INSTALLED_APPS = (
 'openstack_dashboard',
 'savannadashboard',
 
 and in
 /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
 
 SAVANNA_URL = 'http://localhost:8386/v1.0'
 
 But whenever I try to access savanna dashboard I get the following error
 in httpd error_access log:
 
 [traceback snipped]

Re: [openstack-dev] Program Description for OpenStack QA

2013-07-15 Thread Thierry Carrez
Dolph Mathews wrote:
 On Monday, July 15, 2013, Thierry Carrez wrote:
 I'd generally agree with that. The only issue with devstack is that
 under the old project-based taxonomy it was classified as a gating
 project and therefore granted ATC status to its contributors. Under the
 new program-based taxonomy, if it's not adopted by a program then it
 would fall off the official ATC scope.
 
 How many people contribute to devstack that don't also contribute to
 some other ATC-granting project?

Hopefully none :) But it's not just a question of falling off the
ATC-granting scope... it's also that devstack is a critical piece of our
CI system and I'd therefore prefer if it was taken care of by a team (or
set of teams) under the Technical Committee authority.

It used to be a project separate from OpenStack but we brought it to
the fold for that precise reason.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Proposal for new Program: OpenStack Deployment

2013-07-15 Thread Clint Byrum
Excerpts from Anne Gentle's message of 2013-07-10 07:47:19 -0700:
 Hi Robert,
 
 What's your plan for documenting the efforts so that others can do this in
 their environments? Is there any documentation currently for which you can
 send links?
 

We've been documenting the bootstrap procedure in this file:

https://github.com/tripleo/incubator/blob/master/devtest.md

This is intended as much as a "how to get started" as it is a "how does
this actually work?". We intend to have a much leaner set of instructions
that will have many of these steps scripted.

Also elements have their own README.md which is intended to inform deployers
what to do with each piece:

https://github.com/stackforge/tripleo-image-elements/tree/master/elements/neutron-openvswitch-agent
https://github.com/stackforge/tripleo-image-elements/tree/master/elements/keystone

Long term I see us assembling those into a holistic deployment guide, and
mapping the configurations and relationships defined in the Heat templates
and elements to the corresponding parts of each project's manual.



Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-15 Thread Adam Young

On 07/15/2013 05:46 AM, Thomas Goirand wrote:

On 07/15/2013 04:32 PM, Stephen Gran wrote:

On 15/07/13 09:26, Thomas Goirand wrote:

Dolph,

If you do that, then you will be breaking Debian packages, as they
expect Sqlite as the default, for example when using
DEBIAN_FRONTEND=noninteractive apt-get install keystone (if you choose
MySQL, then you need to enter admin credentials to setup the db). I will
receive tons of piupart failures reports if we can't upgrade with SQLite.

I would really be disappointed if this happens, and get into situations
where I have RC bugs which I can't realistically close by myself.

So really, if it is possible, continue to support it, at least from one
release to the next.

Why not just change the default for Debian?  Sqlite isn't particularly
useful for actual deployments anyway.

Because that is the only backend that will work without providing
credentials on the keyboard, so it is the only one that will work in a
non-interactive session of apt-get (which is used for all automated
tests in Debian, including piuparts).


That is a really, really, really bad reason.



Thomas







Re: [openstack-dev] Program Description for OpenStack QA

2013-07-15 Thread Dean Troyer
On Mon, Jul 15, 2013 at 9:59 AM, Thierry Carrez thie...@openstack.org wrote:
 Hopefully none :) But it's not just a question of falling off the
 ATC-granting scope... it's also that devstack is a critical piece of our
 CI system and I'd therefore prefer if it was taken care of by a team (or
 set of teams) under the Technical Committee authority.

 It used to be a project separate from OpenStack but we brought it to
 the fold for that precise reason.

I was reluctant to contribute to a plethora of programs if it was not
necessary but it does not feel like there is a good fit for DevStack's
multiple purposes.  In my mind it is still first and foremost a
development tool and quick way to build an OpenStack implementation
for multiple other uses such as CI gating, Grenade, POC demos,
whatever.

Program proposal forthcoming...

dt

--

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [Keystone] Memcache user token index and PKI tokens

2013-07-15 Thread Adam Young

On 07/15/2013 04:06 AM, Kieran Spear wrote:

Hi all,

I want to backport the fix for the "Token List in Memcache can consume
an entire memcache page" bug [1] to Grizzly, but I had a couple of
questions:

1. Why do we need to store the entire token data in the
usertoken-userid key? This data always seems to be hashed before
indexing into the 'token-tokenid' keys anyway. The size of the
memcache data for a user's token list currently grows by 4k every time
a new PKI token is created. It doesn't take long to hit 1MB at this
rate even with the above fix.
Yep. The reason, though, is that we either take a memory/storage hit 
(store the whole token) or a performance hit (reproduce the token data) 
and we've gone for the storage hit.





2. Every time it creates a new token, Keystone loads each token from
the user's token list with a separate memcache call so it can throw it
away if it's expired. This seems excessive. Is it anything to worry
about? If it just checked the first two tokens you'd get the same
effect on a longer time scale.

I guess part of the answer is to decrease our token expiry time, which
should mitigate both issues. Failing that we'd consider moving to the
SQL backend.
How about doing both? But if you move to the SQL backend, remember to
periodically clean up the token table, or you will have storage issues
there as well. No silver bullet, I am afraid.
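
To make the storage-side idea concrete, here is an illustrative sketch (not
the actual keystone code) of a user-token index that stores only a
fixed-size hash plus the expiry per token: the entries no longer grow with
PKI token size, and expired ones can be pruned without a per-token cache
round trip. A plain dict stands in for the memcache client.

```python
import hashlib
import time

class UserTokenIndex:
    """Toy stand-in for the memcache-backed user token list; a plain
    dict plays the role of the memcache client here."""

    def __init__(self):
        self.cache = {}

    def add(self, user_id, token_id, expires):
        # Store a fixed-size hash of the token id plus its expiry time
        # instead of the full ~4k PKI token body, so the index entry no
        # longer grows with token size.
        key = 'usertoken-%s' % user_id
        entries = self.cache.get(key, [])
        # Prune expired entries using the stored expiry alone -- no
        # per-token cache lookup required.
        now = time.time()
        entries = [e for e in entries if e[1] > now]
        entries.append((hashlib.sha1(token_id.encode()).hexdigest(), expires))
        self.cache[key] = entries

    def list(self, user_id):
        return self.cache.get('usertoken-%s' % user_id, [])
```

Keeping the expiry in the index entry is the assumption that makes the
cheap pruning possible.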




Cheers,
Kieran

[1] https://bugs.launchpad.net/keystone/+bug/1171985






[openstack-dev] Program Proposal: DevStack

2013-07-15 Thread Dean Troyer
DevStack plays multiple roles in the development process for OpenStack.


Official Title: DevStack

PTL: Dean Troyer dtro...@gmail.com

Mission Statement: To provide an installation of OpenStack from git
repository master, or specific branches, suitable for development and
operational testing.  It also attempts to document the process and
provide examples of command line usage.

DevStack is not a general OpenStack installer and does not support all
installation combinations.

Documentation: http://devstack.org
GitHub: https://github.com/openstack-dev/devstack
LaunchPad: https://launchpad.net/devstack
Program Wiki: https://wiki.openstack.org/wiki/DevStack


dt

--

Dean Troyer
dtro...@gmail.com



[openstack-dev] Execute Tempest test with admin privileges

2013-07-15 Thread Joe Hakim Rahme
Hello,

I am trying to write a test in Tempest that would cover the behavior of
the Swift Account Quotas middleware. The use case I'm trying to cover
is that the test would create an account, put a quota on it, and try to
upload a file larger and a file smaller than the quota.

Here's what I have in my class:

class AccountQuotaTest(base.BaseObjectTest):
    @classmethod
    def setUpClass(cls):
        super(AccountQuotaTest, cls).setUpClass()
        cls.container_name = rand_name(name="TestContainer")
        cls.container_client.create_container(cls.container_name)

        # Add the account quota metadata
        size = 10
        metadata = {"Quota-Bytes": size}
        cls.account_client.create_account_metadata(metadata=metadata)

However, when I execute the tests, I get an Unauthorized exception. I
guess it makes sense, since only the admin (or any account with
ResellerAdmin role) can set this metadata.

How can I proceed so that admin sets the quota on the account?
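
For reference, the raw Swift call I believe this boils down to (outside of
the Tempest plumbing) is a POST on the account with the quota header set,
sent with a ResellerAdmin token. The URL, token value, and even the exact
header name below are my assumptions, not verified against a deployment:

```python
from urllib.request import Request

def quota_update_request(account_url, admin_token, quota_bytes):
    # Placeholder URL and token; the header name follows the Swift
    # account_quotas middleware convention (X-Account-Meta-Quota-Bytes).
    return Request(
        account_url,
        method='POST',
        headers={'X-Auth-Token': admin_token,
                 'X-Account-Meta-Quota-Bytes': str(quota_bytes)})

req = quota_update_request('http://localhost:8080/v1/AUTH_test',
                           'reseller-admin-token', 10)
```

So the question is really just how to get an admin-scoped client to issue
that request from inside a Tempest test.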

I hope I'm clear enough, don't hesitate to ask if anything's not clear.

Joe




Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-15 Thread Dolph Mathews
On Sun, Jul 14, 2013 at 12:29 PM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi Dolph,

 Yes, I agree that there are some problems with sqlite and migrations.
 But I don't agree with the approach of fully removing sqlite.

 It is pretty useful for testing.


To be clear, I don't have any interest in dropping support for running
keystone against sqlite as backend for development / testing. It's
certainly useful for running functional tests in-memory. However, such
databases don't need to be created by db_sync or using any migrations
whatsoever.

It's the migrations themselves where I want to drop support for sqlite. No
one is migrating production data in a sqlite database.



 So the right approach that we are trying to establish across all of
 OpenStack is the following:

 1) Use alembic as migration tool (don't support sqlite in migrations)


In my original email, I failed to cite where I got the idea (because I
couldn't find a link, perhaps someone else knows of it?), but the same
concern was expressed by some alembic-related documentation or blog post
that I had recently read. I don't expect to see keystone making much
progress towards switching to alembic until icehouse or even later. In the
mean time, I'd like to do what we can to limit our pain while working with
sqlalchemy-migrate (which mostly involves writing entirely different code
paths to handle sqlite's lack of support for proper schema evolution).
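
Concretely, because sqlite's ALTER TABLE can't drop a column, each such
migration has to rebuild the table -- roughly this dance (an illustrative
raw-sqlite3 sketch, not keystone code):

```python
import sqlite3

# Dropping 'legacy_col' on sqlite: rename the table, recreate it with
# the new schema, copy the surviving columns across, drop the original.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT, legacy_col TEXT);
    INSERT INTO user (name, legacy_col) VALUES ('alice', 'junk');

    ALTER TABLE user RENAME TO user_old;
    CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO user (id, name) SELECT id, name FROM user_old;
    DROP TABLE user_old;
""")
rows = conn.execute('SELECT id, name FROM user').fetchall()
print(rows)  # [(1, 'alice')]
```

On mysql/postgresql the same migration is a single
ALTER TABLE ... DROP COLUMN, which is exactly the divergence in code paths
mentioned above.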


 2) Test migrations in two ways:
 a) run all migrations with real data against supported backends (mysql,
 psql)
 b) test that models and migrations are synced in all backends (mysql, psql)


+1


 3) Unit tests should be run against DB that was created from models (not
 from migrations)


+1


 4) Unit tests should support all backends (sqlite also)


+1




 If you are interested in this area, I could try to describe the current
 state in more detail.


I think we're already in complete agreement! I just wasn't clear in my
original email (apologies!).




 Best regards,
 Boris Pavlovic



 On Sat, Jul 13, 2013 at 12:19 AM, Monty Taylor mord...@inaugust.com wrote:



 On 07/11/2013 01:12 PM, Dolph Mathews wrote:
  Just as a general statement, outside the scope of openstack, I don't
  think sqlite is intended to support schema evolution. From the sqlite
  docs [1]: "SQLite supports a limited subset of ALTER TABLE. [...] It is
  not possible to rename a column, remove a column, or add or remove
  constraints from a table."
 
  We've been through hell trying to support migrations on sqlite, because
  we test against sqlite, and because we test our migrations... on sqlite.
  So, we've already shot ourselves in the foot. We're clearly moving
  towards gating against mysql + postgresql, so in the mean time, let's
  limit the amount of effort we put into further supporting sqlite migrations
  until we can safely rip it out altogether.
 
  [1]: http://www.sqlite.org/lang_altertable.html

 I agree. The reason to use sqlite in unitests and stuff is because it's
 easy and doesn't require users and system things and everything. If
 we're spending extra effort to maintain the simple thing, then it's
 probably not a simple thing.

 As an aside, (ignore the fact that I'm a former Drizzle core dev) it
 might be worthwhile taking 30 minutes one day and exploring a drizzle
 database test fixture. One of the things we did in drizzle was make it
 not need any bootstrapping and to work sanely with no config files ...
 so launching a drizzle on a spare port, running database tests against
 it and then deleting it should actually be super simple - and at the
 worst no harder than doing what glance does in their functional tests.









-- 

-Dolph


Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-15 Thread Dolph Mathews
On Fri, Jul 12, 2013 at 3:19 PM, Monty Taylor mord...@inaugust.com wrote:



 On 07/11/2013 01:12 PM, Dolph Mathews wrote:
  Just as a general statement, outside the scope of openstack, I don't
  think sqlite is intended to support schema evolution. From the sqlite
  docs [1]: "SQLite supports a limited subset of ALTER TABLE. [...] It is
  not possible to rename a column, remove a column, or add or remove
  constraints from a table."
 
  We've been through hell trying to support migrations on sqlite, because
  we test against sqlite, and because we test our migrations... on sqlite.
  So, we've already shot ourselves in the foot. We're clearly moving
  towards gating against mysql + postgresql, so in the mean time, let's
  limit the amount of effort we put into further supporting sqlite migrations
  until we can safely rip it out altogether.
 
  [1]: http://www.sqlite.org/lang_altertable.html

 I agree. The reason to use sqlite in unitests and stuff is because it's
 easy and doesn't require users and system things and everything. If
 we're spending extra effort to maintain the simple thing, then it's
 probably not a simple thing.


I agree that it's easy for unit & functional testing. It's a simple
solution and works fairly well, although I know we'd catch more issues if
we ran our functional tests against a database that supported static
typing, real booleans, etc.



 As an aside, (ignore the fact that I'm a former Drizzle core dev) it
 might be worthwhile taking 30 minutes one day and exploring a drizzle
 database test fixture. One of the things we did in drizzle was make it
 not need any bootstrapping and to work sanely with no config files ...
 so launching a drizzle on a spare port, running database tests against
 it and then deleting it should actually be super simple - and at the
 worst no harder than doing what glance does in their functional tests.


That sounds like a viable improvement over sqlite in general...
unfortunately, the drizzle site appears to be unmaintained? (at least at
the moment) The documentation link [1] from here [2] returns a 404, and
these docs [3] return a 403. Launchpad bug activity [4] doesn't seem
particularly active either :-/

[1] http://docs.drizzle.org/
[2] http://www.drizzle.org/content/documentation
[3] https://drizzle.readthedocs.org/en/latest/
[4]
https://bugs.launchpad.net/drizzle/+bugs?orderby=-date_last_updated&search=Search&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=FIXRELEASED
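
For what it's worth, the throwaway-server fixture Monty describes would
look roughly like this -- the 'drizzled' binary name and its flags are
placeholders for whatever server is actually used:

```python
import shutil
import socket
import subprocess
import tempfile

def free_port():
    """Grab a spare TCP port for the throwaway server."""
    s = socket.socket()
    s.bind(('127.0.0.1', 0))
    port = s.getsockname()[1]
    s.close()
    return port

class ThrowawayDBFixture:
    """Launch a disposable database server for one test run, then
    delete every trace of it.  Binary name and flags are placeholders."""

    def __init__(self, binary='drizzled'):
        self.binary = binary
        self.port = free_port()
        self.datadir = None
        self.proc = None

    def setUp(self):
        # Fresh data directory per run -- no bootstrapping, no config files.
        self.datadir = tempfile.mkdtemp()
        self.proc = subprocess.Popen(
            [self.binary,
             '--datadir', self.datadir,
             '--port', str(self.port)])

    def cleanUp(self):
        # Kill the server and remove its data directory entirely.
        if self.proc is not None:
            self.proc.terminate()
            self.proc.wait()
        if self.datadir is not None:
            shutil.rmtree(self.datadir, ignore_errors=True)
```

Tests would point their connection string at 127.0.0.1 on the fixture's
port, and cleanUp() leaves nothing behind.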








-- 

-Dolph


[openstack-dev] [Infra] Meeting Tuesday July 16th at 19:00 UTC

2013-07-15 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday July 16th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [Neutron][LBaaS] Appliance support

2013-07-15 Thread Eugene Nikanorov
Hi Ravi,

There are plans to support appliances (hardware or virtual) but not for
Havana.
Regarding vendor contributions - I'd like to know it as well.


Thanks,
Eugene.



On Thu, Jul 11, 2013 at 5:04 AM, Ravi Chunduru ravi...@gmail.com wrote:

 Hi,
  I would like to know if we have any plans/proposal to move forward from
 haproxy in network namespace to support haproxy in a virtual appliance.
 And any vendor contribution in drivers for their appliances.

 Thanks,
 -Ravi.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bug #1194026

2013-07-15 Thread Edgar Magana
This is ready to be merged, already has two core reviews but Salvatore ask
for some changes in the commit message.

Thanks,

Edgar

On 7/15/13 1:47 AM, Thierry Carrez thie...@openstack.org wrote:

Nachi Ueno wrote:
 Since this is critical bug which stops gating, I hope this is merged
soon.
 I'll fix this code asap if I got review comment.

Great work Nachi. I certainly hope it gets the second Neutron +2 very
soon so we can make the Neutron tests voting again ASAP.

Thanks,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSLO] Comments/Questions on Messaging Wiki

2013-07-15 Thread William Henry
- Original Message -

 On Fri, Jul 12, 2013 at 8:09 PM, William Henry  whe...@redhat.com  wrote:

  Sent from my iPhone
 

  On Jul 12, 2013, at 5:27 PM, Doug Hellmann  doug.hellm...@dreamhost.com 
  wrote:
 

   On Fri, Jul 12, 2013 at 5:40 PM, William Henry  whe...@redhat.com 
   wrote:
  
 

Hi all,
   
  
 

I've been reading through the Messaging Wiki and have some comments.
Not
criticisms, just comments and questions.
   
  
 
I have found this to be a very useful document. Thanks.
   
  
 

1. There are multiple backend transport drivers which implement the
API
semantics using different messaging systems - e.g. RabbitMQ, Qpid,
ZeroMQ.
While both sides of a connection must use the same transport driver
configured in the same way, the API avoids exposing details of
transports
so
that code written using one transport should work with any other
transport.
   
  
 

The good news for AMQP 1.0 users is that technically both sides of
the
connection do not have to use the same transport driver. In pre-AMQP 1.0
days
this was the case. But today interoperability between AMQP 1.0
implementations has been demonstrated.
   
  
 

   In this case I think we mean the Transport driver from within Oslo. So
   you
   could not connect a ZMQ Transport on one end to an AMQP Transport on the
   other. It will be an implementation detail of the AMQP Transport class to
   decide whether it supports more than one version of AMQP, or if the
   different versions are implemented as different Transports.
  
 

2. I notice under the RPC concepts section that you mention Exchanges
as
a
container in which topics are scoped. Is this exchange a pre AMQP 1.0
artifact or just a general term for oslo.messaging that is loosely
based
on
the pre-AMQP 1.0 artifact called an Exchange? i.e. are you assuming
that
messaging implementations have something called an exchange? Or do you
mean
that messaging implementations can scope a topic and in oslo we call
that
scoping an exchange?
   
  
 

   The latter.
  
 

  Ack. Good. Fits very well into AMQP 1.0 then ;-)
 

 3. Some messaging nomenclature: The way the wiki describes RPC "Invoke
 Method on One of Multiple Servers" is more like a queue than a topic.
In
messaging a queue is something that multiple consumers can attach to
and
one
of them gets and services a message/request. A topic is where 1+
consumers
are connected and each receives the message and each can service it
as
it sees fit. In pre-AMQP 1.0 terms what this seems to describe is a
direct
exchange. And a direct exchange can have multiple consumers listening
to
a
queue on that exchange. (Remember that fanout is just a generalization
of
topic in that all consumers get all fanout messages - there are no
sub-topics etc.)
   
  
 

In AMQP 1.0 the addressing doesn't care or know about exchanges but it
can
support this queue type behavior on an address or topic type behavior
on
an
address.
   
  
 

I know this isn't about AMQP specifically but therefore this is even
more
important. Topics are pub/sub with multiple consumer/services
responding
to
a single message. Queues are next consumer up gets the next message.
   
  
 

(BTW I've seen this kind of confusion also in early versions of
MCollective
in Puppet.)
   
  
 

It might be better to change some of the references to topic to
address.
This would solve the problem. i.e. a use case where one of many servers
listening on an address services a message/request. And later all of
servers
listening on an address service a message/request. Addressing also
solves
the one-to-one as the address is specific to the server (and the others
don't have to receive and reject the message).
   
  
 

   Too many of these terms are overloaded. :-)
  
 

  Yep. But topic pub/sub is certainly different to a queue. ;-)
 

   I'm not sure of the details of how topic and address are different in
   AMQP 1.0. The word address implies to me that the message sender knows
   where the message receiver is in some concrete sense. We don't want those
   semantics in a lot of our use cases. If the address is abstract, then
   it
   sounds like it works much as a topic does. Maybe you can expand on the
   differences?
  
 

  Nope, the address is essentially a namespace. The sender knows not where it
  ends
  up. Hence in some applications it doesn't even know if it's a topic or a
  queue and it could go to one or many depending.
 

 OK, that sounds like it would be part of the Transport's handling of a Target
 (
 https://github.com/markmc/oslo.messaging/blob/master/oslo/messaging/target.py
 ).

Thanks Doug. This is interesting. What's the difference between an exchange and 
a namespace? If exchange is a 

Re: [openstack-dev] Program Proposal: DevStack

2013-07-15 Thread Russell Bryant
On 07/15/2013 11:39 AM, Dean Troyer wrote:
 DevStack plays multiple roles in the development process for OpenStack.

Does it really make sense to be its own program?  There was mention of
just making it a part of infra or QA.  QA actually makes the most sense
to me, since devstack's primary use case is to make it easy to test
OpenStack.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Program Description for OpenStack QA

2013-07-15 Thread Russell Bryant
On 07/15/2013 08:07 AM, Sean Dague wrote:
 On 07/15/2013 07:46 AM, Thierry Carrez wrote:
 Chmouel Boudjnah wrote:
 The OpenStack QA program starts with 2 git trees
   * tempest - https://github.com/openstack/tempest
   * grenade - https://github.com/openstack-dev/grenade

 I haven't read the full discussion on this so apologies if I am
 missing something, but why is devstack not part of this?

 Devstack falls somewhere between QA and Infrastructure... We raised
 briefly the subject of where we should attach it during the initial
 discussion on programs, then punted for later discussion.

 It falls in the same bucket as other central repositories like
 openstack/requirements -- everyone ends up contributing to those so it's
 difficult to attach them to any given program/team.
 
 Right, devstack's primary mission is still providing development
 environments to developers. We reuse it in QA and Infra, but it's kind
 of a different beast.
 
 So for now it just remains what it is, which I think is fine. I think
 it's good to be pragmatic about Programs and only fit the git trees that
 naturally fit into them, and just be really concerned that every git
 tree we carry has to be owned by a program.

I actually think it fits well into the QA umbrella.  Yes, it's for dev
environments, but what it provides is the ability to easily test your
changes while developing.  And as a result, it works great for QA/Infra
needs.  An easy OpenStack test environment is what devstack really
provides, IMO.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso-config dev requirement

2013-07-15 Thread Doug Hellmann
On Mon, Jul 15, 2013 at 11:03 AM, Monty Taylor mord...@inaugust.com wrote:

 I was looking in to dependency processing as part of some pbr change,
 which got me to look at the way we're doing oslo-config dev requirements
 again. To start, this email is not about causing us to change what we're
 doing, only possibly the mechanics of what we put in the
 requirements.txt file- or to get a more specific example of what we're
 solving so that I can make a test case for it and ensure we're handling
 it properly.

 Currently, we have this:

 -f

 http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz#egg=oslo.config-1.2.0a3
 oslo.config=1.2.0a3

 As the way to specify to install - 1.2.0a3 of oslo.config. I believe
 this construct has grown in response to a sequence of issues, but it's
 complex and fragile, so I'd like to explore what's going on.

 The simplest answer would be simply to replace it with:

 http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz

 which will quite happily cause pip to install the contents of that
 tarball. It does not declare a version, but it's not necessary to,
 because the tarball has only one version that it is. Is there a problem
 we have identified where the wrong thing is happening?


 I've tested that I get the right thing in a virtualenv if I make that
 change from pip installing a tarball, pip installing the requirements
 directly and python setup.py install. Is there anything I'm missing?

 Monty



Without the version specifier, we are relying on all projects to install
the right version from that tarball link when we run devstack, but we have
no guarantee that they are moving to new releases in lockstep.

Doug
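For illustration, what the `oslo.config>=1.2.0a3` pin actually asserts can be checked with a version-specifier test (this uses the `packaging` library purely as an illustration of PEP 440-style matching — it is not what pip used at the time):

```python
# Illustrative only: shows what the ">=1.2.0a3" pin in requirements.txt
# guarantees that the bare tarball URL does not.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=1.2.0a3")

# A stale release no longer satisfies the requirement, so a project that
# hasn't moved to the new tarball fails fast instead of silently running
# against the wrong version:
print(Version("1.1.0") in spec)    # False
# The pre-release itself satisfies the pin:
print(spec.contains(Version("1.2.0a3"), prereleases=True))  # True
# And so does any later final release:
print(Version("1.2.0") in spec)    # True
```

With only the tarball URL and no specifier, nothing in the requirements file distinguishes these three cases, which is the lockstep concern raised above.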



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSLO] Comments/Questions on Messaging Wiki

2013-07-15 Thread Doug Hellmann
On Mon, Jul 15, 2013 at 1:15 PM, William Henry whe...@redhat.com wrote:



 --




 On Fri, Jul 12, 2013 at 8:09 PM, William Henry whe...@redhat.com wrote:



 Sent from my iPhone

 On Jul 12, 2013, at 5:27 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:




 On Fri, Jul 12, 2013 at 5:40 PM, William Henry whe...@redhat.com wrote:

 Hi all,

 I've been reading through the Messaging Wiki and have some comments. Not
 criticisms, just comments and questions.
  I have found this to be a very useful document. Thanks.

 1. There are multiple backend transport drivers which implement the API
 semantics using different messaging systems - e.g. RabbitMQ, Qpid, ZeroMQ.
 While both sides of a connection must use the same transport driver
 configured in the same way, the API avoids exposing details of transports
 so that code written using one transport should work with any other
 transport.

 The good news for AMQP 1.0 users is that technically both sides of the
 connection do not have to use the same transport driver. In pre-AMQP 1.0 days
 this was the case. But today interoperability between AMQP 1.0
 implementations has been demonstrated.


 In this case I think we mean the Transport driver from within Oslo. So
 you could not connect a ZMQ Transport on one end to an AMQP Transport on
 the other. It will be an implementation detail of the AMQP Transport class
 to decide whether it supports more than one version of AMQP, or if the
 different versions are implemented as different Transports.


 2. I notice under the RPC concepts section that you mention Exchanges as
 a container in which topics are scoped. Is this exchange a pre AMQP 1.0
 artifact or just a general term for oslo.messaging that is loosely based on
 the pre-AMQP 1.0 artifact called an Exchange? i.e. are you assuming that
 messaging implementations have something called an exchange? Or do you mean
 that messaging implementations can scope a topic and in oslo we call that
 scoping an exchange?


 The latter.


 Ack. Good. Fits very well into AMQP 1.0 then ;-)


 3. Some messaging nomenclature: The way the wiki describes RPC "Invoke
 Method on One of Multiple Servers" is more like a queue than a topic.
 In messaging a queue is something that multiple consumers can attach to and
 one of them gets and services a message/request.  A topic is where 1+
 consumers are connected and each receives the message and each can
 service it as it sees fit.  In pre-AMQP 1.0 terms what this seems to
 describe is a direct exchange. And a direct exchange can have multiple
 consumers listening to a queue on that exchange.  (Remember that fanout is
 just a generalization of topic in that all consumers get all fanout
 messages - there are no sub-topics etc.)

 In AMQP 1.0 the addressing doesn't care or know about exchanges but it
 can support this queue type behavior on an address or topic type behavior
 on an address.

 I know this isn't about AMQP specifically but therefore this is even
 more important. Topics are pub/sub with multiple consumer/services
 responding to a single message. Queues are next consumer up gets the next
 message.
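The queue-vs-topic distinction drawn above reduces to a few lines — a toy in-process model for illustration, not any real broker:

```python
import itertools


class Queue:
    """Point-to-point: each message is serviced by exactly one consumer."""
    def __init__(self, consumers):
        # Round-robin is one possible delivery policy; brokers vary.
        self._next = itertools.cycle(consumers)

    def publish(self, msg):
        next(self._next)(msg)


class Topic:
    """Pub/sub: every subscribed consumer receives every message."""
    def __init__(self, consumers):
        self._consumers = consumers

    def publish(self, msg):
        for consumer in self._consumers:
            consumer(msg)


got_a, got_b = [], []
consumers = [got_a.append, got_b.append]

Queue(consumers).publish("job-1")    # only one worker services the request
Topic(consumers).publish("event-1")  # both subscribers see the event

print(sorted(len(g) for g in (got_a, got_b)))  # [1, 2]
```

"Invoke Method on One of Multiple Servers" is the `Queue` behavior; fanout is the `Topic` behavior with no sub-topic filtering.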



 (BTW I've seen this kind of confusion also in early versions of
 MCollective in Puppet.)

 It might be better to change some of the references to topic to
 address. This would solve the problem. i.e. a use case where one of many
 servers listening on an address services a message/request. And later all
 of servers listening on an address service a message/request. Addressing
 also solves the one-to-one as the address is specific to the server (and
 the others don't have to receive and reject the message).


 Too many of these terms are overloaded. :-)


 Yep. But topic pub/sub is certainly different to a queue. ;-)


 I'm not sure of the details of how topic and address are different in
 AMQP 1.0. The word address implies to me that the message sender knows
 where the message receiver is in some concrete sense. We don't want those
 semantics in a lot of our use cases. If the address is abstract, then it
 sounds like it works much as a topic does. Maybe you can expand on the
 differences?



 Nope, the address is essentially a namespace. The sender knows not where it
 ends up. Hence in some applications it doesn't even know if it's a topic or
 a queue and it could go to one or many depending.


 OK, that sounds like it would be part of the Transport's handling of a
 Target (
 https://github.com/markmc/oslo.messaging/blob/master/oslo/messaging/target.py
 ).

 Thanks Doug. This is interesting.  What's the difference between an
 exchange and a namespace? If exchange is a scope and namespace is
 essentially a scope, then why have both?


The namespace relates to the API implementation inside the receiver. The
way it currently works is the receiver subscribes to messages on a
topic/exchange pair to have AMQP route messages to it, and then it looks
inside the message for further dispatch to an object that knows about 

Re: [openstack-dev] Revert Pass instance host-id to Quantum using port bindings extension.

2013-07-15 Thread Aaron Rosen
On Sun, Jul 14, 2013 at 6:48 PM, Robert Kukura rkuk...@redhat.com wrote:

 On 07/12/2013 04:17 PM, Aaron Rosen wrote:
  Hi,
 
 
  On Fri, Jul 12, 2013 at 6:47 AM, Robert Kukura rkuk...@redhat.com
  mailto:rkuk...@redhat.com wrote:
 
  On 07/11/2013 04:30 PM, Aaron Rosen wrote:
   Hi,
  
   I think we should revert this patch that was added here
   (https://review.openstack.org/#/c/29767/). What this patch does is
  when
   nova-compute calls into quantum to create the port it passes in the
   hostname on which the instance was booted. The idea of the
  patch was
   that providing this information would allow hardware device
  vendors'
   management stations to segment the network in a more
   precise manner (for example automatically trunk the vlan on the
   physical switch port connected to the compute node on which the vm
   instance was started).
  
   In my opinion I don't think this is the right approach. There are
   several other ways to get this information of where a specific port
   lives. For example, in the OVS plugin case the agent running on the
   nova-compute node can update the port in quantum to provide this
   information. Alternatively, quantum could query nova using the
   port.device_id to determine which server the instance is on.
  
   My motivation for removing this code is I now have the free cycles
 to
   work on
  
 
 https://blueprints.launchpad.net/nova/+spec/nova-api-quantum-create-port
discussed here
  
  (
 http://lists.openstack.org/pipermail/openstack-dev/2013-May/009088.html)
   .
   This was about moving the quantum port creation from the
 nova-compute
   host to nova-api if a network-uuid is passed in. This will allow
 us to
   remove all the quantum logic from the nova-compute nodes and
   simplify orchestration.
  
   Thoughts?
 
  Aaron,
 
  The ml2-portbinding BP I am currently working on depends on nova
 setting
  the binding:host_id attribute on a port before accessing
  binding:vif_type. The ml2 plugin's MechanismDrivers will use the
  binding:host_id with the agents_db info to see what (if any) L2
 agent is
  running on that host, or what other networking mechanisms might
 provide
  connectivity for that host. Based on this, the port's
 binding:vif_type
  will be set to the appropriate type for that agent/mechanism.
 
  When an L2 agent is involved, the associated ml2 MechanismDriver will
  use the agent's interface or bridge mapping info to determine whether
  the agent on that host can connect to any of the port's network's
  segments, and select the specific segment (network_type,
  physical_network, segmentation_id) to be used. If there is no
  connectivity possible on the host (due to either no L2 agent or other
  applicable mechanism, or no mapping for any of the network's
 segment's
  physical_networks), the ml2 plugin will set the binding:vif_type
  attribute to BINDING_FAILED. Nova will then be able to gracefully put
  the instance into an error state rather than have the instance boot
  without the required connectivity.
 
  I don't see any problem with nova creating the port before
 scheduling it
  to a specific host, but the binding:host_id needs to be set before
 the
  binding:vif_type attribute is accessed. Note that the host needs to
 be
  determined before the vif_type can be determined, so it is not
 possible
  to rely on the agent discovering the VIF, which can't be created
 until
  the vif_type is determined.
 
 
  So what you're saying is the current workflow is this: nova-compute
  creates a port in quantum passing in the host-id (which is the hostname
  of the compute host). Now quantum looks in the agent table in its
  database to determine the VIF type that should be used based on the
  agent that is running on the nova-compute node?

 Most plugins just return a hard-wired value for binding:vif_type. The
 ml2 plugin supports heterogeneous deployments, and therefore needs more
 flexibility, so this is whats being implemented in the agent-based ml2
 mechanism drivers. Other mechanism drivers (i.e. controller-based) would
 work differently. In addition to VIF type selection, port binding in ml2
 also involves determining if connectivity is possible, and selecting the
 network segment to use, and these are also based on binding:host_id.


Can you go into more detail about what you mean by heterogeneous
deployments (i.e. what the topology looks like)? Why would connectivity not
be possible? I'm confused why things would be configured in such a way
that the scheduler wants to launch an instance on a node for which quantum
is not able to provide connectivity.
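A hedged sketch of the binding flow described above — select a reachable segment for `binding:host_id` using per-host agent mappings, or fail the binding. The data shapes (`agents_by_host`, the segment dicts) are illustrative assumptions; the real ml2 MechanismDriver interfaces differ:

```python
BINDING_FAILED = "binding_failed"

# Toy agents_db view: which physical networks each host's L2 agent has
# interface/bridge mappings for, and the VIF type that agent wires up.
agents_by_host = {
    "compute-1": {"mappings": {"physnet1"}, "vif_type": "bridge"},
    "compute-2": {"mappings": {"physnet2"}, "vif_type": "ovs"},
}


def bind_port(host_id, network_segments):
    """Pick a segment reachable from host_id, or fail the binding."""
    agent = agents_by_host.get(host_id)
    if agent is None:
        # No L2 agent (or other applicable mechanism) on this host.
        return BINDING_FAILED, None
    for segment in network_segments:
        if segment["physical_network"] in agent["mappings"]:
            # Connectivity is possible; this segment drives the VIF plumbing.
            return agent["vif_type"], segment
    # Agent exists but has no mapping for any of the network's segments.
    return BINDING_FAILED, None


segments = [{"network_type": "vlan", "physical_network": "physnet1",
             "segmentation_id": 101}]
print(bind_port("compute-1", segments)[0])  # bridge
print(bind_port("compute-2", segments)[0])  # binding_failed
```

The second case is the "graceful error state" scenario: nova sees `BINDING_FAILED` in `binding:vif_type` instead of booting an instance with no connectivity.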



   My question would be why
  the nova-compute node doesn't already know which 

[openstack-dev] [Savanna] Savanna 0.2 is released!

2013-07-15 Thread Sergey Lukjanov
Hello everyone,

I'm very happy to announce the immediate release of Savanna 0.2. This release 
contains 3 components: Savanna core, plugin for OpenStack Dashboard and 
diskimage-builder elements.

Release Notes (https://wiki.openstack.org/wiki/Savanna/ReleaseNotes/0.2): 

* Plugin Provisioning Mechanism implemented
* Vanilla Hadoop plugin implemented with the following features supported:
* creation of Hadoop clusters with different topologies
* scaling: resizing existing node groups and adding new ones
* support of Swift as input and output for Hadoop jobs
* diskimage-builder elements for automation of Hadoop images creation
* Cinder supported as block storage provider
* Anti-affinity supported for Hadoop processes
* OpenStack Dashboard plugin which supports almost all the operations exposed 
through Savanna REST API (screencast will be available soon)
* Integration tests for Vanilla plugin

Savanna wiki: https://wiki.openstack.org/wiki/Savanna
Launchpad project: https://launchpad.net/savanna
Savanna docs: https://savanna.readthedocs.org/en/latest/index.html (quickstart 
and installation, user and dev guides)

Enjoy!

P.S. Savanna Dashboard isn't available yet, but will be very soon.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSLO] Comments/Questions on Messaging Wiki

2013-07-15 Thread Doug Hellmann
On Mon, Jul 15, 2013 at 4:28 PM, Russell Bryant rbry...@redhat.com wrote:

 On 07/15/2013 02:36 PM, Doug Hellmann wrote:
  The namespace relates to the API implementation inside the receiver. The
  way it currently works is the receiver subscribes to messages on a
  topic/exchange pair to have AMQP route messages to it, and then it looks
  inside the message for further dispatch to an object that knows about
  that API. That lets the nova API implementation be split up among
  different objects, for example. I'm not sure why it evolved that way,
  instead of using separate topics and having the messaging layer do all
  of the routing. Maybe we should take another look at that part of the
  new API design.

 In retrospect, yes, a separate topic would have worked.  The namespace
 was very convenient for the current nova implementation, but that
 doesn't mean it was the best design.  The code that sets up which topics
 to consume from is very generic and applies to *all* services.  So,
 instead of reworking this to let it be different per-service, I did the
 namespace thing, which worked without having to change any other nova code.
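The receiver-side dispatch being described — a single topic subscription, with a namespace carried *inside* the message selecting the endpoint object — can be sketched as follows (a toy model for illustration, not oslo.messaging itself; the endpoint classes and message keys are assumptions):

```python
class BaseAPI:
    def ping(self):
        return "pong"


class ComputeAPI:
    def start(self, instance):
        return "starting %s" % instance


class Dispatcher:
    """One consumer on the topic; routing happens on message['namespace']."""

    def __init__(self, endpoints):
        self._endpoints = endpoints  # namespace -> endpoint object

    def dispatch(self, message):
        # The messaging layer delivered the message by topic/exchange;
        # the namespace does the second-level routing in application code.
        endpoint = self._endpoints[message.get("namespace")]
        method = getattr(endpoint, message["method"])
        return method(**message.get("args", {}))


d = Dispatcher({None: BaseAPI(), "compute": ComputeAPI()})
print(d.dispatch({"method": "ping"}))               # pong
print(d.dispatch({"namespace": "compute",
                  "method": "start",
                  "args": {"instance": "vm-1"}}))   # starting vm-1
```

The alternative discussed in the thread would move that second-level routing into the messaging layer by consuming from separate topics, one per endpoint.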


That's a completely understandable pragmatic solution. :-)

Doug



 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vmware] VMwareAPI sub-team status update, Monday July 15th 2013

2013-07-15 Thread Shawn Hartsock

Well folks, I put in some long hours last week just to try and get something in 
for Havana-2. It looks like every blueprint we were working on has been moved 
out to Havana-3. Havana-3 is September 6th. But, following my highly subjective 
formula... if you aren't at a point where you *think* you are done by August 
15th, I'd say there is little chance you'll make it through review in time 
for Havana at all.

Here's what we're watching...

Blueprints *moved* to Havana-3:
* https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage - 
started but depends on
** https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy - 
revising after review
* 
https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
 - good progress 

It's not all bad news. The good news is that for Havana-2 we got fixes 
committed for some very important bugs!

Havana-2 Bugs FIXED!

Critical
* https://bugs.launchpad.net/nova/+bug/1178369
* https://bugs.launchpad.net/nova/+bug/1183192
* https://bugs.launchpad.net/nova/+bug/1183452

High
* https://bugs.launchpad.net/nova/+bug/1180779

Medium
* https://bugs.launchpad.net/nova/+bug/1186944
* https://bugs.launchpad.net/nova/+bug/1192256

If you've not looked at +bug/1186944 I suggest you do. It turns out that this 
patch is really really critical for real-world debugging and troubleshooting 
work. The patch takes the old instance-00ffaa0011 (hex code for row index) 
based instance names and replaces them with a UUID instead! This seems trivial 
but it actually makes it so much easier to figure out what's going on when you 
have to do real troubleshooting work for people. So I'm really for back-porting 
this one ASAP. Good job Yaguang Tang!

Personally, I'll be finishing up my blueprint work and working on one bug this 
week, once that's done, I'll shift the bulk of my attention to doing reviews. 
I'll need someone to run the meeting in IRC for me on July 24th. Any volunteers?

Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI


# Shawn Hartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] lambda() Errors in Quantum/Neutron Grizzly (2013.1.2)

2013-07-15 Thread Craig E. Ward
I am seeing strange errors in a single-node OpenStack Grizzly installation. The 
logs are complaining about a mismatch of arguments and cover the linuxbridge, 
dhcp, and l3 agents. Below is a sample:


  TypeError: lambda() takes exactly 2 arguments (3 given)

The numbers expected and given are not consistent. It looks like a coding 
error, but I can't believe such an error would have made it into a distribution 
so it must be that I've configured something incorrectly. I've attached a text 
file with more detailed examples. Any help diagnosing this problem will be much 
appreciated.


What am I doing wrong? What other information would be useful to look at?

Thanks,

Craig

--
Craig E. Ward
Information Sciences Institute
University of Southern California
cw...@isi.edu

From quantum-linuxbridge-agent
==
ERROR [quantum.openstack.common.rpc.amqp] Exception during message handling
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py, line 430, in _process_data
rval = self.proxy.dispatch(ctxt, version, method, **args)
  File /usr/lib/python2.6/site-packages/quantum/common/rpc.py, line 43, in dispatch
quantum_ctxt, version, method, **kwargs)
  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/dispatcher.py, line 133, in dispatch
return getattr(proxyobj, method)(ctxt, **kwargs)
  File /usr/lib/python2.6/site-packages/quantum/db/dhcp_rpc_base.py, line 40, in get_active_networks
plugin.auto_schedule_networks(context, host)
  File /usr/lib/python2.6/site-packages/quantum/db/agentschedulers_db.py, line 302, in auto_schedule_networks
self.network_scheduler.auto_schedule_networks(self, context, host)
  File /usr/lib/python2.6/site-packages/quantum/scheduler/dhcp_agent_scheduler.py, line 84, in auto_schedule_networks
agents_db.Agent.admin_state_up == True)
TypeError: lambda() takes exactly 2 arguments (4 given)
ERROR [quantum.openstack.common.rpc.common] Returning exception lambda() takes exactly 2 arguments (4 given) to caller

ERROR [quantum.openstack.common.rpc.amqp] Exception during message handling
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py, line 430, in _process_data
rval = self.proxy.dispatch(ctxt, version, method, **args)
  File /usr/lib/python2.6/site-packages/quantum/common/rpc.py, line 43, in dispatch
quantum_ctxt, version, method, **kwargs)
  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/dispatcher.py, line 133, in dispatch
return getattr(proxyobj, method)(ctxt, **kwargs)
  File /usr/lib/python2.6/site-packages/quantum/db/l3_rpc_base.py, line 47, in sync_routers
plugin.auto_schedule_routers(context, host, router_id)
  File /usr/lib/python2.6/site-packages/quantum/db/agentschedulers_db.py, line 350, in auto_schedule_routers
self, context, host, router_id)
  File /usr/lib/python2.6/site-packages/quantum/scheduler/l3_agent_scheduler.py, line 51, in auto_schedule_routers
agents_db.Agent.admin_state_up == True)
TypeError: lambda() takes exactly 2 arguments (4 given)
ERROR [quantum.openstack.common.rpc.common] Returning exception lambda() takes exactly 2 arguments (4 given) to caller


From quantum-dhcp-agent
===

2013-07-15 18:10:25ERROR [quantum.agent.dhcp_agent] Unable to sync network state.
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py, line 152, in sync_state
active_networks = set(self.plugin_rpc.get_active_networks())
  File /usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py, line 364, in get_active_networks
topic=self.topic)
  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py, line 80, in call
return rpc.call(context, self._get_topic(topic), msg, timeout)
  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py, line 140, in call
return _get_impl().call(CONF, context, topic, msg, timeout)
  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_kombu.py, line 798, in call
rpc_amqp.get_connection_pool(conf, Connection))
  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py, line 613, in call
rv = list(rv)
  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py, line 562, in __iter__
raise result
TypeError: lambda() takes exactly 2 arguments (4 given)
Traceback (most recent call last):

  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py, line 430, in _process_data
rval = self.proxy.dispatch(ctxt, version, method, **args)

  File /usr/lib/python2.6/site-packages/quantum/common/rpc.py, line 43, in dispatch
quantum_ctxt, version, method, **kwargs)

  File /usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/dispatcher.py, line 133, in dispatch
return getattr(proxyobj, method)(ctxt, **kwargs)

  File 

Re: [openstack-dev] [Savanna] Savanna 0.2 is released!

2013-07-15 Thread Matthew Farrellee
Well done all, this release was no small effort! Especially, great 
collaboration and use of tools available from the OpenStack community.


Best,


matt

On 07/15/2013 06:14 PM, Sergey Lukjanov wrote:

Hello everyone,

I'm very happy to announce the immediate release of Savanna 0.2. This release 
contains 3 components: Savanna core, plugin for OpenStack Dashboard and 
diskimage-builder elements.

Release Notes (https://wiki.openstack.org/wiki/Savanna/ReleaseNotes/0.2):

* Plugin Provisioning Mechanism implemented
* Vanilla Hadoop plugin implemented with the following features supported:
 * creation of Hadoop clusters with different topologies
 * scaling: resizing existing node groups and adding new ones
 * support of Swift as input and output for Hadoop jobs
* diskimage-builder elements for automation of Hadoop images creation
* Cinder supported as block storage provider
* Anti-affinity supported for Hadoop processes
* OpenStack Dashboard plugin which supports almost all the operations exposed 
through Savanna REST API (screencast will be available soon)
* Integration tests for Vanilla plugin

Savanna wiki: https://wiki.openstack.org/wiki/Savanna
Launchpad project: https://launchpad.net/savanna
Savanna docs: https://savanna.readthedocs.org/en/latest/index.html (quickstart 
and installation, user and dev guides)

Enjoy!

P.S. Savanna Dashboard isn't available yet, but will be very soon.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] lambda() Errors in Quantum/Neutron Grizzly (2013.1.2)

2013-07-15 Thread Edgar Magana
Craig,

It will help if you can add more information about your set-up:
release version?
devstack configuration (if you are using it)
configuration files

Also, if you are using the master branch this error is really weird, because
we recently renamed all quantum references to neutron.

Thanks,

Edgar


On Mon, Jul 15, 2013 at 4:23 PM, Craig E. Ward cw...@isi.edu wrote:

 I am seeing strange errors in a single-node OpenStack Grizzly
 installation. The logs are complaining about a mismatch of arguments and
 cover the linuxbridge, dhcp, and l3 agents. Below is a sample:

   TypeError: lambda() takes exactly 2 arguments (3 given)

 The numbers expected and given are not consistent. It looks like a coding
 error, but I can't believe such an error would have made it into a
 distribution so it must be that I've configured something incorrectly. I've
 attached a text file with more detailed examples. Any help diagnosing this
 problem will be much appreciated.

 What am I doing wrong? What other information would be useful to look at?

 Thanks,

 Craig

 --
 Craig E. Ward
 Information Sciences Institute
 University of Southern California
 cw...@isi.edu






Re: [openstack-dev] Proposal for new Program: OpenStack Deployment

2013-07-15 Thread Robert Collins
On 11 July 2013 02:47, Anne Gentle annegen...@justwriteclick.com wrote:
 Hi Robert,

 What's your plan for documenting the efforts so that others can do this in
 their environments? Is there any documentation currently for which you can
 send links?

Sorry for the slow reply to this; the thread got lost in my inbox,
then the weekend... bah!

Anyhow, Clint has sent through current docs.

The plan we have is to get the thing working end to end (which we are
mostly there on) and then decide what should be instructions, and what
should be automation. E.g. exactly what the UI for tripleo's native
bits are (whatever they may be).

 The Doc team is especially interested in configuration docs and installation
 docs as those are the toughest to produce in a timely, accurate manner now.
 We have a blueprint for automatically generating docs from configuration
 options in the code. We are trying to determine a good path for install docs
 for meeting the release deliverable -- much discussion at
 http://lists.openstack.org/pipermail/openstack-docs/2013-July/002114.html.
 Your input welcomed.

A few random thoughts:

http://lists.openstack.org/pipermail/openstack-docs/2013-July/002131.html
is an interesting perspective... Certainly we consider documentation a
primary part of what we're doing: we've got to teach people how to use the
tooling. So I guess I disagree with that post; if you can't install a
product, you can't use it.

I don't think we need to cover all the options and possibilities - we
should be covering the 95% of use cases: small/ medium / big profiles;
reference network setup, reference DB etc. I would like to get TripleO
into the official manuals of course... I think as we mature we'll
start moving docs from our trees into the reference manuals.

I expect TripleO/OpenStack Deployment docs to be updated continually
(even if/when we add support for released versions) - because the
environment folk deploy in doesn't stand still.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Cloud Services



[openstack-dev] common codes

2013-07-15 Thread Gareth
Hi, all

There is common code in most of the projects, such as openstack/common, db,
and some others (?). I know a good way is to use 'import oslo' instead of
copying that code here and there. Now we already have the oslo and trove
projects, but how and when do we handle the old code? Do we remove it in the
next major release?

-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*


Re: [openstack-dev] common codes

2013-07-15 Thread Michael Basnight
On Jul 15, 2013, at 7:22 PM, Gareth academicgar...@gmail.com wrote:

 Hi, all
 
 There are some common codes in most of projects, such as opnstack/common, db, 
 and some else (?). I know a good way is using 'import oslo' is ok, instead of 
 copy those codes here and there. And now we already have project oslo and 
 trove, but how and when do we handle old codes, remove that in next major 
 release?

From the trove perspective we are trying to keep our Oslo code updated as often 
as possible. Once the code leaves incubator status (the code copy you mention), 
we will adopt the individual libraries. I believe oslo.messaging is the next on 
our list. 

As for timeline, we try to stay current with one caveat: we stop pulling large 
updates in as milestone deadlines approach. So pull in updates early in the 
milestone, so that they are there for the milestone, and eventually the 
release. We have a review in flight waiting for the h2 cutoff so we can merge it 
[1] that has the latest oslo. This approach may vary somewhat between 
projects, so I'll let the PTLs chime in :)

Is there specific code you are referring to? 

[1] https://review.openstack.org/#/c/36140/


Re: [openstack-dev] common codes

2013-07-15 Thread Yaguang Tang
We put OpenStack common code in oslo and sync it to the other projects, to
keep the common code in each project always up to date. When oslo is mature
enough, we will publish oslo as an OpenStack common library. After oslo is
released, the common code in each project just needs to change from
"from nova.openstack.common import something" to
"from oslo.openstack.common import something"; as the common code is always
synced from oslo, there isn't any big change.

correct me if my understanding is wrong.
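
The switch-over described above is often handled with an import fallback, so code works both before and after the library is released. Here is a hypothetical helper (not from oslo itself; the name and the dotted module paths in the comment are illustrative) that tries candidate module paths in order:

```python
import importlib

def import_first(*candidates):
    """Return the first importable module from the candidate names."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %s could be imported" % (candidates,))

# e.g. prefer the released oslo library, fall back to the synced copy:
#   timeutils = import_first('oslo.openstack.common.timeutils',
#                            'nova.openstack.common.timeutils')
# Demonstrated here with a stdlib module so the sketch actually runs:
json_mod = import_first("no_such_module_xyz", "json")
```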
On 2013-7-16 at 10:25 AM, Gareth academicgar...@gmail.com wrote:

 Hi, all

 There are some common codes in most of projects, such as opnstack/common,
 db, and some else (?). I know a good way is using 'import oslo' is ok,
 instead of copy those codes here and there. And now we already have project
 oslo and trove, but how and when do we handle old codes, remove that in
 next major release?

 --
 Gareth

 *Cloud Computing, OpenStack, Fitness, Basketball*
 *OpenStack contributor*
 *Company: UnitedStack http://www.ustack.com*
 *My promise: if you find any spelling or grammar mistakes in my email
 from Mar 1 2013, notify me *
 *and I'll donate $1 or ¥1 to an open organization you specify.*





Re: [openstack-dev] common codes

2013-07-15 Thread Gareth
Michael, thanks for your perspective. It's easy to understand. But I just
have some questions, not specific problems.

Yaguang, I like the 'import' way too. But is there a global timeline of
making oslo mature? Is this decided by oslo team or release plan?


On Tue, Jul 16, 2013 at 11:00 AM, Yaguang Tang
yaguang.t...@canonical.comwrote:

 we put openstack common code in oslo
 , and sync to other projects to keep the common code in each project is
 aways up to date, when oslo is mature enough, then we will publish oslo as
 a openstack common library.  the common code in each project just need to
 change from from nova.openstack.common import something
 to from oslo.openstack.common import something after oslo is released ,
 as the common code is aways sync from oslo, so there isn't any big change.

 correct me if my understanding is wrong.
 On 2013-7-16 at 10:25 AM, Gareth academicgar...@gmail.com wrote:

 Hi, all

 There are some common codes in most of projects, such as opnstack/common,
 db, and some else (?). I know a good way is using 'import oslo' is ok,
 instead of copy those codes here and there. And now we already have project
 oslo and trove, but how and when do we handle old codes, remove that in
 next major release?

 --
 Gareth

 *Cloud Computing, OpenStack, Fitness, Basketball*
 *OpenStack contributor*
 *Company: UnitedStack http://www.ustack.com*
 *My promise: if you find any spelling or grammar mistakes in my email
 from Mar 1 2013, notify me *
 *and I'll donate $1 or ¥1 to an open organization you specify.*







-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*


Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-15 Thread Thomas Goirand
On 07/15/2013 11:07 PM, Adam Young wrote:
 On 07/15/2013 05:46 AM, Thomas Goirand wrote:
 On 07/15/2013 04:32 PM, Stephen Gran wrote:
 On 15/07/13 09:26, Thomas Goirand wrote:
 Dolph,

 If you do that, then you will be breaking Debian packages, as they
 expect Sqlite as the default, for example when using
 DEBIAN_FRONTEND=noninteractive apt-get install keystone (if you choose
 MySQL, then you need to enter admin credentials to setup the db). I
 will
 receive tons of piupart failures reports if we can't upgrade with
 SQLite.

 I would really be disappointed if this happens, and get into situations
 where I have RC bugs which I can't realistically close by myself.

 So really, if it is possible, continue to support it, at least from one
 release to the next.
 Why not just change the default for Debian?  Sqlite isn't particularly
 useful for actual deployments anyway.
 Because that is the only backend that will work without providing
 credentials on the keyboard, so it is the only one that will work in a
 non-interactive session of apt-get (which is used for all automated
 tests in Debian, including piuparts).
 
 That is a really, really, really bad reason.

Ok, then, I think I didn't express myself correctly, so I will try again.

In Debian, by policy, any package should be able to be installed using
DEBIAN_FRONTEND=noninteractive apt-get install. What I do in my postinst
is calling db_sync, because that isn't something our users should even
care about, since it can be automated. The final result is that, for
many package like Keystone and Glance, simply doing apt-get install is
enough to make it work, without needing any configuration file edition.
I want to be able to keep that nice feature.

If we remove support for upgrading from one version to the next, then
either I should remove the support for this, or make a special case for
when sqlite is in use, and not setup any database in that case. Or the
other option is to completely remove sqlite support (if we remove the
possibility to upgrade, then I believe it should be done), and only do
db_sync whenever the database is setup and working. That would also mean
to not start the daemon either, in such a case. This would remove really
a lot of automated package testing, and I don't think that is a bad reason
(don't we have a strong culture of automated testing inside the project?).

If the support for SQLite (db upgrades) has to go, I will understand and
adapt. I haven't found, and probably won't find, the time to do the actual
work to support SQLite upgrades, so it is probably easier for me to accept
it. Still, I believe it is my duty to raise my concerns and say that I do
not support this decision.

What direction do you think this should take? Your thoughts?

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [Nova] Review request: Blurprint of API validation

2013-07-15 Thread Ken'ichi Ohmichi

Hi Russell,

I assigned you as the approver of bp nova-api-validation-fw,
so could you take a look at this bp?

You changed the priority from Medium to Low recently.
Could you share your concerns about this bp, if you have any?


Thanks
Ken'ichi Ohmichi

---
On Tue, 9 Jul 2013 20:45:57 +0900
Ken'ichi Ohmichi oomi...@mxs.nes.nec.co.jp wrote:
 
 Hi,
 
 The blueprint nova-api-validation-fw has not been approved yet.
 I hope the core patch of this blueprint is merged for Havana-2, so that
 comprehensive API validation of the Nova v3 API can be completed for the
 Havana release. What should we do for it?
 
 
 The summary of nova-api-validation-fw:
   * The patch review is in good progress.
   * nova-api-validation-fw is API input validation framework.
   * Simplify the code in the extensions of Nova v3 API.
   * Define API schema with JSON Schema.
   * Define common data formats to standardize whole of Nova v3 API.
   * Possible to choose APIs which are applied with this framework.
   * Possible to migrate other web framework.
 
 
 The details of nova-api-validation-fw are the following:
 
 * The patch review is in good progress.
   7 reviewers have marked +1 now.
   We are waiting for the approval of this blueprint before merging the
   patch. (https://review.openstack.org/#/c/25358/)
 
 * nova-api-validation-fw is API input validation framework.
   This framework validates API parameters in a request body before the
   execution of API method. If the parameter is invalid, nova-api returns
   a BadRequest response including its reason.  
 
 * Simplify the code in the extensions of Nova v3 API.
   There are a lot of validation code in each API method.
   After applying this framework, we will be able to separate them from
   API methods by defining API schema. That makes the code more readable.
   Also, through defining the API schema for each API, we will find gaps
   in the validation code and fill them for a better v3 API.
   API schemas are needed for each API that contains a request body. There
   are 37 such APIs in the Nova v3 API now, and the number will increase
   because the work of porting the v2 API to the v3 API is in progress.
 
 * Define API schema with JSON Schema.
   JSON Schema contains many features for validation, the details are
   written on http://json-schema.org/.
   Here is the schema of v3 keypairs API as a sample:
   == Request body sample of keypairs API ===
   {
       "keypair": {
           "name": "keypair-dab428fe-6186-4a14-b3de-92131f76cd39",
           "public_key": "ssh-rsa B3NzaC1yc2EA[..]== Generated by Nova"
       }
   }
   == API schema of keypairs API 
   {
       'type': 'object',
       'properties': {
           'keypair': {
               'type': 'object',
               'properties': {
                   'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
                   'public_key': {'type': 'string'},
               },
               'required': ['name'],
           },
       },
       'required': ['keypair'],
   }
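
The blueprint itself validates with the jsonschema library; purely as an illustration, here is a hand-rolled stand-in (handling just 'type', 'properties', 'required', 'minLength' and 'maxLength') showing how a decorator can reject a bad request body before the API method runs. BadRequestError stands in for the HTTP 400 response nova would return; none of these names are from the actual patch.

```python
class BadRequestError(Exception):
    pass

def _check(instance, schema, path="body"):
    # Recursively validate `instance` against a small JSON-Schema subset.
    if schema.get("type") == "object":
        if not isinstance(instance, dict):
            raise BadRequestError("%s: expected an object" % path)
        for key in schema.get("required", []):
            if key not in instance:
                raise BadRequestError("%s: '%s' is required" % (path, key))
        for key, subschema in schema.get("properties", {}).items():
            if key in instance:
                _check(instance[key], subschema, "%s.%s" % (path, key))
    elif schema.get("type") == "string":
        if not isinstance(instance, str):
            raise BadRequestError("%s: expected a string" % path)
        if len(instance) < schema.get("minLength", 0):
            raise BadRequestError("%s: too short" % path)
        if len(instance) > schema.get("maxLength", float("inf")):
            raise BadRequestError("%s: too long" % path)

def validated(schema):
    """Decorator applying the schema to the API method's body argument."""
    def wrap(func):
        def inner(body):
            _check(body, schema)
            return func(body)
        return inner
    return wrap

KEYPAIR_SCHEMA = {
    "type": "object",
    "properties": {
        "keypair": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "minLength": 1, "maxLength": 255},
                "public_key": {"type": "string"},
            },
            "required": ["name"],
        },
    },
    "required": ["keypair"],
}

@validated(KEYPAIR_SCHEMA)
def create_keypair(body):
    return body["keypair"]["name"]
```

With this in place, a request body missing "name" raises BadRequestError before create_keypair ever executes, which is the separation of validation from API logic the blueprint argues for.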
 
 * Define common data formats to standardize whole of Nova v3 API.
   We can define common data formats with FormatChecker of JSON Schema
   library.
 
   e.g. Data format of 'boolean':
 @jsonschema.FormatChecker.cls_checks('boolean')
 def validate_boolean_format(instance):
 return instance.upper() == 'TRUE' or instance.upper() == 'FALSE'
 
   The formats can be used in API schema:
 'onSharedStorage': {
 'type': 'string', 'format': 'boolean',
 },
 
   Re-using these common formats across many API schemas will help
   standardize the whole of the Nova v3 API.
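
A sketch of the format-registry mechanism described above, without depending on the jsonschema library (which provides this for real via FormatChecker); the registry and helper names here are illustrative:

```python
# Named format predicates that schemas can reference via 'format'.
FORMAT_CHECKERS = {}

def register_format(name):
    def wrap(func):
        FORMAT_CHECKERS[name] = func
        return func
    return wrap

@register_format("boolean")
def validate_boolean_format(instance):
    return instance.upper() in ("TRUE", "FALSE")

def check_format(instance, fmt):
    # Unknown formats pass, mirroring JSON Schema's permissive behaviour.
    checker = FORMAT_CHECKERS.get(fmt)
    return checker is None or checker(instance)
```

So a schema entry such as {'type': 'string', 'format': 'boolean'} would accept "True" but reject "maybe", giving every v3 API the same notion of a boolean string.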
 
 * Possible to choose APIs which are applied with this framework.
   We can apply this framework to the Nova v3 API only; the existing v2
   API can stay out of its scope.
 
 * Possible to migrate other web framework.
   The API validation in this framework is executed via a decorator on the
   API method, which means the framework does not depend on the web
   framework. So even when migrating to another web framework (e.g.
   Pecan/WSME) in the future, we will be able to keep using it.
 
 
 Thanks
 Ken'ichi Ohmichi
 
 ---
 On Fri, 5 Jul 2013 16:27:39 +0900
 Ken'ichi Ohmichi oomi...@mxs.nes.nec.co.jp wrote:
  
  Hi,
  
  I have submitted a blueprint[1] and a patch[2] for Nova API validation.
  I request someone to review the blueprint and hopefully approve it.
  
   This blueprint was approved once, but its Definition has been changed to
   Discussion because we discussed which is the better validation mechanism:
   WSME's original validation or JSON Schema. IIUC, we have found the
   advantages of JSON Schema through the discussion. [3]
  
  Also we have found the way to overlap JSON Schema with WSME, so we will
  be able to use JSON Schema also when the web framework of Nova is changed
  to WSME. [4]
  
  My latest patch has been implemented with JSON Schema[2], and I think API
  validation with JSON Schema is useful for Nova v3 API, because the API has
 

Re: [openstack-dev] common codes

2013-07-15 Thread Zhongyue Luo
Gareth,

https://wiki.openstack.org/wiki/Oslo#Principles

I believe this link will answer most of your questions.


On Tue, Jul 16, 2013 at 11:16 AM, Gareth academicgar...@gmail.com wrote:

 Michael, thanks for your perspective. It's easy to understand. But I just
 have some questions, not specific problems.

 Yaguang, I like the 'import' way too. But is there a global timeline of
 making oslo mature? Is this decided by oslo team or release plan?


 On Tue, Jul 16, 2013 at 11:00 AM, Yaguang Tang yaguang.t...@canonical.com
  wrote:

 we put openstack common code in oslo
 , and sync to other projects to keep the common code in each project is
 aways up to date, when oslo is mature enough, then we will publish oslo as
 a openstack common library.  the common code in each project just need to
 change from from nova.openstack.common import something
 to from oslo.openstack.common import something after oslo is released ,
 as the common code is aways sync from oslo, so there isn't any big change.

 correct me if my understanding is wrong.
  On 2013-7-16 at 10:25 AM, Gareth academicgar...@gmail.com wrote:

  Hi, all

 There are some common codes in most of projects, such as
 opnstack/common, db, and some else (?). I know a good way is using 'import
 oslo' is ok, instead of copy those codes here and there. And now we already
 have project oslo and trove, but how and when do we handle old codes,
 remove that in next major release?

 --
 Gareth

 *Cloud Computing, OpenStack, Fitness, Basketball*
 *OpenStack contributor*
 *Company: UnitedStack http://www.ustack.com*
 *My promise: if you find any spelling or grammar mistakes in my email
 from Mar 1 2013, notify me *
 *and I'll donate $1 or ¥1 to an open organization you specify.*







 --
 Gareth

 *Cloud Computing, OpenStack, Fitness, Basketball*
 *OpenStack contributor*
 *Company: UnitedStack http://www.ustack.com*
 *My promise: if you find any spelling or grammar mistakes in my email
 from Mar 1 2013, notify me *
 *and I'll donate $1 or ¥1 to an open organization you specify.*





-- 
*Intel SSG/STOD/DCST/CIT*
880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
China
+862161166500


Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-15 Thread Dolph Mathews
On Mon, Jul 15, 2013 at 10:44 PM, Thomas Goirand z...@debian.org wrote:

 On 07/15/2013 11:07 PM, Adam Young wrote:
  On 07/15/2013 05:46 AM, Thomas Goirand wrote:
  On 07/15/2013 04:32 PM, Stephen Gran wrote:
  On 15/07/13 09:26, Thomas Goirand wrote:
  Dolph,
 
  If you do that, then you will be breaking Debian packages, as they
  expect Sqlite as the default, for example when using
  DEBIAN_FRONTEND=noninteractive apt-get install keystone (if you choose
  MySQL, then you need to enter admin credentials to setup the db). I
  will
  receive tons of piupart failures reports if we can't upgrade with
  SQLite.
 
  I would really be disappointed if this happens, and get into
 situations
  where I have RC bugs which I can't realistically close by myself.
 
  So really, if it is possible, continue to support it, at least from
 one
  release to the next.
  Why not just change the default for Debian?  Sqlite isn't particularly
  useful for actual deployments anyway.
  Because that is the only backend that will work without providing
  credentials on the keyboard, so it is the only one that will work in a
  non-interactive session of apt-get (which is used for all automated
  tests in Debian, including piuparts).
 
  That is a really, really, really bad reason.

 Ok, then, I think I didn't express myself correctly, so I will try again.

 In Debian, by policy, any package should be able to be installed using
 DEBIAN_FRONTEND=noninteractive apt-get install. What I do in my postinst
 is calling db_sync, because that isn't something our users should even
 care about, since it can be automated. The final result is that, for
 many packages like Keystone and Glance, simply doing apt-get install is
 enough to make them work, without needing to edit any configuration file.
 I want to be able to keep that nice feature.


Make it work is an entirely different goal than make a production-ready
deployment. If your goal in using sqlite is just to make it work then
I'm not sure that I would expect such an install to survive to the next
release, anyway... rendering migration support as a nice-to-have. I can't
imagine that any end users would be happy with a sqlite-based deployment
for anything other than experimentation and testing.


 If we remove support for upgrading from one version to the next, then
 either I should remove the support for this, or make a special case for
 when sqlite is in use, and not setup any database in that case.


As a parallel, we special case sqlite:///:memory: for testing purposes by
automatically building the schema from models, without running migrations.
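
A sketch of the contrast described here, using stdlib sqlite3 rather than keystone's actual sqlalchemy-migrate repository; the table and column names are illustrative, not keystone's:

```python
import sqlite3

# Replaying versioned migrations, as db_sync would on an existing database:
MIGRATIONS = [
    "CREATE TABLE token (id TEXT PRIMARY KEY)",   # version 1
    "ALTER TABLE token ADD COLUMN expires TEXT",  # version 2
]
# Building the schema straight from the latest models, as done for tests:
LATEST_MODEL = "CREATE TABLE token (id TEXT PRIMARY KEY, expires TEXT)"

def schema(conn):
    """Return the column names of the token table."""
    return [row[1] for row in conn.execute("PRAGMA table_info(token)")]

migrated = sqlite3.connect(":memory:")
for step in MIGRATIONS:
    migrated.execute(step)

fresh = sqlite3.connect(":memory:")  # the sqlite:///:memory: special case
fresh.execute(LATEST_MODEL)

# Both routes should land on the same columns; only the migrated route
# depends on every ALTER TABLE working under sqlite's limited DDL support,
# which is exactly what makes sqlite migrations painful to maintain.
assert schema(migrated) == schema(fresh) == ["id", "expires"]
```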


 Or the
 other option is to completely remove sqlite support (if we remove the
 possibility to upgrade, then I believe it should be done), and only do
 db_sync whenever the database is setup and working. That would also mean
 to not start the daemon either, in such a case. This would remove really
 a lot of automated package testing, and I don't think that is a bad reason
 (don't we have a strong culture of automated testing inside the project?).

 If the support for SQLite (db upgrades) has to go, I will understand and
 adapt. I haven't found, and probably won't find, the time to do the actual
 work to support SQLite upgrades, so it is probably easier for me to accept
 it. Still, I believe it is my duty to raise my concerns and say that I do
 not support this decision.


I'm glad you spoke up, as I wasn't aware anyone was dependent on support
for sqlite migrations outside of our own tests. Thanks!



 What direction do you think this should take? Your thoughts?


I'd still like to pursue dropping support for sqlite migrations, albeit not
as aggressively as I would have preferred. With a stakeholder, I think
it's requisite to continue support through Havana. Perhaps at the fall
summit we can evaluate our position on both alembic and sqlite migrations.



 Cheers,

 Thomas Goirand (zigo)






-- 

-Dolph