Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-16 Thread Thomas Goirand
On 07/16/2013 12:07 PM, Michael Still wrote:
 On Tue, Jul 16, 2013 at 1:44 PM, Thomas Goirand z...@debian.org wrote:
 
 In Debian, by policy, any package must be installable using
 DEBIAN_FRONTEND=noninteractive apt-get install. What I do in my postinst
 is call db_sync, because that isn't something our users should even
 care about, since it can be automated. The end result is that, for
 many packages like Keystone and Glance, simply doing apt-get install is
 enough to make them work, without needing to edit any configuration file.
 I want to be able to keep that nice feature.
 
 Is there any policy about how long a package can take to install?

No.

 db_sync might take many minutes for large installations. For example,
 folsom to grizzly takes about 7 minutes in my tests.
 
 Michael

Gosh!!! I haven't experienced that! Or is it that my upgrade path was
broken somehow? :) I shall test more.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-16 Thread Thomas Goirand
On 07/16/2013 01:50 PM, Dolph Mathews wrote:
 Make it work is an entirely different goal than make a
 production-ready deployment. If your goal in using sqlite is just to
 make it work, then I'm not sure that I would expect such an install to
 survive to the next release anyway... rendering migration support a
 nice-to-have. I can't imagine that any end users would be happy with a
 sqlite-based deployment for anything other than experimentation and testing.

In that case, IMO we should strongly state that fact, and not let our
users believe that SQLite is supported (my view is that if upgrades
aren't supported, then it's as if SQLite weren't supported at all).

 If the support for SQLite (db upgrades) has to go, I will understand and
 adapt. I haven't found, and probably won't find, the time to do the actual
 work to support SQLite upgrades, and therefore it is probably easier
 for me to stay out of it. I believe, though, that it is my duty to raise
 my concerns, and to say that I do not support this decision.
 
 I'm glad you spoke up.

Thanks! :)

 What direction do you think this should take? Your thoughts?
 
 
 I'd still like to pursue dropping support for sqlite migrations, albeit
 not as aggressively as I would have preferred. With a stakeholder, I
 think it's requisite to continue support through Havana. Perhaps at the
 fall summit we can evaluate our position on both alembic and sqlite
 migrations.

Could you explain a bit more what could be done to fix it in an easy
way, even if it's not efficient? I understand that ALTER doesn't work
well. Could we just create a new temporary table with the correct
fields, copy the existing content into it, then rename the temp table
so that it replaces the original one?

Cheers,

Thomas




Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-16 Thread Michael Still
On Tue, Jul 16, 2013 at 4:05 PM, Thomas Goirand z...@debian.org wrote:
 On 07/16/2013 12:07 PM, Michael Still wrote:
 On Tue, Jul 16, 2013 at 1:44 PM, Thomas Goirand z...@debian.org wrote:

 In Debian, by policy, any package must be installable using
 DEBIAN_FRONTEND=noninteractive apt-get install. What I do in my postinst
 is call db_sync, because that isn't something our users should even
 care about, since it can be automated. The end result is that, for
 many packages like Keystone and Glance, simply doing apt-get install is
 enough to make them work, without needing to edit any configuration file.
 I want to be able to keep that nice feature.

 Is there any policy about how long a package can take to install?

 No.

Hmmm. Ok. I do think it's something for you to track, though.

 db_sync might take many minutes for large installations. For example,
 folsom to grizzly takes about 7 minutes in my tests.

 Gosh!!! I haven't experienced that! Or is it that my upgrade path was
 broken somehow? :) I shall test more.

That's with a DB that records 30 million instances. If you're using a
trivial test database then you're not replicating the experience for a
real deployment.

Michael

--
Rackspace Australia



Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-16 Thread Michael Still
On Tue, Jul 16, 2013 at 4:17 PM, Thomas Goirand z...@debian.org wrote:

 Could you explain a bit more what could be done to fix it in an easy
 way, even if it's not efficient? I understand that ALTER doesn't work
 well. Could we just create a new temporary table with the correct
 fields, copy the existing content into it, then rename the temp table
 so that it replaces the original one?

There are a bunch of nova migrations that already work that way...
Check out the *sqlite* files in
nova/db/sqlalchemy/migrate_repo/versions/
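The pattern those nova scripts use (rebuild and rename rather than ALTER) can be sketched with plain sqlite3; the table and column names here are invented for illustration:

```python
import sqlite3

# Workaround for SQLite's limited ALTER TABLE support: create a table
# with the new schema, copy the rows over, drop the old table, then
# rename the new one into place.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT, obsolete TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice', 'x')")

db.executescript("""
    CREATE TABLE users_tmp (id INTEGER, name TEXT);
    INSERT INTO users_tmp (id, name) SELECT id, name FROM users;
    DROP TABLE users;
    ALTER TABLE users_tmp RENAME TO users;
""")
print(db.execute("SELECT id, name FROM users").fetchone())
```

The same data survives, but the table now has the migrated schema.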

Cheers,
Michael

--
Rackspace Australia



[openstack-dev] Reminder: Project release status meeting - 21:00 UTC

2013-07-16 Thread Thierry Carrez
Today in the Project & release status meeting: havana-2 is upon us!
We'll finalize havana-2 contents and get the sign-off for
milestone-proposed branch cuts at the end of the day.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All Technical Leads should be present (if you can't make it, please name
a substitute on [1]). Everyone else is very welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130716T21

See you there,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] savanna version 0.3 - added UI mockups for EDP workflow

2013-07-16 Thread Ruslan Kamaldinov
Chad,

I'd like to see more details about job progress on the Job list view. It
should display current progress, logs, errors.

For Hive, Pig and Oozie flows it would be useful to list all the jobs from
the task. Something similar to https://github.com/twitter/ambrose would be
great (without fancy graphs).


Ruslan


On Fri, Jul 12, 2013 at 7:14 PM, Chad Roberts crobe...@redhat.com wrote:

 I have added some initial UI mockups for version 0.3.
 Any comments are appreciated.

 https://wiki.openstack.org/wiki/Savanna/UIMockups/JobCreation

 Thanks,
 Chad



Re: [openstack-dev] [Keystone] Memcache user token index and PKI tokens

2013-07-16 Thread Kieran Spear

Hi Morgan,

On 16/07/2013, at 1:02 AM, Morgan Fainberg m...@metacloud.com wrote:
 On Mon, Jul 15, 2013 at 1:06 AM, Kieran Spear kisp...@gmail.com wrote:
 
 Hi Kieran,
 
 I'd be happy to help you with the backporting of this fix if you need any 
 assistance.
 
 Here are some quick answers to your questions:
 
 1. The individual tokens are stored in independent keys in their entirety, 
 since the memcache backend replaces the SQL backend.  In the 
 usertoken-userid key we should only be storing the hashed value.  It looks like 
 there is a bug there (I'll check and get a fix proposed in gerrit today to 
 address this).

That's what it looks like. I've pushed out a one-line fix to hash the token 
data on our cloud and it works well.
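Kieran's one-line fix presumably reduces each PKI token to its short-form ID before indexing. A sketch, assuming the short form is an MD5 hex digest of the token body (the helper names are hypothetical, not Keystone's):

```python
import hashlib

def short_token_id(pki_token):
    # Collapse a multi-kilobyte PKI token to a fixed 32-character key, so
    # the per-user index grows by ~32 bytes per token instead of ~4 KB.
    return hashlib.md5(pki_token.encode("utf-8")).hexdigest()

def index_user_token(user_token_list, pki_token):
    # Store only the hash in the per-user index; the full token body
    # lives under its own key in memcache.
    user_token_list.append(short_token_id(pki_token))
```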

 
 2.  There has been a lot of discussion on this topic and some other solutions 
 have been proposed (such as using a clock mechanism to expire tokens, etc).  
 However, to stay compatible with the current business logic, which requires 
 the ability to revoke subsets of tokens for the user (instead of all tokens for 
 the user on every change to the user's roles/project associations, etc), a 
 list of current tokens for the user must be maintained.  Ideally, the token 
 list shouldn't ever be massive, meaning the extra calls to memcache should be 
 limited.  The general plan is to develop another approach to solve this issue 
 without the need of keeping a user-token-list, but it just wasn't viewed as 
 possible for Havana.  Checking the first two tokens wouldn't exactly solve 
 the issue, as tokens can in theory be requested with varying expiration 
 times; this means that tokens in the middle of the list could be expired 
 whereas the tokens at the start of the list are still valid.
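Morgan's point, that varying expiry times force a walk of the whole list, can be illustrated with a small sketch (a dict stands in for memcache; names are hypothetical):

```python
import datetime

# Because tokens may carry differing expiry times, every entry in the
# per-user list must be checked -- inspecting only the first couple of
# entries could miss an expired token further down the list.
def prune_token_list(token_ids, lookup, now):
    """Keep only ids whose stored token still exists and has not expired."""
    kept = []
    for tid in token_ids:
        token = lookup(tid)  # one memcache GET per id
        if token is not None and token["expires"] > now:
            kept.append(tid)
    return kept

now = datetime.datetime(2013, 7, 16)
store = {
    "t1": {"expires": now + datetime.timedelta(hours=1)},
    "t2": {"expires": now - datetime.timedelta(hours=1)},  # expired mid-list
    "t3": {"expires": now + datetime.timedelta(hours=2)},
}
print(prune_token_list(["t1", "t2", "t3"], store.get, now))
```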

Thanks for your work in this area. Good point on the expiry times. I'm probably 
guilty of premature optimisation here anyway. Will work on backporting this 
shortly.

Kieran

 
 Keeping the expiry time low is definitely a good idea.
 
 
 Let me know if you want any further details.  I'd be happy to elaborate / 
 fill-in more.
 
 Cheers,
 Morgan Fainberg


Re: [openstack-dev] [Keystone] Memcache user token index and PKI tokens

2013-07-16 Thread Kieran Spear

On 16/07/2013, at 1:10 AM, Adam Young ayo...@redhat.com wrote:
 On 07/15/2013 04:06 AM, Kieran Spear wrote:
 Hi all,
 
 I want to backport the fix for the Token List in Memcache can consume
 an entire memcache page bug[1] to Grizzly, but I had a couple of
 questions:
 
 1. Why do we need to store the entire token data in the
 usertoken-userid key? This data always seems to be hashed before
 indexing into the 'token-tokenid' keys anyway. The size of the
 memcache data for a user's token list currently grows by 4k every time
 a new PKI token is created. It doesn't take long to hit 1MB at this
 rate even with the above fix.
 Yep. The reason, though, is that we either take a memory/storage hit (store 
 the whole token) or a performance hit (reproduce the token data) and we've 
 gone for the storage hit.

In this case it looks like we're taking a hit from both, since the PKI token 
id from the user token index is retrieved, then hashed and then that key is 
used to retrieve the token from the tokens-%s page anyway.

 
 
 
 2. Every time it creates a new token, Keystone loads each token from
 the user's token list with a separate memcache call so it can throw it
 away if it's expired. This seems excessive. Is it anything to worry
 about? If it just checked the first two tokens you'd get the same
 effect on a longer time scale.
 
 I guess part of the answer is to decrease our token expiry time, which
 should mitigate both issues. Failing that we'd consider moving to the
 SQL backend.
 How about doing both?  But if you move to the SQL backend, remember to 
 periodically clean up the token table, or you will have storage issues there 
 as well.  No silver bullet, I am afraid.
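The periodic cleanup Adam recommends amounts to deleting expired rows. A sketch against a toy table mirroring the token schema (real deployments would run something equivalent via cron against MySQL/Postgres; the column names are assumptions):

```python
import datetime
import sqlite3

# Toy stand-in for Keystone's SQL token table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE token (id TEXT PRIMARY KEY, expires TIMESTAMP)")

now = datetime.datetime(2013, 7, 16)
db.execute("INSERT INTO token VALUES ('a', ?)",
           (now - datetime.timedelta(hours=1),))  # already expired
db.execute("INSERT INTO token VALUES ('b', ?)",
           (now + datetime.timedelta(hours=1),))  # still valid

# The periodic cleanup: drop everything whose expiry is in the past.
deleted = db.execute("DELETE FROM token WHERE expires < ?", (now,)).rowcount
print(deleted)
```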

I think we're going to stick with memcache for now (the devil we know :)). With 
(1) and (2) fixed and the token expiration time tweaked I think memcache will 
do okay.

Kieran

 
 
 Cheers,
 Kieran
 
 [1] https://bugs.launchpad.net/keystone/+bug/1171985
 


[openstack-dev] [ceilometer] Is there need to implement CloudWatch compatible API for Ceilometer?

2013-07-16 Thread Guangyu Suo
Hi, all

Since Ceilometer is becoming stronger on the monitoring side, more and more
companies will use it as their monitoring service, and the AWS
CloudWatch API is a common standard for monitoring services. So I am wondering
whether there is a need to implement a CloudWatch-compatible API to make
Ceilometer more general. If so, I'd like to contribute the effort.

Thanks !


Re: [openstack-dev] Program Proposal: DevStack

2013-07-16 Thread Flavio Percoco

On 15/07/13 14:14 -0400, Russell Bryant wrote:

On 07/15/2013 11:39 AM, Dean Troyer wrote:

DevStack plays multiple roles in the development process for OpenStack.


Does it really make sense to be its own program?  There was mention of
just making it a part of infra or QA.  QA actually makes the most sense
to me, since devstack's primary use case is to make it easy to test
OpenStack.



I agree with Russell. 


I could also see this as part of the Infrastructure program, whose
mission statement is:

Develop and maintain the tooling and infrastructure needed to
support the development process and general operation of the
OpenStack project.


Cheers,
FF

--
@flaper87
Flavio Percoco



[openstack-dev] Moving task flow to conductor - concern about scale

2013-07-16 Thread Day, Phil
Hi Folks,

Reviewing some of the changes to move control flows into the conductor made me 
wonder about an issue that I haven't seen discussed so far (apologies if it was 
and I've missed it):

In the original context of using the conductor as a database proxy, the number 
of conductor instances is directly related to the number of compute hosts I 
need them to serve.   I don't have a feel for what this ratio is (as we haven't 
switched yet) but based on the discussions in Portland I have the expectation 
that, even with the eventlet performance fix in place, there could still need to 
be tens of them for a large deployment.

What I'm not sure about is whether I would also want the same number of conductor 
instances for task control flow - historically even running 2 schedulers has 
been a problem, so the thought of having tens of them makes me very concerned 
at the moment.   However, I can't see any way to specialise a conductor to only 
handle one type of request.

So I guess my question is, given that it may have to address two independent 
scale drivers, is putting task work flow and DB proxy functionality into the 
same service really the right thing to do - or should there be some separation 
between them.

Don't get me wrong - I'm not against the concept of having the task workflow 
in a well defined place - just wondering if the conductor is really the logical 
place to do it rather than, for example, making this part of an extended set 
of functionality for the scheduler (which is already a separate service with 
its own scaling properties).

Thoughts ?

Phil


Re: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

2013-07-16 Thread Daniel P. Berrange
On Tue, Jul 16, 2013 at 09:41:55AM -0400, Solly Ross wrote:
 (This email is with regards to https://review.openstack.org/#/c/36316/)
 
 Hello All,
 
 I have been implementing the Guru Meditation Report blueprint
 (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report),
 and the question of a templating engine was raised.  Currently, my
 version of the code includes the Jinja2 templating engine
 (http://jinja.pocoo.org/), which is modeled after the Django
 templating engine (it was designed to be an implementation of the
 Django templating engine without requiring the use of Django), which
 is used in Horizon.  Apparently, the Cheetah templating engine
 (http://www.cheetahtemplate.org/) is used in a couple places in Nova.
 
 IMO, the Jinja template language produces much more readable templates,
 and I think is the better choice for inclusion in the Report framework.
  It also shares a common format with Django (making it slightly easier
 to write for people coming from that area), and is also similar to
 template engines for other languages. What does everyone else think?
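As a point of reference for the readability claim, a minimal Jinja2 render looks like this (requires the third-party jinja2 package; the template content is invented):

```python
from jinja2 import Template

# Django-style syntax: {{ }} for substitution, {% %} for control flow.
tmpl = Template(
    "Report for {{ name }}\n"
    "{% for t in threads %}thread {{ t }}\n{% endfor %}"
)
print(tmpl.render(name="nova-compute", threads=[1, 2]))
```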

Repeating my comments from the review...

I don't have an opinion on whether Jinja or Cheetah is a better
choice, since I've essentially never used either of them (beyond
deleting usage of Cheetah from libvirt). I do, however, feel we
should not needlessly use multiple different templating libraries
across OpenStack. We should take care to standardize on one option
that is suitable for all our needs. So if the consensus is that
Jinja is better, then IMHO there would need to be a blueprint
+ expected timeframe to port the existing Cheetah usage to Jinja.

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] Moving task flow to conductor - concern about scale

2013-07-16 Thread Dan Smith
 In the original context of using the conductor as a database proxy, the
 number of conductor instances is directly related to the number
 of compute hosts I need them to serve. 

Just a point of note, as far as I know, the plan has always been to
establish conductor as a thing that sits between the api and compute
nodes. However, we started with the immediate need, which was the
offloading of database traffic.

 What I'm not sure about is whether I would also want the same number of
 conductor instances for task control flow - historically even running
 2 schedulers has been a problem, so the thought of having tens of
 them makes me very concerned at the moment.   However I can't see any
 way to specialise a conductor to only handle one type of request.

Yeah, I don't think the way it's currently being done allows for
specialization.

Since you were reviewing actual task code, can you offer any specifics
about the thing(s) that concern you? I think that scaling conductor (and
its tasks) horizontally is an important point we need to achieve, so if
you see something that needs tweaking, please point it out.

Based on what is there now and proposed soon, I think it's mostly fairly
safe, straightforward, and really no different than what two computes do
when working together for something like resize or migrate.

 So I guess my question is, given that it may have to address two
 independent scale drivers, is putting task work flow and DB proxy
 functionality into the same service really the right thing to do - or
 should there be some separation between them.

I think that we're going to need more than one task node, and so it
seems appropriate to locate one scales-with-computes function with
another.

Thanks!

--Dan



Re: [openstack-dev] Program Proposal: DevStack

2013-07-16 Thread John Griffith
On Tue, Jul 16, 2013 at 7:42 AM, Mark McLoughlin mar...@redhat.com wrote:

 On Mon, 2013-07-15 at 14:14 -0400, Russell Bryant wrote:
  On 07/15/2013 11:39 AM, Dean Troyer wrote:
   DevStack plays multiple roles in the development process for OpenStack.
 
  Does it really make sense to be its own program?  There was mention of
  just making it a part of infra or QA.  QA actually makes the most sense
  to me, since devstack's primary use case is to make it easy to test
  OpenStack.

 How about if the title of the program was 'Developer Tools'?

 Is devstack the only project which would fall into that bucket?

 Cheers,
 Mark.



Due to its importance, particularly for development purposes, I would like to
see devstack as its own program.  It's a critical piece of OpenStack, from
development to gating, and I personally would be more comfortable if it was
a dedicated program.

Given some of the other program/project proposals, quite frankly I was a bit
surprised this met any real question.

Perhaps folks could elaborate more on why it would be better for this to be
wrapped under QA, for example?


Re: [openstack-dev] Program Proposal: DevStack

2013-07-16 Thread Thierry Carrez
Mark McLoughlin wrote:
 On Mon, 2013-07-15 at 14:14 -0400, Russell Bryant wrote:
 On 07/15/2013 11:39 AM, Dean Troyer wrote:
 DevStack plays multiple roles in the development process for OpenStack.

 Does it really make sense to be its own program?  There was mention of
 just making it a part of infra or QA.  QA actually makes the most sense
 to me, since devstack's primary use case is to make it easy to test
 OpenStack.
 
 How about if the title of the program was 'Developer Tools'?
 
 Is devstack the only project which would fall into that bucket?

The trick with programs is that it's not just about finding themes and
aggregating existing projects around them. At the heart of a program is an
existing team of people that works (well) together towards a common goal.

Putting devstack in a Developer tools bucket only makes sense if the
team that cares about devstack also cares about those other developer
tools. You don't really want to create an ugly stepchild effect by
adding random projects onto unwilling people's plates.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Keystone] Memcache user token index and PKI tokens

2013-07-16 Thread Morgan Fainberg
On Tue, Jul 16, 2013 at 4:01 AM, Kieran Spear kisp...@gmail.com wrote:

Hi Kieran,

I've looked into the potential bug you described and it appears that
there has been a change in the master branch to support the idea of
pluggable token providers (much better implementation than the driver
being responsible for the token itself).  This change modified how the
memcache driver stored the IDs, and performed the CMS hashing function
when the manager returned the token_id to the driver, instead of
in-line within the driver.  The original fix should have been correct
in hashing the PKI token to the short-form ID.  Your fix to simply
hash the tokens is the correct one and more closely mirrors how the
original fix was implemented.

If you are interested in the reviews that implement the new pluggable
provider(s): https://review.openstack.org/#/c/33858/ (V3) and
https://review.openstack.org/#/c/34421/ (V2.0).

Going with the shorter TTL on the tokens is a good idea for various
reasons depending on the token driver.  I know that the SQL driver
(provided you cleanup expired tokens) has worked well for my company,
but I want to move to the memcache driver soon.

Cheers,
Morgan Fainberg



Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-16 Thread Morgan Fainberg
On Tue, Jul 16, 2013 at 12:55 AM, Michael Still mi...@stillhq.com wrote:

If we really want to support the concept of SQLite migrations, the way
nova does it seems the most sane.  I'm not 100% convinced that
SQLite migrations are worth supporting, but then again, I am not the
target audience for them (besides in a simple development capacity, and I
still validate against MySQL/Postgres mostly).  If there is a demand
for SQLite, I'd say Michael has hit it on the head: the way Nova
handles this is a fairly clean mechanism and far more supportable over
the short and medium term(s) than working around migrate issues
with SQLite and its limited ALTER support.

In one of the discussions in IRC I had offered to help with the effort
of moving away from SQLite migration testing; if the nova-way is the
way we want to go, I'll be happy to help contribute to that.

Cheers,
Morgan Fainberg



Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-16 Thread Thomas Goirand
On 07/16/2013 03:55 PM, Michael Still wrote:
 On Tue, Jul 16, 2013 at 4:17 PM, Thomas Goirand z...@debian.org wrote:
 
 Could you explain a bit more what could be done to fix it in an easy
 way, even if it's not efficient? I understand that ALTER doesn't work
 well. Though would we have the possibility to just create a new
 temporary table with the correct fields, and copy the existing content
 in it, then rename the temp table so that it replaces the original one?
 
 There are a bunch of nova migrations that already work that way...
 Checkout the *sqlite* files in
 nova/db/sqlalchemy/migrate_repo/versions/

Why can't we do that with Keystone then? It doesn't seem hard to do
(just probably a bit annoying and boring ...). Or does it represent too
much work in the case of Keystone?

Thomas




Re: [openstack-dev] [Nova] Seperate out 'soft-deleted' instances from 'deleted' ones?

2013-07-16 Thread Dan Smith
  I'd like to see if there are any opinions on whether this should come
  back as 'SOFT_DELETED' or if everyone is OK with mapping
  soft-delete to 'DELETED' in the v3 API?
 
 I would like to see them merged.  Having multiple kinds of deleted 
 records is really confusing, and leads to bugs.  The more public we
 make this, the harder it will be to fix it in the future.

My preference is that we only expose one type of deleted state. I've
not yet seen a compelling reason to do otherwise.

--Dan



Re: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

2013-07-16 Thread Doug Hellmann
On Tue, Jul 16, 2013 at 9:51 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Tue, Jul 16, 2013 at 09:41:55AM -0400, Solly Ross wrote:
  (This email is with regards to https://review.openstack.org/#/c/36316/)
 
  Hello All,
 
  I have been implementing the Guru Meditation Report blueprint
  (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report),
  and the question of a templating engine was raised.  Currently, my
  version of the code includes the Jinja2 templating engine
  (http://jinja.pocoo.org/), which is modeled after the Django
  templating engine (it was designed to be an implementation of the
  Django templating engine without requiring the use of Django), which
  is used in Horizon.  Apparently, the Cheetah templating engine
  (http://www.cheetahtemplate.org/) is used in a couple places in Nova.
 
  IMO, the Jinja template language produces much more readable templates,
  and I think is the better choice for inclusion in the Report framework.
   It also shares a common format with Django (making it slightly easier
  to write for people coming from that area), and is also similar to
  template engines for other languages. What does everyone else think?

 Repeating my comments from the review...

 I don't have an opinion on whether Jinja or Cheetah is a better
 choice, since I've essentially never used either of them (beyond
 deleting usage of Cheetah from libvirt). I do, however, feel we
 should not needlessly use multiple different templating libraries
 across OpenStack. We should take care to standardize on one option
 that is suitable for all our needs. So if the consensus is that
 Jinja is better, then IMHO there would need to be a blueprint
 + expected timeframe to port the existing Cheetah usage to Jinja.

 Regards,
 Daniel


The most recent release of Cheetah is from 2010. I don't have a problem
adding a new dependency on a tool that is actively maintained, with a plan
to migrate off the older tool coming later.

The Neutron team seems to want to use Mako (
https://review.openstack.org/#/c/37177/). Maybe we should pick one? Keep in
mind that we won't always be generating XML or HTML, so my first question
is how well does Mako work for plain text?

Doug
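As a point of reference, plain-text generation with Jinja2 is straightforward; a minimal sketch (template contents invented, assuming jinja2 is installed):

```python
from jinja2 import Template

# A plain-text (non-HTML) template -- the kind of output a report
# framework would emit.
template = Template(
    "Report for {{ host }}\n"
    "{% for line in lines %}- {{ line }}\n{% endfor %}"
)

output = template.render(host="worker-1", lines=["threads: 12", "uptime: 3d"])
print(output)
```

Nothing here is HTML-specific, which is the property being asked about.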


 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/:|
 |: http://libvirt.org  -o- http://virt-manager.org:|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/:|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc:|

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [DB][Migrations] Switching to using of Alembic

2013-07-16 Thread Roman Podolyaka
Hello, stackers!

Most of you who are interested in the work around the DB layer in OpenStack
must have read this thread [1], started by Boris Pavlovic. Boris gave an
overview of the work our team is doing to make the DB code better.

One of our main goals is to switch from sqlalchemy-migrate to Alembic for
applying DB schema migrations. sqlalchemy-migrate was abandoned for a long
time, and even now that it has been adopted by the OpenStack community, we
would be better off using a project that is supported upstream (especially
since the author of that project is the same person who authored SQLAlchemy).

The switch isn't going to be simple though. We have a few problems:

1) stable releases must be supported for some time, so we can't switch from
migrate to alembic immediately

The switch should probably be made when the previous migration scripts are
compacted, so that all new migration scripts use Alembic. Switching big
projects such as Nova is hard, so we decided to gain some experience by
porting smaller ones first. Alexei Kornienko is currently working on adding
support for Alembic migrations to Ceilometer [3].

Our long-term goal is to switch all projects from sqlalchemy-migrate to
Alembic.

2) we rely on schema migrations to set up an SQLite database for running
tests

Nova and possibly other projects use schema migrations to set up an SQLite
database for running tests. Unfortunately, we can't use the model definitions
to generate the initial DB schema, because those definitions do not correspond
to the migration scripts. Our team is working on fixing this issue [2].

As you may know, SQLite has limited support for ALTER DDL statements [4]. Nova
code contains a few auxiliary functions to make ALTER work on SQLite.
Unfortunately, Alembic deliberately does not support ALTER on SQLite [5]. In
order to run our tests on SQLite with Alembic as the schema migration tool, we
first need to add ALTER support for it.
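For context, a sketch of the table-rebuild trick that such auxiliary ALTER helpers typically use (table and column names invented): since SQLite cannot drop or modify a column in place, you create a new table, copy the data, drop the old one, and rename:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT, legacy INTEGER)"
)
conn.execute("INSERT INTO instances (name, legacy) VALUES ('vm-1', 0)")

# "DROP COLUMN legacy" emulated by rebuilding the table.
conn.executescript("""
    CREATE TABLE instances_new (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO instances_new (id, name) SELECT id, name FROM instances;
    DROP TABLE instances;
    ALTER TABLE instances_new RENAME TO instances;
""")

cols = [row[1] for row in conn.execute("PRAGMA table_info(instances)")]
print(cols)  # ['id', 'name']
```

This is the kind of bookkeeping that would have to live inside Alembic (or a wrapper around it) before SQLite-based test runs could rely on it.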

We are going to implement ALTER support in Alembic for SQLite in the next
few weeks.

As always, your comments on the ML and in reviews are welcome.

Thanks,
Roman

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-July/011253.html
[2]
https://blueprints.launchpad.net/nova/+spec/db-sync-models-with-migrations
[3]
https://review.openstack.org/#/q/status:open+project:openstack/ceilometer+branch:master+topic:bp/convert-to-alembic,n,z
[4] http://www.sqlite.org/lang_altertable.html
[5] https://bitbucket.org/zzzeek/alembic


Re: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

2013-07-16 Thread Doug Hellmann
Great, I think I had the Mako syntax mixed up with a different templating
language that depended on having a DOM to work on.

Can someone put together a more concrete analysis than "this is working" so
we can compare the tools? :-)

Doug

On Tue, Jul 16, 2013 at 12:29 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Doug

 Mako looks OK for config generation
 This is code in review.

 https://review.openstack.org/#/c/33148/23/neutron/services/vpn/device_drivers/template/ipsec.conf.template



 2013/7/16 Doug Hellmann doug.hellm...@dreamhost.com:
 
 
 
  On Tue, Jul 16, 2013 at 9:51 AM, Daniel P. Berrange berra...@redhat.com
 
  wrote:
 
  On Tue, Jul 16, 2013 at 09:41:55AM -0400, Solly Ross wrote:
   (This email is with regards to
 https://review.openstack.org/#/c/36316/)
  
   Hello All,
  
   I have been implementing the Guru Meditation Report blueprint
   (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report),
   and the question of a templating engine was raised.  Currently, my
   version of the code includes the Jinja2 templating engine
   (http://jinja.pocoo.org/), which is modeled after the Django
   templating engine (it was designed to be an implementation of the
   Django templating engine without requiring the use of Django), which
   is used in Horizon.  Apparently, the Cheetah templating engine
   (http://www.cheetahtemplate.org/) is used in a couple places in Nova.
  
   IMO, the Jinja template language produces much more readable
 templates,
   and I think is the better choice for inclusion in the Report
 framework.
It also shares a common format with Django (making it slightly easier
   to write for people coming from that area), and is also similar to
   template engines for other languages. What does everyone else think?
 
  Repeating my comments from the review...
 
  I don't have an opinion on whether Jinja or Cheetah is a better
  choice, since I've essentially never used either of them (beyond
  deleting usage of Cheetah from libvirt). I do, however, feel we
  should not needlessly use multiple different templating libraries
  across OpenStack. We should take care to standardize on one option
  that is suitable for all our needs. So if the consensus is that
  Jinja is better, then IMHO, there would need to be a blueprint
  + expected timeframe to port existing Cheetah usage to use Jinja.
 
  Regards,
  Daniel
 
 
  The most current release of Cheetah is from 2010. I don't have a problem
  adding a new dependency on a tool that is actively maintained, with a
 plan
  to migrate off of the older tool to come later.
 
  The Neutron team seems to want to use Mako
  (https://review.openstack.org/#/c/37177/). Maybe we should pick one?
 Keep in
  mind that we won't always be generating XML or HTML, so my first
 question is
  how well does Mako work for plain text?
 
  Doug
 
 
  --
  |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
  |: http://libvirt.org  -o-  http://virt-manager.org :|
  |: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
  |: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|
 


Re: [openstack-dev] [DB][Migrations] Switching to using of Alembic

2013-07-16 Thread Doug Hellmann
On Tue, Jul 16, 2013 at 11:51 AM, Roman Podolyaka
rpodoly...@mirantis.com wrote:

 Hello, stackers!

 Most of you who are interested in work around DB in OpenStack must have
 read this thread [1] started by Boris Pavlovic. Boris made an overview of
 the work our team is doing to make DB code better.

 One of our main goals is to switch from sqlalchemy-migrate to Alembic for
 applying of DB schema migrations. sqlalchemy-migrate was abandoned for a
 long time, and even now when it's become adopted by OpenStack community,
 we'd better use a project which is supported by upstream (especially in the
 case when the author of this project is the same person who also authored
 SQLAlchemy).

 The switch isn't going to be simple though. We have a few problems:

 1) stable releases must be supported for some time, so we can't switch
 from migrate to alembic immediately

 The switch should probably be made when previous migrations scripts are
 compacted, so all new migrations scripts will use alembic. Switching of
 such big projects as Nova is hard, so we decided to gain some experience
 with porting of smaller ones first. Alexei Kornienko is currently working
 on adding support of Alembic migrations in Ceilometer [3].


I like the idea of switching when we do a major release. I don't think we
need to port the old migrations to alembic, though, as I said to Alexei on
the review. We should be able to continue to have sqlalchemy-migrate
distributed as part of OpenStack for the legacy migrations until those
migrations can be dropped entirely. Updating would mean running
sqlalchemy-migrate, and then alembic, to apply the full set of migrations.
From what I understand the policy to be, since we have sqlalchemy-migrate
scripts in Havana we could stop creating new ones in Icehouse and drop the
use of sqlalchemy-migrate entirely in the J release when the Havana
migrations are removed.

Doug



 Our long term goal is to switch all projects from using of
 sqlalchemy-migrate to Alembic.

 2) we rely on schema migrations to set up an SQLite database for running
 tests

 Nova and possibly other projects use schema migrations to set up an SQLite
 database for running tests. Unfortunately, we can't use models definitions
 for generation of initial DB schema, because those definitions do not
 correspond migration scripts. Our team is working on fixing of this issue
 [2].


 As you may know, SQLite has limited support of ALTER DDL statements [4].
 Nova code contains a few auxiliary functions to make ALTER work in SQLite.
 Unfortunately, Alembic doesn't support ALTER in SQLite on purpose [5]. In
 order to run our tests on SQLite right now using Alembic as a schema
 migration tool, we should add ALTER support to it first.

 We are going to implement ALTER support in Alembic for SQLite in the next
 few weeks.

 As always, your comments in ML and reviews are always welcome.

 Thanks,
 Roman

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2013-July/011253.html
 [2]
 https://blueprints.launchpad.net/nova/+spec/db-sync-models-with-migrations
 [3]
 https://review.openstack.org/#/q/status:open+project:openstack/ceilometer+branch:master+topic:bp/convert-to-alembic,n,z
 [4] http://www.sqlite.org/lang_altertable.html
 [5] https://bitbucket.org/zzzeek/alembic



Re: [openstack-dev] [DB][Migrations] Switching to using of Alembic

2013-07-16 Thread Dolph Mathews
On Tue, Jul 16, 2013 at 11:53 AM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:




 On Tue, Jul 16, 2013 at 11:51 AM, Roman Podolyaka rpodoly...@mirantis.com
  wrote:

 Hello, stackers!

 Most of you who are interested in work around DB in OpenStack must have
 read this thread [1] started by Boris Pavlovic. Boris made an overview of
 the work our team is doing to make DB code better.

 One of our main goals is to switch from sqlalchemy-migrate to Alembic for
 applying of DB schema migrations. sqlalchemy-migrate was abandoned for a
 long time, and even now when it's become adopted by OpenStack community,
 we'd better use a project which is supported by upstream (especially in the
 case when the author of this project is the same person who also authored
 SQLAlchemy).

 The switch isn't going to be simple though. We have a few problems:

 1) stable releases must be supported for some time, so we can't switch
 from migrate to alembic immediately

 The switch should probably be made when previous migrations scripts are
 compacted, so all new migrations scripts will use alembic. Switching of
 such big projects as Nova is hard, so we decided to gain some experience
 with porting of smaller ones first. Alexei Kornienko is currently working
 on adding support of Alembic migrations in Ceilometer [3].


 I like the idea of switching when we do a major release. I don't think we
 need to port the old migrations to alembic, though, as I said to Alexei on
 the review. We should be able to continue to have sqlalchemy-migrate
 distributed as part of OpenStack for the legacy migrations until those
 migrations can be dropped entirely. Updating would mean running
 sqlalchemy-migrate, and then alembic, to apply the full set of migrations.
 From what I understand the policy to be, since we have sqlalchemy-migrate
 scripts in Havana we could stop creating new ones in Icehouse and drop the
 use of sqlalchemy-migrate entirely in the J release when the Havana
 migrations are removed.


This would be my preferred approach as well.


 Doug



 Our long term goal is to switch all projects from using of
 sqlalchemy-migrate to Alembic.

 2) we rely on schema migrations to set up an SQLite database for running
 tests

 Nova and possibly other projects use schema migrations to set up an
 SQLite database for running tests. Unfortunately, we can't use models
 definitions for generation of initial DB schema, because those definitions
 do not correspond migration scripts. Our team is working on fixing of this
 issue [2].


 As you may know, SQLite has limited support of ALTER DDL statements [4].
 Nova code contains a few auxiliary functions to make ALTER work in SQLite.
 Unfortunately, Alembic doesn't support ALTER in SQLite on purpose [5]. In
 order to run our tests on SQLite right now using Alembic as a schema
 migration tool, we should add ALTER support to it first.

 We are going to implement ALTER support in Alembic for SQLite in the next
 few weeks.

 As always, your comments in ML and reviews are always welcome.

 Thanks,
 Roman

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2013-July/011253.html
 [2]
 https://blueprints.launchpad.net/nova/+spec/db-sync-models-with-migrations
 [3]
 https://review.openstack.org/#/q/status:open+project:openstack/ceilometer+branch:master+topic:bp/convert-to-alembic,n,z
 [4] http://www.sqlite.org/lang_altertable.html
 [5] https://bitbucket.org/zzzeek/alembic





-- 

-Dolph


Re: [openstack-dev] lambda() Errors in Quantum/Neutron Grizzly (2013.1.2)

2013-07-16 Thread Craig E. Ward

Edgar,

Below are the versions of the quantum/neutron packages and of RabbitMQ. I've 
attached a text file with the debug lines from each agent reporting on their 
configuration parameters.


The Quantum/Neutron packages came from 
http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/. I 
believed them to be built from the June 6, 2013 Quantum Grizzly release.


Thanks,

Craig

Name: openstack-quantum
Arch: noarch
Version : 2013.1.2
Release : 1.el6
Size: 83 k
Repo: installed
From repo   : /openstack-quantum-2013.1.2-1.el6.noarch
Summary : OpenStack Networking Service
URL : http://launchpad.net/quantum/
License : ASL 2.0
Description : Quantum is a virtual network service for Openstack. Just like
            : OpenStack Nova provides an API to dynamically request and configure
            : virtual servers, Quantum provides an API to dynamically request and
            : configure virtual networks. These networks connect interfaces from
            : other OpenStack services (e.g., virtual NICs from Nova VMs). The
            : Quantum API supports extensions to provide advanced network
            : capabilities (e.g., QoS, ACLs, network monitoring, etc.)

Name: openstack-quantum-linuxbridge
Arch: noarch
Version : 2013.1.2
Release : 1.el6
Size: 153 k
Repo: installed
From repo   : /openstack-quantum-linuxbridge-2013.1.2-1.el6.noarch
Summary : Quantum linuxbridge plugin
URL : http://launchpad.net/quantum/
License : ASL 2.0
Description : Quantum provides an API to dynamically request and configure
            : virtual networks.
:
: This package contains the quantum plugin that implements virtual
: networks as VLANs using Linux bridging.

Name: python-quantum
Arch: noarch
Version : 2013.1.2
Release : 1.el6
Size: 2.8 M
Repo: installed
From repo   : /python-quantum-2013.1.2-1.el6.noarch
Summary : Quantum Python libraries
URL : http://launchpad.net/quantum/
License : ASL 2.0
Description : Quantum provides an API to dynamically request and configure
            : virtual networks.
:
: This package contains the quantum Python library.

Name: rabbitmq-server
Arch: noarch
Version : 2.6.1
Release : 1.el6
Size: 1.5 M
Repo: installed
From repo   : dodcs-sw-iso
Summary : The RabbitMQ server
URL : http://www.rabbitmq.com/
License : MPLv1.1
Description : RabbitMQ is an implementation of AMQP, the emerging standard for
            : high performance enterprise messaging. The RabbitMQ server is a
            : robust and scalable implementation of an AMQP broker.


On 07/15/2013 05:27 PM, Edgar Magana wrote:

Craig,

It will help if you can add more information about your set-up:
release version?
devstack configuration (if you are using it)
configuration files

Also, we recently renamed all quantum references to neutron, so if you are
using the master branch this error is really weird.

Thanks,

Edgar


On Mon, Jul 15, 2013 at 4:23 PM, Craig E. Ward cw...@isi.edu wrote:


I am seeing strange errors in a single-node OpenStack Grizzly
installation. The logs are complaining about a mismatch of arguments and
cover the linuxbridge, dhcp, and l3 agents. Below is a sample:

   TypeError: lambda() takes exactly 2 arguments (3 given)

The numbers expected and given are not consistent. It looks like a coding
error, but I can't believe such an error would have made it into a
distribution so it must be that I've configured something incorrectly. I've
attached a text file with more detailed examples. Any help diagnosing this
problem will be much appreciated.

What am I doing wrong? What other information would be useful to look at?
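For illustration, a minimal sketch (names invented, not from the Quantum code) of how this class of TypeError arises: a callback registered with a two-argument signature is invoked by code that passes three, which usually points at mismatched library versions rather than a configuration mistake:

```python
# A two-argument callback, as an older library version would register it.
hook = lambda connection, record: None

try:
    # A newer caller passes a third argument -- the signatures no longer match.
    hook("connection", "record", "extra")
    raised = False
except TypeError as exc:
    raised = True
    print(exc)  # the exact wording of the message varies by Python version
```

If the agents and their installed libraries come from different releases, a signature change like this surfaces exactly as the log lines above, with inconsistent argument counts depending on which call site fires.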

Thanks,

Craig

--
Craig E. Ward
Information Sciences Institute
University of Southern California
cw...@isi.edu





--
Craig E. Ward
Information Sciences Institute
University of Southern California
cw...@isi.edu


DHCP Agent
==
[Prefix timestamp DEBUG [quantum.openstack.common.service] removed.]
Full set of CONF:

Configuration options gathered from:
command line args: ['--log-file', '/var/log/quantum/dhcp-agent.log', 
'--config-file', '/etc/quantum/quantum.conf', '--config-file', 
'/etc/quantum/dhcp_agent.ini']
config files: ['/etc/quantum/quantum.conf', '/etc/quantum/dhcp_agent.ini']


[openstack-dev] Neutron -- creating networks with no assigned tenant

2013-07-16 Thread Jay Pipes
The way that Folsom with nova + nova-network works is that you create a
bunch of unassigned networks (no tenant assigned to them), and when a
tenant first launches an instance, nova grabs an available network and
assigns it to that tenant. Each instance the tenant spins up after that
gets an IP in the specific network it was assigned.
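A hedged sketch (all names invented) of the allocation behavior described above: a tenant's first boot claims an unassigned network, and later boots reuse it:

```python
networks = [
    {"id": "net-1", "tenant": None},
    {"id": "net-2", "tenant": None},
]

def network_for_tenant(tenant_id):
    # Reuse the network already assigned to this tenant, if any.
    for net in networks:
        if net["tenant"] == tenant_id:
            return net
    # Otherwise claim the first unassigned network.
    for net in networks:
        if net["tenant"] is None:
            net["tenant"] = tenant_id
            return net
    raise RuntimeError("no unassigned networks left")

first = network_for_tenant("tenant-a")["id"]
again = network_for_tenant("tenant-a")["id"]
other = network_for_tenant("tenant-b")["id"]
print(first, again, other)  # net-1 net-1 net-2
```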


How can I do the same thing with Neutron? I don't want my tenants or an 
admin to have to manually create a network in Neutron every time a 
tenant is added.


Thanks,
-jay



Re: [openstack-dev] lambda() Errors in Quantum/Neutron Grizzly (2013.1.2)

2013-07-16 Thread Craig E. Ward

Ajiva,

The database is MySQL, 5.1.69.

The RPC service is RabbitMQ version 2.6.1.

Is there another database in play that I am not aware of?

I attached a file with the configuration data each agent in seeing in my 
response to Edgar M. That may have more relevant data.


Thanks,

Craig

On 07/15/2013 07:50 PM, Ajiva Fan wrote:

context.session.query(xxx).filter() arguments number not matched

which database are you using?


On Tue, Jul 16, 2013 at 8:27 AM, Edgar Magana emag...@plumgrid.com wrote:


Craig,

It will help if you can add more information about your set-up:
release version?
devstack configuration (if you are using it)
configuration files

Also, we recently renamed all quantum references to neutron, so if you are
using the master branch this error is really weird.

Thanks,

Edgar


On Mon, Jul 15, 2013 at 4:23 PM, Craig E. Ward cw...@isi.edu wrote:


I am seeing strange errors in a single-node OpenStack Grizzly
installation. The logs are complaining about a mismatch of arguments and
cover the linuxbridge, dhcp, and l3 agents. Below is a sample:

   TypeError: lambda() takes exactly 2 arguments (3 given)

The numbers expected and given are not consistent. It looks like a coding
error, but I can't believe such an error would have made it into a
distribution so it must be that I've configured something incorrectly. I've
attached a text file with more detailed examples. Any help diagnosing this
problem will be much appreciated.

What am I doing wrong? What other information would be useful to look at?

Thanks,

Craig

--
Craig E. Ward
Information Sciences Institute
University of Southern California
cw...@isi.edu





--
Craig E. Ward
Information Sciences Institute
University of Southern California
cw...@isi.edu





Re: [openstack-dev] Chalenges with highly available service VMs

2013-07-16 Thread Ian Wells
On 10 July 2013 21:14, Vishvananda Ishaya vishvana...@gmail.com wrote:
 It used to be essential back when we had nova-network and all tenants
 ended up on one network.  It became less useful when tenants could
 create their own networks and could use them as they saw fit.

 It's still got its uses - for instance, it's nice that the metadata
 server can be sure that a request is really coming from where it
 claims - but I would very much like it to be possible to, as an
 option, explicitly disable antispoof - perhaps on a per-network basis
 at network creation time - and I think we could do this without
 breaking the security model beyond all hope of usefulness.

 Per network and per port makes sense.

 After all, this is conceptually the same as enabling or disabling
 port security on your switch.

Bit late on the reply to this, but I think we should be specific on
the network, at least at creation time, on what disabling is allowed
at port level (default off, may be off, must be on as now).  Yes, it's
exactly like disabling port security, and you're not always the
administrator of your own switch; if we extend the analogy you
probably wouldn't necessarily want people turning antispoof off on an
explicitly shared-tenant network.
-- 
Ian.



[openstack-dev] Program Proposal: Trove

2013-07-16 Thread Michael Basnight
Official Title: OpenStack Database as a Service
Initial PTL: Michael Basnight mbasni...@gmail.com

Mission Statement: To provide scalable and reliable Cloud Database as a Service 
functionality for both relational and non-relational database engines, and to 
continue to improve its fully-featured and extensible open source framework.

GitHub: https://github.com/openstack/trove
LaunchPad: https://launchpad.net/Trove
Program Wiki: https://wiki.openstack.org/wiki/Trove


Re: [openstack-dev] oslo-config dev requirement

2013-07-16 Thread Monty Taylor


On 07/16/2013 11:42 AM, Doug Hellmann wrote:
 
 
 
 On Tue, Jul 16, 2013 at 7:58 AM, Mark McLoughlin mar...@redhat.com wrote:
 
 On Mon, 2013-07-15 at 14:28 -0400, Doug Hellmann wrote:
  On Mon, Jul 15, 2013 at 11:03 AM, Monty Taylor
 mord...@inaugust.com wrote:
 
   I was looking in to dependency processing as part of some pbr
 change,
   which got me to look at the way we're doing oslo-config dev
 requirements
   again. To start, this email is not about causing us to change
 what we're
   doing, only possibly the mechanics of what we put in the
   requirements.txt file- or to get a more specific example of what
 we're
   solving so that I can make a test case for it and ensure we're
 handling
   it properly.
  
   Currently, we have this:
  
   -f
   http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz#egg=oslo.config-1.2.0a3
   oslo.config>=1.2.0a3
  
   As the way to specify to install >= 1.2.0a3 of oslo.config. I
 believe
   this construct has grown in response to a sequence of issues,
 but it's
   complex and fragile, so I'd like to explore what's going on.
  
   The simplest answer would be simply to replace it with:
  
   http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz
  
   which will quite happily cause pip to install the contents of that
   tarball. It does not declare a version, but it doesn't need to,
   because the tarball contains only one version. Is there a problem
   we have identified where the wrong thing is happening?
  
 
   I've tested that I get the right thing in a virtualenv if I make
 that
   change from pip installing a tarball, pip installing the
 requirements
   directly and python setup.py install. Is there anything I'm missing?
  
   Monty
  
 
 
  Without the version specifier, we are relying on all projects to
 install
  the right version from that tarball link when we run devstack, but
 we have
  no guarantee that they are moving to new releases in lockstep.
 
 Yep, that's it. The thing to test would be if some projects have the
 1.2.0a2 tarball link and an one has the 1.2.0a3 link because it depends
 on an API that's new in 1.2.0a3.
 
 
 It's worse than that. What gets installed will depend on the order
 devstack installs the projects and what their respective requirements
 lists say. It is possible to end up with compatible source code
 installed, but with a version number that setuptools thinks is not
 compatible based on projects' requirements. In that case, setuptools may
 not let us load plugins, so services will start but not actually work.

Thank you. This is what I needed here.

BTW - I put this up:

https://review.openstack.org/#/c/35705/

To take a stab at installing our global requirements list first, and
then installing our projects in that environment. I also just made:

https://review.openstack.org/#/c/37295/

Which would add oslo.config and oslo.messaging as git repos to the
devstack system, so that we can track trunk like we do with the other
projects.

Monty



Re: [openstack-dev] [Nova] Seperate out 'soft-deleted' instances from 'deleted' ones?

2013-07-16 Thread Day, Phil
  -Original Message-
  From: David Ripton [mailto:drip...@redhat.com]
  Sent: 16 July 2013 15:39
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Nova] Seperate out 'soft-deleted' instances 
  from
  'deleted' ones?
  
  On 07/15/2013 10:03 AM, Matt Riedemann wrote:
   I have a patch up for review on this:
  
   _https://review.openstack.org/#/c/35061/_
  
   However, this doesn't fix the vm_states.SOFT_DELETED mapping in
   nova.api.openstack.common so if you show an instance with
   vm_states.SOFT_DELETED, the response status will be 'DELETED'.
  
   I'd like to see if there are any opinions on if this should come back
   as 'SOFT_DELETED' or if everyone is OK with mapping soft-delete to
   'DELETED' in the v3 API?
  
  I would like to see them merged.  Having multiple kinds of deleted records 
  is
  really confusing, and leads to bugs.  The more public we make this, the 
  harder
  it will be to fix it in the future.
  

The only place I can see it being confusing is if an admin is using 'deleted'
in the search options, in which case I think they would need some way to
distinguish between soft- and hard-deleted instances.
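For illustration, a minimal sketch (state names illustrative, not the exact nova constants) of the mapping at issue: both delete flavors collapse to the same public status, so API clients cannot tell them apart:

```python
SOFT_DELETED = "soft-delete"
DELETED = "deleted"

# The internal vm_state -> public API status mapping under discussion.
_STATE_TO_STATUS = {
    DELETED: "DELETED",
    SOFT_DELETED: "DELETED",   # the collapse being debated for the v3 API
}

def api_status(vm_state):
    return _STATE_TO_STATUS.get(vm_state, "UNKNOWN")

print(api_status(SOFT_DELETED))  # DELETED
```

Exposing a separate 'SOFT_DELETED' status would just mean giving the second key its own value, at the cost of making the distinction public API.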



Re: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

2013-07-16 Thread Sandy Walsh
There's a ton of reviews/comparisons out there, only a google away.



From: Doug Hellmann [doug.hellm...@dreamhost.com]
Sent: Tuesday, July 16, 2013 1:45 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

Great, I think I had the Mako syntax mixed up with a different templating 
language that depended on having a DOM to work on.

Can someone put together a more concrete analysis than this is working so we 
can compare the tools? :-)

Doug

On Tue, Jul 16, 2013 at 12:29 PM, Nachi Ueno 
na...@ntti3.com wrote:
Hi Doug

Mako looks OK for config generation
This is code in review.
https://review.openstack.org/#/c/33148/23/neutron/services/vpn/device_drivers/template/ipsec.conf.template



2013/7/16 Doug Hellmann 
doug.hellm...@dreamhost.com:



 On Tue, Jul 16, 2013 at 9:51 AM, Daniel P. Berrange 
 berra...@redhat.com
 wrote:

 On Tue, Jul 16, 2013 at 09:41:55AM -0400, Solly Ross wrote:
  (This email is with regards to https://review.openstack.org/#/c/36316/)
 
  Hello All,
 
  I have been implementing the Guru Meditation Report blueprint
  (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report),
  and the question of a templating engine was raised.  Currently, my
  version of the code includes the Jinja2 templating engine
  (http://jinja.pocoo.org/), which is modeled after the Django
  templating engine (it was designed to be an implementation of the
  Django templating engine without requiring the use of Django), which
  is used in Horizon.  Apparently, the Cheetah templating engine
  (http://www.cheetahtemplate.org/) is used in a couple places in Nova.
 
  IMO, the Jinja template language produces much more readable templates,
  and I think is the better choice for inclusion in the Report framework.
   It also shares a common format with Django (making it slightly easier
  to write for people coming from that area), and is also similar to
  template engines for other languages. What does everyone else think?

 Repeating my comments from the review...

 I don't have an opinion on whether Jinja or Cheetah is a better
 choice, since I've essentially never used either of them (beyond
 deleting usage of Cheetah from libvirt). I do, however, feel we
 should not needlessly use multiple different templating libraries
 across OpenStack. We should take care to standardize on one option
 that is suitable for all our needs. So if the consensus is that
 Jinja is better, then IMHO, there would need to be an blueprint
 + expected timeframe to port existing Ceetah usage to use Jinja.

 Regards,
 Daniel


 The most current release of Cheetah is from 2010. I don't have a problem
 adding a new dependency on a tool that is actively maintained, with a plan
 to migrate off of the older tool to come later.

 The Neutron team seems to want to use Mako
 (https://review.openstack.org/#/c/37177/). Maybe we should pick one? Keep in
 mind that we won't always be generating XML or HTML, so my first question is
 how well does Mako work for plain text?

 Doug


 --
 |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org -o- http://virt-manager.org :|
 |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [DB][Migrations] Switching to using of Alembic

2013-07-16 Thread Boris Pavlovic
To ALL,

About Alembic and SQLite ALTERs:

There is an easy way to provide ALTER support for SQLite in Alembic (there
are stub functions that just need to be implemented).
As far as I know, the author agrees, if somebody helps with this.



About switching to Alembic:

We are not able to use a tool that automatically merges N small migrations
into one huge migration until all our migrations are written in Alembic.

There is no magic: we have only 2 ways to deal with the problems and bugs
that could be caused by manually merging migrations, plus the tons of bugs
in sqlalchemy-migrate.

1) step by step (openstack method)
  There are special tests (test_migrations) that run migrations on real
data against all backends. So we should:

  a) improve these tests to check all behaviors // there are a lot of hidden
bugs
  b) port one migration at a time (only one small migration) to Alembic
  c) check that we made the same changes in schema on all backends
  d) merge all old migrations into one using Alembic (automatically).
  So it could be done in a safe way.

2.a) huge 2 steps
  1. Merge all migrations into one huge migration manually (dropping all
tests in test_migrations)
  e.g. in Nova there was the patch https://review.openstack.org/#/c/35748/
  I don't believe that there are no mistakes in this migration, and
nobody is able to check it. // because of tons of hidden bugs in old
migrations and sqla-migrate.
  2. Replace this migration with Alembic
   I don't believe that there will be a way to check that there are no
bugs

2.b) suicide mode (1 big step)
  Merge and switch in one step =)


We (Mirantis guys and girls) are ready to implement the first plan in all
projects (step by step, with tons of tests and checks). I think that
Sveta, Anya, Julya, Elena, Yuri, Sergey, Alexei, Roma, Viktor, Alex and I
will be able to cover and finish all this work in all projects in the
Icehouse cycle.





If you don't understand why we have only 2 ways to switch to Alembic, read
at least this:

What we have:
1) the migration scripts produce different schemas on different backends
2) the DB models are not synced with any of these schemas.
What it means:
1) We have to support SQLite in migrations (until we finish the work on
running unit tests against a specified backend,
https://review.openstack.org/#/c/33236/)
2) We are not able to add a test that checks that migrations and models are
synced (https://review.openstack.org/#/c/34212/)
3) We are not able to use a DB created from the models for unit tests.


Best regards,
Boris Pavlovic




On Tue, Jul 16, 2013 at 9:02 PM, Dolph Mathews dolph.math...@gmail.comwrote:


 On Tue, Jul 16, 2013 at 11:53 AM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




 On Tue, Jul 16, 2013 at 11:51 AM, Roman Podolyaka 
 rpodoly...@mirantis.com wrote:

 Hello, stackers!

  Most of you who are interested in the work around the DB in OpenStack must
  have read this thread [1] started by Boris Pavlovic. Boris made an overview
  of the work our team is doing to make the DB code better.

  One of our main goals is to switch from sqlalchemy-migrate to Alembic
  for applying DB schema migrations. sqlalchemy-migrate was abandoned for a
  long time, and even now that it has been adopted by the OpenStack
  community, we'd better use a project which is supported upstream
  (especially since the author of this project is the same person who also
  authored SQLAlchemy).

 The switch isn't going to be simple though. We have a few problems:

 1) stable releases must be supported for some time, so we can't switch
 from migrate to alembic immediately

  The switch should probably be made when the previous migration scripts are
  compacted, so that all new migration scripts will use Alembic. Switching
  projects as big as Nova is hard, so we decided to gain some experience
  with porting smaller ones first. Alexei Kornienko is currently working
  on adding support for Alembic migrations to Ceilometer [3].


 I like the idea of switching when we do a major release. I don't think we
 need to port the old migrations to alembic, though, as I said to Alexei on
 the review. We should be able to continue to have sqlalchemy-migrate
 distributed as part of OpenStack for the legacy migrations until those
 migrations can be dropped entirely. Updating would mean running
 sqlalchemy-migrate, and then alembic, to apply the full set of migrations.
 From what I understand the policy to be, since we have sqlalchemy-migrate
 scripts in Havana we could stop creating new ones in Icehouse and drop the
 use of sqlalchemy-migrate entirely in the J release when the Havana
 migrations are removed.


 This would be my preferred approach as well.


 Doug



 Our long term goal is to switch all projects from using of
 sqlalchemy-migrate to Alembic.

 2) we rely on schema migrations to set up an SQLite database for running
 tests

 Nova and possibly other projects use schema migrations to set up an
  SQLite database for running tests. Unfortunately, we can't use the model
  definitions to generate the initial DB schema, because those 

Re: [openstack-dev] [DB][Migrations] Switching to using of Alembic

2013-07-16 Thread David Ripton

On 07/16/2013 01:58 PM, Boris Pavlovic wrote:


There is no magic: we have only 2 ways to deal with the problems and bugs
that could be caused by manually merging migrations, plus the tons of bugs
in sqlalchemy-migrate.

1) step by step (openstack method)
   There are special tests (test_migrations) that run migrations on
real data against all backends. So we should:

   a) improve these tests to check all behaviors // there are a lot of
hidden bugs
   b) port one migration at a time (only one small migration) to Alembic
   c) check that we made the same changes in schema on all backends
   d) merge all old migrations into one using Alembic (automatically).
   So it could be done in a safe way.

2.a) huge 2 steps
   1. Merge all migrations into one huge migration manually (dropping all
tests in test_migrations)
   e.g. in Nova there was the patch https://review.openstack.org/#/c/35748/
   I don't believe that there are no mistakes in this migration, and
nobody is able to check it. // because of tons of hidden bugs in old
migrations and sqla-migrate.
   2. Replace this migration with Alembic
   I don't believe that there will be a way to check that there are no
bugs

2.b) suicide mode (1 big step)
   Merge and switch in one step =)


We have compacted migrations before, and there's a test document for how 
to verify that the big migration has exactly the same output as the 
series of small migrations.  See 
https://wiki.openstack.org/wiki/Database_migration_testing. Dan Prince
is the expert on this.


I think the right process is:

1. Wait until the very beginning of Icehouse cycle.  (But not after we 
have new migrations for Icehouse.)


2. Compact all migrations into 2xx_havana.py (for SQLAlchemy-migrate)

3. Test that it's perfect via above test plan plus whatever enhancements 
we think of.


4. Manually convert 2xx_havana.py (for SQLAlchemy-migrate) into Alembic, 
and verify that it's still perfect.


5. Deprecate the SQLAlchemy-migrate version and announce that new 
migrations should be in Alembic.


#4 is hard work but not impossible.  I have some old code that does 90% 
of the work, so we only have to do the other 90%.


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron -- creating networks with no assigned tenant

2013-07-16 Thread Jay Pipes

On 07/16/2013 02:03 PM, Nachi Ueno wrote:

Hi Jay

It is not supported now, and there is no bp proposed to do that.
It can be done via API (CLI), so we can write a script for tenant setup.


Hi Nachi,

IMO, this is a step backwards and a deficiency. Basically, the user 
interface was needlessly made more complicated for the tenant. Instead 
of just launching their instance, the tenant now needs to create a 
subnet and then launch their instance, passing the subnet ID in the nova 
boot command.


-jay


2013/7/16 Jay Pipes jaypi...@gmail.com:

The way that folsom and nova + nova-network works is that you create a bunch
of unassigned (no tenant assigned to the networks) networks and when a
tenant first launches an instance, nova grabs an available network for the
tenant and assigns it to the tenant. Then each instance the tenant spins up
after that gets an IP in the specific network it was assigned.

How can I do the same thing with Neutron? I don't want my tenants or an
admin to have to manually create a network in Neutron every time a tenant is
added.

Thanks,
-jay



Re: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

2013-07-16 Thread Doug Hellmann
Well, yeah. I have my opinion, too, since I've used a few of these in the
past. But I was trying to encourage a bit of discussion, rather than just
settling on whatever template library made it through the review process in
openstack/requirements first. :-)

For example, I'm less concerned with how easy the language is to use
than I am with how actively maintained the library is and how widely it is
used elsewhere. Runtime performance will matter more in some cases than
others (it doesn't seem like generating configuration files needs to be all
that fast, compared to web pages).

Doug


On Tue, Jul 16, 2013 at 1:58 PM, Sandy Walsh sandy.wa...@rackspace.comwrote:

  There's a ton of reviews/comparisons out there, only a google away.


  --
 *From:* Doug Hellmann [doug.hellm...@dreamhost.com]
 *Sent:* Tuesday, July 16, 2013 1:45 PM

 *To:* OpenStack Development Mailing List
 *Subject:* Re: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

   Great, I think I had the Mako syntax mixed up with a different
 templating language that depended on having a DOM to work on.

  Can someone put together a more concrete analysis than "this is working"
 so we can compare the tools? :-)

  Doug

 On Tue, Jul 16, 2013 at 12:29 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Doug

  Mako looks OK for config generation.
  This is the code in review:

 https://review.openstack.org/#/c/33148/23/neutron/services/vpn/device_drivers/template/ipsec.conf.template



 2013/7/16 Doug Hellmann doug.hellm...@dreamhost.com:
  
 
 
  On Tue, Jul 16, 2013 at 9:51 AM, Daniel P. Berrange 
 berra...@redhat.com
  wrote:
 
  On Tue, Jul 16, 2013 at 09:41:55AM -0400, Solly Ross wrote:
   (This email is with regards to
 https://review.openstack.org/#/c/36316/)
  
   Hello All,
  
   I have been implementing the Guru Meditation Report blueprint
   (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report
 ),
   and the question of a templating engine was raised.  Currently, my
   version of the code includes the Jinja2 templating engine
   (http://jinja.pocoo.org/), which is modeled after the Django
   templating engine (it was designed to be an implementation of the
   Django templating engine without requiring the use of Django), which
   is used in Horizon.  Apparently, the Cheetah templating engine
   (http://www.cheetahtemplate.org/) is used in a couple places in
 Nova.
  
   IMO, the Jinja template language produces much more readable
 templates,
   and I think is the better choice for inclusion in the Report
 framework.
It also shares a common format with Django (making it slightly
 easier
   to write for people coming from that area), and is also similar to
   template engines for other languages. What does everyone else think?
 
  Repeating my comments from the review...
 
  I don't have an opinion on whether Jinja or Cheetah is a better
  choice, since I've essentially never used either of them (beyond
  deleting usage of Cheetah from libvirt). I do, however, feel we
  should not needlessly use multiple different templating libraries
  across OpenStack. We should take care to standardize on one option
  that is suitable for all our needs. So if the consensus is that
  Jinja is better, then IMHO, there would need to be a blueprint
  + expected timeframe to port existing Cheetah usage to use Jinja.
 
  Regards,
  Daniel
 
 
  The most current release of Cheetah is from 2010. I don't have a problem
  adding a new dependency on a tool that is actively maintained, with a
 plan
  to migrate off of the older tool to come later.
 
  The Neutron team seems to want to use Mako
  (https://review.openstack.org/#/c/37177/). Maybe we should pick one?
 Keep in
  mind that we won't always be generating XML or HTML, so my first
 question is
  how well does Mako work for plain text?
 
  Doug
 
 
  --
  |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
  |: http://libvirt.org -o- http://virt-manager.org :|
  |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
  |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
 

Re: [openstack-dev] [DB][Migrations] Switching to using of Alembic

2013-07-16 Thread Boris Pavlovic
David,

1. The Dan Prince method is equally useful and can help in both cases.

2. We are not able to block all of OpenStack for half a year to implement
your plan.

3. We are able to convert only the Grizzly migrations, not Havana (because
our customers should be able to switch from Grizzly to Havana).

4. We don't need to wait to make SQLAlchemy-migrate deprecated, because the
script that allows using Alembic and migrate in the same way is almost
ready.

5. I just don't understand: we are spending tons of time to fix and unify
the work around the DB in the whole of OpenStack. It is a pretty complex
and large task, and we should choose the way that is simpler.

Our approach:
1) Port N migrations to Alembic step by step (this work is pretty
simple), with tons of tests + the Dan Prince method.

Your approach:
1) Merge N migrations into one big migration (much more complex than
our one step).
2) Convert one really huge SQLAlchemy-migrate migration into one really
huge Alembic migration (also more complex than our one step).

So instead of doing one long but simple step, we are doing two complex
ones. Could you explain to me why?

Best regards,
Boris Pavlovic


On Tue, Jul 16, 2013 at 10:16 PM, David Ripton drip...@redhat.com wrote:

 On 07/16/2013 01:58 PM, Boris Pavlovic wrote:

  There is no magic: we have only 2 ways to deal with the problems and bugs
 that could be caused by manually merging migrations, plus the tons of bugs
 in sqlalchemy-migrate.

 1) step by step (openstack method)
    There are special tests (test_migrations) that run migrations on
 real data against all backends. So we should:

    a) improve these tests to check all behaviors // there are a lot of
 hidden bugs
    b) port one migration at a time (only one small migration) to Alembic
    c) check that we made the same changes in schema on all backends
    d) merge all old migrations into one using Alembic (automatically).
    So it could be done in a safe way.

 2.a) huge 2 steps
    1. Merge all migrations into one huge migration manually (dropping all
 tests in test_migrations)
    e.g. in Nova there was the patch https://review.openstack.org/#/c/35748/
    I don't believe that there are no mistakes in this migration, and
 nobody is able to check it. // because of tons of hidden bugs in old
 migrations and sqla-migrate.
    2. Replace this migration with Alembic
    I don't believe that there will be a way to check that there are no
 bugs

 2.b) suicide mode (1 big step)
    Merge and switch in one step =)


 We have compacted migrations before, and there's a test document for how
 to verify that the big migration has exactly the same output as the series
 of small migrations.  See
 https://wiki.openstack.org/wiki/Database_migration_testing
 Dan Prince is the expert on this.

 I think the right process is:

 1. Wait until the very beginning of Icehouse cycle.  (But not after we
 have new migrations for Icehouse.)

 2. Compact all migrations into 2xx_havana.py (for SQLAlchemy-migrate)

 3. Test that it's perfect via above test plan plus whatever enhancements
 we think of.

 4. Manually convert 2xx_havana.py (for SQLAlchemy-migrate) into Alembic,
 and verify that it's still perfect.

 5. Deprecate the SQLAlchemy-migrate version and announce that new
 migrations should be in Alembic.

 #4 is hard work but not impossible.  I have some old code that does 90% of
 the work, so we only have to do the other 90%.

 --
 David Ripton   Red Hat   drip...@redhat.com




Re: [openstack-dev] [DB][Migrations] Switching to using of Alembic

2013-07-16 Thread Boris Pavlovic
David,

By the way, it is my code ;) and we will implement ALTERs in Alembic in the
same way.

Best regards,
Boris Pavlovic





On Tue, Jul 16, 2013 at 10:09 PM, David Ripton drip...@redhat.com wrote:

 On 07/16/2013 12:09 PM, Dolph Mathews wrote:


  On Tue, Jul 16, 2013 at 10:51 AM, Roman Podolyaka
  rpodoly...@mirantis.com wrote:


  We are going to implement ALTER support in Alembic for SQLite in the
 next few weeks.


 I'm a little lost on this ... sqlite doesn't support ALTER, so what
 exactly is being added to alembic? Is the alembic community receptive or
 interested?


 There is some code in Nova (on its way into Oslo) to work around not being
 able to alter tables in SQLite.  It deletes the old table and adds the
 modified version as a new table.  That's the best you can do without
 modifying SQLite itself.

 The Alembic README specifically mentions this SQLite issue and says we
 will support these features provided someone takes the initiative to
 implement and test.  So, yeah, he'll take these patches.  That means we'll
 need to use the future version of Alembic with this feature, though.


 --
 David Ripton   Red Hat   drip...@redhat.com



Re: [openstack-dev] olso-config dev requirement

2013-07-16 Thread Mark McLoughlin
On Tue, 2013-07-16 at 13:37 -0400, Monty Taylor wrote:
 
 On 07/16/2013 11:42 AM, Doug Hellmann wrote:
  
  
  
  On Tue, Jul 16, 2013 at 7:58 AM, Mark McLoughlin mar...@redhat.com
  mailto:mar...@redhat.com wrote:
  
  On Mon, 2013-07-15 at 14:28 -0400, Doug Hellmann wrote:
   On Mon, Jul 15, 2013 at 11:03 AM, Monty Taylor
  mord...@inaugust.com mailto:mord...@inaugust.com wrote:
  
    I was looking into dependency processing as part of some pbr
  change,
which got me to look at the way we're doing oslo-config dev
  requirements
again. To start, this email is not about causing us to change
  what we're
doing, only possibly the mechanics of what we put in the
requirements.txt file- or to get a more specific example of what
  we're
solving so that I can make a test case for it and ensure we're
  handling
it properly.
   
Currently, we have this:
   
 -f http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz#egg=oslo.config-1.2.0a3
 oslo.config>=1.2.0a3
   
As the way to specify to install - 1.2.0a3 of oslo.config. I
  believe
this construct has grown in response to a sequence of issues,
  but it's
complex and fragile, so I'd like to explore what's going on.
   
The simplest answer would be simply to replace it with:
   
http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz
   
which will quite happily cause pip to install the contents of that
tarball. It does not declare a version, but it's not necessary to,
because the tarball has only one version that it is. Is there a
  problem
we have identified where the wrong thing is happening?
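For reference, the two requirements.txt constructs being compared would look roughly like this (illustrative; the exact version pin and the `>=` specifier are assumptions based on the quoted text):

```text
# Current construct: a find-links URL plus a version specifier
-f http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz#egg=oslo.config-1.2.0a3
oslo.config>=1.2.0a3

# Proposed simplification: a direct tarball URL, no version specifier
http://tarballs.openstack.org/oslo.config/oslo.config-1.2.0a3.tar.gz
```

The first form lets pip resolve a version constraint against an extra download location; the second installs that exact tarball unconditionally, which is what Doug's ordering concern below is about.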
   
  
I've tested that I get the right thing in a virtualenv if I make
  that
change from pip installing a tarball, pip installing the
  requirements
directly and python setup.py install. Is there anything I'm missing?
   
Monty
   
  
  
   Without the version specifier, we are relying on all projects to
  install
   the right version from that tarball link when we run devstack, but
  we have
   no guarantee that they are moving to new releases in lockstep.
  
  Yep, that's it. The thing to test would be if some projects have the
  1.2.0a2 tarball link and an one has the 1.2.0a3 link because it depends
  on an API that's new in 1.2.0a3.
  
  
  It's worse than that. What gets installed will depend on the order
  devstack installs the projects and what their respective requirements
  lists say. It is possible to end up with compatible source code
  installed, but with a version number that setuptools thinks is not
  compatible based on projects' requirements. In that case, setuptools may
  not let us load plugins, so services will start but not actually work.
 
 Thank you. This is what I needed here.
 
 BTW - I put this up:
 
 https://review.openstack.org/#/c/35705/
 
 To take a stab at installing our global requirements list first, and
 then installing our projects in that environment. I also just made:
 
 https://review.openstack.org/#/c/37295/
 
 Which would add oslo.config and oslo.messaging as git repos to the
 devstack system, so that we can track trunk like we do with the other
 projects.

Awesome, thanks - I've had that on my TODO list for weeks.

It's probably a bit early about oslo.messaging, since nothing's using it
yet ... but no harm in having it in there.

Thanks again,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cheetah vs Jinja

2013-07-16 Thread Matt Dietz
I'll second the jinja2 recommendation. I also use it with Pyramid, and
find it non-obtrusive to write and easy to understand.

-Original Message-
From: Sandy Walsh sandy.wa...@rackspace.com
Reply-To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Date: Tuesday, July 16, 2013 11:34 AM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Cheetah vs Jinja

I've used jinja2 on many projects ... it's always been solid.

-S


From: Solly Ross [sr...@redhat.com]
Sent: Tuesday, July 16, 2013 10:41 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

(This email is with regards to https://review.openstack.org/#/c/36316/)

Hello All,

I have been implementing the Guru Meditation Report blueprint
(https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report), and
the question of a templating engine was raised.  Currently, my version of
the code includes the Jinja2 templating engine (http://jinja.pocoo.org/),
which is modeled after the Django templating engine (it was designed to
be an implementation of the Django templating engine without requiring
the use of Django), which is used in Horizon.  Apparently, the Cheetah
templating engine (http://www.cheetahtemplate.org/) is used in a couple
places in Nova.

IMO, the Jinja template language produces much more readable templates,
and I think is the better choice for inclusion in the Report framework.
It also shares a common format with Django (making it slightly easier to
write for people coming from that area), and is also similar to template
engines for other languages. What does everyone else think?

Best Regards,
Solly Ross
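To illustrate the readability argument, a small plain-text report in Jinja2 syntax might look like the following. This is a hypothetical sketch, not the actual Guru Meditation Report template; the variable names are invented for the example:

```jinja
{# Hypothetical Guru Meditation-style plain-text report #}
========================================
Report for {{ service_name }}
Generated: {{ generated_at }}
========================================
{% for thread in threads %}
Thread {{ thread.id }}: {{ thread.state }}
{% endfor %}
```

Jinja renders such a template with a plain dict of values, so it works for plain text and config files just as well as for HTML.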



Re: [openstack-dev] Program Proposal: DevStack

2013-07-16 Thread Joshua Harlow
Anvil seems to fit in said program.

Its role is similar: it helps build OpenStack as packages (RPMs) for
deployers/developers or for quick testing, and it provides similar
functionality to DevStack.

http://anvil.readthedocs.org/

Both are actively supported and both are tools for developers and deployers...

-josh

Sent from my really tiny device...

On Jul 16, 2013, at 10:12 AM, Thierry Carrez thie...@openstack.org wrote:

Mark McLoughlin wrote:
On Mon, 2013-07-15 at 14:14 -0400, Russell Bryant wrote:
On 07/15/2013 11:39 AM, Dean Troyer wrote:
DevStack plays multiple roles in the development process for OpenStack.

Does it really make sense to be its own program?  There was mention of
just making it a part of infra or QA.  QA actually makes the most sense
to me, since devstack's primary use case is to make it easy to test
OpenStack.

How about if the title of the program was 'Developer Tools'?

Is devstack the only project which would fall into that bucket?

The trick with programs is that it's not just about finding themes and
aggregating existing projects around them. At the heart of a program is an
existing team of people that works (well) together towards that common goal.

Putting devstack in a Developer tools bucket only makes sense if the
team that cares about devstack also cares about those other developer
tools. You don't really want to create an ugly stepchild effect by
adding random projects onto unwilling people's plates.

--
Thierry Carrez (ttx)



Re: [openstack-dev] SQLAlchemy-migrate needs a new maintainer

2013-07-16 Thread Joe Gordon
I am happy to help too, but I don't have much extra bandwidth at the moment,
so I can only play a supporting role as a core and not a leadership role.


On Sat, Jul 13, 2013 at 2:58 PM, Michael Still mi...@stillhq.com wrote:

 I'm happy to help, although I'm pretty busy...

 Michael

 On Fri, Jul 12, 2013 at 9:37 PM, Boris Pavlovic bo...@pavlovic.me wrote:
  Hi Sean,
 
  I agree to help with sqlalchemy-migrate until we remove it.
  But probably there should be one more person mikal
 
  Best regards,
  Boris Pavlovic
 
 
  On Fri, Jul 12, 2013 at 3:31 PM, Sean Dague s...@dague.net wrote:
 
  On 07/12/2013 04:29 AM, Thierry Carrez wrote:
 
  Monty Taylor wrote:
 
  This brings us to the most important question:
 
  Who wants to be on the core team?
 
 
  That's the important question indeed. Accepting it (permanently or
  temporarily) under stackforge is an easy decision. But it's useless
  unless we have a set of people sufficiently interested in it not
  bitrotting to volunteer to maintain it...
 
 
  I'd recommend the nova-db subteam folks, like: jog0, dripton, boris-42
 as
  good people to be +2 on this.
 
  -Sean
 
  --
  Sean Dague
  http://dague.net
 
 
 



 --
 Rackspace Australia



Re: [openstack-dev] SQLAlchemy-migrate needs a new maintainer

2013-07-16 Thread Boris Pavlovic
Monty, thanks a lot!

By the way there are 2 more guys that have a lot of experience with
sqlalchemy-migrate:

1) Roman (rpodolyaka)
2) Viktor (vsergeyev)

Best regards,
Boris Pavlovic




On Tue, Jul 16, 2013 at 11:12 PM, Joe Gordon joe.gord...@gmail.com wrote:

 I am happy to help too, but I don't have much extra bandwidth at the moment,
 so I can only play a supporting role as a core and not a leadership role.


 On Sat, Jul 13, 2013 at 2:58 PM, Michael Still mi...@stillhq.com wrote:

 I'm happy to help, although I'm pretty busy...

 Michael

 On Fri, Jul 12, 2013 at 9:37 PM, Boris Pavlovic bo...@pavlovic.me
 wrote:
  Hi Sean,
 
  I agree to help with sqlalchemy-migrate until we remove it.
  But probably there should be one more person mikal
 
  Best regards,
  Boris Pavlovic
 
 
  On Fri, Jul 12, 2013 at 3:31 PM, Sean Dague s...@dague.net wrote:
 
  On 07/12/2013 04:29 AM, Thierry Carrez wrote:
 
  Monty Taylor wrote:
 
  This brings us to the most important question:
 
  Who wants to be on the core team?
 
 
  That's the important question indeed. Accepting it (permanently or
  temporarily) under stackforge is an easy decision. But it's useless
  unless we have a set of people sufficiently interested in it not
  bitrotting to volunteer to maintain it...
 
 
  I'd recommend the nova-db subteam folks, like: jog0, dripton, boris-42
 as
  good people to be +2 on this.
 
  -Sean
 
  --
  Sean Dague
  http://dague.net
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron -- creating networks with no assigned tenant

2013-07-16 Thread Nachi Ueno
Hi Jay

I agree that use case is needed.
# But some users want to set up their own networks, so that use case
will also exist.

This function needs the keystone notification bp (and it looks targeted for H3).
https://blueprints.launchpad.net/keystone/+spec/notifications

I'm not sure whether this kind of function should be in Neutron or not.
IMO, it would be best handled by some kind of orchestrator.

Best
Nachi




2013/7/16 Jay Pipes jaypi...@gmail.com:
 On 07/16/2013 02:03 PM, Nachi Ueno wrote:

 Hi Jay

 It is not supported now, and there is no bp proposed to do that.
 It can be done via API (CLI), so we can write a script for tenant setup.


 Hi Nachi,

 IMO, this is a step backwards and a deficiency. Basically, the user
 interface was needlessly made more complicated for the tenant. Instead of
 just launching their instance, the tenant now needs to create a subnet and
 then launch their instance, passing the subnet ID in the nova boot command.

 -jay


 2013/7/16 Jay Pipes jaypi...@gmail.com:

 The way that folsom and nova + nova-network works is that you create a
 bunch
 of unassigned (no tenant assigned to the networks) networks and when a
 tenant first launches an instance, nova grabs an available network for
 the
 tenant and assigns it to the tenant. Then each instance the tenant spins
 up
 after that gets an IP in the specific network it was assigned.

 How can I do the same thing with Neutron? I don't want my tenants or an
 admin to have to manually create a network in Neutron every time a tenant
 is
 added.

 Thanks,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSLO] Comments/Questions on Messaging Wiki

2013-07-16 Thread Mark McLoughlin
Hi William,

I think Doug has done a good job of answering all these, but here's
another set of answers to make sure there's no confusion :)

On Fri, 2013-07-12 at 17:40 -0400, William Henry wrote:
 Hi all, 
 
 I've been reading through the Messaging Wiki and have some comments.

The docs generated from the code are now up on:

  http://docs.openstack.org/developer/oslo.messaging/

There should be some useful clarifying stuff in there too. Indeed some
of the thinking has moved on a bit since the wiki page was written.

  Not criticisms, just comments and questions. 
 I have found this to be a very useful document. Thanks. 
 
 1. There are multiple backend transport drivers which implement the
 API semantics using different messaging systems - e.g. RabbitMQ, Qpid,
 ZeroMQ. While both sides of a connection must use the same transport
 driver configured in the same way, the API avoids exposing details of
 transports so that code written using one transport should work with
 any other transport. 
 
 The good news for AMQP 1.0 users is that technically both sides of
 the connection do not have to use the same transport driver. In pre-AMQP
 1.0 days this was the case. But today interoperability between AMQP
 1.0 implementations has been demonstrated. 

Yeah, the point was more that, e.g., you need to use the zmq driver on
both sides.

I could imagine us having multiple amqp 1.0 interoperable drivers. I
don't know what the use case would be for using one of those drivers on
one side and another on the other side, but there's no reason why it
should be impossible.

 2. I notice under the RPC concepts section that you mention Exchanges
 as a container in which topics are scoped. Is this exchange a pre AMQP
 1.0 artifact or just a general term for oslo.messaging that is loosely
 based on the pre-AMQP 1.0 artifact called an Exchange? i.e. are you
 assuming that messaging implementations have something called an
 exchange? Or do you mean that messaging implementations can scope a
 topic and in oslo we call that scoping an exchange? 

Yeah, it really is only loosely related to the AMQP concept.

It's purely a namespace thing. You could e.g. have two Nova deployments
with exactly the same messaging transport (and e.g. sending messages
over the same broker, using the same topic names, etc.) and you could
keep them separated from one another by using a different exchange name
for each.

The reason we've stuck with the name exchange is that we have a
control_exchange configuration variable (defaulting to e.g. 'nova')
that serves roughly this purpose now, and we want to continue using it
rather than renaming it to something else.
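
Mark's point that the exchange is purely a namespace can be sketched with a
toy in-process bus (plain Python, not oslo.messaging's real implementation):

```python
from collections import defaultdict

class ToyBus:
    """Toy broker: messages are keyed by (exchange, topic), so the exchange
    acts purely as a namespace that scopes topic names."""
    def __init__(self):
        self.queues = defaultdict(list)

    def send(self, exchange, topic, msg):
        self.queues[(exchange, topic)].append(msg)

    def receive(self, exchange, topic):
        return self.queues[(exchange, topic)].pop(0)

bus = ToyBus()
# Two Nova deployments share one broker and the same topic name ("compute"),
# but stay isolated because each uses its own exchange name.
bus.send("nova-east", "compute", "boot instance 1")
bus.send("nova-west", "compute", "boot instance 2")
print(bus.receive("nova-east", "compute"))  # boot instance 1
```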

Which raises a point about all of this - we need to be able to
interoperate with existing OpenStack deployments using the current RPC
code. So, we really don't have the luxury of changing on-the-wire
formats, basic messaging semantics, configuration settings, etc.

oslo.messaging is mostly about cleaning up the python API which services
use to issue/receive RPCs and send notifications.

 3. Some messaging nomenclature: The way the wiki describes RPC
 Invoke Method on One of Multiple Servers is more like a queue than a
 topic. In messaging a queue is something that multiple consumers can
 attach to and one of them gets and services a message/request. A topic
 is where 1+ consumers are connected and each receives the message
 and each can service it as it sees fit. In pre-AMQP 1.0 terms what
 this seems to describe is a direct exchange. And a direct exchange can
 have multiple consumers listening to a queue on that exchange.
 (Remember that fanout is just a generalization of topic in that all
 consumers get all fanout messages - there are no sub-topics etc.) 
 
 In AMQP 1.0 the addressing doesn't care or know about exchanges but it
 can support this queue type behavior on an address or topic type
 behavior on an address. 
 
 I know this isn't about AMQP specifically but therefore this is even
 more important. Topics are pub/sub with multiple consumer/services
 responding to a single message. Queues are next consumer up gets the
 next message. 
 
 (BTW I've seen this kind of confusion also in early versions of
 MCollective in Puppet.) 
 
 It might be better to change some of the references to topic to
 address. This would solve the problem. i.e. a use case where one of
 many servers listening on an address services a message/request. And
 later all of servers listening on an address service a
 message/request. Addressing also solves the one-to-one as the address
 is specific to the server (and the others don't have to receive and
 reject the message).
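
A toy dispatcher (a sketch, not any real driver) makes the queue-vs-topic
distinction concrete: round-robin delivery models "invoke a method on one of
multiple servers", while fanout delivers every message to every server:

```python
import itertools

class ToyTopic:
    """Toy delivery semantics for a group of servers on one address."""
    def __init__(self, servers):
        self.servers = servers
        self._next = itertools.cycle(servers)

    def cast(self, msg):
        # Queue semantics: exactly one server handles each message.
        next(self._next)(msg)

    def fanout_cast(self, msg):
        # Fanout semantics: every server handles every message.
        for handler in self.servers:
            handler(msg)

log = []
topic = ToyTopic([lambda m: log.append(("s1", m)),
                  lambda m: log.append(("s2", m))])
topic.cast("a")         # handled by s1 only
topic.cast("b")         # handled by s2 only
topic.fanout_cast("c")  # handled by both
print(log)
```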

It sounds to me like the qpid-proton based transport driver could easily
map the semantics we expect from topic/fanout to amqp 1.0 addresses.

The 'topic' nomenclature is pretty baked into the various services doing
RPC and notifications, especially in the naming of configuration
options.

The basic semantics is a nova compute service listens on the 

Re: [openstack-dev] Program Proposal: Trove

2013-07-16 Thread Mark McLoughlin
Hey,

On Tue, 2013-07-16 at 10:37 -0700, Michael Basnight wrote:
 Official Title: OpenStack Database as a Service
 Initial PTL: Michael Basnight mbasni...@gmail.com
 
 Mission Statement: To provide scalable and reliable Cloud Database as
 a Service functionality for both relational and non-relational
 database engines, and to continue to improve its fully-featured and
 extensible open source framework.

Seems fine to me, but I'd see adding non-relational support as an
expansion of Trove's scope as approved by the TC.

I know we discussed whether it should be in scope from the beginning,
but I thought we didn't want to rule out the possibility of an entirely new
team of folks coming up with a NoSQL as a Service project.

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] mid-cycle sprint?

2013-07-16 Thread Chris K
I'll cast my vote for the Aug dates as they are better for me.

chris


On Tue, Jul 16, 2013 at 8:42 AM, Byrum, Clint clint.by...@hp.com wrote:

 So my original thinking was lets get together before H3 and push on things
 that need to get done for H3. If, however, people would rather we get
 together after H3, there is plenty to be done in that time frame, including
 documentation updates and closing critical bugs before the release.

 I am available and thus +1 for either Aug 19/20 or Sep 16/17.
 
 From: Robert Collins [robe...@robertcollins.net]
 Sent: Monday, July 15, 2013 11:32 PM
 To: OpenStack Development Mailing List
 Cc: Joe Gordon; Taylor, Monty; dpri...@redhat.com; Cody Somerville;
 der...@redhat.com; Hernando Rivero, Juan Gregorio (HPCS); mr. nobody;
 Chris Blumentritt; Jesse Keating; Steve Baker; Byrum, Clint; Rainya Mosher;
 Johannes Erdfelt; lucasago...@gmail.com; Elizabeth Krumbach Joseph; Brian
 Lamar; Chris Jones; Devananda van der Veen
 Subject: Re: [TripleO] mid-cycle sprint?

 So consensus seems to be Seattle - cool, let's lock that in.

 As for dates, the basic tension is: either Aug 19th, *before* H3, or
 after - e.g. 9th or 16th Sept. (I have a conference on the 6/7/8th here
 in NZ, so doing either the 2nd Sept or the 9th will be tricky. I can
 do the 10th and miss the first day.) I don't think I'm critical,
 though I would rather be there the whole time.

 Will give this 24h for feedback then am picking a date (so that we can
 book and stuff a month out rather than on silly expensive fares).

 -Rob

 On 14 July 2013 05:35, Devananda van der Veen devananda@gmail.com
 wrote:
  Adding my vote for the first week of September.
 ...


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday July 16th at 19:00 UTC

2013-07-16 Thread Elizabeth Krumbach Joseph
On Mon, Jul 15, 2013 at 9:32 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday July 16th, at 19:00 UTC in
 #openstack-meeting

Meeting logs and minutes:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-16-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-16-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-16-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cheetah vs Jinja

2013-07-16 Thread Nachi Ueno
Hi folks

Jinja2 looks to have +3.
Is this the winner?

# My code can be done with Jinja2 as well.

So if we choose Jinja2, what version range is needed?

Thanks
Nachi



2013/7/16 Matt Dietz matt.di...@rackspace.com:
 I'll second the jinja2 recommendation. I also use it with Pyramid, and
 find it non-obtrusive to write and easy to understand.

 -Original Message-
 From: Sandy Walsh sandy.wa...@rackspace.com
 Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: Tuesday, July 16, 2013 11:34 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Cheetah vs Jinja

I've used jinja2 on many projects ... it's always been solid.

-S


From: Solly Ross [sr...@redhat.com]
Sent: Tuesday, July 16, 2013 10:41 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

(This email is with regards to https://review.openstack.org/#/c/36316/)

Hello All,

I have been implementing the Guru Meditation Report blueprint
(https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report), and
the question of a templating engine was raised.  Currently, my version of
the code includes the Jinja2 templating engine (http://jinja.pocoo.org/),
which is modeled after the Django templating engine (it was designed to
be an implementation of the Django templating engine without requiring
the use of Django), which is used in Horizon.  Apparently, the Cheetah
templating engine (http://www.cheetahtemplate.org/) is used in a couple
places in Nova.

IMO, the Jinja template language produces much more readable templates,
and I think is the better choice for inclusion in the Report framework.
It also shares a common format with Django (making it slightly easier to
write for people coming from that area), and is also similar to template
engines for other languages. What does everyone else think?

Best Regards,
Solly Ross
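
For illustration, here is a minimal Jinja2 template using the Django-style
syntax under discussion (the template content is invented; this assumes the
jinja2 package is installed):

```python
from jinja2 import Template

# Django-style syntax: {{ ... }} substitutes values, {% ... %} is control flow.
report = Template(
    "Report for {{ host }}\n"
    "{% for t in threads %}- thread {{ t }}\n{% endfor %}"
)
output = report.render(host="compute-01", threads=[1, 2])
print(output)
```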

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSLO] Comments/Questions on Messaging Wiki

2013-07-16 Thread William Henry


- Original Message -
 
 
 - Original Message -
  Hi William,
  
  I think Doug has done a good job of answering all these, but here's
  another set of answers to make sure there's no confusion :)
  
  On Fri, 2013-07-12 at 17:40 -0400, William Henry wrote:
   Hi all,
   
   I've been reading through the Messaging Wiki and have some comments.
  
  The docs generated from the code are now up on:
  
http://docs.openstack.org/developer/oslo.messaging/
  
   There should be some useful clarifying stuff in there too. Indeed some
   of the thinking has moved on a bit since the wiki page was written.
  
Not criticisms, just comments and questions.
   I have found this to be a very useful document. Thanks.
   
   1. There are multiple backend transport drivers which implement the
   API semantics using different messaging systems - e.g. RabbitMQ, Qpid,
   ZeroMQ. While both sides of a connection must use the same transport
   driver configured in the same way, the API avoids exposing details of
   transports so that code written using one transport should work with
   any other transport.
   
    The good news for AMQP 1.0 users is that technically both sides of
    the connection do not have to use the same transport driver. In pre-AMQP
   1.0 days this was the case. But today interoperability between AMQP
   1.0 implementations has been demonstrated.
  
   Yeah, the point was more that, e.g., you need to use the zmq driver on
  both sides.
  
  I could imagine us having multiple amqp 1.0 interoperable drivers. I
  don't know what the use case would be for using one of those drivers on
  one side and another on the other side, but there's no reason why it
  should be impossible.
  
   2. I notice under the RPC concepts section that you mention Exchanges
   as a container in which topics are scoped. Is this exchange a pre AMQP
   1.0 artifact or just a general term for oslo.messaging that is loosely
   based on the pre-AMQP 1.0 artifact called an Exchange? i.e. are you
   assuming that messaging implementations have something called an
   exchange? Or do you mean that messaging implementations can scope a
   topic and in oslo we call that scoping an exchange?
  
  Yeah, it really is only loosely related to the AMQP concept.
  
  It's purely a namespace thing. You could e.g. have two Nova deployments
  with exactly the same messaging transport (and e.g. sending messages
  over the same broker, using the same topic names, etc.) and you could
  keep them separated from one another by using a different exchange name
  for each.
  
  The reason we've stuck with the name exchange is that we have a
  control_exchange configuration variable (defaulting to e.g. 'nova')
   that serves roughly this purpose now, and we want to continue using it
  rather than renaming it to something else.
  
  Which raises a point about all of this - we need to be able to
  interoperate with existing OpenStack deployments using the current RPC
  code. So, we really don't have the luxury of changing on-the-wire
  formats, basic messaging semantics, configuration settings, etc.
  
  oslo.messaging is mostly about cleaning up the python API which services
  use to issue/receive RPCs and send notifications.
  
   3. Some messaging nomenclature: The way the wiki describes RPC 
   Invoke Method on One of Multiple Servers  is more like a queue than a
   topic. In messaging a queue is something that multiple consumers can
   attach to and one of them gets and services a message/request. A topic
    is where 1+ consumers are connected and each receives the message
   and each can service it as it sees fit. In pre-AMQP 1.0 terms what
    this seems to describe is a direct exchange. And a direct exchange can
   have multiple consumers listening to a queue on that exchange.
   (Remember that fanout is just a generalization of topic in that all
   consumers get all fanout messages - there are no sub-topics etc.)
   
   In AMQP 1.0 the addressing doesn't care or know about exchanges but it
   can support this queue type behavior on an address or topic type
   behavior on an address.
   
   I know this isn't about AMQP specifically but therefore this is even
   more important. Topics are pub/sub with multiple consumer/services
   responding to a single message. Queues are next consumer up gets the
   next message.
   
   (BTW I've seen this kind of confusion also in early versions of
   MCollective in Puppet.)
   
   It might be better to change some of the references to topic to
   address. This would solve the problem. i.e. a use case where one of
   many servers listening on an address services a message/request. And
   later all of servers listening on an address service a
   message/request. Addressing also solves the one-to-one as the address
   is specific to the server (and the others don't have to receive and
   reject the message).
  
  It sounds to me like the qpid-proton based transport driver could easily
  map the semantics we 

Re: [openstack-dev] Cheetah vs Jinja

2013-07-16 Thread Michael Basnight
Also, jinja2 is in requirements. We have no specific requirements on a
particular version, so feel free to pin it to a specific one. We (trove) use it
to generate config templates.

https://github.com/openstack/requirements/commit/96f38365ce94d2135f7744c93bae0ce92a747195

On Jul 16, 2013, at 1:10 PM, Nachi Ueno wrote:

 Hi folks
 
 Jinja2 looks to have +3.
 Is this the winner?
 
 # My code can be done with Jinja2 as well.
 
 So if we choose Jinja2, what version range is needed?
 
 Thanks
 Nachi
 
 
 
 2013/7/16 Matt Dietz matt.di...@rackspace.com:
 I'll second the jinja2 recommendation. I also use it with Pyramid, and
 find it non-obtrusive to write and easy to understand.
 
 -Original Message-
 From: Sandy Walsh sandy.wa...@rackspace.com
 Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: Tuesday, July 16, 2013 11:34 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Cheetah vs Jinja
 
 I've used jinja2 on many projects ... it's always been solid.
 
 -S
 
 
 From: Solly Ross [sr...@redhat.com]
 Sent: Tuesday, July 16, 2013 10:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja
 
 (This email is with regards to https://review.openstack.org/#/c/36316/)
 
 Hello All,
 
 I have been implementing the Guru Meditation Report blueprint
 (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report), and
 the question of a templating engine was raised.  Currently, my version of
 the code includes the Jinja2 templating engine (http://jinja.pocoo.org/),
 which is modeled after the Django templating engine (it was designed to
 be an implementation of the Django templating engine without requiring
 the use of Django), which is used in Horizon.  Apparently, the Cheetah
 templating engine (http://www.cheetahtemplate.org/) is used in a couple
 places in Nova.
 
 IMO, the Jinja template language produces much more readable templates,
 and I think is the better choice for inclusion in the Report framework.
 It also shares a common format with Django (making it slightly easier to
 write for people coming from that area), and is also similar to template
 engines for other languages. What does everyone else think?
 
 Best Regards,
 Solly Ross
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cheetah vs Jinja

2013-07-16 Thread Nachi Ueno
Hi Folks

Thanks
I'll update code with Jinja2

2013/7/16 Michael Basnight mbasni...@gmail.com:
 Also, jinja2 is in requirements. We have no specific requirements on a
 particular version, so feel free to pin it to a specific one. We (trove)
 use it to generate config templates.

 https://github.com/openstack/requirements/commit/96f38365ce94d2135f7744c93bae0ce92a747195

 On Jul 16, 2013, at 1:10 PM, Nachi Ueno wrote:

 Hi folks

  Jinja2 looks to have +3.
  Is this the winner?

  # My code can be done with Jinja2 as well.

  So if we choose Jinja2, what version range is needed?

 Thanks
 Nachi



 2013/7/16 Matt Dietz matt.di...@rackspace.com:
 I'll second the jinja2 recommendation. I also use it with Pyramid, and
 find it non-obtrusive to write and easy to understand.

 -Original Message-
 From: Sandy Walsh sandy.wa...@rackspace.com
 Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: Tuesday, July 16, 2013 11:34 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Cheetah vs Jinja

 I've used jinja2 on many projects ... it's always been solid.

 -S

 
 From: Solly Ross [sr...@redhat.com]
 Sent: Tuesday, July 16, 2013 10:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

 (This email is with regards to https://review.openstack.org/#/c/36316/)

 Hello All,

 I have been implementing the Guru Meditation Report blueprint
 (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report), and
 the question of a templating engine was raised.  Currently, my version of
 the code includes the Jinja2 templating engine (http://jinja.pocoo.org/),
 which is modeled after the Django templating engine (it was designed to
 be an implementation of the Django templating engine without requiring
 the use of Django), which is used in Horizon.  Apparently, the Cheetah
 templating engine (http://www.cheetahtemplate.org/) is used in a couple
 places in Nova.

 IMO, the Jinja template language produces much more readable templates,
 and I think is the better choice for inclusion in the Report framework.
 It also shares a common format with Django (making it slightly easier to
 write for people coming from that area), and is also similar to template
 engines for other languages. What does everyone else think?

 Best Regards,
 Solly Ross

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cheetah vs Jinja

2013-07-16 Thread Doug Hellmann
Thanks, Nachi.

Doug


On Tue, Jul 16, 2013 at 4:47 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Folks

 Thanks
 I'll update code with Jinja2

 2013/7/16 Michael Basnight mbasni...@gmail.com:
  Also, jinja2 is in requirements. We have no specific requirements on a
 particular version, so feel free to pin it to a specific one. We (trove)
 use it to generate config templates.
 
 
 https://github.com/openstack/requirements/commit/96f38365ce94d2135f7744c93bae0ce92a747195
 
  On Jul 16, 2013, at 1:10 PM, Nachi Ueno wrote:
 
  Hi folks
 
  Jinja2 looks to have +3.
  Is this the winner?
 
  # My code can be done with Jinja2 as well.
 
  So if we choose Jinja2, what version range is needed?
 
  Thanks
  Nachi
 
 
 
  2013/7/16 Matt Dietz matt.di...@rackspace.com:
  I'll second the jinja2 recommendation. I also use it with Pyramid, and
  find it non-obtrusive to write and easy to understand.
 
  -Original Message-
  From: Sandy Walsh sandy.wa...@rackspace.com
  Reply-To: OpenStack Development Mailing List
  openstack-dev@lists.openstack.org
  Date: Tuesday, July 16, 2013 11:34 AM
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] Cheetah vs Jinja
 
  I've used jinja2 on many projects ... it's always been solid.
 
  -S
 
  
  From: Solly Ross [sr...@redhat.com]
  Sent: Tuesday, July 16, 2013 10:41 AM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja
 
  (This email is with regards to
 https://review.openstack.org/#/c/36316/)
 
  Hello All,
 
  I have been implementing the Guru Meditation Report blueprint
  (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report),
 and
  the question of a templating engine was raised.  Currently, my
 version of
  the code includes the Jinja2 templating engine (
 http://jinja.pocoo.org/),
  which is modeled after the Django templating engine (it was designed
 to
  be an implementation of the Django templating engine without requiring
  the use of Django), which is used in Horizon.  Apparently, the Cheetah
  templating engine (http://www.cheetahtemplate.org/) is used in a
 couple
  places in Nova.
 
  IMO, the Jinja template language produces much more readable
 templates,
  and I think is the better choice for inclusion in the Report
 framework.
  It also shares a common format with Django (making it slightly easier
 to
  write for people coming from that area), and is also similar to
 template
  engines for other languages. What does everyone else think?
 
  Best Regards,
  Solly Ross
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron -- creating networks with no assigned tenant

2013-07-16 Thread Nachi Ueno
Hi Jay

IMO, you are mixing 'What' and 'How'.
This is my understanding.

What is needed (requirement):
[Requirement 1] A network and subnet will be assigned to a new tenant
automatically, based on configuration.

How to do it (implementation):
- [nova-network] pre-create a list of networks that are not owned by any tenant
- [Neutron] an additional neutron API call on tenant creation
(an additional script is needed), or better support (keystone integration)

In the Folsom days, tenants couldn't create networks by themselves via
the API, so that implementation was needed.

Also, requirement 1 doesn't fit all cases, so IMO it is sufficient to
provide an API to do that, because combining API calls is not a burden.

Best
Nachi

2013/7/16 Jay Pipes jaypi...@gmail.com:
 On 07/16/2013 03:30 PM, Nachi Ueno wrote:

 Hi Jay

  I agree that use case is needed.
  # But some users want to set up their own networks, so that use case
  will also exist.

  This function needs the keystone notification bp (and it looks targeted for
 H3).
 https://blueprints.launchpad.net/keystone/+spec/notifications

  I'm not sure whether this kind of function should be in Neutron or not.
  IMO, it would be best handled by some kind of orchestrator.


 I don't think you understand the use case :) Let me explain.

 Previously, when a user launched an instance in Folsom (without Quantum,
 using nova-network), the user did not need to specify a network manually
 when launching their instance. If a network was available -- i.e. it was not
 in use by another tenant -- then *during the instance launch*, that network
 was assigned to be used by the tenant, and their instances would
 automatically receive an IP address in that network.

 Previously, if the user wanted to specify a particular network when
 launching an instance, they could certainly do so. However, this was not
 *required* -- as noted above, an available network would automatically be
 assigned the tenant if one was available.

 In the current Nova -- Quantum interaction, that default behaviour of
 automatically assigning a tenant to an available network is now gone, and I
 believe it is a mistake that this was allowed to happen.

 This doesn't have anything to do with Keystone. This has to do with the
 decision by the Quantum development team to:

 * Force networks to have a tenant ID [1]
 * Force subnets to have a tenant ID [2]

 If Quantum allowed for multiple networks to be created without a tenant ID
 -- as was the case in Nova with nova-network -- then during the process of
 launching an instance, if the user did NOT specify a network then Nova could
 call out to Quantum to get the first available network. But since the
 decision was made to enforce tenant ID being not null in the Quantum network
 API (not the database model, which allows a NULL tenant ID), that is not
 possible anymore.

 And I think the user experience suffers because of that.

 Note that the use case of specifying a network on instance creation was and
 still is supported by Nova with nova-network. This conversation has strictly
 been about the removal of the auto-assignment behaviour.

 Best,
 -jay

 [1]
 https://github.com/openstack/neutron/blob/master/neutron/api/v2/attributes.py#L533
 [2]
 https://github.com/openstack/neutron/blob/master/neutron/api/v2/attributes.py#L625


 2013/7/16 Jay Pipes jaypi...@gmail.com:

 On 07/16/2013 02:03 PM, Nachi Ueno wrote:


 Hi Jay

 It is not supported now, and there is no bp proposed to do that.
 It can be done via API (CLI), so we can write a script for tenant setup.



 Hi Nachi,

 IMO, this is a step backwards and a deficiency. Basically, the user
 interface was needlessly made more complicated for the tenant. Instead of
 just launching their instance, the tenant now needs to create a subnet
 and
 then launch their instance, passing the subnet ID in the nova boot
 command.

 -jay


 2013/7/16 Jay Pipes jaypi...@gmail.com:


 The way that folsom and nova + nova-network works is that you create a
 bunch
 of unassigned (no tenant assigned to the networks) networks and when a
 tenant first launches an instance, nova grabs an available network for
 the
 tenant and assigns it to the tenant. Then each instance the tenant
 spins
 up
 after that gets an IP in the specific network it was assigned.

 How can I do the same thing with Neutron? I don't want my tenants or an
 admin to have to manually create a network in Neutron every time a
 tenant
 is
 added.

 Thanks,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 

Re: [openstack-dev] Cheetah vs Jinja

2013-07-16 Thread Nachi Ueno
Hi Michael

No, I'm not going to break your trove heart.
# Just kidding :)

requirements.txt has Jinja2, so I will not propose changes for this.

Best
Nachi


2013/7/16 Michael Basnight mbasni...@gmail.com:
 Nachi, If/when you do pin it down to a particular version in 
 openstack/requirements, plz add me as a reviewer. I'd like to make sure we 
 don't break trove. <3

 On Jul 16, 2013, at 1:47 PM, Nachi Ueno wrote:

 Hi Folks

 Thanks
 I'll update code with Jinja2

 2013/7/16 Michael Basnight mbasni...@gmail.com:
 Also, jinja2 is in requirements. We have no specific requirements on a 
 particular version, so feel free to pin it to a specific one. We (trove) use it 
 to generate config templates.

 https://github.com/openstack/requirements/commit/96f38365ce94d2135f7744c93bae0ce92a747195

 On Jul 16, 2013, at 1:10 PM, Nachi Ueno wrote:

 Hi folks

 Jinja2 looks to have +3.
 Is this the winner?

 # My code can be done by Jinja2 also.

 So if we choose Jinja2, what version range is needed?

 Thanks
 Nachi



 2013/7/16 Matt Dietz matt.di...@rackspace.com:
 I'll second the jinja2 recommendation. I also use it with Pyramid, and
 find it non-obtrusive to write and easy to understand.

 -Original Message-
 From: Sandy Walsh sandy.wa...@rackspace.com
 Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: Tuesday, July 16, 2013 11:34 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Cheetah vs Jinja

 I've used jinja2 on many projects ... it's always been solid.

 -S

 
 From: Solly Ross [sr...@redhat.com]
 Sent: Tuesday, July 16, 2013 10:41 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Change I30b127d6] Cheetah vs Jinja

 (This email is with regards to https://review.openstack.org/#/c/36316/)

 Hello All,

 I have been implementing the Guru Meditation Report blueprint
 (https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report), and
 the question of a templating engine was raised.  Currently, my version of
 the code includes the Jinja2 templating engine (http://jinja.pocoo.org/),
 which is modeled after the Django templating engine (it was designed to
 be an implementation of the Django templating engine without requiring
 the use of Django), which is used in Horizon.  Apparently, the Cheetah
 templating engine (http://www.cheetahtemplate.org/) is used in a couple
 places in Nova.

 IMO, the Jinja template language produces much more readable templates,
 and I think is the better choice for inclusion in the Report framework.
 It also shares a common format with Django (making it slightly easier to
 write for people coming from that area), and is also similar to template
 engines for other languages. What does everyone else think?

 Best Regards,
 Solly Ross

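[Editor's note: for readers comparing the two syntaxes being discussed, a minimal Jinja2 rendering example — a sketch only; it requires the jinja2 package:]

```python
from jinja2 import Template

# {{ ... }} substitutes variables and {% ... %} holds control flow --
# the same syntax family as Django's template engine.
template = Template("Hello {{ name }}!{% if admin %} (admin){% endif %}")
print(template.render(name="OpenStack", admin=False))  # Hello OpenStack!
```

Cheetah would express the same idea with $name placeholders and #if/#end if directives, which is the readability difference Solly refers to.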


Re: [openstack-dev] Neutron -- creating networks with no assigned tenant

2013-07-16 Thread Jay Pipes

On 07/16/2013 05:14 PM, Nachi Ueno wrote:

Hi Jay

IMO, you are mixing 'what' and 'how'.
This is my understanding.

What is needed (requirement):
 [Requirement 1] A network and subnet will be assigned to a new tenant
 automatically, by configuration

How to do it (implementation):
 - [nova-network] pre-create a list of networks which are not owned
 - [Neutron] an additional neutron API call on tenant creation
   (an additional script is needed), or better support (keystone integration)


No, this is not done on tenant creation. That's my point. This was done 
on instance launch with communication between Nova and nova-network (and 
should be done on instance creation between Nova and Quantum, IMO).



In the Folsom days, tenants couldn't create networks themselves via the API,
so that implementation was needed.


Again, this isn't about a tenant creating networks :) It's about 
launching instances without needing to manually supply a network ID.



Also, requirement 1 doesn't fit all cases, so IMO it is
sufficient to provide an API to do that,
because combining API calls is not a burden.


All I'm asking for is an API to create networks without a tenant ID :) 
And an API call that returns an available network. In other words, I'm 
asking for the Nova network-related API that made things easy on the 
user in Folsom and earlier.
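
[Editor's note: to make the request concrete, a rough sketch of that auto-assignment flow — illustrative names only, not an actual Nova or Neutron API:]

```python
# Hypothetical sketch of the nova-network-style flow described above: on
# instance boot, if the user supplied no network, claim the first
# unassigned (tenant_id=None) network for the tenant.
def pick_network_for_tenant(networks, tenant_id, requested_network_id=None):
    """networks: list of dicts with 'id' and 'tenant_id' (None = unassigned)."""
    if requested_network_id is not None:
        # an explicit choice is still supported, as in Folsom
        return requested_network_id
    for net in networks:
        if net['tenant_id'] is None:
            net['tenant_id'] = tenant_id  # assign the network to the tenant
            return net['id']
    raise LookupError('no unassigned networks available')
```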


Best,
-jay


Best
Nachi


Re: [openstack-dev] Program Proposal: Trove

2013-07-16 Thread Michael Basnight

On Jul 16, 2013, at 7:45 PM, Haomai Wang wrote:
 
 On 2013-7-17, at 4:42 AM, Michael Basnight mbasni...@gmail.com wrote:
 
 On Jul 16, 2013, at 12:49 PM, Mark McLoughlin wrote:
 
 Hey,
 
 On Tue, 2013-07-16 at 10:37 -0700, Michael Basnight wrote:
 Official Title: OpenStack Database as a Service
 Initial PTL: Michael Basnight mbasni...@gmail.com
 
 Mission Statement: To provide scalable and reliable Cloud Database as
 a Service functionality for both relational and non-relational
 database engines, and to continue to improve its fully-featured and
 extensible open source framework.
 
 Seems fine to me, but I'd see adding non-relational support as an
 expansion of Trove's scope as approved by the TC.
 
 I know we discussed whether it should be in scope from the beginning,
 but I thought we didn't want to rule out the possibility of an entirely new
 team of folks coming up with a NoSQL as a Service project.
 
 Can't disagree with this, because of the initial TC ruling. FWIW we are 
 working on some NoSQL stuff _in_ trove at present. Maybe I should bring it 
 up at the next TC meeting? I've done a Redis POC in the past and can show 
 the code for that. It was before the rename and has a small amount of bitrot, 
 but it's something I can definitely show to the group.
 +1, I'm interested in Trove and I want to do some work for it. NoSQL is 
 easier to control than a SQL database, and I want to join in to implement 
 LevelDB.

Great! Feel free to join in. The API is extensible enough at present to allow 
you to implement LevelDB, or any other NoSQL data store. It's _almost_ as simple 
as adding a custom guest manager [1] and making sure you edit the API config 
to specify service_type=name_of_service_type. There are a few more small 
gotchas, but that's a good 80% of the implementation. You can also create 
strategies to define how you backup/restore a 'service_type', but I won't get 
into that on-list. We are also working on a cluster API to allow cluster 
instrumentation, which I will be sending out in the next few weeks for the 
community to scrutinize.

Find me on irc, #openstack-trove, user hub_cap for more information. 

[1] https://github.com/openstack/trove/tree/master/trove/guestagent/manager
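
[Editor's note: as a very rough illustration of the shape of a custom guest manager — the base interface and method names below are simplified guesses, not Trove's actual interface; see [1] for the real code:]

```python
# Illustrative-only skeleton of a datastore-specific guest manager,
# loosely modeled on the pattern Michael describes.  The 'app' object
# is a hypothetical helper that knows how to drive the datastore process.
class LevelDBManager(object):
    def __init__(self, app):
        self.app = app

    def prepare(self, context, packages, overrides=None):
        # called once after the guest VM boots: install, configure, start
        self.app.install(packages)
        if overrides:
            self.app.write_config(overrides)
        self.app.start()

    def restart(self, context):
        self.app.stop()
        self.app.start()
```

The prepare call is roughly the hook Trove drives after the guest boots; backup/restore strategies plug in separately, as Michael notes.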


Re: [openstack-dev] ipdb debugging in Neutron test cases?

2013-07-16 Thread Roman Podolyaka
Hi,

Ensure that stdout isn't captured by the corresponding fixture:

OS_STDOUT_CAPTURE=0 python -m testtools.run
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
Tests running...

> /home/rpodolyaka/src/neutron/neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py(251)test_port_update()
    250
--> 251 with contextlib.nested(
    252 mock.patch.object(self.agent.int_br,
'get_vif_port_by_id'),


OS_STDOUT_CAPTURE=1 python -m testtools.run
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
Tests running...
==
ERROR:
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
--
Empty attachments:
  pythonlogging:''
  stdout

Traceback (most recent call last):
  File neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py, line
248, in test_port_update
import ipdb

()

AttributeError: '_io.BytesIO' object has no attribute 'name'

Thanks,
Roman
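
[Editor's note: the root cause is visible in the last two frames of Qiu Yu's traceback below — IPython's IOStream copies attributes such as `name` from the stream it wraps, and the BytesIO object that the capture fixture substitutes for stdout has no `name`. A standalone reproduction, not using IPython itself:]

```python
import io

# IPython's IOStream wrapper copies attributes like 'write', 'flush' and
# 'name' off the stream it is given.  sys.stdout normally has all three,
# but the BytesIO object installed by the stdout-capture fixture has no
# 'name', so the copy loop raises AttributeError.
def wrap_stream(stream, attrs=('write', 'flush', 'name')):
    return {attr: getattr(stream, attr) for attr in attrs}

try:
    wrap_stream(io.BytesIO())
except AttributeError as err:
    print(err)  # '_io.BytesIO' object has no attribute 'name'
```

Setting OS_STDOUT_CAPTURE=0 avoids the substitution entirely, which is why Roman's command works.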


On Wed, Jul 17, 2013 at 5:58 AM, Qiu Yu unic...@gmail.com wrote:

 Hi,

 I'm wondering, has anyone ever tried using ipdb in Neutron test
 cases? The same trick that used to work with Nova cannot be
 applied in Neutron.

 For example, you can trigger one specific test case. But once the ipdb
 line is added, the following exception is raised from IPython.

 Any thoughts? How can I make ipdb work with Neutron test case? Thanks!

 $ source .venv/bin/activate
 (.venv)$ python -m testtools.run

 quantum.tests.unit.openvswitch.test_ovs_quantum_agent.TestOvsQuantumAgent.test_port_update

 ==
 ERROR:
 quantum.tests.unit.openvswitch.test_ovs_quantum_agent.TestOvsQuantumAgent.test_port_update
 --
 Empty attachments:
   pythonlogging:''
   stderr
   stdout

 Traceback (most recent call last):
   File quantum/tests/unit/openvswitch/test_ovs_quantum_agent.py,
 line 163, in test_port_update
 from ipdb import set_trace; set_trace()
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/ipdb/__init__.py,
  line 16, in <module>
 from ipdb.__main__ import set_trace, post_mortem, pm, run,
 runcall, runeval, launch_ipdb_on_exception
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/ipdb/__main__.py,
  line 26, in <module>
 import IPython
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/__init__.py,
  line 43, in <module>
 from .config.loader import Config
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/__init__.py,
  line 16, in <module>
 from .application import *
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/application.py,
  line 31, in <module>
 from IPython.config.configurable import SingletonConfigurable
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/configurable.py,
  line 26, in <module>
 from loader import Config
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/loader.py,
  line 27, in <module>
 from IPython.utils.path import filefind, get_ipython_dir
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/path.py,
  line 25, in <module>
 from IPython.utils.process import system
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/process.py,
  line 27, in <module>
 from ._process_posix import _find_cmd, system, getoutput, arg_split
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/_process_posix.py,
  line 27, in <module>
 from IPython.utils import text
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/text.py,
  line 29, in <module>
 from IPython.utils.io import nlprint
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/io.py,
  line 78, in <module>
 stdout = IOStream(sys.stdout, fallback=devnull)
   File
 /opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/io.py,
 line 42, in __init__
 setattr(self, meth, getattr(stream, meth))
 AttributeError: '_io.BytesIO' object has no attribute 'name'


 --
 Qiu Yu



Re: [openstack-dev] [Keystone] Memcache user token index and PKI tokens

2013-07-16 Thread Kieran Spear

On 17/07/2013, at 12:26 AM, Morgan Fainberg m...@metacloud.com wrote:

 On Tue, Jul 16, 2013 at 4:01 AM, Kieran Spear kisp...@gmail.com wrote:
 
 On 16/07/2013, at 1:10 AM, Adam Young ayo...@redhat.com wrote:
 On 07/15/2013 04:06 AM, Kieran Spear wrote:
 Hi all,
 
 I want to backport the fix for the Token List in Memcache can consume
 an entire memcache page bug[1] to Grizzly, but I had a couple of
 questions:
 
 1. Why do we need to store the entire token data in the
 usertoken-userid key? This data always seems to be hashed before
 indexing into the 'token-tokenid' keys anyway. The size of the
 memcache data for a user's token list currently grows by 4k every time
 a new PKI token is created. It doesn't take long to hit 1MB at this
 rate even with the above fix.
 Yep. The reason, though, is that we either take a memory/storage hit (store 
 the whole token) or a performance hit (reproduce the token data) and we've 
 gone for the storage hit.
 
 In this case it looks like we're taking a hit from both, since the PKI token 
 id from the user token index is retrieved, then hashed and then that key 
 is used to retrieve the token from the tokens-%s page anyway.
 
 
 
 
 2. Every time it creates a new token, Keystone loads each token from
 the user's token list with a separate memcache call so it can throw it
 away if it's expired. This seems excessive. Is it anything to worry
 about? If it just checked the first two tokens you'd get the same
 effect on a longer time scale.
 
 I guess part of the answer is to decrease our token expiry time, which
 should mitigate both issues. Failing that we'd consider moving to the
 SQL backend.
  How about doing both?  But if you move to the SQL backend, remember to 
  periodically clean up the token table, or you will have storage issues 
  there as well.  No silver bullet, I am afraid.
 
 I think we're going to stick with memcache for now (the devil we know :)). 
 With (1) and (2) fixed and the token expiration time tweaked I think 
 memcache will do okay.
 
 Kieran
 
 
 
 Cheers,
 Kieran
 
 [1] https://bugs.launchpad.net/keystone/+bug/1171985
 
 
 Hi Kieran,
 
 I've looked into the potential bug you described and it appears that
 there has been a change in the master branch to support the idea of
 pluggable token providers (much better implementation than the driver
 being responsible for the token itself).  This change modified how the
 memcache driver stored the IDs, and performed the CMS hashing function
 when the manager returned the token_id to the driver, instead of
 in-line within the driver.  The original fix should have been correct
 in hashing the PKI token to the short-form ID.  Your fix to simply
 hash the tokens is the correct one and more closely mirrors how the
 original fix was implemented.
 
 If you are interested in the reviews that implement the new pluggable
 provider(s): https://review.openstack.org/#/c/33858/ (V3) and
 https://review.openstack.org/#/c/34421/ (V2.0).
 
 Going with the shorter TTL on the Tokens is a good idea for various
 reasons depending on the token driver.  I know that the SQL driver
 (provided you cleanup expired tokens) has worked well for my company,
 but I want to move to the memcache driver soon.

Thanks for the info. Good to know this is fixed on master. Spotted one possible 
upgrade issue in the V3 patch which I've submitted a bug for:

https://bugs.launchpad.net/keystone/+bug/1202050

And a bug for getting my one-line fix into stable/grizzly:

https://bugs.launchpad.net/keystone/+bug/1202053

Cheers,
Kieran

 
 Cheers,
 Morgan Fainberg
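
[Editor's note: the one-line fix discussed above amounts to indexing the per-user token list by a short-form hashed ID rather than the full PKI blob. A sketch, using a generic md5 stand-in for keystone's actual CMS hash helper; the function name is hypothetical:]

```python
import hashlib

# Long CMS/PKI token ids are hashed down to a fixed-size key before being
# stored in the per-user token list, so the list grows by ~32 bytes per
# token instead of ~4 KB.  md5 stands in for keystone's real hash helper.
def short_token_id(token_id, max_len=64):
    if len(token_id) > max_len:  # PKI tokens are long CMS blobs
        return hashlib.md5(token_id.encode('utf-8')).hexdigest()
    return token_id  # UUID-style ids are already short

user_token_index = [short_token_id(t) for t in
                    ['a' * 4096,                           # PKI-sized blob
                     '3ab6bf8a6c4f46e7a39a55cbd54f7a9e']]  # UUID token
```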

