Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Chris Behrens

Hi Vish,

I’m jumping in slightly late on this, but I also have an interest in this. I’m 
going to preface this by saying that I have not read this whole thread yet, so 
I apologize if I repeat things, say anything that is addressed by previous 
posts, or say anything that doesn’t jibe with what you’re looking for. :) But what you describe 
below sounds like exactly a use case I’d come up with.

Essentially I want another level above project_id. Depending on the exact use 
case, you could name it ‘wholesale_id’ or ‘reseller_id’...and yeah, ‘org_id’ 
fits in with your example. :) I think that I had decided I’d call it ‘domain’ 
to be more generic, especially after seeing keystone had a domain concept.

Your idea below (prefixing the project_id) is exactly one way I thought of 
doing this to be least intrusive. I, however, thought that this would not be 
efficient. So, I was thinking about proposing that we add ‘domain’ to all of 
our models. But that limits your hierarchy and I don’t necessarily like that. 
:)  So I think that if the queries are truly indexed as you say below, you have 
a pretty good approach. The one issue that comes to mind is the chance of 
collision. For example, if project ids (or orgs) could contain a 
‘.’, then ‘.’ as a delimiter won’t work.
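The collision concern can be shown in a few lines. This is a sketch only — the ids and the scope predicate below are illustrative, mirroring the prefix-match idea from the prototype discussed in this thread:

```python
def in_scope(scope, project_id, sep='.'):
    """True if project_id is the scope itself or nested under it."""
    return project_id == scope or project_id.startswith(scope + sep)

# Unambiguous as long as raw ids never contain the delimiter:
assert in_scope('orga', 'orga.projecta')
assert not in_scope('orga', 'orgb.projecta')

# But if a *top-level* project were allowed to contain '.', it becomes
# indistinguishable from a child of 'orga' -- the collision:
rogue_top_level_id = 'orga.projecta'   # a flat id, not actually under orga
assert in_scope('orga', rogue_top_level_id)
```

With uuids (which cannot contain '.') the delimiter is safe; the problem only appears if arbitrary names are used as ids.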

My requirements could be summed up pretty well by thinking of this as ‘virtual 
clouds within a cloud’. Deploy a single cloud infrastructure that could look 
like many multiple clouds. ‘domain’ would be the key into each different 
virtual cloud. Accessing one virtual cloud doesn’t reveal any details about 
another virtual cloud.

What this means is:

1) domain ‘a’ cannot see instances (or resources in general) in domain ‘b’. It 
doesn’t matter if domain ‘a’ and domain ‘b’ share the same tenant ID. If you 
act with the API on behalf of domain ‘a’, you cannot see your instances in 
domain ‘b’.
2) Flavors per domain. domain ‘a’ can have different flavors than domain ‘b’.
3) Images per domain. domain ‘a’ could see different images than domain ‘b’.
4) Quotas and quota limits per domain. Your instances in domain ‘a’ don’t count 
against quotas in domain ‘b’.
5) Go as far as using different config values depending on what domain you’re 
using. This one is fun. :)

etc.
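Point 5 could be as simple as a per-domain override table with fallback to global defaults. A minimal sketch — the option name and values here are made up for illustration, not an actual nova config scheme:

```python
# Global defaults, as in a regular flat deployment.
DEFAULTS = {'cpu_allocation_ratio': 16.0}

# Hypothetical per-domain overrides; unknown domains fall through to DEFAULTS.
DOMAIN_OVERRIDES = {
    'domain_a': {'cpu_allocation_ratio': 4.0},
}

def get_config(domain, key):
    """Look up a config value for a domain, falling back to the default."""
    return DOMAIN_OVERRIDES.get(domain, {}).get(key, DEFAULTS[key])

print(get_config('domain_a', 'cpu_allocation_ratio'))  # 4.0
print(get_config('domain_b', 'cpu_allocation_ratio'))  # 16.0
```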

I’m not sure if you were looking to go that far or not. :) But I think that our 
ideas are close enough, if not exact, that we can achieve both of our goals 
with the same implementation.

I’d love to be involved with this. I am not sure that I currently have the time 
to help with implementation, however.

- Chris



On Feb 3, 2014, at 1:58 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Hello Again!
 
 At the meeting last week we discussed some options around getting true 
 multitenancy in nova. The use case that we are trying to support can be 
 described as follows:
 
 Martha, the owner of ProductionIT, provides IT services to multiple 
 Enterprise clients. She would like to offer cloud services to Joe at 
 WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for 
 WidgetMaster and he has multiple QA and Development teams with many users. 
 Joe needs the ability to create users, projects, and quotas, as well as the 
 ability to list and delete resources across WidgetMaster. Martha needs to be 
 able to set the quotas for both WidgetMaster and SuperDevShop; manage users, 
 projects, and objects across the entire system; and set quotas for the client 
 companies as a whole. She also needs to ensure that Joe can't see or mess 
 with anything owned by Sam.
 
 As per the plan I outlined in the meeting I have implemented a 
 Proof-of-Concept that would allow me to see what changes were required in 
 nova to get scoped tenancy working. I used a simple approach of faking out 
 hierarchy by prepending the id of the larger scope to the id of the smaller 
 scope. Keystone uses uuids internally, but for ease of explanation I will 
 pretend like it is using the name. I think we can all agree that 
 ‘orga.projecta’ is more readable than 
 ‘b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8’.
 
 The code basically creates the following five projects:
 
 orga
 orga.projecta
 orga.projectb
 orgb
 orgb.projecta
 
 I then modified nova so that everywhere it searches or limits policy by 
 project_id, it does a prefix match instead. This means that someone using 
 project ‘orga’ should be able to list/delete instances in orga, orga.projecta, 
 and orga.projectb.
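The scoped listing behavior can be sketched with a plain in-memory filter (the instance records here are fabricated for illustration; nova would do this in the database query):

```python
instances = [
    {'uuid': 'i-1', 'project_id': 'orga'},
    {'uuid': 'i-2', 'project_id': 'orga.projecta'},
    {'uuid': 'i-3', 'project_id': 'orga.projectb'},
    {'uuid': 'i-4', 'project_id': 'orgb.projecta'},
]

def visible_to(scope, instances):
    """Instances in the scope project or any project nested beneath it."""
    return [i for i in instances
            if i['project_id'] == scope
            or i['project_id'].startswith(scope + '.')]

print([i['uuid'] for i in visible_to('orga', instances)])
# ['i-1', 'i-2', 'i-3'] -- orgb.projecta stays hidden
```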
 
 You can find the code here:
 
  
 https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
  
 https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f
 
 Keep in mind that this is a prototype, but I’m hoping to come to some kind 
 of consensus as to whether this is a reasonable approach. I’ve compiled a 
 list of pros and cons.
 
 Pros:
 
  * Very easy to understand
  * Minimal changes to nova
  * Good performance in db (prefix matching uses indexes)

Re: [openstack-dev] [Ironic] January review redux

2014-02-05 Thread Lucas Alvares Gomes

 So, I'd like to nominate the following two additions to the ironic-core
 team:

 Max Lobur

 https://review.openstack.org/#/q/reviewer:mlobur%2540mirantis.com+project:openstack/ironic,n,z

 Roman Prykhodchenko

 https://review.openstack.org/#/q/reviewer:rprikhodchenko%2540mirantis.com+project:openstack/ironic,n,z


Awesome people! +1 for both :)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-05 Thread Julien Danjou
On Tue, Feb 04 2014, Joe Gordon wrote:

 Ceilometer running a plugin in nova is bad (for all the reasons
 previously discussed),

Well, I partially disagree. Are you saying that nobody is allowed to run
a plugin in Nova? So what are these plugins in the first place?
Or if you're saying that Ceilometer cannot have plugins in Nova, I would
like to know why.

What is wrong, I agree, is that we have to use and mock nova internals
to test our plugins. OTOH anyone writing a plugin for Nova will have the
same issue. To what extent this is a problem with the plugin system,
I'll let everybody think about it. :)

 So what can nova do to help this?  It sounds like you have a valid use
 case that nova should support without requiring a plugin.

We just need the possibility to run some code before an instance is
deleted, in a synchronous manner – i.e. our code needs to be fully
executed before Nova actually destroys the VM.
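That requirement amounts to a synchronous pre-delete hook. A minimal sketch of the pattern — the hook registry and function names are hypothetical, not an existing nova API:

```python
_pre_delete_hooks = []
events = []  # recorded only so the ordering is visible in this demo

def register_pre_delete(fn):
    """Register a callback to run before an instance is destroyed."""
    _pre_delete_hooks.append(fn)

def destroy_vm(instance):
    events.append(('destroyed', instance))

def delete_instance(instance):
    # Every hook must run to completion *before* the VM is destroyed --
    # the synchronous guarantee Julien is asking for.
    for hook in _pre_delete_hooks:
        hook(instance)
    destroy_vm(instance)

# e.g. a metering agent grabs its final samples before the VM goes away:
register_pre_delete(lambda inst: events.append(('metered', inst)))
delete_instance('vm-1')
print(events)  # [('metered', 'vm-1'), ('destroyed', 'vm-1')]
```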

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info




[openstack-dev] [rally] Proposing changes in Rally core team

2014-02-05 Thread Boris Pavlovic
Hi stackers,

I would like to:

1) Nominate Hugh Saunders to Rally core, he is doing a lot of good reviews
(and always testing patches=) ):
http://stackalytics.com/report/reviews/rally/30

2) Remove Alexei from core team, because unfortunately he is not able to
work on Rally at this moment. Thank you Alexei for all work that you have
done.


Thoughts?


Best regards,
Boris Pavlovic


Re: [openstack-dev] [Ironic] January review redux

2014-02-05 Thread Yuriy Zveryanskyy

On 02/04/2014 09:42 PM, Devananda van der Veen wrote:

So, I'd like to nominate the following two additions to the ironic-core 
team:


Max Lobur
https://review.openstack.org/#/q/reviewer:mlobur%2540mirantis.com+project:openstack/ironic,n,z

Roman Prykhodchenko
https://review.openstack.org/#/q/reviewer:rprikhodchenko%2540mirantis.com+project:openstack/ironic,n,z


+1 for both




Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-02-05 Thread Tomas Sedovic

On 05/02/14 03:58, Jaromir Coufal wrote:

Hi to everybody,

based on the feedback from last week [0] I incorporated changes in the
wireframes so that we keep them up to date with latest decisions:

http://people.redhat.com/~jcoufal/openstack/tripleo/2014-02-05_tripleo-ui-icehouse.pdf


Changes:
* Smaller layout change in Nodes Registration (no rush for update)
* Unifying views for 'deploying' and 'deployed' states of the page for
deployment detail
* Improved workflow for associating node profiles with roles
- showing final state of MVP
- first iteration contains only last row (no node definition link)


Hey Jarda,

Looking good. I've got two questions:

1. Are we doing node tags (page 4) for the first iteration? Where are 
they going to live?


2. There are multiple node profiles per role on pages 11, 12, 17. Is 
that just an oversight or do you intend on keeping those in? I thought 
the consensus was to do 1 node profile per deployment role.


Thanks,
Tomas




-- Jarda

[0] https://www.youtube.com/watch?v=y2fv6vebFhM


On 2014/16/01 01:50, Jaromir Coufal wrote:

Hi folks,

thanks everybody for feedback. Based on that I updated wireframes and
tried to provide a minimum scope for Icehouse timeframe.

http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-16_tripleo-ui-icehouse.pdf



Hopefully we are able to deliver the described set of features. But if you
find something missing that is critical for the first release
(or if we are implementing a feature which should not have such high
priority), please speak up now.

The wireframes are very close to implementation. More views will appear
in time, and we will see if we can get them in as well.

Thanks all for participation
-- Jarda







Re: [openstack-dev] [rally] Proposing changes in Rally core team

2014-02-05 Thread Sergey Skripnick

+1 for Hugh, but IMO no need to rush with Alexei's removal.

 Hi stackers,

 I would like to:

 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good reviews
 (and always testing patches=) ):
 http://stackalytics.com/report/reviews/rally/30

 2) Remove Alexei from core team, because unfortunately he is not able to
 work on Rally at this moment. Thank you Alexei for all work that you have
 done.

 Thoughts?

 Best regards,
 Boris Pavlovic

--
Regards,
Sergey Skripnick


Re: [openstack-dev] [Neutron] backporting database migrations to stable/havana

2014-02-05 Thread Thierry Carrez
Ralf Haferkamp wrote:
 I am currently trying to backport the fix for
 https://launchpad.net/bugs/1254246 to stable/havana. The current state of that
 is here: https://review.openstack.org/#/c/68929/
 
 However, the fix requires a database migration to be applied (to add a unique
 constraint to the agents table). And the current fix linked above will AFAIK
 break havana-icehouse migrations. So I wonder what would be the correct way 
 to backport database migrations in neutron using alembic? Is there even a
 correct way, or are backports of database migrations a no-go?

FWIW our StableBranch policy[1] generally forbids DB schema changes in
stable branches.

[1] https://wiki.openstack.org/wiki/StableBranch

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Florent Flament
Hi Vish,

Your approach looks very interesting. I especially like the idea of 'walking 
the tree of parent projects, to construct the set of roles'.
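That role-walking idea fits in a few lines. A sketch under two assumptions: project ids encode the hierarchy with '.' separators as in the prototype, and role assignments live in a plain dict purely for illustration:

```python
def collect_roles(assignments, project_id):
    """Walk from the root down to project_id, unioning the roles
    granted at each ancestor along the way."""
    roles = set()
    parts = project_id.split('.')
    for i in range(1, len(parts) + 1):
        node = '.'.join(parts[:i])      # 'orga', then 'orga.projectb', ...
        roles |= assignments.get(node, set())
    return roles

# Joe's grants from the scenario below:
assignments = {'orga': {'Member'}, 'orga.projectb': {'admin'}}
print(collect_roles(assignments, 'orga.projectb'))  # inherits Member, adds admin
print(collect_roles(assignments, 'orga.projecta'))  # Member only
```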

Here are some issues that came to my mind:


Regarding policy rules enforcement:

Considering the following projects:
* orga
* orga.projecta
* orga.projectb

Let's assume that Joe has the following roles:
* `Member` of `orga`
* `admin` of `orga.projectb`

Now Joe wishes to launch a VM on `orga.projecta` and grant a role to some user 
on `orga.projectb` (for which he has the rights). He would like to be able to do all of 
this with the same token (scoped on project `orga`?).

For this scenario to be working, we would need to be able to store multiple 
roles (a tree of roles?) in the token, so that services would know which role 
is granted to the user on which project.

As a first step, I guess we could stay with the roles scoped to a unique 
project. Joe would be able to do what he wants, by getting a first token on 
`orga` or `orga.projecta` with a `Member` role, then a second token on 
`orga.projectb` with the `admin` role.


Considering quotas enforcement:

Let's say we want to set the following limits:

* `orga` : max 10 VMs
* `orga.projecta` : max 8 VMs
* `orga.projectb` : max 8 VMs

The idea would be that the `admin` of `orga` wishes to allow 8 VMs to projects 
`orga.projecta` or `orga.projectb`, but doesn't care how these VMs are spread. 
He does, however, wish to keep 2 VMs in `orga` for himself.

Then to be able to enforce these quotas, Nova (and all other services) would 
have to keep track of the tree of quotas, and update the appropriate nodes.
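A minimal sketch of such a tree-aware quota check, using the numbers above (the aggregation strategy and data layout are assumptions for illustration, not an existing nova mechanism):

```python
def usage_under(usage, node):
    """Total VMs counted at node plus everything nested beneath it."""
    return sum(n for p, n in usage.items()
               if p == node or p.startswith(node + '.'))

def quota_allows(limits, usage, project_id, requested=1):
    """The request must fit under the limit at *every* node on the
    path from the root down to project_id."""
    parts = project_id.split('.')
    for i in range(1, len(parts) + 1):
        node = '.'.join(parts[:i])
        if node in limits and usage_under(usage, node) + requested > limits[node]:
            return False
    return True

limits = {'orga': 10, 'orga.projecta': 8, 'orga.projectb': 8}
usage = {'orga.projecta': 8}                       # projecta already full
print(quota_allows(limits, usage, 'orga.projecta'))          # False
print(quota_allows(limits, usage, 'orga.projectb'))          # True (9 <= 10)
print(quota_allows(limits, usage, 'orga.projectb', 3))       # False (11 > 10)
```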


By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and 
Quotas logic in a unique service (Keystone?). Openstack services (Nova, Cinder, 
...) would just have to ask this centralized access management service whether 
an action is authorized for a given token?

Florent Flament



- Original Message -
From: Vishvananda Ishaya vishvana...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, February 3, 2014 10:58:28 PM
Subject: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy   
Discussion

Hello Again!

At the meeting last week we discussed some options around getting true 
multitenancy in nova. The use case that we are trying to support can be 
described as follows:

Martha, the owner of ProductionIT, provides IT services to multiple Enterprise 
clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam 
at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has 
multiple QA and Development teams with many users. Joe needs the ability to create 
users, projects, and quotas, as well as the ability to list and delete 
resources across WidgetMaster. Martha needs to be able to set the quotas for 
both WidgetMaster and SuperDevShop; manage users, projects, and objects across 
the entire system; and set quotas for the client companies as a whole. She also 
needs to ensure that Joe can't see or mess with anything owned by Sam.

As per the plan I outlined in the meeting I have implemented a Proof-of-Concept 
that would allow me to see what changes were required in nova to get scoped 
tenancy working. I used a simple approach of faking out hierarchy by prepending 
the id of the larger scope to the id of the smaller scope. Keystone uses uuids 
internally, but for ease of explanation I will pretend like it is using the 
name. I think we can all agree that ‘orga.projecta’ is more readable than 
‘b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8’.

The code basically creates the following five projects:

orga
orga.projecta
orga.projectb
orgb
orgb.projecta

I then modified nova so that everywhere it searches or limits policy by 
project_id, it does a prefix match instead. This means that someone using 
project ‘orga’ should be able to list/delete instances in orga, orga.projecta, 
and orga.projectb.

You can find the code here:

  
https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
  
https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f

Keep in mind that this is a prototype, but I’m hoping to come to some kind 
of consensus as to whether this is a reasonable approach. I’ve compiled a list 
of pros and cons.

Pros:

  * Very easy to understand
  * Minimal changes to nova
  * Good performance in db (prefix matching uses indexes)
  * Could be extended to cover more complex scenarios like multiple owners or 
multiple scopes

Cons:

  * Nova has no map of the hierarchy
  * Moving projects would require updates to ownership inside of nova
  * Complex scenarios involving delegation of roles may be a bad fit
  * Database upgrade to hierarchy could be tricky
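The "prefix matching uses indexes" point can be seen in a self-contained SQLite sketch: a trailing wildcard keeps the match a contiguous key range that a b-tree index can serve. (Exact planner behavior varies by database and collation, so treat this as an illustration, not a benchmark.)

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instances (uuid TEXT, project_id TEXT)')
conn.execute('CREATE INDEX ix_project ON instances (project_id)')
conn.executemany('INSERT INTO instances VALUES (?, ?)', [
    ('i-1', 'orga'), ('i-2', 'orga.projecta'),
    ('i-3', 'orga.projectb'), ('i-4', 'orgb.projecta'),
])

# Exact match on the scope itself, plus a prefix LIKE for descendants.
# The wildcard is only at the end, so the scan stays within one key range.
rows = conn.execute(
    "SELECT uuid FROM instances "
    "WHERE project_id = ? OR project_id LIKE ? || '.%' "
    "ORDER BY uuid",
    ('orga', 'orga')).fetchall()
print([r[0] for r in rows])  # ['i-1', 'i-2', 'i-3']
```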

If this seems like a reasonable set of tradeoffs, there are a few things that 
need to be done inside of nova to bring this to a complete solution:

  

Re: [openstack-dev] [Nova] os-migrateLive not working with neutron in Havana (or apparently Grizzly)

2014-02-05 Thread John Garbutt
On 4 February 2014 19:16, Jonathan Proulx j...@jonproulx.com wrote:
 HI all,

 Trying to get a little love on bug 
 https://bugs.launchpad.net/nova/+bug/1227836

 Short version is the instance migrates, but there's an RPC time out
 that keeps nova thinking it's still on the old node mid-migration.
 Informal survey of operators seems to suggest this always happens when
 using neutron networking and never when using nova-networking (for
 small values of always and never)

 Feels like I could kludge in a longer timeout somewhere and it would
 work for now, so I'm sifting through unfamiliar code trying to find
 that and hoping someone here just knows where it is and can make my
 week a whole lot better by pointing it out.

Seems like it is this call that times out:
https://github.com/openstack/nova/blob/master/nova/conductor/rpcapi.py#L428
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4283

And because there is no wrapper on this manager call method, it
remains in the Migrating task state:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4192

 Better less kludgy solutions also welcomed, but I need a kernel update
 on all my compute nodes so quick and dirty is all I need for right
 now.

I have some draft patches for a longer term fix as part of this:
https://blueprints.launchpad.net/nova/+spec/live-migration-to-conductor

In my current patches, I don't remove all the call operations, but
that seems like a good eventual goal.

Basic idea, is imagine the current flow is:
* source compute node calls destination
* source compute node calls conductor to do stuff
* source compute node completes rest of work

Possible new flow, removing all calls:
* conductor casts to destination
* destination casts to conductor
* conductor does what it needs to do
* conductor casts to source
* source casts to conductor
* conductor finishes off
* maybe have a periodic task to spot when we get stuck waiting (to
replace RPC timeout)
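That last periodic task might look something like this sketch — the record fields and the timeout policy are made up for illustration, not nova's actual migration model:

```python
import time

def find_stuck_migrations(migrations, timeout, now=None):
    """Flag anything still 'migrating' with no progress update within
    `timeout` seconds -- a stand-in for the RPC timeout being removed."""
    now = time.time() if now is None else now
    return [m['id'] for m in migrations
            if m['status'] == 'migrating' and now - m['updated_at'] > timeout]

migrations = [
    {'id': 1, 'status': 'migrating', 'updated_at': 100.0},  # silent for 200s
    {'id': 2, 'status': 'finished',  'updated_at': 100.0},
    {'id': 3, 'status': 'migrating', 'updated_at': 290.0},  # recently active
]
print(find_stuck_migrations(migrations, timeout=120, now=300.0))  # [1]
```

A periodic task like this replaces the blocking-call timeout with an out-of-band check, so no service has to sit waiting on a synchronous RPC.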

John



Re: [openstack-dev] [Ironic] January review redux

2014-02-05 Thread Haomeng, Wang
+1 for both :)


On Wed, Feb 5, 2014 at 6:08 PM, Yuriy Zveryanskyy
yzveryans...@mirantis.com wrote:
 On 02/04/2014 09:42 PM, Devananda van der Veen wrote:

 So, I'd like to nominate the following two additions to the ironic-core
 team:

 Max Lobur
 https://review.openstack.org/#/q/reviewer:mlobur%2540mirantis.com+project:openstack/ironic,n,z

 Roman Prykhodchenko
 https://review.openstack.org/#/q/reviewer:rprikhodchenko%2540mirantis.com+project:openstack/ironic,n,z


 +1 for both





Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-05 Thread Thierry Carrez
Steve Gordon wrote:
 From: Anne Gentle anne.gen...@rackspace.com
 Based on today's Technical Committee meeting and conversations with the
 OpenStack board members, I need to change our Conventions for service names
 at
 https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
 .

 Previously we have indicated that Ceilometer could be named OpenStack
 Telemetry and Heat could be named OpenStack Orchestration. That's not the
 case, and we need to change those names.

 To quote the TC meeting, ceilometer and heat are other modules (second
 sentence from 4.1 in
 http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
 distributed with the Core OpenStack Project.

 Here's what I intend to change the wiki page to:
  Here's the list of project and module names and their official names and
 capitalization:

 Ceilometer module
 Cinder: OpenStack Block Storage
 Glance: OpenStack Image Service
 Heat module
 Horizon: OpenStack dashboard
 Keystone: OpenStack Identity Service
 Neutron: OpenStack Networking
 Nova: OpenStack Compute
 Swift: OpenStack Object Storage

Small correction. The TC had not indicated that Ceilometer could be
named OpenStack Telemetry and Heat could be named OpenStack
Orchestration. We formally asked[1] the board to allow (or disallow)
that naming (or more precisely, that use of the trademark).

[1]
https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names

We haven't got a formal and clear answer from the board on that request
yet. I suspect they are waiting for progress on DefCore before deciding.

If you need an answer *now* (and I suspect you do), it might make sense
to ask foundation staff/lawyers about using those OpenStack names with
the current state of the bylaws and trademark usage rules, rather than
the hypothetical future state under discussion.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [rally] Proposing changes in Rally core team

2014-02-05 Thread Oleg Gelbukh
+1 for Hugh, he's doing an excellent job moving the project forward.

--
Best regards,
Oleg Gelbukh


On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick sskripn...@mirantis.comwrote:


 +1 for Hugh, but IMO no need to rush with Alexei's removal

 Hi stackers,

 I would like to:

 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good reviews
 (and always testing patches=) ):
 http://stackalytics.com/report/reviews/rally/30

 2) Remove Alexei from core team, because unfortunately he is not able to
 work on Rally at this moment. Thank you Alexei for all work that you have
 done.


 Thoughts?


 Best regards,
 Boris Pavlovic


 --
 Regards,
 Sergey Skripnick



Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-02-05 Thread Jaromir Coufal

Hi Tomas,

thanks for the questions, I am replying inline.

On 2014/05/02 11:19, Tomas Sedovic wrote:

On 05/02/14 03:58, Jaromir Coufal wrote:

Hi to everybody,

based on the feedback from last week [0] I incorporated changes in the
wireframes so that we keep them up to date with latest decisions:

http://people.redhat.com/~jcoufal/openstack/tripleo/2014-02-05_tripleo-ui-icehouse.pdf

Changes:
* Smaller layout change in Nodes Registration (no rush for update)
* Unifying views for 'deploying' and 'deployed' states of the page for
deployment detail
* Improved workflow for associating node profiles with roles
- showing final state of MVP
- first iteration contains only last row (no node definition link)


Hey Jarda,

Looking good. I've got two questions:

1. Are we doing node tags (page 4) for the first iteration? Where are
they going to live?

Yes, it's very easy to do, already part of Ironic.


2. There are multiple node profiles per role on pages 11, 12, 17. Is
that just an oversight or do you intend on keeping those in? I thought
the consensus was to do 1 node profile per deployment role.

I tried to avoid the confusion by the comment:
'- showing final state of MVP
 - first iteration contains only last row (no node definition link)'

Maybe I should be more clear. By last row I meant that in the first 
iteration, the form will contain only one row, with a dropdown to select 
only one flavor per role.


I intend to keep multiple node profiles in the Icehouse scope. We will see 
if we can get there in time; I am hoping for 'yes'. But I am absolutely 
aligned with the consensus that we are starting with only one node profile 
per role.


-- Jarda



Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-05 Thread victor stinner
Hi,

Chris Behrens wrote:
 Interesting thread. I have been working on a side project that is a
 gevent/eventlet replacement [1] that focuses on thread-safety and
 performance. This came about because of an outstanding bug we have with
 eventlet not being Thread safe. (We cannot safely enable thread pooling for
 DB calls so that they will not block.)

There are DB drivers compatible with asyncio: PostgreSQL, MongoDB, Redis and 
memcached.

There is also a driver for ZeroMQ which can be used in Oslo Messaging to have a 
more efficient (asynchronous) driver.

There are also many event loop integrations: gevent (geventreactor, gevent3), 
greenlet, libuv, GLib and Tornado.

See the full list:
http://code.google.com/p/tulip/wiki/ThirdParty
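For reference, a minimal asyncio sketch of two backend calls overlapping on one thread. (In the Python 3.4 era this discussion targeted, coroutines were spelled `@asyncio.coroutine` with `yield from`; the example below uses today's `async`/`await` syntax, and the call names are placeholders.)

```python
import asyncio

async def backend_call(name, delay):
    # Stands in for a non-blocking driver call (DB, memcached, ZeroMQ, ...).
    await asyncio.sleep(delay)
    return name

async def main():
    # Both calls make progress concurrently on a single thread: the event
    # loop switches at each await, with no green-thread monkey patching.
    return await asyncio.gather(backend_call('db', 0.01),
                                backend_call('mq', 0.01))

results = asyncio.run(main())
print(results)  # ['db', 'mq']
```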

Victor



Re: [openstack-dev] [Neutron] backporting database migrations to stable/havana

2014-02-05 Thread Ralf Haferkamp
On Wed, Feb 05, 2014 at 11:31:55AM +0100, Thierry Carrez wrote:
 Ralf Haferkamp wrote:
  I am currently trying to backport the fix for
  https://launchpad.net/bugs/1254246 to stable/havana. The current state of 
  that
  is here: https://review.openstack.org/#/c/68929/
  
  However, the fix requires a database migration to be applied (to add a 
  unique
  constraint to the agents table). And the current fix linked above will AFAIK
  break havana-icehouse migrations. So I wonder what would be the correct 
  way to backport database migrations in neutron using alembic? Is there even a
  correct way, or are backports of database migrations a no-go?
 
 FWIW our StableBranch policy[1] generally forbids DB schema changes in
 stable branches.
Hm, I must have overlooked that when reading through the document recently.
Thanks for clarifying. I guess I have to find another way to work around the
above-mentioned bug then.

Though it seems there can be exceptions to that rule. At least nova adds a
set of blank migrations (for sqlalchemy in nova's case) at the beginning of a
new development cycle (at least since havana) to be able to backport migrations
to stable. (It seems, though, that no backport ever happened for nova.)

 [1] https://wiki.openstack.org/wiki/StableBranch
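The placeholder trick Ralf mentions can be as small as this hypothetical sqlalchemy-migrate-style file (file name and signatures assumed): shipping a no-op migration on master reserves the version number, so a stable-branch backport can later fill in the body without renumbering anything.

```python
# 2xx_placeholder.py -- hypothetical reserved migration slot.

def upgrade(migrate_engine):
    # Intentionally a no-op on master; a stable-branch backport may
    # replace this body with a real schema change later.
    pass

def downgrade(migrate_engine):
    # Nothing to undo for a blank placeholder.
    pass
```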
-- 
Ralf



[openstack-dev] Agenda for todays ML2 Weekly meeting

2014-02-05 Thread trinath.soman...@freescale.com
Hi-

Kindly share the agenda for today's weekly meeting on Neutron/ML2.


Best Regards,
--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048



Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Martins, Tiago
 By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and 
Quotas logic in a unique service (Keystone?). Openstack services (Nova, Cinder, 
...) would just have to ask this centralized access management service whether 
an action is authorized for a given token?

I agree on centralizing RBAC; the current situation is confusing, with a lot of 
files to manage and each service implementing policy enforcement slightly 
differently. I think keystone is a good place for it, since the sql token is 
validated before every operation. Maybe it could even have its own DSL.
Quotas should have their own service; there is code and there are tables 
replicated all across OpenStack, and that is not good: it forces quotas to be 
simple when they need to solve complex use cases.

Tiago Martins

-Original Message-
From: Florent Flament [mailto:florent.flament-...@cloudwatt.com] 
Sent: Wednesday, February 5, 2014 08:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy 
Discussion

Hi Vish,

Your approach looks very interesting. I especially like the idea of 'walking 
the tree of parent projects, to construct the set of roles'.

Here are some issues that came to my mind:


Regarding policy rules enforcement:

Considering the following projects:
* orga
* orga.projecta
* orga.projectb

Let's assume that Joe has the following roles:
* `Member` of `orga`
* `admin` of `orga.projectb`

Now Joe wishes to launch a VM on `orga.projecta` and grant a role to some user 
on `orga.projectb` (for which he has the rights). He would like to be able to do all of 
this with the same token (scoped on project `orga`?).

For this scenario to be working, we would need to be able to store multiple 
roles (a tree of roles?) in the token, so that services would know which role 
is granted to the user on which project.

As a first step, I guess we could stay with the roles scoped to a unique 
project. Joe would be able to do what he wants, by getting a first token on 
`orga` or `orga.projecta` with a `Member` role, then a second token on 
`orga.projectb` with the `admin` role.


Considering quotas enforcement:

Let's say we want to set the following limits:

* `orga` : max 10 VMs
* `orga.projecta` : max 8 VMs
* `orga.projectb` : max 8 VMs

The idea would be that the `admin` of `orga` wishes to allow 8 VMs to projects 
`orga.projecta` or `orga.projectb`, but doesn't care how these VMs are spread. 
He does, however, wish to keep 2 VMs in `orga` for himself.

Then to be able to enforce these quotas, Nova (and all other services) would 
have to keep track of the tree of quotas, and update the appropriate nodes.


By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and 
Quotas logic in a unique service (Keystone?). Openstack services (Nova, Cinder, 
...) would just have to ask this centralized access management service whether 
an action is authorized for a given token?

Florent Flament



- Original Message -
From: Vishvananda Ishaya vishvana...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, February 3, 2014 10:58:28 PM
Subject: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy   
Discussion

Hello Again!

At the meeting last week we discussed some options around getting true 
multitenancy in nova. The use case that we are trying to support can be 
described as follows:

Martha, the owner of ProductionIT, provides its services to multiple Enterprise 
clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam 
at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has 
multiple QA and Development teams with many users. Joe needs the ability to create 
users, projects, and quotas, as well as the ability to list and delete 
resources across WidgetMaster. Martha needs to be able to set the quotas for 
both WidgetMaster and SuperDevShop; manage users, projects, and objects across 
the entire system; and set quotas for the client companies as a whole. She also 
needs to ensure that Joe can't see or mess with anything owned by Sam.

As per the plan I outlined in the meeting I have implemented a Proof-of-Concept 
that would allow me to see what changes were required in nova to get scoped 
tenancy working. I used a simple approach of faking out hierarchy by prepending 
the id of the larger scope to the id of the smaller scope. Keystone uses uuids 
internally, but for ease of explanation I will pretend like it is using the 
name. I think we can all agree that ‘orga.projecta’ is more readable than 
‘b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8’.

The code basically creates the following five projects:

orga
orga.projecta
orga.projectb
orgb
orgb.projecta

I then modified nova so that everywhere it searches or limits policy by 
project_id, it does a prefix match instead. This means that 

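The prefix-match approach described above can be sketched as follows — a hedged illustration with invented data, not the actual PoC code:

```python
# Illustrative sketch (not the actual PoC code): an exact project_id filter
# becomes a prefix match, so a query scoped to 'orga' also covers
# 'orga.projecta', 'orga.projectb', and deeper levels.

def matches_scope(resource_project_id, scope):
    """True if the resource belongs to `scope` or one of its sub-projects."""
    return (resource_project_id == scope
            or resource_project_id.startswith(scope + "."))


instances = [
    {"id": 1, "project_id": "orga"},
    {"id": 2, "project_id": "orga.projecta"},
    {"id": 3, "project_id": "orgb.projecta"},
]

visible = [i for i in instances if matches_scope(i["project_id"], "orga")]
# -> instances 1 and 2; appending "." to the scope avoids a false match
#    against a sibling project whose id merely starts with 'orga'
```

Note that appending the delimiter before matching also addresses the collision concern raised elsewhere in the thread: `orgab.projecta` does not match scope `orga`.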
Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-02-05 Thread Tomas Sedovic

snip

1. Are we doing node tags (page 4) for the first iteration? Where are
they going to live?

Yes, it's very easy to do, already part of Ironic.


Cool!




2. There are multiple node profiles per role on pages 11, 12, 17. Is
that just an oversight or do you intend on keeping those in? I thought
the consensus was to do one node profile per deployment role.

I tried to avoid the confusion by the comment:
'- showing final state of MVP
  - first iteration contains only last row (no node definition link)'


I'm sorry, I completely missed that comment. Thanks for the clarification.



Maybe I should be more clear. By last row I meant that in the first
iteration, the form will contain only one row with dropdown to select
only one flavor per role.

I intend to keep multiple roles for the Icehouse scope. We will see if we
can get there in time; I am hoping for 'yes'. But I am absolutely
aligned with the consensus that we are starting with only one node
profile per role.

-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







Re: [openstack-dev] [rally] Proposing changes in Rally core team

2014-02-05 Thread Ilya Kharin
+1 for Hugh


On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick sskripn...@mirantis.comwrote:


 +1 for Hugh, but IMO no need to rush with Alexei's removal

 Hi stackers,

 I would like to:

 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good reviews
 (and always testing patches=) ):
 http://stackalytics.com/report/reviews/rally/30

 2) Remove Alexei from core team, because unfortunately he is not able to
 work on Rally at this moment. Thank you Alexei for all work that you have
 done.


 Thoughts?


 Best regards,
 Boris Pavlovic


 --
 Regards,
 Sergey Skripnick



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Vishvananda Ishaya

On Feb 5, 2014, at 2:38 AM, Florent Flament florent.flament-...@cloudwatt.com 
wrote:

 Hi Vish,
 
 Your approach looks very interesting. I especially like the idea of 
 'walking the tree of parent projects, to construct the set of roles'.
 
 Here are some issues that came to my mind:
 
 
 Regarding policy rules enforcement:
 
 Considering the following projects:
 * orga
 * orga.projecta
 * orga.projectb
 
 Let's assume that Joe has the following roles:
 * `Member` of `orga`
 * `admin` of `orga.projectb`
 
 Now Joe wishes to launch a VM on `orga.projecta` and grant a role to some 
 user on `orga.projectb` (for which he has the rights). He would like to be able to do 
 all of this with the same token (scoped on project `orga`?).
 
 For this scenario to work, we would need to be able to store multiple 
 roles (a tree of roles?) in the token, so that services would know which role 
 is granted to the user on which project.
 
 As a first step, I guess we could stick with roles scoped to a single 
 project. Joe would be able to do what he wants, by getting a first token on 
 `orga` or `orga.projecta` with a `Member` role, then a second token on 
 `orga.projectb` with the `admin` role.

This is a good point; having different roles at different levels of the 
hierarchy does lead to having to reauthenticate for certain actions. Keystone 
could pass the scope along with each role instead of a single global scope. The 
policy check could then be modified to match the role and its scope prefix 
against the target project_id, so a policy like:

“remove_user_from_project”: “role:project_admin and scope_prefix:project_id”

This starts to get complex and unwieldy, however, because a single token allows 
you to do anything and everything based on your roles. I think we need a 
healthy balance between ease of use and the principle of least privilege, so we 
might be best off sticking to a single scope for each token and forcing a 
reauthentication to do admin-level actions in projectb.
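A rough sketch of the per-role-scope idea discussed above — the token layout and helper function are hypothetical, not Keystone's actual data model:

```python
# Hypothetical token layout: each role carries its own scope instead of the
# token having one global scope. Not Keystone's actual format.

token = {
    "roles": [
        {"name": "Member", "scope": "orga"},
        {"name": "admin", "scope": "orga.projectb"},
    ]
}


def check_scoped_role(token, role_name, target_project):
    """True if the token holds `role_name` with a scope that is the target
    project itself or one of its ancestors (prefix match)."""
    for role in token["roles"]:
        if role["name"] != role_name:
            continue
        scope = role["scope"]
        if target_project == scope or target_project.startswith(scope + "."):
            return True
    return False


check_scoped_role(token, "admin", "orga.projectb")   # True
check_scoped_role(token, "admin", "orga.projecta")   # False -- needs reauth
```

The last line illustrates the trade-off in the paragraph above: with a single-scope token, Joe's `admin` rights on `orga.projectb` do not leak into `orga.projecta`.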

 
 
 Considering quotas enforcement:
 
 Let's say we want to set the following limits:
 
 * `orga` : max 10 VMs
 * `orga.projecta` : max 8 VMs
 * `orga.projectb` : max 8 VMs
 
 The idea would be that the `admin` of `orga` wishes to allow 8 VMs to 
 projects `orga.projecta` and `orga.projectb`, but doesn't care how these VMs 
 are spread. However, he wishes to keep 2 VMs in `orga` for himself.

This seems like a bit of a stretch as a use case. Sharing a set of quotas 
across two projects seems strange, and if we did have arbitrary nesting you 
could do the same by sticking a dummy project in between:

orga: max 10
orga.dummy: max 8
orga.dummy.projecta: no max
orga.dummy.projectb: no max
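The dummy-project trick above can be sketched as a tiny effective-limits lookup — the data and names are purely illustrative:

```python
# Purely illustrative: limits set on ancestors apply transitively, so giving
# 'orga.dummy' a max of 8 caps projecta and projectb jointly while neither
# of them carries a limit of its own.

LIMITS = {"orga": 10, "orga.dummy": 8}   # 'no max' = simply absent


def effective_limits(project_id):
    """Collect every explicit limit on the path from the root down."""
    parts = project_id.split(".")
    path = [".".join(parts[:i]) for i in range(1, len(parts) + 1)]
    return {node: LIMITS[node] for node in path if node in LIMITS}


effective_limits("orga.dummy.projecta")   # {'orga': 10, 'orga.dummy': 8}
```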
 
 Then to be able to enforce these quotas, Nova (and all other services) would 
 have to keep track of the tree of quotas, and update the appropriate nodes.
 
 
 By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and 
 Quotas logic in a unique service (Keystone?). Openstack services (Nova, 
 Cinder, ...) would just have to ask this centralized access management 
 service whether an action is authorized for a given token?

So I threw out the idea the other day that quota enforcement should perhaps be 
done by gantt. Quotas seem to be a scheduling concern more than anything else.
 
 Florent Flament
 
 
 
 - Original Message -
 From: Vishvananda Ishaya vishvana...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, February 3, 2014 10:58:28 PM
 Subject: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy 
 Discussion
 
 Hello Again!
 
 At the meeting last week we discussed some options around getting true 
 multitenancy in nova. The use case that we are trying to support can be 
 described as follows:
 
 Martha, the owner of ProductionIT, provides its services to multiple 
 Enterprise clients. She would like to offer cloud services to Joe at 
 WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for 
 WidgetMaster and he has multiple QA and Development teams with many users. 
 Joe needs the ability to create users, projects, and quotas, as well as the 
 ability to list and delete resources across WidgetMaster. Martha needs to be 
 able to set the quotas for both WidgetMaster and SuperDevShop; manage users, 
 projects, and objects across the entire system; and set quotas for the client 
 companies as a whole. She also needs to ensure that Joe can't see or mess 
 with anything owned by Sam.
 
 As per the plan I outlined in the meeting I have implemented a 
 Proof-of-Concept that would allow me to see what changes were required in 
 nova to get scoped tenancy working. I used a simple approach of faking out 
 hierarchy by prepending the id of the larger scope to the id of the smaller 
 scope. Keystone uses uuids internally, but for ease of explanation I will 
 pretend like it is using the name. I think we can all agree that 
 ‘orga.projecta’ is more 

[openstack-dev] [Climate] 0.1.0 release

2014-02-05 Thread Dina Belova
Hi, folks!

Today Climate has been released for the first time, and I'm really glad to say
that :)

This release implements the following use cases:

   - User wants to reserve a virtual machine and use it later. He/she asks
   Nova to create a server, passing special hints describing information like
   lease start and end time. In this case the instance will be not just booted,
   but also shelved so as not to use cloud resources when they are not needed.
   At the time the user passed as 'lease start time' the instance will be
   unshelved and used as the user wants. The user may define different actions
   that might happen to the instance at lease end - like snapshotting and/or
   suspending and/or removal.
   - User wants to reserve the compute capacity of a whole compute host to use
   it later. In this case he/she asks Climate to provide a host with the passed
   characteristics from a predefined pool of hosts (that is managed by the
   admin user). If this request can be processed, the user will have the
   opportunity to run his/her instances on the reserved host when the lease
   starts.


Here are our release notes:
https://wiki.openstack.org/wiki/Climate/Release_Notes/0.1.0

Other useful links:

   - Climate Wiki https://wiki.openstack.org/wiki/Climate
   - Climate Launchpad https://launchpad.net/climate
   - Future plans for 0.2.x https://etherpad.openstack.org/p/climate-0.2


Thanks to the whole team who worked on Climate 0.1.0, and to everybody who helped us!

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Vishvananda Ishaya

On Feb 5, 2014, at 12:27 AM, Chris Behrens cbehr...@codestud.com wrote:

 
 Hi Vish,
 
 I’m jumping in slightly late on this, but I also have an interest in this. 
 I’m going to preface this by saying that I have not read this whole thread 
 yet, so I apologize if I repeat things, say anything that is addressed by 
 previous posts, or doesn’t jive with what you’re looking for. :) But what you 
 describe below sounds like exactly a use case I’d come up with.
 
 Essentially I want another level above project_id. Depending on the exact use 
 case, you could name it ‘wholesale_id’ or ‘reseller_id’...and yeah, ‘org_id’ 
 fits in with your example. :) I think that I had decided I’d call it ‘domain’ 
 to be more generic, especially after seeing keystone had a domain concept.
 
 Your idea below (prefixing the project_id) is exactly one way I thought of 
 doing this to be least intrusive. I, however, thought that this would not be 
 efficient. So, I was thinking about proposing that we add ‘domain’ to all of 
 our models. But that limits your hierarchy and I don’t necessarily like that. 
 :)  So I think that if the queries are truly indexed as you say below, you 
 have a pretty good approach. The one issue that comes to mind is whether 
 there's any chance of collision. For example, if project ids (or orgs) could 
 contain a ‘.’, then ‘.’ as a delimiter won’t work.
 
 My requirements could be summed up pretty well by thinking of this as 
 ‘virtual clouds within a cloud’. Deploy a single cloud infrastructure that 
 could look like many multiple clouds. ‘domain’ would be the key into each 
 different virtual cloud. Accessing one virtual cloud doesn’t reveal any 
 details about another virtual cloud.
 
 What this means is:
 
 1) domain ‘a’ cannot see instances (or resources in general) in domain ‘b’. 
 It doesn’t matter if domain ‘a’ and domain ‘b’ share the same tenant ID. If 
 you act with the API on behalf of domain ‘a’, you cannot see your instances 
 in domain ‘b’.
 2) Flavors per domain. domain ‘a’ can have different flavors than domain ‘b’.

I hadn’t thought of this one, but we do have per-project flavors, so I think 
this could work in a project-hierarchy world. We might have to rethink the idea 
of global flavors and just stick them in the top-level project. That way the 
flavors could be removed. The flavor list would have to be composed by matching 
all parent projects. It might make sense to have an option for flavors to be 
“hidden” in sub-projects somehow as well. In other words, if orgb wants to 
delete a flavor from the global list, they could do it by hiding the flavor.

Definitely some things to be thought about here.
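One way the flavor-composition idea above could look — entirely hypothetical data and names, just a sketch of walking the project path and honoring per-project "hidden" markers:

```python
# Invented data and names -- a sketch of composing the visible flavor list
# by walking the project path from the top level down, applying each level's
# additions and "hidden" markers in order.

FLAVORS = {
    "": {"m1.small", "m1.large"},   # "" stands for the global/top level
    "orgb": {"b1.custom"},
}
HIDDEN = {
    "orgb": {"m1.large"},           # orgb "deletes" a global flavor
}


def visible_flavors(project_id):
    path = [""]
    parts = project_id.split(".")
    for i in range(1, len(parts) + 1):
        path.append(".".join(parts[:i]))
    result = set()
    for node in path:
        result |= FLAVORS.get(node, set())
        result -= HIDDEN.get(node, set())
    return result


visible_flavors("orgb.projecta")   # {'m1.small', 'b1.custom'}
```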

 3) Images per domain. domain ‘a’ could see different images than domain ‘b’.

Yes this would require similar hierarchical support in glance.

 4) Quotas and quota limits per domain. your instances in domain ‘a’ don’t 
 count against quotas in domain ‘b’.

Yes we’ve talked about quotas for sure. This is definitely needed.

 5) Go as far as using different config values depending on what domain you’re 
 using. This one is fun. :)

Curious for some examples here.

 
 etc.
 
 I’m not sure if you were looking to go that far or not. :) But I think that 
 our ideas are close enough, if not exact, that we can achieve both of our 
 goals with the same implementation.
 
 I’d love to be involved with this. I am not sure that I currently have the 
 time to help with implementation, however.

Come to the meeting on friday! 1600 UTC

Vish

 
 - Chris
 
 
 
 On Feb 3, 2014, at 1:58 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:
 
 Hello Again!
 
 At the meeting last week we discussed some options around getting true 
 multitenancy in nova. The use case that we are trying to support can be 
 described as follows:
 
 Martha, the owner of ProductionIT, provides its services to multiple 
 Enterprise clients. She would like to offer cloud services to Joe at 
 WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for 
 WidgetMaster and he has multiple QA and Development teams with many users. 
 Joe needs the ability to create users, projects, and quotas, as well as the 
 ability to list and delete resources across WidgetMaster. Martha needs to be 
 able to set the quotas for both WidgetMaster and SuperDevShop; manage users, 
 projects, and objects across the entire system; and set quotas for the 
 client companies as a whole. She also needs to ensure that Joe can't see or 
 mess with anything owned by Sam.
 
 As per the plan I outlined in the meeting I have implemented a 
 Proof-of-Concept that would allow me to see what changes were required in 
 nova to get scoped tenancy working. I used a simple approach of faking out 
 hierarchy by prepending the id of the larger scope to the id of the smaller 
 scope. Keystone uses uuids internally, but for ease of explanation I will 
 pretend like it is using the name. I think we can all agree that 
 ‘orga.projecta’ is more readable than 
 

Re: [openstack-dev] [Climate] 0.1.0 release

2014-02-05 Thread Sergey Lukjanov
Great progress!

My congratulations.


On Wed, Feb 5, 2014 at 3:36 PM, Dina Belova dbel...@mirantis.com wrote:

 Hi, folks!

 Today Climate has been released for the first time and I'm really glad to say
 :)

 This release implements following use cases:

- User wants to reserve virtual machine and use it later. He/she asks
Nova to create server, passing special hints, describing information like
lease start and end time. In this case instance will be not just booted,
but also shelved not to use cloud resources when it's not needed. At the
time the user passed as 'lease start time' the instance will be unshelved and
used as the user wants. The user may define different actions that might happen
to the instance at lease end - like snapshotting and/or suspending and/or
removal.
- User wants to reserve compute capacity of whole compute host to use
it later. In this case he/she asks Climate to provide host with passed
characteristics from predefined pool of hosts (that is managed by admin
user). If this request might be processed, user will have the opportunity
run his/her instances on reserved host when lease starts.


 Here are our release notes: 
 Climate/Release_Notes/0.1.0https://wiki.openstack.org/wiki/Climate/Release_Notes/0.1.0

 Other useful links:

- Climate Wiki https://wiki.openstack.org/wiki/Climate
- Climate Launchpad https://launchpad.net/climate
- Future plans for 0.2.x https://etherpad.openstack.org/p/climate-0.2


 Thanks all team who worked on Climate 0.1.0 and everybody who helped us!

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.





-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] backporting database migrations to stable/havana

2014-02-05 Thread Ralf Haferkamp
Hi,

On Tue, Feb 04, 2014 at 12:36:16PM -0500, Miguel Angel Ajo Pelayo wrote:
 
 
 Hi Ralf, I see we're on the same boat for this.
 
It seems that a database migration introduces complications
 for future upgrades. It's not an easy path.
 
My aim when I started this backport was trying to scale out
 neutron-server, starting several ones together. But I'm afraid
 we would find more bugs like this requiring db migrations.
 
Have you actually tested running multiple servers in icehouse?,
 I just didn't have the time, but it's in my roadmap.
I actually ran into the bug in a single server setup. But that seems to happen
pretty rarely.

If that fixes the problem, maybe some heavier approach (like
 table locking) could be used in the backport, without introducing 
 a new/conflicting migration.
Hm, there seems to be no clean way to do table locking in sqlalchemy. At least I
didn't find one.
 
 About the DB migration backport problem, the actual problem is:
[..]
 1st step) fix E in icehouse to skip the unique constraint insertion if 
 it already exists:
 
 havana   | icehouse
  |
 A-B-C-|--D-*E*-F
  
 2nd step) insert E2 in the middle of B and C to keep the icehouse first 
 reference happy:
 
 havana  | icehouse
 |
 A-B-E-C-|--D-*E*-F
 
 What do you think?
I agree, that would likely be the right fix. But it seems there are some
(more or less) strict rules about stable backports of migrations (which I
understand, as it can get really tricky). So a solution that doesn't require
them would probably be preferable.
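For the "1st step" (making icehouse's migration E tolerate the constraint already existing), a hedged sketch — the table and constraint names are placeholders, and a real migration would call the helper from `upgrade()` via `op.get_bind()`:

```python
import sqlalchemy as sa


def has_unique_constraint(bind, table_name, constraint_name):
    """True if `table_name` already carries a unique constraint with that
    name -- lets a migration skip re-adding it after a backport."""
    insp = sa.inspect(bind)
    return any(uc["name"] == constraint_name
               for uc in insp.get_unique_constraints(table_name))

# Inside the icehouse migration E, upgrade() could then look roughly like:
#
#     def upgrade():
#         bind = op.get_bind()
#         if not has_unique_constraint(bind, "agents", UC_NAME):
#             op.create_unique_constraint(UC_NAME, "agents",
#                                         ["agent_type", "host"])
```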
 
 - Original Message -
  From: Ralf Haferkamp rha...@suse.de
  To: openstack-dev@lists.openstack.org
  Sent: Tuesday, February 4, 2014 4:02:36 PM
  Subject: [openstack-dev] [Neutron] backporting database migrations to   
  stable/havana
  
  Hi,
  
  I am currently trying to backport the fix for
  https://launchpad.net/bugs/1254246 to stable/havana. The current state of
  that
  is here: https://review.openstack.org/#/c/68929/
  
  However, the fix requires a database migration to be applied (to add a 
  unique
  constraint to the agents table). And the current fix linked above will AFAIK
  break havana-icehouse migrations. So I wonder what would be the correct way
  to
  do backport database migrations in neutron using alembic? Is there even a
  correct way, or are backports of database migrations a no go?
  
  --
  regards,
  Ralf

-- 
Ralf 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-05 Thread Thierry Carrez
victor stinner wrote:
 [...]
 The problem is that the asyncio module was written for Python 3.3, whereas 
 OpenStack is not fully Python 3 compatible (yet). To ease the transition I 
 have ported asyncio to Python 2; it's the new Trollius project, which supports 
 Python 2.6-3.4:
https://bitbucket.org/enovance/trollius
 [...]

How much code from asyncio did you reuse? How deep was the porting
effort? Is the port maintainable as asyncio gets more bugfixes over time?

 The Trollius API is the same as asyncio's; the main difference is the syntax 
 in coroutines: yield from task must be written yield task, and return 
 value must be written raise Return(value).

Could we use a helper library (like six) to have the same syntax in Py2
and Py3? Something like from six.asyncio import yield_from,
return_task and use those functions for Py2/Py3-compatible syntax?
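The `raise Return(value)` trick exists because Python 2 generators cannot `return value`. A tiny self-contained driver (with a stand-in `Return` class, not the real `trollius.Return`, and no event loop) shows how the exception smuggles the result out of the generator:

```python
class Return(Exception):
    """Stand-in for trollius.Return: carries a coroutine's result."""
    def __init__(self, value):
        self.value = value


def coroutine_result(gen):
    """Drive a generator to completion; a raised Return yields its value."""
    try:
        while True:
            next(gen)
    except Return as r:
        return r.value
    except StopIteration:
        return None


def add_one(x):
    yield                   # stands in for `yield some_task` in Trollius
    raise Return(x + 1)     # Trollius spelling of asyncio's `return x + 1`


coroutine_result(add_one(41))   # 42
```

A real event loop does the driving instead of `coroutine_result`, but the exception-as-return-value mechanism is the same.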

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] savann-ci, Re: [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-05 Thread Sergey Lukjanov
Agreed, let's move savanna-ci to MySQL to run integration tests
against a production-like DB.


On Wed, Feb 5, 2014 at 1:54 AM, Andrew Lazarev alaza...@mirantis.comwrote:

 Since sqlite is not in the list of databases that would be used in
 production, CI should use other DB for testing.

 Andrew.


 On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov 
 aigna...@mirantis.comwrote:

 Indeed. We should create a bug around that and move our savanna-ci to
 mysql.

 Regards,
 Alexander Ignatov



 On 05 Feb 2014, at 01:01, Trevor McKay tmc...@redhat.com wrote:

  This brings up an interesting problem:
 
  In https://review.openstack.org/#/c/70420/ I've added a migration that
  uses a drop column for an upgrade.
 
  But savann-ci is apparently using a sqlite database to run.  So it can't
  possibly pass.
 
  What do we do here?  Shift savanna-ci tests to non sqlite?
 
  Trevor
 
  On Sat, 2014-02-01 at 18:17 +0200, Roman Podoliaka wrote:
  Hi all,
 
  My two cents.
 
  2) Extend alembic so that op.drop_column() does the right thing
  We could, but should we?
 
  The only reason alembic doesn't support these operations for SQLite
  yet is that SQLite lacks proper support of ALTER statement. For
  sqlalchemy-migrate we've been providing a work-around in the form of
  recreating of a table and copying of all existing rows (which is a
  hack, really).
 
  But to be able to recreate a table, we first must have its definition.
  And we've been relying on SQLAlchemy schema reflection facilities for
  that. Unfortunately, this approach has a few drawbacks:
 
  1) SQLAlchemy versions prior to 0.8.4 don't support reflection of
  unique constraints, which means the recreated table won't have them;
 
  2) special care must be taken in 'edge' cases (e.g. when you want to
  drop a BOOLEAN column, you must also drop the corresponding CHECK (col
  in (0, 1)) constraint manually, or SQLite will raise an error when the
  table is recreated without the column being dropped)
 
  3) special care must be taken for 'custom' type columns (it's got
  better with SQLAlchemy 0.8.x, but e.g. in 0.7.x we had to override
  definitions of reflected BIGINT columns manually for each
  column.drop() call)
 
  4) schema reflection can't be performed when alembic migrations are
  run in 'offline' mode (without connecting to a DB)
  ...
  (probably something else I've forgotten)
 
  So it's totally doable, but, IMO, there is no real benefit in
  supporting running of schema migrations for SQLite.
 
  ...attempts to drop schema generation based on models in favor of
 migrations
 
  As long as we have a test that checks that the DB schema obtained by
  running of migration scripts is equal to the one obtained by calling
  metadata.create_all(), it's perfectly OK to use model definitions to
  generate the initial DB schema for running of unit-tests as well as
  for new installations of OpenStack (and this is actually faster than
  running of migration scripts). ... and if we have strong objections
  against doing metadata.create_all(), we can always use migration
  scripts for both new installations and upgrades for all DB backends,
  except SQLite.
 
  Thanks,
  Roman
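The schema-sync test described above could look roughly like this — a self-contained sketch where the "migrations" side is faked with raw DDL (real code would run the alembic scripts), with an invented `jobs` table:

```python
import sqlalchemy as sa

# The models-side schema definition (invented example table).
metadata = sa.MetaData()
sa.Table("jobs", metadata,
         sa.Column("id", sa.Integer, primary_key=True),
         sa.Column("name", sa.String(80)))


def reflect(engine):
    """table name -> ordered list of column names, via reflection."""
    insp = sa.inspect(engine)
    return {t: [c["name"] for c in insp.get_columns(t)]
            for t in insp.get_table_names()}


def schema_from_models():
    engine = sa.create_engine("sqlite://")
    metadata.create_all(engine)
    return reflect(engine)


def schema_from_migrations():
    engine = sa.create_engine("sqlite://")
    with engine.begin() as conn:   # real code would run the alembic scripts
        conn.execute(sa.text(
            "CREATE TABLE jobs ("
            "id INTEGER NOT NULL PRIMARY KEY, name VARCHAR(80))"))
    return reflect(engine)


schema_from_models() == schema_from_migrations()   # True when in sync
```

With such a test in place, using `metadata.create_all()` for fresh installs and unit tests while keeping migration scripts for upgrades stays safe.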
 
  On Sat, Feb 1, 2014 at 12:09 PM, Eugene Nikanorov
  enikano...@mirantis.com wrote:
  Boris,
 
  Sorry for the offtopic.
  Is switching to model-based schema generation is something decided? I
 see
  the opposite: attempts to drop schema generation based on models in
 favor of
  migrations.
  Can you point to some discussion threads?
 
  Thanks,
  Eugene.
 
 
 
  On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic 
 bpavlo...@mirantis.com
  wrote:
 
  Jay,
 
  Yep we shouldn't use migrations for sqlite at all.
 
   The major issue that we have now is that we are not able to ensure
   that the DB schema created by migrations & models is the same (actually
   they are not the same).
 
   So before dropping support of migrations for sqlite & switching to
   model-based schema creation, we should add tests that will check that
   models & migrations are synced.
  (we are working on this)
 
 
 
  Best regards,
  Boris Pavlovic
 
 
  On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev 
 alaza...@mirantis.com
  wrote:
 
  Trevor,
 
  Such check could be useful on alembic side too. Good opportunity for
  contribution.
 
  Andrew.
 
 
  On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay tmc...@redhat.com
 wrote:
 
  Okay,  I can accept that migrations shouldn't be supported on
 sqlite.
 
  However, if that's the case then we need to fix up
 savanna-db-manage so
  that it checks the db connection info and throws a polite error to
 the
  user for attempted migrations on unsupported platforms. For
 example:
 
  Database migrations are not supported for sqlite
 
  Because, as a developer, when I see a sql error trace as the
 result of
  an operation I assume it's broken :)
 
  Best,
 
  Trevor
 
  On Thu, 2014-01-30 at 15:04 -0500, Jay Pipes wrote:
  On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:
  I was playing with 

Re: [openstack-dev] [savanna] Specific job type for streaming mapreduce? (and someday pipes)

2014-02-05 Thread Sergey Lukjanov
I like the dot-separated name. There are several reasons for it:

* it'll not require changes in all Savanna subprojects;
* eventually we'd like to use not only Oozie for EDP (for example, if we'll
support Twitter Storm) and this new tools could require additional
'subtypes'.

Thanks for catching this.
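The wrapper function Trevor mentions could be as small as this — a minimal sketch with illustrative names, not actual Savanna code:

```python
# Minimal sketch of the dot-separated job type idea -- names are
# illustrative, not actual Savanna code.

def split_job_type(job_type):
    """'MapReduce.streaming' -> ('MapReduce', 'streaming');
    'Pig' -> ('Pig', '')."""
    name, _, subtype = job_type.partition(".")
    return name, subtype


def compare_job_type(job_type, *expected):
    """Compare on the base type only, ignoring any subtype."""
    return split_job_type(job_type)[0] in expected


compare_job_type("MapReduce.streaming", "MapReduce")     # True
compare_job_type("MapReduce.streaming", "Pig", "Hive")   # False
```

Because only the comparison helper changes, existing APIs, the client, and the database schema can keep passing the job type around as a plain string.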


On Tue, Feb 4, 2014 at 10:47 PM, Trevor McKay tmc...@redhat.com wrote:

 Thanks Andrew.

  My other thought, which is in between, is to allow dotted types:
  MapReduce.streaming, for example.

 This gives you the subtype flavor but keeps all the APIs the same.
 We just need a wrapper function to separate them when we compare types.

 Best,

 Trevor

 On Mon, 2014-02-03 at 14:57 -0800, Andrew Lazarev wrote:
  I see two points:
  * having Savanna types mapped to Oozie action types is intuitive for
  hadoop users and this is something we would like to keep
  * it is hard to distinguish different kinds of one job type
 
 
  Adding 'subtype' field will solve both problems. Having it optional
  will not break backward compatibility. Adding database migration
  script is also pretty straightforward.
 
 
  Summarizing, my vote is on subtype field.
 
 
  Thanks,
  Andrew.
 
 
  On Mon, Feb 3, 2014 at 2:10 PM, Trevor McKay tmc...@redhat.com
  wrote:
 
  I was trying my best to avoid adding extra job types to
  support
  mapreduce variants like streaming or mapreduce with pipes, but
  it seems
  that adding the types is the simplest solution.
 
  On the API side, Savanna can live without a specific job type
  by
  examining the data in the job record.  Presence/absence of
  certain
  things, or null values, etc, can provide adequate indicators
  to what
  kind of mapreduce it is.  Maybe a little bit subtle.
 
  But for the UI, it seems that explicit knowledge of what the
  job is
  makes things easier and better for the user.  When a user
  creates a
  streaming mapreduce job and the UI is aware of the type later
  on at job
  launch, the user can be prompted to provide the right configs
  (i.e., the
  streaming mapper and reducer values).
 
  The explicit job type also supports validation without having
  to add
  extra flags (which impacts the savanna client, and the JSON,
  etc). For
  example, a streaming mapreduce job does not require any
  specified
  libraries so the fact that it is meant to be a streaming job
  needs to be
  known at job creation time.
 
  So, to that end, I propose that we add a MapReduceStreaming
  job type,
  and probably at some point we will have MapReducePiped too.
  It's
  possible that we might have other job types in the future too
  as the
  feature set grows.
 
  There was an effort to make Savanna job types parallel Oozie
  action
  types, but in this case that's just not possible without
  introducing a
  subtype field in the job record, which leads to a database
  migration
  script and savanna client changes.
 
  What do you think?
 
  Best,
 
  Trevor
 
 
 







-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Undoing a change in the alembic migrations

2014-02-05 Thread Sergey Lukjanov
Just to clarify, new migration scripts should be added. You can find
details on how to do it here -
https://github.com/openstack/savanna/blob/master/savanna/db/migration/alembic_migrations/README


On Thu, Jan 30, 2014 at 8:16 AM, Alexander Ignatov aigna...@mirantis.comwrote:

 Yes, you need to create a new migration script. Btw, we have already started
 doing this.
 The first example was when Jon added 'neutron' param to the
 'job_execution' object:

 https://review.openstack.org/#/c/63517/17/savanna/db/migration/alembic_migrations/versions/002_add_job_exec_extra.py

 Regards,
 Alexander Ignatov



 On 30 Jan 2014, at 02:25, Andrew Lazarev alaza...@mirantis.com wrote:

 +1 on new migration script. Just to be consecutive.

 Andrew.


 On Wed, Jan 29, 2014 at 2:17 PM, Trevor McKay tmc...@redhat.com wrote:

 Hi Sergey,

   In https://review.openstack.org/#/c/69982/1 we are moving the
 'main_class' and 'java_opts' fields for a job execution into the
 job_configs['configs'] dictionary.  This means that 'main_class' and
 'java_opts' don't need to be in the database anymore.

   These fields were just added in the initial version of the migration
 scripts.  The README says that migrations work from icehouse. Since
 this is the initial script, does that mean we can just remove references
 to those fields from the db models and the script, or do we need a new
 migration script (002) to erase them?

 Thanks,

 Trevor


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev









-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] savann-ci, Re: [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-05 Thread Sergey Kolekonov
I'm currently working on moving savanna-ci to MySQL


On Wed, Feb 5, 2014 at 3:53 PM, Sergey Lukjanov slukja...@mirantis.com wrote:

 Agreed, let's move savanna-ci to MySQL to run integration tests
 against a production-like DB.


 On Wed, Feb 5, 2014 at 1:54 AM, Andrew Lazarev alaza...@mirantis.comwrote:

 Since sqlite is not in the list of databases that would be used in
 production, CI should use another DB for testing.

 Andrew.


 On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov 
 aigna...@mirantis.com wrote:

 Indeed. We should create a bug around that and move our savanna-ci to
 mysql.

 Regards,
 Alexander Ignatov



 On 05 Feb 2014, at 01:01, Trevor McKay tmc...@redhat.com wrote:

  This brings up an interesting problem:
 
  In https://review.openstack.org/#/c/70420/ I've added a migration that
  uses a drop column for an upgrade.
 
  But savanna-ci is apparently using a sqlite database to run. So it can't
  possibly pass.
 
  What do we do here?  Shift savanna-ci tests to non-sqlite?
 
  Trevor
 
  On Sat, 2014-02-01 at 18:17 +0200, Roman Podoliaka wrote:
  Hi all,
 
  My two cents.
 
  2) Extend alembic so that op.drop_column() does the right thing
  We could, but should we?
 
  The only reason alembic doesn't support these operations for SQLite
  yet is that SQLite lacks proper support of ALTER statement. For
  sqlalchemy-migrate we've been providing a work-around in the form of
  recreating of a table and copying of all existing rows (which is a
  hack, really).
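The recreate-and-copy work-around can be illustrated in miniature with the stdlib sqlite3 module (the table and column names are just examples). Note that CREATE TABLE ... AS SELECT silently loses constraints and exact column types, which is exactly where the drawbacks listed below come from:

```python
import sqlite3

def drop_column(conn, table, column):
    # SQLite's ALTER TABLE cannot drop a column (before SQLite 3.35), so
    # recreate the table without the column and copy the rows across.
    cols = [row[1] for row in conn.execute("PRAGMA table_info(%s)" % table)
            if row[1] != column]
    conn.executescript("""
        CREATE TABLE {t}_tmp AS SELECT {c} FROM {t};
        DROP TABLE {t};
        ALTER TABLE {t}_tmp RENAME TO {t};
    """.format(t=table, c=", ".join(cols)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job_executions (id INTEGER, main_class TEXT)")
conn.execute("INSERT INTO job_executions VALUES (1, 'WordCount')")
drop_column(conn, "job_executions", "main_class")
cols = [row[1] for row in conn.execute("PRAGMA table_info(job_executions)")]
print(cols)  # ['id']
```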
 
  But to be able to recreate a table, we first must have its definition.
  And we've been relying on SQLAlchemy schema reflection facilities for
  that. Unfortunately, this approach has a few drawbacks:
 
  1) SQLAlchemy versions prior to 0.8.4 don't support reflection of
  unique constraints, which means the recreated table won't have them;
 
  2) special care must be taken in 'edge' cases (e.g. when you want to
  drop a BOOLEAN column, you must also drop the corresponding CHECK (col
  in (0, 1)) constraint manually, or SQLite will raise an error when the
  table is recreated without the column being dropped)
 
  3) special care must be taken for 'custom' type columns (it's got
  better with SQLAlchemy 0.8.x, but e.g. in 0.7.x we had to override
  definitions of reflected BIGINT columns manually for each
  column.drop() call)
 
  4) schema reflection can't be performed when alembic migrations are
  run in 'offline' mode (without connecting to a DB)
  ...
  (probably something else I've forgotten)
 
  So it's totally doable, but, IMO, there is no real benefit in
  supporting running of schema migrations for SQLite.
 
  ...attempts to drop schema generation based on models in favor of
 migrations
 
  As long as we have a test that checks that the DB schema obtained by
  running of migration scripts is equal to the one obtained by calling
  metadata.create_all(), it's perfectly OK to use model definitions to
  generate the initial DB schema for running of unit-tests as well as
  for new installations of OpenStack (and this is actually faster than
  running of migration scripts). ... and if we have strong objections
  against doing metadata.create_all(), we can always use migration
  scripts for both new installations and upgrades for all DB backends,
  except SQLite.
 
  Thanks,
  Roman
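The sync test Roman describes can be sketched in miniature with stdlib sqlite3; in the real projects the two sides would be metadata.create_all() and the alembic scripts, so the statements below are stand-ins:

```python
import sqlite3

# Stand-in for the schema the models would generate via metadata.create_all().
MODELS = ["CREATE TABLE job_executions (id INTEGER, extra TEXT)"]

# Stand-in for the cumulative migration scripts.
MIGRATIONS = [
    "CREATE TABLE job_executions (id INTEGER)",          # 001: initial schema
    "ALTER TABLE job_executions ADD COLUMN extra TEXT",  # 002: add a column
]

def schema(statements):
    conn = sqlite3.connect(":memory:")
    for stmt in statements:
        conn.execute(stmt)
    # Compare column definitions rather than the raw CREATE TABLE text,
    # since ALTER TABLE leaves a textually different (but equivalent) schema.
    return [tuple(row[:3])
            for row in conn.execute("PRAGMA table_info(job_executions)")]

assert schema(MODELS) == schema(MIGRATIONS)
print("models and migrations are in sync")
```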
 
  On Sat, Feb 1, 2014 at 12:09 PM, Eugene Nikanorov
  enikano...@mirantis.com wrote:
  Boris,
 
  Sorry for the offtopic.
   Is switching to model-based schema generation something decided? I see
   the opposite: attempts to drop schema generation based on models in
   favor of migrations.
  Can you point to some discussion threads?
 
  Thanks,
  Eugene.
 
 
 
  On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic 
 bpavlo...@mirantis.com
  wrote:
 
  Jay,
 
  Yep we shouldn't use migrations for sqlite at all.
 
   The major issue that we have now is that we are not able to ensure that the
   DB schemas created by migrations & models are the same (actually they are
   not the same).
 
   So before dropping support of migrations for sqlite & switching to
   model-based schema creation, we should add tests that will check that models &
   migrations are synced.
   (we are working on this)
 
 
 
  Best regards,
  Boris Pavlovic
 
 
  On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev 
 alaza...@mirantis.com
  wrote:
 
  Trevor,
 
   Such a check could be useful on the alembic side too. A good opportunity
   for contribution.
 
  Andrew.
 
 
  On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay tmc...@redhat.com
 wrote:
 
   Okay, I can accept that migrations shouldn't be supported on sqlite.
 
   However, if that's the case then we need to fix up savanna-db-manage so
   that it checks the db connection info and throws a polite error to the
   user for attempted migrations on unsupported platforms. For example:
 
   Database migrations are not supported for sqlite
 
   Because, as a developer, when I see a sql error trace as the result of
   an operation I assume it's broken :)
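A guard of that shape only needs to look at the connection URL before invoking the migrations. A sketch (using Python 3's urllib.parse for brevity; the function name and the way the URL is obtained are assumptions, the real savanna-db-manage would read it from its config):

```python
import sys
from urllib.parse import urlparse

def check_migrations_supported(connection_url):
    # Exit with a polite message instead of letting sqlite raise a raw
    # SQL error from an unsupported ALTER statement mid-migration.
    backend = urlparse(connection_url).scheme.split("+")[0]
    if backend == "sqlite":
        sys.exit("Database migrations are not supported for sqlite")

check_migrations_supported("mysql+pymysql://savanna@localhost/savanna")  # OK
```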
 
  

Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-05 Thread Mark McLoughlin
On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
 Steve Gordon wrote:
  From: Anne Gentle anne.gen...@rackspace.com
  Based on today's Technical Committee meeting and conversations with the
  OpenStack board members, I need to change our Conventions for service names
  at
  https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
  .
 
  Previously we have indicated that Ceilometer could be named OpenStack
  Telemetry and Heat could be named OpenStack Orchestration. That's not the
  case, and we need to change those names.
 
  To quote the TC meeting, ceilometer and heat are "other modules" (second
  sentence from 4.1 in
  http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
  distributed with the Core OpenStack Project.
 
  Here's what I intend to change the wiki page to:
   Here's the list of project and module names and their official names and
  capitalization:
 
  Ceilometer module
  Cinder: OpenStack Block Storage
  Glance: OpenStack Image Service
  Heat module
  Horizon: OpenStack dashboard
  Keystone: OpenStack Identity Service
  Neutron: OpenStack Networking
  Nova: OpenStack Compute
  Swift: OpenStack Object Storage
 
 Small correction. The TC had not indicated that Ceilometer could be
 named OpenStack Telemetry and Heat could be named OpenStack
 Orchestration. We formally asked[1] the board to allow (or disallow)
 that naming (or more precisely, that use of the trademark).
 
 [1]
 https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names
 
 We haven't got a formal and clear answer from the board on that request
 yet. I suspect they are waiting for progress on DefCore before deciding.
 
 If you need an answer *now* (and I suspect you do), it might make sense
 to ask foundation staff/lawyers about using those OpenStack names with
 the current state of the bylaws and trademark usage rules, rather than
 the hypothetical future state under discussion.

Basically, yes - I think having the Foundation confirm that it's
appropriate to use OpenStack Telemetry in the docs is the right thing.

There's an awful lot of confusion about the subject and, ultimately,
it's the Foundation staff who are responsible for enforcing (and giving
advise to people on) the trademark usage rules. I've cc-ed Jonathan so
he knows about this issue.

But FWIW, the TC's request is asking for Ceilometer and Heat to be
allowed to use their Telemetry and Orchestration names in *all* of the
circumstances where e.g. Nova is allowed to use its Compute name.

Reading again this clause in the bylaws:

  The other modules which are part of the OpenStack Project, but
   not the Core OpenStack Project may not be identified using the
   OpenStack trademark except when distributed with the Core OpenStack
   Project.

it could well be said that this case of naming conventions in the docs
for the entire OpenStack Project falls under the "distributed with" case
and it is perfectly fine to refer to OpenStack Telemetry in the docs.
I'd really like to see the Foundation staff give their opinion on this,
though.

Thanks,
Mark.




Re: [openstack-dev] savann-ci, Re: [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-05 Thread Alexei Kornienko

Hi


I'm currently working on moving savanna-ci to MySQL
We are working on the same task in ceilometer, so maybe you could use some of
our patches as a reference:


https://review.openstack.org/#/c/59489/
https://review.openstack.org/#/c/63049/

Regards,
Alexei

On 02/05/2014 02:06 PM, Sergey Kolekonov wrote:

I'm currently working on moving savanna-ci to MySQL


On Wed, Feb 5, 2014 at 3:53 PM, Sergey Lukjanov 
slukja...@mirantis.com wrote:


Agreed, let's move savanna-ci to MySQL to run
integration tests against a production-like DB.


On Wed, Feb 5, 2014 at 1:54 AM, Andrew Lazarev
alaza...@mirantis.com wrote:

Since sqlite is not in the list of databases that would be
used in production, CI should use another DB for testing.

Andrew.


On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov
aigna...@mirantis.com wrote:

Indeed. We should create a bug around that and move our
savanna-ci to mysql.

Regards,
Alexander Ignatov




Re: [openstack-dev] [rally] Proposing changes in Rally core team

2014-02-05 Thread Alexei Kornienko

but IMO no need to rush with Alexei's removal
I'm not working actively on rally at this moment and I'm already far behind
the current code base.

Because of this, I'm OK to step down from core.

Regards,

On 02/05/2014 01:17 PM, Ilya Kharin wrote:

+1 for Hugh


On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick 
sskripn...@mirantis.com wrote:



+1 for Hugh, but IMO no need to rush with Alexei's removal

Hi stackers,

I would like to:

1) Nominate Hugh Saunders to Rally core, he is doing a lot of
good reviews (and always testing patches=) ):
http://stackalytics.com/report/reviews/rally/30

2) Remove Alexei from core team, because unfortunately he is
not able to work on Rally at this moment. Thank you Alexei for
all work that you have done.


Thoughts?


Best regards,
Boris Pavlovic


-- 
Regards,

Sergey Skripnick









Re: [openstack-dev] [rally] Proposing changes in Rally core team

2014-02-05 Thread Boris Pavlovic
Hugh,

welcome to Rally core team!


Best regards,
Boris Pavlovic



On Wed, Feb 5, 2014 at 3:17 PM, Ilya Kharin ikha...@mirantis.com wrote:

 +1 for Hugh


 On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick 
 sskripn...@mirantis.com wrote:


 +1 for Hugh, but IMO no need to rush with Alexei's removal

 Hi stackers,

 I would like to:

 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good
 reviews (and always testing patches=) ):
 http://stackalytics.com/report/reviews/rally/30

 2) Remove Alexei from core team, because unfortunately he is not able to
 work on Rally at this moment. Thank you Alexei for all work that you have
 done.


 Thoughts?


 Best regards,
 Boris Pavlovic


 --
 Regards,
 Sergey Skripnick








Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-05 Thread victor stinner
Hi,

Thierry Carrez wrote:
  The problem is that the asyncio module was written for Python 3.3, whereas
  OpenStack is not fully Python 3 compatible (yet). To ease the transition I
  have ported asyncio to Python 2; it's the new Trollius project, which
  supports Python 2.6-3.4:
  https://bitbucket.org/enovance/trollius
 
  How much code from asyncio did you reuse? How deep was the porting
  effort? Is the port maintainable as asyncio gets more bugfixes over time?

Technically, Trollius is a branch of the Tulip project. I host the repository 
on Bitbucket, whereas Tulip is hosted at code.google.com. I use hg merge to 
retrieve the latest changes from Tulip into Trollius.

Differences between Trollius and Tulip show how much work has been done between 
Python 2.6 and 3.3 :-) Some examples:

- classes must inherit from object in Python 2.6 to be new-style classes 
(it's no longer needed in Python 3),
- {}.format() must be replaced with {0}.format(),
- IOError/OSError exceptions have been reworked and now have specialized 
subclasses in Python 3.3 (I reimplemented them for Python 2.6),
- etc.

But most of the code is still the same between Tulip and Trollius. In my 
opinion, the major difference is that Tulip uses "yield from" whereas Trollius 
uses "yield", which implies subtle differences in the module itself. You may not 
notice them if you use Trollius, but the implementation is a little bit 
different because of that (differences are limited to the asyncio/tasks.py 
file).
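The difference can be sketched with plain generators. Here Return mimics trollius.Return, and fetch stands in for a future (real asyncio futures are driven the same way, via the generator protocol); the function names are hypothetical:

```python
import inspect

class Return(Exception):
    # Mimics trollius.Return: carries the result out of the coroutine,
    # because "return value" inside a generator is a SyntaxError on Python 2.
    def __init__(self, value):
        Exception.__init__(self, value)
        self.value = value

def fetch():
    # Stand-in for a future/awaitable: yields once, then the event loop
    # sends the real result back in.
    result = yield "pending"
    return result

def read_data_tulip():        # Tulip / asyncio style (Python 3.3+ only)
    data = yield from fetch()
    return data

def read_data_trollius():     # Trollius style (also valid on Python 2.6)
    data = yield fetch()
    raise Return(data)

# Driving the Tulip-style coroutine by hand, as an event loop would:
gen = read_data_tulip()
assert next(gen) == "pending"
try:
    gen.send("payload")
except StopIteration as stop:  # "return data" surfaces as StopIteration
    assert stop.value == "payload"
assert inspect.isgeneratorfunction(read_data_trollius)
```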

I'm working actively on Tulip (asyncio). We are fixing last bugs before the 
release of Python 3.4, scheduled for March 16, 2014. So I track changes in 
Tulip and I will port them into Trollius.

  The Trollius API is the same as asyncio's; the main difference is the
  syntax in coroutines: yield from task must be written yield task, and
  return value must be written raise Return(value).
 
 Could we use a helper library (like six) to have the same syntax in Py2
 and Py3 ? Something like from six.asyncio import yield_from,
 return_task and use those functions for py2/py3 compatible syntax ?

You can use Trollius with Python 3 (I tested it on Python 3.2, 3.3 and 3.4), so 
the yield syntax works on both Python 2 and Python 3.

Guido van Rossum proposed using "yield From(future)" in Trollius, so it would 
be easier to port Trollius code (yield) to Tulip (yield from). Since OpenStack 
is not going to drop Python 2 support, I don't think that it's really useful 
(for OpenStack).

Victor



Re: [openstack-dev] [rally] Proposing changes in Rally core team

2014-02-05 Thread Hugh Saunders
Thanks Boris, Sergey, Oleg & Ilya,
Rally can be hard to keep up with (rebase, rebase, rebase, merge) but that
development pace also makes it exciting; each time you run rally, something
will have improved! This morning I was awed by Pierre's atomic actions
patches - great!

Thanks for appointing me as a core team member, I will keep an eye on
reviews and trello, see you all in IRC.

--
Hugh Saunders


On 5 February 2014 12:35, Boris Pavlovic bo...@pavlovic.me wrote:

 Hugh,

 welcome to Rally core team!


 Best regards,
 Boris Pavlovic



 On Wed, Feb 5, 2014 at 3:17 PM, Ilya Kharin ikha...@mirantis.com wrote:

 +1 for Hugh


 On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick sskripn...@mirantis.com
  wrote:


 +1 for Hugh, but IMO no need to rush with Alexei's removal

 Hi stackers,

 I would like to:

 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good
 reviews (and always testing patches=) ):
 http://stackalytics.com/report/reviews/rally/30

 2) Remove Alexei from core team, because unfortunately he is not able to
 work on Rally at this moment. Thank you Alexei for all work that you have
 done.


 Thoughts?


 Best regards,
 Boris Pavlovic


 --
 Regards,
 Sergey Skripnick









Re: [openstack-dev] [rally] Proposing changes in Rally core team

2014-02-05 Thread Pierre Padrixe
Thank you Hugh, and congratulations on your new assignment as core
reviewer, you're doing a great job!

Regards,
Pierre.

2014-02-05 Hugh Saunders h...@wherenow.org:

 Thanks Boris, Sergey, Oleg & Ilya,
 Rally can be hard to keep up with (rebase, rebase, rebase, merge) but that
 development pace also makes it exciting, each time you run rally, something
 will have improved! This morning I was awed by Pierre's atomic actions
 patches - great!

 Thanks for appointing me as a core team member, I will keep an eye on
 reviews and trello, see you all in IRC.

 --
 Hugh Saunders


 On 5 February 2014 12:35, Boris Pavlovic bo...@pavlovic.me wrote:

 Hugh,

 welcome to Rally core team!


 Best regards,
 Boris Pavlovic



 On Wed, Feb 5, 2014 at 3:17 PM, Ilya Kharin ikha...@mirantis.com wrote:

 +1 for Hugh


 On Wed, Feb 5, 2014 at 2:22 PM, Sergey Skripnick 
 sskripn...@mirantis.com wrote:


 +1 for Hugh, but IMO no need to rush with Alexei's removal

 Hi stackers,

 I would like to:

 1) Nominate Hugh Saunders to Rally core, he is doing a lot of good
 reviews (and always testing patches=) ):
 http://stackalytics.com/report/reviews/rally/30

 2) Remove Alexei from core team, because unfortunately he is not able
 to work on Rally at this moment. Thank you Alexei for all work that you
 have done.


 Thoughts?


 Best regards,
 Boris Pavlovic


 --
 Regards,
 Sergey Skripnick












Re: [openstack-dev] [savanna] Specific job type for streaming mapreduce? (and someday pipes)

2014-02-05 Thread Trevor McKay
Okay,

  Thanks. I'll make a draft CR that sets up Savanna for dotted names,
and one that uses dotted names with streaming.

Best,

Trevor
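The dotted-name scheme only needs a small helper when comparing types; a hypothetical sketch (the helper name is made up for illustration):

```python
def split_job_type(job_type):
    # Split a possibly dotted job type like "MapReduce.streaming" into
    # (type, subtype); subtype is None for plain types like "Pig".
    parts = job_type.split(".", 1)
    return parts[0], parts[1] if len(parts) > 1 else None

assert split_job_type("MapReduce.streaming") == ("MapReduce", "streaming")
assert split_job_type("Pig") == ("Pig", None)
```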

On Wed, 2014-02-05 at 15:58 +0400, Sergey Lukjanov wrote:
 I like the dot-separated name. There are several reasons for it:
 
 
 * it'll not require changes in all Savanna subprojects;
 * eventually we'd like to use not only Oozie for EDP (for example, if
 we'll support Twitter Storm) and these new tools could require
 additional 'subtypes'.
 
 
 Thanks for catching this.
 
 
 On Tue, Feb 4, 2014 at 10:47 PM, Trevor McKay tmc...@redhat.com
 wrote:
  Thanks Andrew.
 
  My other thought, which is in between, is to allow dotted types.
  MapReduce.streaming for example.
 
  This gives you the subtype flavor but keeps all the APIs the same.
  We just need a wrapper function to separate them when we compare types.
 
  Best,
 
  Trevor
 
 On Mon, 2014-02-03 at 14:57 -0800, Andrew Lazarev wrote:
   I see two points:
   * having Savanna types mapped to Oozie action types is intuitive for
   hadoop users and this is something we would like to keep
   * it is hard to distinguish different kinds of one job type
 
   Adding 'subtype' field will solve both problems. Having it optional
   will not break backward compatibility. Adding database migration
   script is also pretty straightforward.
 
   Summarizing, my vote is on subtype field.
 
   Thanks,
   Andrew.
 
 
   On Mon, Feb 3, 2014 at 2:10 PM, Trevor McKay tmc...@redhat.com wrote:
 
   I was trying my best to avoid adding extra job types to support
   mapreduce variants like streaming or mapreduce with pipes, but it seems
   that adding the types is the simplest solution.
 
   On the API side, Savanna can live without a specific job type by
   examining the data in the job record. Presence/absence of certain
   things, or null values, etc, can provide adequate indicators of what
   kind of mapreduce it is.  Maybe a little bit subtle.
 
   But for the UI, it seems that explicit knowledge of what the job is
   makes things easier and better for the user.  When a user creates a
   streaming mapreduce job and the UI is aware of the type later on at job
   launch, the user can be prompted to provide the right configs (i.e., the
   streaming mapper and reducer values).
 
   The explicit job type also supports validation without having to add
   extra flags (which impacts the savanna client, and the JSON, etc). For
   example, a streaming mapreduce job does not require any specified
   libraries, so the fact that it is meant to be a streaming job needs to be
   known at job creation time.
 
   So, to that end, I propose that we add a MapReduceStreaming job type,
   and probably at some point we will have MapReducePiped too.  It's
   possible that we might have other job types in the future too as the
   feature set grows.
 
   There was an effort to make Savanna job types parallel Oozie action
   types, but in this case that's just not possible without introducing a
   subtype field in the job record, which leads to a database migration
   script and savanna client changes.
 
   What do you think?
 
   Best,
 
   Trevor
 
 
 
 

Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-05 Thread Andreas Jaeger
On 02/05/2014 01:09 PM, Mark McLoughlin wrote:
 
 Basically, yes - I think having the Foundation confirm that it's
 appropriate to use OpenStack Telemetry in the docs is the right thing.
 
 There's an awful lot of confusion about the subject and, ultimately,
 it's the Foundation staff who are responsible for enforcing (and giving
 advise to people on) the trademark usage rules. I've cc-ed Jonathan so
 he knows about this issue.
 
 But FWIW, the TC's request is asking for Ceilometer and Heat to be
 allowed use their Telemetry and Orchestration names in *all* of the
 circumstances where e.g. Nova is allowed use its Compute name.
 
 Reading again this clause in the bylaws:
 
   The other modules which are part of the OpenStack Project, but
not the Core OpenStack Project may not be identified using the
OpenStack trademark except when distributed with the Core OpenStack
Project.
 
 it could well be said that this case of naming conventions in the docs
 for the entire OpenStack Project falls under the distributed with case
 and it is perfectly fine to refer to OpenStack Telemetry in the docs.
 I'd really like to see the Foundation staff give their opinion on this,
 though.

What Steve is asking, IMO, is whether we have to change "OpenStack
Telemetry" to "Ceilometer module" or whether we can just say "Telemetry"
without the "OpenStack" in front of it.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-05 Thread Henry Gessau
Bob, this is fantastic, I really appreciate all the detail. A couple of
questions ...

On Wed, Feb 05, at 2:16 am, Robert Kukura rkuk...@redhat.com wrote:

 A couple of interrelated issues with the ML2 plugin's port binding have
 been discussed over the past several months in the weekly ML2 meetings.
 These affect drivers being implemented for icehouse, and therefore need
 to be addressed in icehouse:
 
 * MechanismDrivers need detailed information about all binding changes,
 including unbinding on port deletion
 (https://bugs.launchpad.net/neutron/+bug/1276395)
 * MechanismDrivers' bind_port() methods are currently called inside
 transactions, but in some cases need to make remote calls to controllers
 or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
 * Semantics of concurrent port binding need to be defined if binding is
 moved outside the triggering transaction.
 
 I've taken the action of writing up a unified proposal for resolving
 these issues, which follows...
 
 1) An original_bound_segment property will be added to PortContext. When
 the MechanismDriver update_port_precommit() and update_port_postcommit()
 methods are called and a binding previously existed (whether it's being
 torn down or not), this property will provide access to the network
 segment used by the old binding. In these same cases, the portbinding
 extension attributes (such as binding:vif_type) for the old binding will
 be available via the PortContext.original property. It may be helpful to
 also add bound_driver and original_bound_driver properties to
 PortContext that behave similarly to bound_segment and
 original_bound_segment.
 
 2) The MechanismDriver.bind_port() method will no longer be called from
 within a transaction. This will allow drivers to make remote calls on
 controllers or devices from within this method without holding a DB
 transaction open during those calls. Drivers can manage their own
 transactions within bind_port() if needed, but need to be aware that
 these are independent from the transaction that triggered binding, and
 concurrent changes to the port could be occurring.
 
 3) Binding will only occur after the transaction that triggers it has
 been completely processed and committed. That initial transaction will
 unbind the port if necessary. Four cases for the initial transaction are
 possible:
 
 3a) In a port create operation, whether the binding:host_id is supplied
 or not, all drivers' create_port_precommit() methods will be called, the
 initial transaction will be committed, and all drivers'
 create_port_postcommit() methods will be called. The drivers will see
 this as creation of a new unbound port, with PortContext properties as
 shown. If a value for binding:host_id was supplied, binding will occur
 afterwards as described in 4 below.
 
 PortContext.original: None
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: supplied value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None
 
 3b) Similarly, in a port update operation on a previously unbound port,
 all drivers' update_port_precommit() and update_port_postcommit()
 methods will be called, with PortContext properties as shown. If a value
 for binding:host_id was supplied, binding will occur afterwards as
 described in 4 below.
 
 PortContext.original['binding:host_id']: previous value or None
 PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: current value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None
 
 3c) In a port update operation on a previously bound port that does not
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting unchanged binding states as shown.
 
 PortContext.original['binding:host_id']: previous value
 PortContext.original['binding:vif_type']: previous value
 PortContext.original_bound_segment: previous value
 PortContext.original_bound_driver: previous value
 PortContext.current['binding:host_id']: previous value
 PortContext.current['binding:vif_type']: previous value
 PortContext.bound_segment: previous value
 PortContext.bound_driver: previous value
 
 3d) In a port update operation on a previously bound port that does
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting the previously bound and currently unbound binding
 states as shown. If a value for binding:host_id was supplied, binding
 will occur afterwards as described in 4 below.
 
 PortContext.original['binding:host_id']: previous value
 

Re: [openstack-dev] savann-ci, Re: [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-05 Thread Sergey Lukjanov
It's about integration tests that aren't db-specific, so just the
DATABASE/connection setting should be fixed ;)


On Wed, Feb 5, 2014 at 4:33 PM, Alexei Kornienko alexei.kornie...@gmail.com
 wrote:

  Hi


 I'm currently working on moving to MySQL for savanna-ci

 We are working on same task in ceilometer so maybe you could use some of
 our patches as reference:

 https://review.openstack.org/#/c/59489/
 https://review.openstack.org/#/c/63049/

 Regards,
 Alexei


 On 02/05/2014 02:06 PM, Sergey Kolekonov wrote:

 I'm currently working on moving to MySQL for savanna-ci


 On Wed, Feb 5, 2014 at 3:53 PM, Sergey Lukjanov slukja...@mirantis.comwrote:

 Agreed, let's move to MySQL for savanna-ci to run integration
 tests against a production-like DB.


 On Wed, Feb 5, 2014 at 1:54 AM, Andrew Lazarev alaza...@mirantis.comwrote:

 Since sqlite is not in the list of databases that would be used in
 production, CI should use other DB for testing.

  Andrew.


 On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov aigna...@mirantis.com
  wrote:

 Indeed. We should create a bug around that and move our savanna-ci to
 mysql.

 Regards,
 Alexander Ignatov



 On 05 Feb 2014, at 01:01, Trevor McKay tmc...@redhat.com wrote:

  This brings up an interesting problem:
 
  In https://review.openstack.org/#/c/70420/ I've added a migration
 that
  uses a drop column for an upgrade.
 
  But savanna-ci is apparently using a sqlite database to run, so it
  can't possibly pass.
 
  What do we do here?  Shift savanna-ci tests to non sqlite?
 
  Trevor
 
  On Sat, 2014-02-01 at 18:17 +0200, Roman Podoliaka wrote:
  Hi all,
 
  My two cents.
 
  2) Extend alembic so that op.drop_column() does the right thing
  We could, but should we?
 
  The only reason alembic doesn't support these operations for SQLite
  yet is that SQLite lacks proper support of ALTER statement. For
  sqlalchemy-migrate we've been providing a work-around in the form of
  recreating the table and copying all existing rows (which is a
  hack, really).
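The recreate-and-copy workaround can be illustrated with plain SQLite (table and column handling simplified; a real implementation needs the full table definition, which is exactly where the reflection drawbacks listed below come from):

```python
import sqlite3

def drop_column(conn, table, keep_cols):
    """Emulate ALTER TABLE ... DROP COLUMN on SQLite by recreating the
    table with only the surviving columns and copying the rows over."""
    cols = ', '.join(keep_cols)
    cur = conn.cursor()
    cur.execute('ALTER TABLE %s RENAME TO %s_old' % (table, table))
    # Simplified: the new table is created untyped. The real workaround
    # must reproduce the full definition (types, constraints), obtained
    # via schema reflection.
    cur.execute('CREATE TABLE %s (%s)' % (table, cols))
    cur.execute('INSERT INTO %s (%s) SELECT %s FROM %s_old' %
                (table, cols, cols, table))
    cur.execute('DROP TABLE %s_old' % table)
    conn.commit()
```

For example, dropping the 'flag' column from a table with columns (id, name, flag) leaves the existing rows intact with just (id, name).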
 
  But to be able to recreate a table, we first must have its
 definition.
  And we've been relying on SQLAlchemy schema reflection facilities for
  that. Unfortunately, this approach has a few drawbacks:
 
  1) SQLAlchemy versions prior to 0.8.4 don't support reflection of
  unique constraints, which means the recreated table won't have them;
 
  2) special care must be taken in 'edge' cases (e.g. when you want to
  drop a BOOLEAN column, you must also drop the corresponding CHECK
 (col
  in (0, 1)) constraint manually, or SQLite will raise an error when
 the
  table is recreated without the column being dropped)
 
  3) special care must be taken for 'custom' type columns (it's got
  better with SQLAlchemy 0.8.x, but e.g. in 0.7.x we had to override
  definitions of reflected BIGINT columns manually for each
  column.drop() call)
 
  4) schema reflection can't be performed when alembic migrations are
  run in 'offline' mode (without connecting to a DB)
  ...
  (probably something else I've forgotten)
 
  So it's totally doable, but, IMO, there is no real benefit in
  supporting running of schema migrations for SQLite.
 
  ...attempts to drop schema generation based on models in favor of
 migrations
 
  As long as we have a test that checks that the DB schema obtained by
  running of migration scripts is equal to the one obtained by calling
  metadata.create_all(), it's perfectly OK to use model definitions to
  generate the initial DB schema for running of unit-tests as well as
  for new installations of OpenStack (and this is actually faster than
  running of migration scripts). ... and if we have strong objections
  against doing metadata.create_all(), we can always use migration
  scripts for both new installations and upgrades for all DB backends,
  except SQLite.
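The sync test described here reduces to: build one schema by replaying the migration scripts, build another from the model definitions, and diff the results. A minimal illustration using raw SQLite in place of alembic and metadata.create_all():

```python
import sqlite3

def schema_of(ddl_statements):
    """Apply DDL to a fresh in-memory DB, return {table: [column names]}."""
    conn = sqlite3.connect(':memory:')
    for stmt in ddl_statements:
        conn.execute(stmt)
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    return {t: [c[1] for c in conn.execute('PRAGMA table_info(%s)' % t)]
            for t in tables}

# "Migration path": initial schema plus an upgrade adding a column.
migrated = schema_of([
    'CREATE TABLE clusters (id INTEGER)',
    'ALTER TABLE clusters ADD COLUMN name TEXT',
])

# "Model path": what the models would emit via metadata.create_all().
from_models = schema_of(['CREATE TABLE clusters (id INTEGER, name TEXT)'])

# The sync check the thread calls for: both paths must agree.
assert migrated == from_models
```

A real test would also compare types, indexes and constraints, not just column names.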
 
  Thanks,
  Roman
 
  On Sat, Feb 1, 2014 at 12:09 PM, Eugene Nikanorov
  enikano...@mirantis.com wrote:
  Boris,
 
  Sorry for the offtopic.
  Is switching to model-based schema generation something that has been decided?
 I see
  the opposite: attempts to drop schema generation based on models in
 favor of
  migrations.
  Can you point to some discussion threads?
 
  Thanks,
  Eugene.
 
 
 
  On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic 
 bpavlo...@mirantis.com
  wrote:
 
  Jay,
 
  Yep we shouldn't use migrations for sqlite at all.
 
  The major issue that we have now is that we are not able to ensure
  that the DB schema created by migrations & models are the same
  (actually they are not the same).
 
  So before dropping support of migrations for sqlite & switching to
  model-based schema creation, we should add tests that will check that
  models & migrations are synced.
  (we are working on this)
 
 
 
  Best regards,
  Boris Pavlovic
 
 
  On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev 
 alaza...@mirantis.com
  wrote:
 
  Trevor,
 
  Such check could be useful on alembic side too. Good opportunity
 for
  contribution.
 
  Andrew.
 
 
  On Fri, Jan 31, 2014 at 

[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2014-02-05 Thread Tzu-Mainn Chen
Hi,

In parallel to Jarda's updated wireframes, and based on various discussions 
over the past
weeks, here are the updated Tuskar requirements for Icehouse:

https://wiki.openstack.org/wiki/TripleO/TuskarIcehouseRequirements

Any feedback is appreciated.  Thanks!

Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Choosing provisioning engine during cluster launch

2014-02-05 Thread Sergey Lukjanov
It sounds somewhat useful for dev/testing; I don't really think it's
needed, but I'm not -1 on such an addition to the REST API.
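For illustration, the fallback behaviour proposed downthread (a per-cluster 'provisioning_engine' field defaulting to the savanna.conf setting) amounts to very little code. The names here follow the proposal and are not merged code:

```python
# Placeholder engine registry; the real engines are the 'direct' and
# heat-based provisioning engines mentioned in the thread.
ENGINES = {'direct': 'DirectEngine', 'heat': 'HeatEngine'}

def select_engine(cluster, conf_default='direct'):
    """Pick the provisioning engine for a cluster launch.

    The per-cluster 'provisioning_engine' field takes precedence;
    omitted or empty falls back to the savanna.conf default.
    """
    name = getattr(cluster, 'provisioning_engine', None) or conf_default
    if name not in ENGINES:
        raise ValueError('unknown provisioning engine: %s' % name)
    return ENGINES[name]
```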


On Thu, Jan 30, 2014 at 7:52 PM, Trevor McKay tmc...@redhat.com wrote:

 My mistake, it's already there.  I missed the distinction between set on
 startup and set per cluster.

 Trev

 On Thu, 2014-01-30 at 10:50 -0500, Trevor McKay wrote:
  +1
 
  How about an undocumented config?
 
  Trev
 
  On Thu, 2014-01-30 at 09:24 -0500, Matthew Farrellee wrote:
   i imagine this is something that can be useful in a development and
   testing environment, especially during the transition period from
 direct
   to heat. so having the ability is not unreasonable, but i wouldn't
   expose it to users via the dashboard (maybe not even directly in the
 cli)
  
   generally i want to reduce the number of parameters / questions the
 user
   is asked
  
   best,
  
  
   matt
  
   On 01/30/2014 04:42 AM, Dmitry Mescheryakov wrote:
I agree with Andrew. I see no value in letting users select how their
cluster is provisioned, it will only make interface a little bit more
complex.
   
Dmitry
   
   
2014/1/30 Andrew Lazarev alaza...@mirantis.com
mailto:alaza...@mirantis.com
   
Alexander,
   
What is the purpose of exposing this to user side? Both engines
 must
do exactly the same thing and they exist in the same time only
 for
transition period until heat engine is stabilized. I don't see
 any
value in proposed option.
   
Andrew.
   
   
On Wed, Jan 29, 2014 at 8:44 PM, Alexander Ignatov
aigna...@mirantis.com mailto:aigna...@mirantis.com wrote:
   
Today Savanna has two provisioning engines, heat and old one
known as 'direct'.
Users can choose which engine will be used by setting special
parameter in 'savanna.conf'.
   
I have an idea to give an ability for users to define
provisioning engine
not only when savanna is started but when new cluster is
launched. The idea is simple.
We will just add new field 'provisioning_engine' to 'cluster'
and 'cluster_template'
objects. And profit is obvious, users can easily switch from
 one
engine to another without
restarting savanna service. Of course, this parameter can be
omitted and the default value
from the 'savanna.conf' will be applied.
   
Is this viable? What do you think?
   
Regards,
Alexander Ignatov
   
   
   
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
   
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
   
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
   
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
   
   
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Andrew Laski

On 02/05/14 at 03:30am, Vishvananda Ishaya wrote:


On Feb 5, 2014, at 2:38 AM, Florent Flament florent.flament-...@cloudwatt.com 
wrote:


Hi Vish,

Your approach looks very interesting. I especially like the idea of 'walking
the tree of parent projects, to construct the set of roles'.

Here are some issues that came to my mind:


Regarding policy rules enforcement:

Considering the following projects:
* orga
* orga.projecta
* orga.projectb

Let's assume that Joe has the following roles:
* `Member` of `orga`
* `admin` of `orga.projectb`

Now Joe wishes to launch a VM on `orga.projecta` and grant a role to some user 
on `orga.projectb` (which rights he has). He would like to be able to do all of 
this with the same token (scoped on project `orga`?).

For this scenario to be working, we would need to be able to store multiple 
roles (a tree of roles?) in the token, so that services would know which role 
is granted to the user on which project.

As a first step, I guess we could stay with roles scoped to a unique
project. Joe would be able to do what he wants, by getting a first token on
`orga` or `orga.projecta` with a `Member` role, then a second token on 
`orga.projectb` with the `admin` role.


This is a good point; having different roles on different levels of the hierarchy
does lead to having to reauthenticate for certain actions. Keystone could pass the
scope along with each role instead of a single global scope. The policy check could
then be modified to match the role prefix against the scope of the role, so
policy like:

"remove_user_from_project": "role:project_admin and scope_prefix:project_id"

This starts to get complex and unwieldy however because a single token allows 
you to do anything and everything based on your roles. I think we need a 
healthy balance between ease of use and the principle of least privilege, so we 
might be best to stick to a single scope for each token and force a 
reauthentication to do adminy stuff in projectb.
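The scope-prefix matching sketched above boils down to: a role counts for a target project if the scope it was granted at equals the target or is an ancestor of it. A toy check, with the dotted-name hierarchy and the token's role layout as assumptions:

```python
def has_role(token_roles, required_role, target_project):
    """token_roles: list of (role, scope) pairs carried in the token.

    A role granted at 'orga' applies to 'orga.projecta' because the
    grant scope is a prefix of the target, per the proposed matching.
    """
    for role, scope in token_roles:
        if role != required_role:
            continue
        if (target_project == scope or
                target_project.startswith(scope + '.')):
            return True
    return False
```

Note the delimiter-aware prefix test (scope + '.'), which avoids the collision problem raised elsewhere in the thread where 'orga' would otherwise match 'orgabc'.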




Considering quotas enforcement:

Let's say we want to set the following limits:

* `orga` : max 10 VMs
* `orga.projecta` : max 8 VMs
* `orga.projectb` : max 8 VMs

The idea would be that the `admin` of `orga` wishes to allow 8 VMs to projects
`orga.projecta` or `orga.projectb`, but doesn't care how these VMs are spread.
Although he wishes to keep 2 VMs in `orga` for himself.


This seems like a bit of a stretch as a use case. Sharing a set of quotas 
across two projects seems strange and if we did have arbitrary nesting you 
could do the same by sticking a dummy project in between

orga: max 10
orga.dummy: max 8
orga.dummy.projecta: no max
orga.dummy.projectb: no max
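Enforcing nested limits like these means charging usage at every ancestor and refusing if any level would exceed its cap. A toy sketch (dotted names model the hierarchy; absence of a limit means no max, as in the example):

```python
def ancestors(project):
    """Yield a project and each of its ancestors, leaf to root:
    'orga.dummy.projecta' -> orga.dummy.projecta, orga.dummy, orga."""
    parts = project.split('.')
    for i in range(len(parts), 0, -1):
        yield '.'.join(parts[:i])

def can_launch(project, n, limits, usage):
    """True if launching n VMs stays under every ancestor's max."""
    for node in ancestors(project):
        limit = limits.get(node)  # missing entry = no max
        if limit is not None and usage.get(node, 0) + n > limit:
            return False
    return True

def launch(project, n, limits, usage):
    """Charge n VMs against the project and all of its ancestors."""
    if not can_launch(project, n, limits, usage):
        raise RuntimeError('quota exceeded for %s' % project)
    for node in ancestors(project):
        usage[node] = usage.get(node, 0) + n
```

With limits {'orga': 10, 'orga.dummy': 8}, launching 8 VMs in orga.dummy.projecta exhausts the shared dummy cap (so projectb gets nothing) while still leaving 2 VMs for orga itself, matching the scenario above.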


Then to be able to enforce these quotas, Nova (and all other services) would 
have to keep track of the tree of quotas, and update the appropriate nodes.


By the way, I'm wondering if it wouldn't be DRYer to centralize the RBAC and 
Quotas logic in a unique service (Keystone?). Openstack services (Nova, Cinder, 
...) would just have to ask this centralized access management service whether 
an action is authorized for a given token?


So I threw out the idea the other day that quota enforcement should perhaps be 
done by gantt. Quotas seem to be a scheduling concern more than anything else.


I don't want to take this thread off topic, but I would argue against 
this.  I don't want a request for a place to put an instance or volume 
to mean that an instance or volume has been created with regards to 
quotas.





Florent Flament



- Original Message -
From: Vishvananda Ishaya vishvana...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, February 3, 2014 10:58:28 PM
Subject: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy   
Discussion

Hello Again!

At the meeting last week we discussed some options around getting true 
multitenancy in nova. The use case that we are trying to support can be 
described as follows:

Martha, the owner of ProductionIT provides it services to multiple Enterprise 
clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam at 
SuperDevShop. Joe is a Development Manager for WidgetMaster and he has multiple QA and 
Development teams with many users. Joe needs the ability create users, projects, and 
quotas, as well as the ability to list and delete resources across WidgetMaster. Martha 
needs to be able to set the quotas for both WidgetMaster and SuperDevShop; manage users, 
projects, and objects across the entire system; and set quotas for the client companies 
as a whole. She also needs to ensure that Joe can't see or mess with anything owned by 
Sam.

As per the plan I outlined in the meeting I have implemented a Proof-of-Concept 
that would allow me to see what changes were required in nova to get scoped 
tenancy working. I used a simple approach of faking out hierarchy by prepending 
the id of the larger scope to 

Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-05 Thread Dan Smith
 We don't have to add a new notification, but we have to add some
 new data in the nova notifications. At least for the delete
 instance notification, to remove the ceilometer nova notifier.
 
 A while ago, I registered a blueprint that explains which
 data is missing in the current nova notifications:
 
 https://blueprints.launchpad.net/nova/+spec/usage-data-in-notification

 
https://wiki.openstack.org/wiki/Ceilometer/blueprints/remove-ceilometer-nova-notifier

This seems like a much better way to do this.

I'm not opposed to a nova plugin, but if it's something that lives
outside the nova tree, I think there's going to be a problem of
constantly chasing internal API changes. IMHO, a plugin should live
(and be tested) in the nova tree and provide/consume a stableish API
to/from Ceilometer.

So, it seems like we've got the following options:

1. Provide the required additional data in our notifications to avoid
   the need for a plugin to hook into nova internals.
2. Continue to use a plugin in nova to scrape the additional data
   needed during certain events, but hopefully in a way that ties the
   plugin to the internal APIs in a maintainable way.

Is that right?

Personally, I think #1 is far superior to #2.

--Dan



Re: [openstack-dev] olso.config error on running Devstack

2014-02-05 Thread Doug Hellmann
On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2014-01-08 12:14, Doug Hellmann wrote:




 On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec openst...@nemebean.com wrote:

 On 2014-01-08 11:16, Sean Dague wrote:

 On 01/08/2014 12:06 PM, Doug Hellmann wrote:
 snip

 Yeah, that's what made me start thinking oslo.sphinx should be called
 something else.

 Sean, how strongly do you feel about not installing oslo.sphinx in
 devstack? I see your point, I'm just looking for alternatives to the
 hassle of renaming oslo.sphinx.


 Doing the git thing is definitely not the right thing. But I guess I got
 lost somewhere along the way about what the actual problem is. Can
 someone write that up concisely? With all the things that have been
 tried/failed, why certain things fail, etc.

  The problem seems to be when we pip install -e oslo.config on the
 system, then pip install oslo.sphinx in a venv.  oslo.config is unavailable
 in the venv, apparently because the namespace package for o.s causes the
 egg-link for o.c to be ignored.  Pretty much every other combination I've
 tried (regular pip install of both, or pip install -e of both, regardless
 of where they are) works fine, but there seem to be other issues with all
 of the other options we've explored so far.

 We can't remove the pip install -e of oslo.config because it has to be
 used for gating, and we can't pip install -e oslo.sphinx because it's not a
 runtime dep so it doesn't belong in the gate.  Changing the toplevel
 package for oslo.sphinx was also mentioned, but has obvious drawbacks too.

 I think that about covers what I know so far.


  Here's a link dstufft provided to the pip bug tracking this problem:
 https://github.com/pypa/pip/issues/3

 Doug

   This just bit me again trying to run unit tests against a fresh Nova
 tree. I don't think it's just me either - Matt Riedemann said he has
 been disabling site-packages in tox.ini for local tox runs.  We really need
 to do _something_ about this, even if it's just disabling site-packages by
 default in tox.ini for the affected projects.  A different option would be
 nice, but based on our previous discussion I'm not sure we're going to find
 one.

 Thoughts?


Is the problem isolated to oslo.sphinx? That is, do we end up with any
configurations where we have 2 oslo libraries installed in different modes
(development and regular) where one of those 2 libraries is not
oslo.sphinx? Because if the issue is really just oslo.sphinx, we can rename
that to move it out of the namespace package.

Doug




 -Ben




[openstack-dev] update an instance IP address in openstack

2014-02-05 Thread Abdul Hannan Kanji
I am writing a virtualization driver on my own, and I need to change the
instance public IP address in the code. Is there any way I can go about it?
Also, how do I use the nova db package and add a column to the
nova instance table? Any help is highly appreciated.

Regards,

Abdul Hannan Kanji


Re: [openstack-dev] The simplified blueprint for PCI extra attributes and SR-IOV NIC blueprint

2014-02-05 Thread Robert Li (baoli)
Hi John and all,

Yunhong's email mentioned about the SR-IOV NIC support BP:
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov

I'd appreciate your consideration of the approval of both BPs so that we
can have SR-IOV NIC support in Icehouse.

Thanks,
Robert


On 2/4/14 1:36 AM, Jiang, Yunhong yunhong.ji...@intel.com wrote:

Hi, John and all,
   I updated the blueprint
https://blueprints.launchpad.net/nova/+spec/pci-extra-info-icehouse
according to your feedback, to add the backward compatibility/upgrade
issue/examples.

   I tried to separate this BP from the SR-IOV NIC support as a standalone
enhancement, because this requirement is more of a generic PCI pass-through
feature, and will benefit other usage scenarios as well.

   And the reasons that I want to finish this BP in I release are:

   a) it's a generic requirement, and pushing it into the I release is helpful
to other scenarios.
   b) I don't see upgrade issue, and the only thing will be discarded in
future is the PCI alias if we all agree to use PCI flavor. But that
effort will be small and there is no conclusion to PCI flavor yet.
   c) SR-IOV NIC support is complex, it will be really helpful if we can
keep ball rolling and push the all-agreed items forward.

   Considering the big patch list for the I-3 release, I'm not optimistic about
merging this in the I release, but as said, we should keep the ball rolling and
move forward.

Thanks
--jyh





Re: [openstack-dev] about the bp cpu-entitlement

2014-02-05 Thread Oshrit Feder

Hi Sahid, 

Thank you for your interest in the cpu entitlement feature. As Paul 
mentioned, we are joining the extensible resource effort and will 
integrate it on top of it. Will be glad to keep you updated on the 
progress and will not hesitate to contact you for an extra hand.

Oshrit


-Original Message-
From: Murray, Paul (HP Cloud Services) 
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: about the bp cpu-entitlement

Hi Sahid,

This is being done by Oshrit Feder, so I'll let her answer, but I know 
that it is going to be implemented as an extensible resource (see: 
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking) 
so it is waiting for that to be done. That blueprint is making good 
progress now and it should have more patches up this week. There is 
another resource example nearly done for network entitlement (see: 
https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement) 


Paul.

-Original Message-
From: sahid [mailto:sahid.ferdja...@cloudwatt.com] 
Sent: 04 February 2014 09:24
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] about the bp cpu-entitlement

Greetings,

  I saw a really interesting blueprint about cpu entitlement; it will be 
targeted for icehouse-3 and I would like to get some details about the 
progress. Does the developer need help? I can give part of my time to 
it.

https://blueprints.launchpad.net/nova/+spec/cpu-entitlement

Thanks a lot,
s.




Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-05 Thread Doug Hellmann
On Tue, Feb 4, 2014 at 6:39 PM, Joe Gordon joe.gord...@gmail.com wrote:

 On Tue, Feb 4, 2014 at 8:19 AM, Sean Dague s...@dague.net wrote:
  On 02/05/2014 12:37 AM, Mark McLoughlin wrote:
  On Mon, 2014-01-13 at 16:49 +, Sahid Ferdjaoui wrote:
  Hello all,
 
  It looks like 100% of the pep8 gate for nova is failing because of a
 reported bug; we probably need to mark this as Critical.
  we probably need to mark this as Critical.
 
 https://bugs.launchpad.net/nova/+bug/1268614
 
  Ivan Melnikov has pushed a patchset waiting for review:
 https://review.openstack.org/#/c/66346/
 
 
 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6IEludm9jYXRpb25FcnJvcjogXFwnL2hvbWUvamVua2lucy93b3Jrc3BhY2UvZ2F0ZS1ub3ZhLXBlcDgvdG9vbHMvY29uZmlnL2NoZWNrX3VwdG9kYXRlLnNoXFwnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4OTYzMTQzMzQ4OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
 
  This just came up on #openstack-infra ...
 
  It's a general problem that is going to occur more frequently.
 
  Nova now includes config options from keystoneclient and oslo.messaging
  in its sample config file.
 
  That means that as soon as a new option is added to either library, then
  check_uptodate.sh will start failing.
 
  One option discussed is to remove the sample config files from source
  control and have the sample be generated at build/packaging time.
 
  So long as we minimize the dependencies required to generate the sample
  file, this should be manageable.
 
  The one big drawback here is that today you can point people to a git
  url, and they will then have a sample config file for Nova (or Tempest
  or whatever you are pointing them at). If this is removed, then we'll
  need / want some other way to make those samples easily available on the
  web, not only at release time.

 +1, to the idea of removing this auto-generated file from the repo.

 How about publishing these as part of the docs, we can put them in the
 dev docs, so the nova options get published at:

 http://docs.openstack.org/developer/nova/

 etc, or we can make sure the main docs are always updated etc.


I just talked with Anne, and she said the doc build now includes a
Configuration Reference which is extracting the options and building nicely
formatted tables. Given that, I don't think it adds much to include the
config files as well.

Including the config file in either the developer documentation or the
packaging build makes more sense. I'm still worried that adding it to the
sdist generation means you would have to have a lot of tools installed just
to make the sdist. However, we could include a script with each app that
will generate the sample file for that app. Anyone installing from source
could run it to build their own file, and the distro packagers could run it
as part of their build and include the output in their package.

Doug




 
  On a related point, It's slightly bothered me that we're allow libraries
  to define stanzas in our config files. It seems like a leaky abstraction
  that's only going to get worse over time as we graduate more of oslo,
  and the coupling gets even worse.
 
  Has anyone considered if it's possible to stop doing that, and have the
  libraries only provide an object model that takes args and instead leave
  config declaration to the instantiation points for those objects?
  Because having a nova.conf file that's 30% options coming from
  underlying libraries that are not really controlable in nova seems like
  a recipe for a headache. We already have a bunch of that issue today
  with changing 3rd party logging libraries in oslo, that mostly means to
  do that in nova we first just go and change the incubator, then sync the
  changes back.
 
  I do realize this would be a rather substantial shift from current
  approach, but current approach seems to be hitting a new complexity
  point that we're only just starting to feel the pain on.
 
  -Sean
 
  --
  Sean Dague
  Samsung Research America
  s...@dague.net / sean.da...@samsung.com
  http://dague.net
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Jay Dobies

First, I don't think RollingUpdatePattern and CanaryUpdatePattern should be 2 
different entities. The second just looks like a parametrization of the first 
(growth_factor=1?).


Perhaps they can just be one. Until I find parameters which would need
to mean something different, I'll just use UpdatePattern.


I wondered about this too. Maybe I'm just not as familiar with the 
terminology, but since we're stopping on all failures both function as a 
canary in testing the waters before doing the update. The only 
difference is the potential for acceleration.
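Treating CanaryUpdatePattern as RollingUpdatePattern with growth_factor=1 can be made concrete: the update is just a geometric ramp of batch sizes, stopping on the first failure either way. A toy sketch; the parameter names echo the spec discussion and are not Heat code:

```python
def update_batches(total, initial=1, growth_factor=1, max_batch=None):
    """Yield how many members to update in each step.

    growth_factor=1 gives the cautious canary behaviour (same-size
    batches throughout); growth_factor>1 accelerates after each
    successful batch, up to an optional max_batch cap.
    """
    done = 0
    size = initial
    while done < total:
        batch = min(size, total - done)
        if max_batch is not None:
            batch = min(batch, max_batch)
        yield batch
        done += batch
        size = int(size * growth_factor) or 1

# Accelerating update of 10 members: 1, then 2, then 4, then the rest.
assert list(update_batches(10, growth_factor=2)) == [1, 2, 4, 3]
# growth_factor=1 degenerates to one member at a time.
assert list(update_batches(4)) == [1, 1, 1, 1]
```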


As for an example of an entirely different strategy, what about the idea 
of standing up new instances with the updates and then killing off the 
old ones? It may come down to me not fully understanding the scale of 
when you say updating configuration, but it may be desirable to not 
scale down your capacity while the update is executing and instead 
having a quick changeover (for instance, in the floating IPs or a load 
balancer).



I then feel that using (abusing?) depends_on for update pattern is a bit weird. 
Maybe I'm influenced by the CFN design, but the separate UpdatePolicy attribute 
feels better (although I would probably use a property). I guess my main 
question is around the meaning of using the update pattern on a server 
instance. I think I see what you want to do for the group, where child_updating 
would return a number, but I have no idea what it means for a single resource. 
Could you detail the operation a bit more in the document?



I would be OK with adding another keyword. The idea in abusing depends_on
is that it changes the core language less. Properties are definitely out
for the reasons Christopher brought up; properties are really meant to
be for the resource's end target only.


I think depends_on would be a clever use of the existing language if we 
weren't in a position to influence its evolution. A resource's update 
policy is a first-class concept IMO, so adding that notion directly into 
the definition feels cleaner.


[snip]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-02-05 Thread Jaromir Coufal

Hi Steve!

I would say that we have finally got to a reasonably stable state of the 
wireframes [0]. I am sure slight changes will appear (as they always 
do), but there shouldn't be a bigger change of direction. We are 
looking forward to seeing some high-fidelity mockups if you are willing 
to help in this area.


Thanks for your interest
-- Jarda

[0] 
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-02-05_tripleo-ui-icehouse.pdf


On 2014/17/01 01:16, Steve Doll wrote:

Looking good, let me know if I can be of help to make some high-fidelity
mockups.


On Thu, Jan 16, 2014 at 6:30 AM, Jay Dobies jason.dob...@redhat.com
mailto:jason.dob...@redhat.com wrote:

This is a really good evolution. I'm glad the wireframes are getting
closer to what we're doing for Icehouse.

A few notes...

On page 6, what does the Provisioning Status chart reflect? The math
doesn't add up if that's supposed to reflect the free v. deployed.
That might just be a sample data thing, but the term Provisioning
Status makes it sound like this could be tracking some sort of
ongoing provisioning operation.

What's the distinction between the config shown on the first
deployment page and the ones under more options? Is the idea that
the ones on the overview page must be specified before the first
deployment but the rest can be left to the defaults?

The Roles (Resource Category) subtab disappeared but the edit role
dialog is still there. How do you get to it?

Super happy to see the progress stuff represented. I think it's a
good first step towards handling the long-running changes.

I like the addition of the Undeploy button, but since it's largely a
dev utility it feels a bit weird being so prominent. Perhaps
consider moving it under scale deployment; it's a variation of
scaling, just scaling back to nothing  :)

You locked the controller count to 1 (good call for Icehouse) but
still have incrementers on the scale page. That should also be
disabled and hardcoded to 1, right?




On 01/16/2014 08:41 AM, Hugh O. Brock wrote:

On Thu, Jan 16, 2014 at 01:50:00AM +0100, Jaromir Coufal wrote:

Hi folks,

thanks everybody for feedback. Based on that I updated
wireframes
and tried to provide a minimum scope for Icehouse timeframe.


http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-16_tripleo-ui-icehouse.pdf

Hopefully we are able to deliver the described set of features. But if
you find something missing that is critical for the first release (or
that we are implementing a feature which should not have such high
priority), please speak up now.

The wireframes are very close to implementation. In time, more views
will appear and we will see if we can get them in as well.

Thanks all for participation
-- Jarda


These look great Jarda, I feel like things are coming together here.

--Hugh






--

*Steve Doll*
Art Director, Mirantis Inc.
sd...@mirantis.com mailto:sd...@mirantis.com
Mobile: +1-408-893-0525
Skype: sdoll-mirantis




Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Steven Dake

On 02/04/2014 06:34 PM, Robert Collins wrote:

On 5 February 2014 13:14, Zane Bitter zbit...@redhat.com wrote:



That's not a great example, because one DB server depends on the other,
forcing them into updating serially anyway.

I have to say that even in general, this whole idea about applying update
policies to non-grouped resources doesn't make a whole lot of sense to me.
For non-grouped resources you control the resource definitions individually
- if you don't want them to update at a particular time, you have the option
of just not updating them.

Well, I don't particularly like the idea of doing thousands of
discrete heat stack-update calls, which would seem to be what you're
proposing.

On groups: autoscale groups are a problem for secure minded
deployments because every server has identical resources (today) and
we very much want discrete credentials per server - at least this is
my understanding of the reason we're not using scaling groups in
TripleO.


Where you _do_ need it is for scaling groups where every server is based on
the same launch config, so you need a way to control the members
individually - by batching up operations (done), adding delays (done) or,
even better, notifications and callbacks.

So it seems like doing 'rolling' updates for any random subset of resources
is effectively turning Heat into something of a poor-man's workflow service,
and IMHO that is probably a mistake.

I meant to reply to the other thread, but here is just as good :) -
heat as a way to describe the intended state, with heat taking care of
transitions, is a brilliant model. It absolutely implies a bunch of
workflows - the AWS update policy is probably the key example.

Being able to gracefully, *automatically* work through a transition
between two defined states, allowing the nodes in question to take
care of their own needs along the way, seems like a pretty core
function to fit inside Heat itself. It's not at all the same as 'allow
users to define arbitrary workflows'.

-Rob

Rob,

I'm not precisely certain what you're proposing, but I think we need to 
take care not to turn the Heat DSL into a full-fledged programming 
language.  IMO thousands of updates done through Heat is a perfect way 
for a third party service to do such things - eg control workflow.  
Clearly there is a workflow gap in OpenStack, and possibly that thing 
doing the thousands of updates should be a workflow service, rather than 
TripleO, but workflow is out of scope for Heat proper.  Such a workflow 
service could potentially fit in the Orchestration program alongside 
Heat and Autoscaling.  It is too bad there isn't a workflow service 
already, because we are getting a lot of pressure to make Heat fill this 
gap.  I personally believe filling this gap with Heat would be a mistake 
and the correct course of action would be for a workflow service to 
emerge to fill this need (and depend on Heat for orchestration).


I believe this may be what Zane is reacting to; I believe the Heat 
community would like to avoid making the DSL more programmable because 
then it is harder to use and support.  The parameters, resources, and 
outputs DSL objects are difficult enough for new folks to pick up, and 
it's only 3 things to understand...


Regards
-steve




What we do need for all resources (not just scaling groups) is a way for the
user to say for this particular resource, notify me when it has updated
(but, if possible, before we have taken any destructive actions on it), give
me a chance to test it and accept or reject the update. For example, when
you resize a server, give the user a chance to confirm or reject the change
at the VERIFY_RESIZE step (Trove requires this). Or when you replace a
server during an update, give the user a chance to test the new server and
either keep it (continue on and delete the old one) or not (roll back). Or
when you replace a server in a scaling group, notify the load balancer _or
some other thing_ (e.g. OpenShift broker node) that a replacement has been
created and wait for it to switch over to the new one before deleting the
old one. Or, of course, when you update a server to some new config, give
the user a chance to test it out and make sure it works before continuing
with the stack update. All of these use cases can, I think, be solved with a
single feature.
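As a purely hypothetical sketch of what that single feature might look like from Heat's side - all names are invented here, and the notification transport (a Marconi queue, stack metadata, etc.) is abstracted behind a callable:

```python
import time


class UpdateRejected(Exception):
    """Raised when the user rejects the update (or never confirms it)."""


def await_confirmation(check_status, timeout=600, poll_interval=5):
    """Pause a stack update until the user accepts or rejects a resource.

    `check_status` returns 'pending', 'accepted', or 'rejected' --
    however the deployment chooses to surface the user's decision.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = check_status()
        if status == 'accepted':
            return  # continue the update (e.g. delete the old server)
        if status == 'rejected':
            raise UpdateRejected('user rejected the update')  # roll back
        time.sleep(poll_interval)
    raise UpdateRejected('timed out waiting for confirmation')
```

The same hook would serve the VERIFY_RESIZE, server-replacement, and load-balancer switchover cases: the engine parks at the checkpoint, and the caller decides whether to proceed or roll back.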

The open questions for me are:
1) How do we notify the user that it's time to check on a resource?
(Marconi?)

This is the graceful update stuff I referred to in my mail to Clint -
the proposal from hallway discussions in HK was to do this by
notifying the server itself (that way we don't create a centralised
point of failure). I can see though that in a general sense not all
resources are servers. But - how about allowing users to specify where
to notify (notifying always means setting a value in metadata
somewhere) - users can then pull that out themselves however they want
to. Adding push notifications is orthogonal IMO - we'd like that for
all metadata changes, 

Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-05 Thread Mathieu Rohon
Hi,

thanks for this great proposal


On Wed, Feb 5, 2014 at 3:10 PM, Henry Gessau ges...@cisco.com wrote:
 Bob, this is fantastic, I really appreciate all the detail. A couple of
 questions ...

 On Wed, Feb 05, at 2:16 am, Robert Kukura rkuk...@redhat.com wrote:

 A couple of interrelated issues with the ML2 plugin's port binding have
 been discussed over the past several months in the weekly ML2 meetings.
 These affect drivers being implemented for icehouse, and therefore need
 to be addressed in icehouse:

 * MechanismDrivers need detailed information about all binding changes,
 including unbinding on port deletion
 (https://bugs.launchpad.net/neutron/+bug/1276395)
 * MechanismDrivers' bind_port() methods are currently called inside
 transactions, but in some cases need to make remote calls to controllers
 or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
 * Semantics of concurrent port binding need to be defined if binding is
 moved outside the triggering transaction.

 I've taken the action of writing up a unified proposal for resolving
 these issues, which follows...

 1) An original_bound_segment property will be added to PortContext. When
 the MechanismDriver update_port_precommit() and update_port_postcommit()
 methods are called and a binding previously existed (whether it's being
 torn down or not), this property will provide access to the network
 segment used by the old binding. In these same cases, the portbinding
 extension attributes (such as binding:vif_type) for the old binding will
 be available via the PortContext.original property. It may be helpful to
 also add bound_driver and original_bound_driver properties to
 PortContext that behave similarly to bound_segment and
 original_bound_segment.

 2) The MechanismDriver.bind_port() method will no longer be called from
 within a transaction. This will allow drivers to make remote calls on
 controllers or devices from within this method without holding a DB
 transaction open during those calls. Drivers can manage their own
 transactions within bind_port() if needed, but need to be aware that
 these are independent from the transaction that triggered binding, and
 concurrent changes to the port could be occurring.
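To make items 1 and 2 concrete, here is a rough, non-authoritative sketch of a mechanism driver written against the proposal; the PortContext and the remote controller are stubbed out for illustration, and only the method names follow the proposal:

```python
class FakeController:
    """Stand-in for a remote controller/device API (hypothetical)."""
    def setup_port(self, port_id, segment):
        return segment.get('network_type') == 'vlan'


class FakePortContext:
    """Minimal stub of the PortContext described in the proposal."""
    def __init__(self, port, segments):
        self.current = port
        self.original = None
        self.network_segments = segments
        self.bound_segment = None
        self.original_bound_segment = None
        self.vif_type = None

    def set_binding(self, segment, vif_type):
        self.bound_segment = segment
        self.vif_type = vif_type


class SketchMechanismDriver:
    def bind_port(self, context):
        # Per item 2: called *outside* the triggering transaction, so a
        # slow remote call here no longer holds a DB transaction open.
        controller = FakeController()
        for segment in context.network_segments:
            if controller.setup_port(context.current['id'], segment):
                context.set_binding(segment, vif_type='ovs')
                return

    def update_port_postcommit(self, context):
        # Per item 1: the old binding is visible so it can be torn down.
        old = context.original_bound_segment
        if old is not None and old != context.bound_segment:
            pass  # remote teardown of the old segment would go here
```

The point of the sketch is only the control flow: DB work stays in precommit, while bind_port() and postcommit are free to talk to controllers.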

 3) Binding will only occur after the transaction that triggers it has
 been completely processed and committed. That initial transaction will
 unbind the port if necessary. Four cases for the initial transaction are
 possible:

 3a) In a port create operation, whether the binding:host_id is supplied
 or not, all drivers' port_create_precommit() methods will be called, the
 initial transaction will be committed, and all drivers'
 port_create_postcommit() methods will be called. The drivers will see
 this as creation of a new unbound port, with PortContext properties as
 shown. If a value for binding:host_id was supplied, binding will occur
 afterwards as described in 4 below.

 PortContext.original: None
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: supplied value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None

 3b) Similarly, in a port update operation on a previously unbound port,
 all drivers' port_update_precommit() and port_update_postcommit()
 methods will be called, with PortContext properties as shown. If a value
 for binding:host_id was supplied, binding will occur afterwards as
 described in 4 below.

 PortContext.original['binding:host_id']: previous value or None
 PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: current value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None

 3c) In a port update operation on a previously bound port that does not
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting unchanged binding states as shown.

 PortContext.original['binding:host_id']: previous value
 PortContext.original['binding:vif_type']: previous value
 PortContext.original_bound_segment: previous value
 PortContext.original_bound_driver: previous value
 PortContext.current['binding:host_id']: previous value
 PortContext.current['binding:vif_type']: previous value
 PortContext.bound_segment: previous value
 PortContext.bound_driver: previous value

 3d) In the port update operation on a previously bound port that does
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting the previously bound and currently unbound binding
 states as shown. If a value for binding:host_id was supplied, binding
 will occur afterwards as 

[openstack-dev] [nova] Reconfiguring devices when boot from instance snapshot

2014-02-05 Thread Feodor Tersin
There is a task: device reconfiguration when booting an instance from an
instance snapshot. For example, one may want to change the shutdown
behaviour with
nova boot ... --image instance_snapshot --block_device
device=/dev/vda,shutdown=false
when the original shutdown attribute for /dev/vda in the instance
snapshot is 'remove'.

This feature is not currently implemented. The BlockDeviceConfig reference (
https://wiki.openstack.org/wiki/BlockDeviceConfig) mentions the
reconfiguration feature only briefly (under the 'no_device' attribute).
Neither this reference nor the current implementation of BDMv2 makes clear
whether reconfiguration is supposed to be supported in the future. It is
also not clear whether this feature can be implemented in the current API.

Is there any additional information about plans for using and improving
BDMv2, particularly for reconfiguration? Where can I see it?

Moreover, the future of the 'image_ref' parameter is not clear. Although
it needs to be set to boot an instance from an image, the image must also
be set in BDMv2, so this parameter seems partially obsolete. But how is it
planned to boot an instance from an instance snapshot without using this
parameter?


Re: [openstack-dev] Agenda for todays ML2 Weekly meeting

2014-02-05 Thread Robert Kukura
On 02/05/2014 06:06 AM, trinath.soman...@freescale.com wrote:
 Hi-
 
  
 
 Kindly share the agenda for today's weekly meeting on Neutron/ML2.

I just updated
https://wiki.openstack.org/wiki/Meetings/ML2#Meeting_February_5.2C_2014.
Mestery has a conflict for today's meeting.

-Bob

 
  
 
  
 
 Best Regards,
 
 --
 
 Trinath Somanchi - B39208
 
 trinath.soman...@freescale.com | extn: 4048
 
  
 
 
 


[openstack-dev] [Trove] Backup/Restore encryption/decryption issue

2014-02-05 Thread Denis Makogon
Good day, OpenStack DBaaS community.


I'd like to start a conversation about a guestagent security issue related
to the backup/restore process. The Trove guestagent service uses AES with a
256-bit key (in CBC mode) [1] to encrypt backups, which are stored in a
predefined Swift container.

As you can see, the password is defined in the config file [2]. And here
comes the problem: this password is used for all tenants/projects that use
Trove - a security issue. I would like to suggest a key derivation function
[3] based on static attributes specific to each tenant/project (tenant_id).
The KDF would be based upon the Python implementation of PBKDF2 [4]. An
implementation can be seen here [5].

I would also like to give users the ability to pass a password to the
KDF that derives the key for backup/restore encryption/decryption; if the
incoming password (from the user) is empty, the guest will fall back to the
static tenant attributes (tenant_id).
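A minimal sketch of the proposed derivation using the Python standard library's PBKDF2 (hashlib.pbkdf2_hmac). The iteration count, salt scheme, and function name are illustrative assumptions, not part of the blueprint:

```python
import hashlib


def derive_backup_key(tenant_id, user_password=None,
                      iterations=10000, key_len=32):
    """Derive a per-tenant 256-bit key for backup encryption.

    Falls back to static tenant attributes (tenant_id) when the user
    supplies no password, so every tenant still gets a distinct key
    instead of one cloud-wide secret.
    """
    secret = user_password or tenant_id
    # A real deployment would store a random salt alongside each backup;
    # deriving it from tenant_id keeps this sketch deterministic.
    salt = hashlib.sha256(tenant_id.encode()).digest()
    return hashlib.pbkdf2_hmac('sha256', secret.encode(), salt,
                               iterations, dklen=key_len)
```

Two tenants sharing no password would thus get different AES keys, and a user-supplied password overrides the tenant-derived default.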

To allow backward compatibility, python-troveclient should be able to pass
the old password [1] to the guestagent as one of the parameters on the
restore call.

A blueprint has already been registered in the Trove launchpad space [6].

I also foresee porting this feature to oslo-crypt, as part of the security
framework (oslo.crypto) extensions.

Thoughts ?

[1]
https://github.com/openstack/trove/blob/master/trove/guestagent/strategies/backup/base.py#L113-L116

[2]
https://github.com/openstack/trove/blob/master/etc/trove/trove-guestagent.conf.sample#L69

[3] http://en.wikipedia.org/wiki/Key_derivation_function

[4] http://en.wikipedia.org/wiki/PBKDF2

[5] https://gist.github.com/denismakogon/8823279

[6] https://blueprints.launchpad.net/trove/+spec/backup-encryption

Best regards,

Denis Makogon

Mirantis, Inc.

Kharkov, Ukraine

www.mirantis.com

www.mirantis.ru

dmako...@mirantis.com


Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-05 Thread Robert Kukura
On 02/05/2014 09:10 AM, Henry Gessau wrote:
 Bob, this is fantastic, I really appreciate all the detail. A couple of
 questions ...
 
 On Wed, Feb 05, at 2:16 am, Robert Kukura rkuk...@redhat.com wrote:
 
 A couple of interrelated issues with the ML2 plugin's port binding have
 been discussed over the past several months in the weekly ML2 meetings.
 These affect drivers being implemented for icehouse, and therefore need
 to be addressed in icehouse:

 * MechanismDrivers need detailed information about all binding changes,
 including unbinding on port deletion
 (https://bugs.launchpad.net/neutron/+bug/1276395)
 * MechanismDrivers' bind_port() methods are currently called inside
 transactions, but in some cases need to make remote calls to controllers
 or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
 * Semantics of concurrent port binding need to be defined if binding is
 moved outside the triggering transaction.

 I've taken the action of writing up a unified proposal for resolving
 these issues, which follows...

 1) An original_bound_segment property will be added to PortContext. When
 the MechanismDriver update_port_precommit() and update_port_postcommit()
 methods are called and a binding previously existed (whether it's being
 torn down or not), this property will provide access to the network
 segment used by the old binding. In these same cases, the portbinding
 extension attributes (such as binding:vif_type) for the old binding will
 be available via the PortContext.original property. It may be helpful to
 also add bound_driver and original_bound_driver properties to
 PortContext that behave similarly to bound_segment and
 original_bound_segment.

 2) The MechanismDriver.bind_port() method will no longer be called from
 within a transaction. This will allow drivers to make remote calls on
 controllers or devices from within this method without holding a DB
 transaction open during those calls. Drivers can manage their own
 transactions within bind_port() if needed, but need to be aware that
 these are independent from the transaction that triggered binding, and
 concurrent changes to the port could be occurring.

 3) Binding will only occur after the transaction that triggers it has
 been completely processed and committed. That initial transaction will
 unbind the port if necessary. Four cases for the initial transaction are
 possible:

 3a) In a port create operation, whether the binding:host_id is supplied
 or not, all drivers' port_create_precommit() methods will be called, the
 initial transaction will be committed, and all drivers'
 port_create_postcommit() methods will be called. The drivers will see
 this as creation of a new unbound port, with PortContext properties as
 shown. If a value for binding:host_id was supplied, binding will occur
 afterwards as described in 4 below.

 PortContext.original: None
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: supplied value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None

 3b) Similarly, in a port update operation on a previously unbound port,
 all drivers' port_update_precommit() and port_update_postcommit()
 methods will be called, with PortContext properties as shown. If a value
 for binding:host_id was supplied, binding will occur afterwards as
 described in 4 below.

 PortContext.original['binding:host_id']: previous value or None
 PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
 PortContext.original_bound_segment: None
 PortContext.original_bound_driver: None
 PortContext.current['binding:host_id']: current value or None
 PortContext.current['binding:vif_type']: 'unbound'
 PortContext.bound_segment: None
 PortContext.bound_driver: None

 3c) In a port update operation on a previously bound port that does not
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting unchanged binding states as shown.

 PortContext.original['binding:host_id']: previous value
 PortContext.original['binding:vif_type']: previous value
 PortContext.original_bound_segment: previous value
 PortContext.original_bound_driver: previous value
 PortContext.current['binding:host_id']: previous value
 PortContext.current['binding:vif_type']: previous value
 PortContext.bound_segment: previous value
 PortContext.bound_driver: previous value

 3d) In the port update operation on a previously bound port that does
 trigger unbinding or rebinding, all drivers' update_port_precommit() and
 update_port_postcommit() methods will be called with PortContext
 properties reflecting the previously bound and currently unbound binding
 states as shown. If a value for binding:host_id was supplied, binding
 will occur afterwards as described in 4 below.

 PortContext.original['binding:host_id']: 

[openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Thierry Carrez
(This email is mostly directed to PTLs for programs that include one
integrated project)

The DefCore subcommittee from the OpenStack board of directors asked the
Technical Committee yesterday about which code sections in each
integrated project should be designated sections in the sense of [1]
(code you actually need to run or include to be allowed to use the
trademark). That determines where you can run alternate code (think:
substitute your own private hypervisor driver) and still be able to call
the result OpenStack.

[1] https://wiki.openstack.org/wiki/Governance/CoreDefinition

PTLs and their teams are obviously the best placed to define this, so it
seems like the process should be: PTLs propose designated sections to
the TC, which blesses them, combines them and forwards the result to the
DefCore committee. We could certainly leverage part of the governance
repo to make sure the lists are kept up to date.

Comments, thoughts ?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-05 Thread Steve Gordon
- Original Message -
 From: Andreas Jaeger a...@suse.com
 To: Mark McLoughlin mar...@redhat.com, OpenStack Development Mailing 
 List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: Jonathan Bryce jonat...@openstack.org
 Sent: Wednesday, February 5, 2014 9:17:39 AM
 Subject: Re: [openstack-dev] [Openstack-docs] Conventions on naming
 
 On 02/05/2014 01:09 PM, Mark McLoughlin wrote:
  On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
  Steve Gordon wrote:
  From: Anne Gentle anne.gen...@rackspace.com
  Based on today's Technical Committee meeting and conversations with the
  OpenStack board members, I need to change our Conventions for service
  names
  at
  https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
  .
 
  Previously we have indicated that Ceilometer could be named OpenStack
  Telemetry and Heat could be named OpenStack Orchestration. That's not
  the
  case, and we need to change those names.
 
  To quote the TC meeting, ceilometer and heat are other modules (second
  sentence from 4.1 in
  http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
  distributed with the Core OpenStack Project.
 
  Here's what I intend to change the wiki page to:
   Here's the list of project and module names and their official names
   and
  capitalization:
 
  Ceilometer module
  Cinder: OpenStack Block Storage
  Glance: OpenStack Image Service
  Heat module
  Horizon: OpenStack dashboard
  Keystone: OpenStack Identity Service
  Neutron: OpenStack Networking
  Nova: OpenStack Compute
  Swift: OpenStack Object Storage
 
  Small correction. The TC had not indicated that Ceilometer could be
  named OpenStack Telemetry and Heat could be named OpenStack
  Orchestration. We formally asked[1] the board to allow (or disallow)
  that naming (or more precisely, that use of the trademark).
 
  [1]
  https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names
 
  We haven't got a formal and clear answer from the board on that request
  yet. I suspect they are waiting for progress on DefCore before deciding.
 
  If you need an answer *now* (and I suspect you do), it might make sense
  to ask foundation staff/lawyers about using those OpenStack names with
  the current state of the bylaws and trademark usage rules, rather than
  the hypothetical future state under discussion.
  
  Basically, yes - I think having the Foundation confirm that it's
  appropriate to use OpenStack Telemetry in the docs is the right thing.
  
  There's an awful lot of confusion about the subject and, ultimately,
  it's the Foundation staff who are responsible for enforcing (and giving
  advise to people on) the trademark usage rules. I've cc-ed Jonathan so
  he knows about this issue.
  
  But FWIW, the TC's request is asking for Ceilometer and Heat to be
  allowed use their Telemetry and Orchestration names in *all* of the
  circumstances where e.g. Nova is allowed use its Compute name.
  
  Reading again this clause in the bylaws:
  
The other modules which are part of the OpenStack Project, but
 not the Core OpenStack Project may not be identified using the
 OpenStack trademark except when distributed with the Core OpenStack
 Project.
  
  it could well be said that this case of naming conventions in the docs
  for the entire OpenStack Project falls under the distributed with case
  and it is perfectly fine to refer to OpenStack Telemetry in the docs.
  I'd really like to see the Foundation staff give their opinion on this,
  though.
 
 What Steve is asking IMO is whether we have to change OpenStack
 Telemetry to Ceilometer module or whether we can just say Telemetry
 without the OpenStack in front of it.
 
 Andreas

Constraining myself to the topic of what we should be using in the 
documentation, yes this is what I'm asking. This makes more sense to me than 
switching to calling them the Heat module and Ceilometer module because:

1) It resolves the issue of using the OpenStack mark where it (apparently) 
shouldn't be used.
2) It means we're still using the formal name for the program as defined by 
the TC [1] (it is my understanding this remains the purview of the TC, it's 
control of the mark that the board are exercising here).
3) It is a more minor change/jump and therefore provides more continuity and 
less confusion to readers (and similarly if one of them ever becomes endorsed 
as core and we need to switch again).

Thanks,

Steve

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml



[openstack-dev] why do we put a license in every file?

2014-02-05 Thread Greg Hill
I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
that we have to put the same license into every single file of source code in 
our projects.  In my past experience, a single LICENSE file at the root level 
of the project has been sufficient to declare the license chosen for a project. 
GitHub even has the capacity to choose a license and generate that file for 
you; it's neat.

Greg





Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2014-02-05 Thread Jaromir Coufal

On 2014/05/02 15:27, Tzu-Mainn Chen wrote:

Hi,

In parallel to Jarda's updated wireframes, and based on various discussions 
over the past
weeks, here are the updated Tuskar requirements for Icehouse:

https://wiki.openstack.org/wiki/TripleO/TuskarIcehouseRequirements

Any feedback is appreciated.  Thanks!

Tzu-Mainn Chen


+1 looks good to me!

-- Jarda



[openstack-dev] [Horizon] RFC - Suggestion for switching from Less to Sass (Bootstrap 3 Sass support)

2014-02-05 Thread Jaromir Coufal

Dear Horizoners,

in the last few days there were a couple of interesting discussions about 
updating to Bootstrap 3. In this e-mail, I would love to give a small 
summary and propose a solution for us.


As Bootstrap was heavily dependent on Less, when we got rid of node.js 
we started to use lesscpy. Unfortunately, because of this change we were 
unable to update to Bootstrap 3. Fixing lesscpy looks problematic - 
there are issues with supporting all use cases, and even if we fix this 
in some time, we might hit these issues again in the future.


There is great news for Bootstrap: it started to support Sass [0]. 
(Thanks Toshi and MaxV for highlighting this news!)


Thanks to this step forward, we might get out of our lesscpy issues by 
switching to Sass. I am very happy with this possible change, since Sass 
is more powerful than Less and we will be able to update our libraries 
without any constraints.


There are a few downsides - we will need to change our Horizon Less files 
to Sass, but it shouldn't be a very big deal, as far as we discussed it 
with some Horizon folks. We can actually do it as part of the Bootstrap 
update [1] (or the CSS files restructuring [2]).


Other concerns will be with compilers. So far I've found these options:
* rails dependency (how big a problem would it be?)
* https://pypi.python.org/pypi/scss/0.7.1
* https://pypi.python.org/pypi/SassPython/0.2.1
* ... (other suggestions?)

A nice benefit of Sass is that we can take advantage of the Compass 
framework [3], which will save us a lot of energy when writing (not just 
cross-browser) stylesheets, thanks to its mixins.


When we discussed this on IRC with Horizoners, it looked like a good way 
to move us forward. So I am here, bringing this suggestion up to the 
whole community.


My proposal for Horizon is to *switch from Less to Sass*. Then we can 
unblock our already existing BPs, get Bootstrap updates, and include the 
Compass framework. I believe this is all doable in the Icehouse timeframe 
if there are no problems with compilers.


Thoughts?

-- Jarda

[0] http://getbootstrap.com/getting-started/
[1] https://blueprints.launchpad.net/horizon/+spec/bootstrap-update
[2] https://blueprints.launchpad.net/horizon/+spec/css-breakdown
[3] http://compass-style.org/



Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-02-04 16:14:09 -0800:
 On 03/02/14 17:09, Clint Byrum wrote:
  Excerpts from Thomas Herve's message of 2014-02-03 12:46:05 -0800:
  So, I wrote the original rolling updates spec about a year ago, and the
  time has come to get serious about implementation. I went through it and
  basically rewrote the entire thing to reflect the knowledge I have
  gained from a year of working with Heat.
 
  Any and all comments are welcome. I intend to start implementation very
  soon, as this is an important component of the HA story for TripleO:
 
  https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates
 
  Hi Clint, thanks for pushing this.
 
  First, I don't think RollingUpdatePattern and CanaryUpdatePattern should 
  be 2 different entities. The second just looks like a parametrization of 
  the first (growth_factor=1?).
 
  Perhaps they can just be one. Until I find parameters which would need
  to mean something different, I'll just use UpdatePattern.
 
 
  I then feel that using (abusing?) depends_on for update pattern is a bit 
  weird. Maybe I'm influenced by the CFN design, but the separate 
  UpdatePolicy attribute feels better (although I would probably use a 
  property). I guess my main question is around the meaning of using the 
  update pattern on a server instance. I think I see what you want to do for 
  the group, where child_updating would return a number, but I have no idea 
  what it means for a single resource. Could you detail the operation a bit 
  more in the document?
 
 
  I would be o-k with adding another keyword. The idea in abusing depends_on
  is that it changes the core language less. Properties is definitely out
  for the reasons Christopher brought up, properties is really meant to
  be for the resource's end target only.
 
 Agree, -1 for properties - those belong to the resource, and this data 
 belongs to Heat.
 
  UpdatePolicy in cfn is a single string, and causes very generic rolling
 
 Huh?
 
 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
 
 Not only is it not just a single string (in fact, it looks a lot like 
 the properties you have defined), it's even got another layer of 
 indirection so you can define different types of update policy (rolling 
 vs. canary, anybody?). It's an extremely flexible syntax.
 

Oops, I relied a little too much on my memory and not enough on docs for
that one. O-k, I will re-evaluate given actual knowledge of how it
actually works. :-P

 BTW, given that we already implemented this in autoscaling, it might be 
 helpful to talk more specifically about what we need to do in addition 
 in order to support the use cases you have in mind.
 

As Robert mentioned in his mail, autoscaling groups won't allow us to
inject individual credentials. With the ResourceGroup, we can make a
nested stack with a random string generator so that is solved. Now the
other piece we need is to be able to directly choose machines to take
out of commission, which I think we may have a simple solution to but I
don't want to derail on that.

The one used in AutoScalingGroups is also limited to just one group,
thus it can be done all inside the resource.

  update behavior. I want this resource to be able to control multiple
  groups as if they are one in some cases (Such as a case where a user
  has migrated part of an app to a new type of server, but not all.. so
  they will want to treat the entire aggregate as one rolling update).
 
  I'm o-k with overloading it to allow resource references, but I'd like
  to hear more people take issue with depends_on before I select that
  course.
 
 Resource references in general, and depends_on in particular, feel like 
 very much the wrong abstraction to me. This is a policy, not a resource.
 
  To answer your question, using it with a server instance allows
  rolling updates across non-grouped resources. In the example the
  rolling_update_dbs does this.
 
 That's not a great example, because one DB server depends on the other, 
 forcing them into updating serially anyway.
 

You're right, a better example is a set of (n) resource groups which
serve the same service and thus we want to make sure we maintain the
minimum service levels as a whole.

If it were an order of magnitude harder to do it this way, I'd say
sure let's just expand on the single-resource rolling update. But
I think it won't be that much harder to achieve this and then the use
case is solved.

 I have to say that even in general, this whole idea about applying 
 update policies to non-grouped resources doesn't make a whole lot of 
 sense to me. For non-grouped resources you control the resource 
 definitions individually - if you don't want them to update at a 
 particular time, you have the option of just not updating them.
 

If I have to calculate all the deltas and feed Heat 10 templates, each
with one small delta, I'm writing the same code as I'm proposing for

Re: [openstack-dev] [OpenStack-Infra] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-05 Thread Jay Pipes
On Wed, 2014-02-05 at 15:50 +0400, Sergey Lukjanov wrote:
 Hi Jay,
 
 it's really very easy to setup Zuul for it (we're using one for
 Savanna CI).

Yes, I set up Zuul for AT&T's gate system, thx.

 There are some useful links:
 
 * check pipeline as an example of zuul layout configuration
 - 
 https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/zuul/layout.yaml#L5
 * zuul docs - http://ci.openstack.org/zuul/
 * zuul config sample
 - https://github.com/openstack-infra/zuul/blob/master/etc/zuul.conf-sample
 
 So, I think that it could be easy enough to setup Zuul for 3rd party
 testing, but it'll be better to have some doc about it.

Yeah, I proposed in my email that I would do the documentation for using
Zuul as the trigger from Gerrit (see below). However, I didn't think I'd
get the docs done in a timely fashion and proposed relaxing the
requirement for recheck triggers until that documentation was complete
(since most of the vendors I have spoken with have used the Jenkins
Gerrit plugin and not Zuul as their triggering agent).

Best,
-jay

 
 On Wed, Feb 5, 2014 at 3:55 AM, Jay Pipes jaypi...@gmail.com wrote:
 Sorry for cross-posting to both mailing lists, but there's
 lots of folks
 working on setting up third-party testing platforms that are
 not members
 of the openstack-infra ML...
 
 tl;dr
 -
 
 The third party testing documentation [1] has requirements [2]
 that
 include the ability to trigger a recheck based on a gerrit
 comment.
 
 Unfortunately, the Gerrit Jenkins Trigger plugin [3] does not
 have the
 ability to trigger job runs based on a regex-filtered comment
 (only on
 the existence of any new comment to the code review).
 
 Therefore, we either should:
 
 a) Relax the requirement that the third party system trigger
 test
 re-runs when a comment including the word recheck appears in
 the
 Gerrit event stream
 
 b) Modify the Jenkins Gerrit plugin to support regex filtering
 on the
 comment text (in the same way that it currently supports regex
 filtering
 on the project name)
 
 or
 
 c) Add documentation to the third party testing pages that
 explains how
 to use Zuul as a replacement for the Jenkins Gerrit plugin.
 
 I propose we do a) for the short term, and I'll work on c)
 long term.
 However, I'm throwing this out there just in case there are
 some Java
 and Jenkins whizzes out there that could get b) done in a
 jiffy.
 
 details
 ---
 
 OK, so I've been putting together documentation on how to set
 up an
 external Jenkins platform that is linked [4] with the
 upstream
 OpenStack CI system.
 
 Recently, I wrote an article detailing how the upstream CI
 system
 worked, including a lot of the gory details from the
 openstack-infra/config project's files. [5]
 
 I've been working on a follow-up article that goes through how
 to set up
 a Jenkins system, and in writing that article, I created a
 source
 repository [6] that contains scripts, instructions and Puppet
 modules
 that set up a Jenkins system, the Jenkins Job Builder tool,
 and
 installs/configures the Jenkins Gerrit plugin [7].
 
 I planned to use the Jenkins Gerrit plugin as the mechanism
 that
 triggers Jenkins jobs on the external system based on gerrit
 events
 published by the OpenStack review.openstack.org Gerrit
 service. In
 addition to being mentioned in the third party documentation,
 Jenkins
 Job Builder has the ability to construct Jenkins jobs that are
 triggered
 by the Jenkins Gerrit plugin [8].
 
  Unfortunately, I've run into a bit of a snag.
 
 The third party testing documentation has requirements that
 include the
 ability to trigger a recheck based on a gerrit comment:
 
 quote
 Support recheck to request re-running a test.
   * Support the following syntaxes: "recheck no bug" and "recheck bug ###".
  * Recheck means recheck everything. A single recheck comment
 should
 re-trigger all testing systems.
 /quote
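As a sketch only (the helper names here are hypothetical, not from any plugin), the comment syntaxes quoted above amount to a simple pattern that a triggering agent needs to match:

```python
import re

# Hypothetical matcher for the documented recheck syntaxes: a comment
# line that is exactly "recheck", "recheck no bug", or "recheck bug <number>".
RECHECK_RE = re.compile(r"^recheck(?: no bug| bug \d+)?\s*$", re.MULTILINE)

def is_recheck(comment):
    """Return True if a Gerrit comment requests a full test re-run."""
    return bool(RECHECK_RE.search(comment))
```

This is the regex-on-comment-text filtering that option b) would add to the Jenkins Gerrit plugin, and that Zuul's pipeline triggers already support.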
 
 The documentation has a section on using the Gerrit Jenkins
 Trigger
 plugin [3] to accept notifications from the upstream OpenStack
 Gerrit
 instance.
 
 But unfortunately, the Jenkins Gerrit plugin does not support
 the
 

Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-05 Thread Chmouel Boudjnah
On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:

 Including the config file in either the developer documentation or the
 packaging build makes more sense. I'm still worried that adding it to the
 sdist generation means you would have to have a lot of tools installed just
 to make the sdist. However, we could



I think that this may slightly complicate devstack, since we rely
heavily on config samples to set up the services.

Chmouel.


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Jay Pipes
On Wed, 2014-02-05 at 16:29 +, Greg Hill wrote:
 I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
 that we have to put the same license into every single file of source code in 
 our projects.

Meh, probably just habit and copy/paste behavior.

   In my past experience, a single LICENSE file at the root-level of the 
 project has been sufficient to declare the license chosen for a project.

Agreed, and the git history is enough to figure out who worked on a
particular file. But, there's been many discussions about this topic
over the years, and it's just not been a priority, frankly.

 Github even has the capacity to choose a license and generate that file for 
 you, it's neat.

True, but we don't use GitHub :) We only use it as a mirror for
Gerrit.

Best,
-jay




Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Daniel P. Berrange
On Wed, Feb 05, 2014 at 04:29:20PM +, Greg Hill wrote:
 I'm new, so I'm sure there's some history I'm missing, but I find it
 bizarre that we have to put the same license into every single file
 of source code in our projects.  In my past experience, a single
 LICENSE file at the root-level of the project has been sufficient
 to declare the license chosen for a project.  Github even has the
 capacity to choose a license and generate that file for you, it's
 neat.

It is not uncommon for source from one project to be copied into another
project in either direction. While the licenses of the two projects have
to be compatible, they don't have to be the same. It is highly desirable
that each file has its license explicitly declared, to remove any level of
ambiguity as to what license its code falls under. This might not seem
like a problem now, but code lives for a very long time, and what is
clear today might not be so clear 10, 15, 20 years down the road.
Distros like Debian and Fedora, which audit project license compliance, have
learnt the hard way that you really want these per-file licenses for
clarity of intent.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Donald Stufft
It's nice when someone removes a file from the project. They get license 
information transmitted automatically without needing to do extra work. 

 On Feb 5, 2014, at 10:46 AM, Jay Pipes jaypi...@gmail.com wrote:
 
 On Wed, 2014-02-05 at 16:29 +, Greg Hill wrote:
 I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
 that we have to put the same license into every single file of source code 
 in our projects.
 
 Meh, probably just habit and copy/paste behavior.
 
  In my past experience, a single LICENSE file at the root-level of the 
 project has been sufficient to declare the license chosen for a project.
 
 Agreed, and the git history is enough to figure out who worked on a
 particular file. But, there's been many discussions about this topic
 over the years, and it's just not been a priority, frankly.
 
 Github even has the capacity to choose a license and generate that file for 
 you, it's neat.
 
 True, but we don't use GitHub :) We only only use it as a mirror for
 Gerrit.
 
 Best,
 -jay
 
 


Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-05 Thread Doug Hellmann
On Wed, Feb 5, 2014 at 11:40 AM, Chmouel Boudjnah chmo...@enovance.com wrote:


 On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann doug.hellm...@dreamhost.com
  wrote:

 Including the config file in either the developer documentation or the
 packaging build makes more sense. I'm still worried that adding it to the
 sdist generation means you would have to have a lot of tools installed just
 to make the sdist. However, we could



 I think that may slighty complicate more devstack with this, since we rely
 heavily on config samples to setup the services.


Good point, we would need to add a step to generate a sample config for
each app instead of just copying the one in the source repository.

Doug





 Chmouel.




Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Doug Hellmann
On Wed, Feb 5, 2014 at 11:22 AM, Thierry Carrez thie...@openstack.org wrote:

 (This email is mostly directed to PTLs for programs that include one
 integrated project)

 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
 integrated project should be designated sections in the sense of [1]
 (code you actually need to run or include to be allowed to use the
 trademark). That determines where you can run alternate code (think:
 substitute your own private hypervisor driver) and still be able to call
 the result openstack.

 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition

 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.

 Comments, thoughts ?


How specific do those designations need to be? The question of the impact
of this designation system on code organization came up, but wasn't really
answered clearly. Do we have any cases where part of the code in one module
might be designated core, but another part wouldn't?

For example, I could envision a module that contains code for managing data
with CRUD operations where the delete is handled through an operational job
rather than a public API (keystone tokens come to mind as an example of
that sort of data, as does the data collected by ceilometer). While it's
likely that the operational job for pruning the database would be used in
any real deployment, is that tool part of core? Does that mean a deployer
could not use an alternate mechanism to manage the database's growth? If the
pruning tool is not core, does that mean the delete code is also not? Does
it have to then live in a different module from the implementations of the
other operations that are core?

It seems like the intent is to draw the lines between common project code
and drivers or other sorts of plugins or extensions, without actually
using those words because all of them are tied to implementation details.
It seems better technically, and closer to the need of someone wanting to
customize a deployment, to designate a set of customization points for
each app (be they drivers, plugins, extensions, whatever) and say that the
rest of the app is core.

Doug




 --
 Thierry Carrez (ttx)



Re: [openstack-dev] oslo.config error on running Devstack

2014-02-05 Thread Ben Nemec
 

On 2014-02-05 09:05, Doug Hellmann wrote: 

 On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec openst...@nemebean.com wrote:
 
 On 2014-01-08 12:14, Doug Hellmann wrote: 
 
 On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec openst...@nemebean.com wrote:
 
 On 2014-01-08 11:16, Sean Dague wrote:
 On 01/08/2014 12:06 PM, Doug Hellmann wrote:
 snip
 Yeah, that's what made me start thinking oslo.sphinx should be called
 something else.
 
 Sean, how strongly do you feel about not installing oslo.sphinx in
 devstack? I see your point, I'm just looking for alternatives to the
 hassle of renaming oslo.sphinx. 
 Doing the git thing is definitely not the right thing. But I guess I got
 lost somewhere along the way about what the actual problem is. Can
 someone write that up concisely? With all the things that have been
 tried/failed, why certain things fail, etc.
 The problem seems to be when we pip install -e oslo.config on the
system, then pip install oslo.sphinx in a venv. oslo.config is
unavailable in the venv, apparently because the namespace package for
o.s causes the egg-link for o.c to be ignored. Pretty much every other
combination I've tried (regular pip install of both, or pip install -e
of both, regardless of where they are) works fine, but there seem to be
other issues with all of the other options we've explored so far.

 We can't remove the pip install -e of oslo.config because it has to be
used for gating, and we can't pip install -e oslo.sphinx because it's
not a runtime dep so it doesn't belong in the gate. Changing the
toplevel package for oslo.sphinx was also mentioned, but has obvious
drawbacks too.

 I think that about covers what I know so far. 

Here's a link dstufft provided to the pip bug tracking this problem:
https://github.com/pypa/pip/issues/3
Doug 

This just bit me again trying to run unit tests against a fresh Nova
tree. I don't think it's just me either - Matt Riedemann said he has
been disabling site-packages in tox.ini for local tox runs. We really
need to do _something_ about this, even if it's just disabling
site-packages by default in tox.ini for the affected projects. A
different option would be nice, but based on our previous discussion I'm
not sure we're going to find one. 
Thoughts? 

Is the problem isolated to oslo.sphinx? That is, do we end up with any
configurations where we have 2 oslo libraries installed in different
modes (development and regular) where one of those 2 libraries is not
oslo.sphinx? Because if the issue is really just oslo.sphinx, we can
rename that to move it out of the namespace package. 

oslo.sphinx is the only one that has triggered this for me so far. I
think it's less likely to happen with the others because they tend to be
runtime dependencies so they get installed in devstack, whereas
oslo.sphinx doesn't because it's a build dep (AIUI anyway). 

 Doug 
 
 -Ben

 



Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-05 Thread Daniel P. Berrange
On Wed, Feb 05, 2014 at 05:40:13PM +0100, Chmouel Boudjnah wrote:
 On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann
 doug.hellm...@dreamhost.com wrote:
 
  Including the config file in either the developer documentation or the
  packaging build makes more sense. I'm still worried that adding it to the
  sdist generation means you would have to have a lot of tools installed just
  to make the sdist. However, we could
 
 
 
 I think that may slighty complicate more devstack with this, since we rely
 heavily on config samples to setup the services.

devstack has to checkout nova and run its build + install steps, so it
would have a full sample config available to use. So I don't think that
generating config at build time would have any real negative impact on
devstack overall.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Russell Bryant
On 02/05/2014 11:22 AM, Thierry Carrez wrote:
 (This email is mostly directed to PTLs for programs that include one
 integrated project)
 
 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
 integrated project should be designated sections in the sense of [1]
  (code you actually need to run or include to be allowed to use the
 trademark). That determines where you can run alternate code (think:
 substitute your own private hypervisor driver) and still be able to call
 the result openstack.
 
 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
 
 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.
 
 Comments, thoughts ?
 

The process you suggest is what I would prefer.  (PTLs writing proposals
for TC to approve)

Using the governance repo makes sense as a means for the PTLs to post
their proposals for review and approval of the TC.

Who gets final say if there's strong disagreement between a PTL and the
TC?  Hopefully this won't matter, but it may be useful to go ahead and
clear this up front.

-- 
Russell Bryant



[openstack-dev] [Openstack-dev] [Oslo] [Fuel] [Fuel-dev] Openstack services should support SIGHUP signal

2014-02-05 Thread Bogdan Dobrelya
Hi, stackers.
I believe OpenStack services across all projects should support SIGHUP for
effective log/config file handling without unnecessary restarts.
(See https://bugs.launchpad.net/oslo/+bug/1276694)

'Smooth reloads' (kill -HUP) are much better than 'disruptive restarts',
aren't they?
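The basic mechanism is small; a minimal sketch (the config structure here is invented for illustration, not any service's real code) looks like this:

```python
import os
import signal

CONF = {"log_file": "svc.log"}  # stands in for a service's parsed config

def _reload(signum, frame):
    # On SIGHUP: re-read configuration files and reopen log files,
    # instead of restarting the whole service.
    CONF["reloaded"] = True

signal.signal(signal.SIGHUP, _reload)

# An operator would run `kill -HUP <pid>`; we deliver the signal to
# ourselves here to show the effect.
os.kill(os.getpid(), signal.SIGHUP)
```

The real work, of course, is making each service's reload handler safe to run at any point in its event loop.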

-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando



Re: [openstack-dev] savann-ci, Re: [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-05 Thread Trevor McKay
Hi Sergey,

  Is there a bug or a blueprint for this?  I did a quick search but
didn't see one.

Thanks,

Trevor

On Wed, 2014-02-05 at 16:06 +0400, Sergey Kolekonov wrote:
 I'm currently working on moving savanna-ci to MySQL
 
 
 On Wed, Feb 5, 2014 at 3:53 PM, Sergey Lukjanov
 slukja...@mirantis.com wrote:
 Agreed, let's move on to the MySQL for savanna-ci to run
 integration tests against production-like DB.
 
 
 On Wed, Feb 5, 2014 at 1:54 AM, Andrew Lazarev
 alaza...@mirantis.com wrote:
 Since sqlite is not in the list of databases that
 would be used in production, CI should use other DB
 for testing.
 
 
 Andrew.
 
 
 On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov
 aigna...@mirantis.com wrote:
 Indeed. We should create a bug around that and
 move our savanna-ci to mysql.
 
 Regards,
 Alexander Ignatov
 
 
 
 On 05 Feb 2014, at 01:01, Trevor McKay
 tmc...@redhat.com wrote:
 
  This brings up an interesting problem:
 
  In https://review.openstack.org/#/c/70420/
 I've added a migration that
  uses a drop column for an upgrade.
 
  But savann-ci is apparently using a sqlite
 database to run.  So it can't
  possibly pass.
 
  What do we do here?  Shift savanna-ci tests
 to non sqlite?
 
  Trevor
 
  On Sat, 2014-02-01 at 18:17 +0200, Roman
 Podoliaka wrote:
  Hi all,
 
  My two cents.
 
  2) Extend alembic so that op.drop_column()
 does the right thing
  We could, but should we?
 
  The only reason alembic doesn't support
 these operations for SQLite
  yet is that SQLite lacks proper support of
 ALTER statement. For
  sqlalchemy-migrate we've been providing a
 work-around in the form of
  recreating of a table and copying of all
 existing rows (which is a
  hack, really).
 
  But to be able to recreate a table, we
 first must have its definition.
  And we've been relying on SQLAlchemy schema
 reflection facilities for
  that. Unfortunately, this approach has a
 few drawbacks:
 
  1) SQLAlchemy versions prior to 0.8.4 don't
 support reflection of
  unique constraints, which means the
 recreated table won't have them;
 
  2) special care must be taken in 'edge'
 cases (e.g. when you want to
  drop a BOOLEAN column, you must also drop
 the corresponding CHECK (col
  in (0, 1)) constraint manually, or SQLite
 will raise an error when the
  table is recreated without the column being
 dropped)
 
  3) special care must be taken for 'custom'
 type columns (it's got
  better with SQLAlchemy 0.8.x, but e.g. in
 0.7.x we had to override
  definitions of reflected BIGINT columns
 manually for each
  column.drop() call)
 
  4) schema reflection can't be performed
 when alembic migrations are
  run in 'offline' mode (without connecting
 to a DB)
  ...
  (probably something else I've forgotten)
 
  So it's totally doable, but, IMO, there is
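The recreate-and-copy workaround described in the quoted message can be sketched with the stdlib sqlite3 module (the table and column names are invented, not Savanna's actual schema; this is essentially what sqlalchemy-migrate does under the hood):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, name TEXT, obsolete TEXT)")
conn.execute("INSERT INTO node (name, obsolete) VALUES ('worker-1', 'x')")

# SQLite's ALTER TABLE cannot drop a column, so the migration recreates
# the table without the column and copies the surviving data across.
conn.executescript("""
    CREATE TABLE node_new (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO node_new (id, name) SELECT id, name FROM node;
    DROP TABLE node;
    ALTER TABLE node_new RENAME TO node;
""")

columns = [row[1] for row in conn.execute("PRAGMA table_info(node)")]
rows = list(conn.execute("SELECT id, name FROM node"))
```

As the message notes, doing this generically requires knowing the full table definition (constraints included), which is exactly where the reflection-based approach runs into trouble.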

Re: [openstack-dev] oslo.config error on running Devstack

2014-02-05 Thread Donald Stufft
Avoiding namespace packages is a good idea in general. At least until Python 
3.whatever is baseline. 

 On Feb 5, 2014, at 10:58 AM, Doug Hellmann doug.hellm...@dreamhost.com 
 wrote:
 
 
 
 
 On Wed, Feb 5, 2014 at 11:44 AM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-02-05 09:05, Doug Hellmann wrote:
 
 
 On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-01-08 12:14, Doug Hellmann wrote:
 
 
 
 On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-01-08 11:16, Sean Dague wrote:
 On 01/08/2014 12:06 PM, Doug Hellmann wrote:
 snip
 Yeah, that's what made me start thinking oslo.sphinx should be called
 something else.
 
 Sean, how strongly do you feel about not installing oslo.sphinx in
 devstack? I see your point, I'm just looking for alternatives to the
 hassle of renaming oslo.sphinx.
 
 Doing the git thing is definitely not the right thing. But I guess I got
 lost somewhere along the way about what the actual problem is. Can
 someone write that up concisely? With all the things that have been
 tried/failed, why certain things fail, etc.
 The problem seems to be when we pip install -e oslo.config on the system, 
 then pip install oslo.sphinx in a venv.  oslo.config is unavailable in 
 the venv, apparently because the namespace package for o.s causes the 
 egg-link for o.c to be ignored.  Pretty much every other combination I've 
 tried (regular pip install of both, or pip install -e of both, regardless 
 of where they are) works fine, but there seem to be other issues with all 
 of the other options we've explored so far.
 
 We can't remove the pip install -e of oslo.config because it has to be 
 used for gating, and we can't pip install -e oslo.sphinx because it's not 
 a runtime dep so it doesn't belong in the gate.  Changing the toplevel 
 package for oslo.sphinx was also mentioned, but has obvious drawbacks too.
 
 I think that about covers what I know so far.
 Here's a link dstufft provided to the pip bug tracking this problem: 
 https://github.com/pypa/pip/issues/3
 Doug
 This just bit me again trying to run unit tests against a fresh Nova tree. 
I don't think it's just me either - Matt Riedemann said he has been 
 disabling site-packages in tox.ini for local tox runs.  We really need to 
 do _something_ about this, even if it's just disabling site-packages by 
 default in tox.ini for the affected projects.  A different option would be 
 nice, but based on our previous discussion I'm not sure we're going to 
 find one.
 Thoughts?
  
 Is the problem isolated to oslo.sphinx? That is, do we end up with any 
 configurations where we have 2 oslo libraries installed in different modes 
 (development and regular) where one of those 2 libraries is not 
 oslo.sphinx? Because if the issue is really just oslo.sphinx, we can rename 
 that to move it out of the namespace package.
 
 oslo.sphinx is the only one that has triggered this for me so far.  I think 
 it's less likely to happen with the others because they tend to be runtime 
 dependencies so they get installed in devstack, whereas oslo.sphinx doesn't 
 because it's a build dep (AIUI anyway).
 
 That's pretty much what I expected.
 
 Can we get a volunteer to work on renaming oslo.sphinx?
 
 Doug
  
  
 Doug
 -Ben
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Joe Gordon
On Wed, Feb 5, 2014 at 8:29 AM, Greg Hill greg.h...@rackspace.com wrote:
 I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
 that we have to put the same license into every single file of source code in 
 our projects.  In my past experience, a single LICENSE file at the root-level 
 of the project has been sufficient to declare the license chosen for a 
 project.  Github even has the capacity to choose a license and generate that 
 file for you, it's neat.


We do it for the same reason Apache does:

Why is a licensing header necessary?

License headers allow someone examining the file to know the terms for
the work, even when it is distributed without the rest of the
distribution. Without a licensing notice, it must be assumed that the
author has reserved all rights, including the right to copy, modify,
and redistribute.

http://www.apache.org/legal/src-headers.html
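For concreteness, the header being discussed is the short Apache License 2.0 notice that OpenStack source files carry. A sketch follows, with the text held in a string purely for illustration; real files simply start with these comment lines:

```python
# Illustrative only: the standard short Apache License 2.0 header text
# that the thread is discussing, kept in a string so it can be inspected
# or prepended to a new source file.
APACHE_HEADER = '''\
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
'''

print("Apache License, Version 2.0" in APACHE_HEADER)  # True
```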



 Greg





Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Russell Bryant
On 02/05/2014 11:55 AM, Doug Hellmann wrote:
 
 
 
 On Wed, Feb 5, 2014 at 11:22 AM, Thierry Carrez thie...@openstack.org
 mailto:thie...@openstack.org wrote:
 
 (This email is mostly directed to PTLs for programs that include one
 integrated project)
 
 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
 integrated project should be designated sections in the sense of [1]
 (code you actually need to run or include to be allowed to use the
 trademark). That determines where you can run alternate code (think:
 substitute your own private hypervisor driver) and still be able to call
 the result openstack.
 
 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
 
 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.
 
 Comments, thoughts ?
 
 
 How specific do those designations need to be? The question of the
 impact of this designation system on code organization came up, but
 wasn't really answered clearly. Do we have any cases where part of the
 code in one module might be designated core, but another part wouldn't?
 
 For example, I could envision a module that contains code for managing
 data with CRUD operations where the delete is handled through an
 operational job rather than a public API (keystone tokens come to mind
 as an example of that sort of data, as does the data collected by
 ceilometer). While it's likely that the operational job for pruning the
 database would be used in any real deployment, is that tool part of
 core? Does that mean a deployer could not use an alternate mechanism
 to manage the database's growth? If the pruning tool is not core, does that 
 mean the delete code is also not? Does it have to then live in a
 different module from the implementations of the other operations that
 are core?
 
 It seems like the intent is to draw the lines between common project
 code and drivers or other sorts of plugins or extensions, without
 actually using those words because all of them are tied to
 implementation details. It seems better technically, and closer to the
 need of someone wanting to customize a deployment, to designate a set of
 customization points for each app (be they drivers, plugins,
 extensions, whatever) and say that the rest of the app is core.

Perhaps going through this process for a single project first would be
helpful.  I agree that some clarification is needed on the details of
the expected result.

-- 
Russell Bryant



Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Clint Byrum
Excerpts from Greg Hill's message of 2014-02-05 08:29:20 -0800:
 I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
 that we have to put the same license into every single file of source code in 
 our projects.  In my past experience, a single LICENSE file at the root-level 
 of the project has been sufficient to declare the license chosen for a 
 project.  Github even has the capacity to choose a license and generate that 
 file for you, it's neat. 
 

I am definitely not a lawyer, but this is what my reading has shown.

In legal terms, explicit trumps implicit. So being explicit about
our license in each copyrightable file is a hedge against somebody
forklifting the code into their own code base in a proprietary product
and just removing the license. If the header were not there, they might
have a mitigating argument that they were not aware of the license. But
by removing it, they've actively subverted the license.

In reality, I think it is because Debian Developers like me whine when
our program 'licensecheck' says UNKNOWN for any files. ;)



Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Clint Byrum
Excerpts from Steven Dake's message of 2014-02-05 07:35:37 -0800:
 On 02/04/2014 06:34 PM, Robert Collins wrote:
  On 5 February 2014 13:14, Zane Bitter zbit...@redhat.com wrote:
 
 
  That's not a great example, because one DB server depends on the other,
  forcing them into updating serially anyway.
 
  I have to say that even in general, this whole idea about applying update
  policies to non-grouped resources doesn't make a whole lot of sense to me.
  For non-grouped resources you control the resource definitions individually
  - if you don't want them to update at a particular time, you have the 
  option
  of just not updating them.
  Well, I don't particularly like the idea of doing thousands of
  discrete heat stack-update calls, which would seem to be what you're
  proposing.
 
  On groups: autoscale groups are a problem for secure minded
  deployments because every server has identical resources (today) and
  we very much want discrete credentials per server - at least this is
  my understanding of the reason we're not using scaling groups in
  TripleO.
 
  Where you _do_ need it is for scaling groups where every server is based on
  the same launch config, so you need a way to control the members
  individually - by batching up operations (done), adding delays (done) or,
  even better, notifications and callbacks.
 
  So it seems like doing 'rolling' updates for any random subset of resources
  is effectively turning Heat into something of a poor-man's workflow 
  service,
  and IMHO that is probably a mistake.
  I mean to reply to the other thread, but here is just as good :) -
  heat as a way to describe the intended state, and heat takes care of
  transitions, is a brilliant model. It absolutely implies a bunch of
  workflows - the AWS update policy is probably the key example.
 
  Being able to gracefully, *automatically* work through a transition
  between two defined states, allowing the nodes in question to take
  care of their own needs along the way seems like a pretty core
  function to fit inside Heat itself. Its not at all the same as 'allow
  users to define arbitrary workflows'.
 
  -Rob
 Rob,
 
 I'm not precisely certain what you're proposing, but I think we need to 
 take care not to turn the Heat DSL into a full-fledged programming 
 language.  IMO thousands of updates done through heat is a perfect way 
 for a third party service to do such things - eg control workflow.  
 Clearly there is a workflow gap in OpenStack, and possibly that thing 
 doing the thousands of updates should be a workflow service, rather than 
 TripleO, but workflow is out of scope for Heat proper.  Such a workflow 
 service could potentially fit in the Orchestration program alongside 
 Heat and Autoscaling.  It is too bad there isn't a workflow service 
 already because we are getting a lot of pressure to make Heat fill this 
 gap.  I personally believe filling this gap with heat would be a mistake 
 and the correct course of action would be for a workflow service to 
 emerge to fill this need (and depend on Heat for orchestration).
 

I don't think we want to make it more programmable. I think the opposite,
we want to relieve the template author of workflow by hiding the common
case work-flows behind an update pattern.

To provide some substance to that, if we were to make a workflow service
that does this, it would have to understand templating, and it would
have to understand heat's API. By the time we get done implementing
that, it would look a lot like the resource I've suggested, surrounded
by calls to heatclient and a heat template library.

 I believe this may be what Zane is reacting to; I believe the Heat 
 community would like to avoid making the DSL more programmable because 
 then it is harder to use and support.  The parameters,resources,outputs 
 DSL objects are difficult enough for new folks to pick up and its only 3 
 things to understand...

I do agree that keeping this simple to understand from a template author
perspective is extremely important.



Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Russell Bryant
On 02/05/2014 11:53 AM, Daniel P. Berrange wrote:
 On Wed, Feb 05, 2014 at 04:29:20PM +, Greg Hill wrote:
 I'm new, so I'm sure there's some history I'm missing, but I find it
 bizarre that we have to put the same license into every single file
 of source code in our projects.  In my past experience, a single
 LICENSE file at the root-level of the project has been sufficient
 to declare the license chosen for a project.  Github even has the
 capacity to choose a license and generate that file for you, it's
 neat.
 
 It is not uncommon for source from one project to be copied into another
 project in either direction. While the licenses of the two projects have
 to be compatible, they don't have to be the same. It is highly desirable
 that each file have license explicitly declared to remove any level of
 ambiguity as to what license its code falls under. This might not seem
 like a problem now, but code lives for a very long time and what is
 clear today might not be so clear 10, 15, 20 years down the road.
 Distros like Debian and Fedora who audit project license compliance have
 learnt the hard way that you really want these per-file licenses for
 clarity of intent.

Yes, this.  :-)

-- 
Russell Bryant



Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Mark McLoughlin
On Wed, 2014-02-05 at 17:22 +0100, Thierry Carrez wrote:
 (This email is mostly directed to PTLs for programs that include one
 integrated project)
 
 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
 integrated project should be designated sections in the sense of [1]
 (code you actually need to run or include to be allowed to use the
 trademark). That determines where you can run alternate code (think:
 substitute your own private hypervisor driver) and still be able to call
 the result openstack.
 
 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
 
 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.
 
 Comments, thoughts ?

I think what would be useful to the board is if we could describe at a
high level which parts of each project have a pluggable interface and
whether we encourage out-of-tree implementations of those pluggable
interfaces.

That's actually a pretty tedious thing to document properly - think
about e.g. whether we encourage out-of-tree WSGI middlewares.

There's a flip-side to this designated sections thing that bothers me
after talking it through with Michael Still - I think it's perfectly
reasonable for vendors to e.g. backport fixes to their products without
that backport ever seeing the light of day upstream (say it was too
invasive for the stable branch).

This can't be a case of e.g. enforcing the sha1 sums of files. If we
want to go that route, let's just use the AGPL :)

I don't have a big issue with the way the Foundation currently enforces
you must use the code - anyone who signs a trademark agreement with
the Foundation agrees to include the entirety of Nova's code. That's
very vague, but I assume the Foundation can terminate the agreement if
it thinks the other party is acting in bad faith.

Basically, I'm concerned about us swinging from a rather lax you must
include our code rule to an overly strict you must make no downstream
modifications to our code.

Mark.




Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Zane Bitter

On 04/02/14 20:34, Robert Collins wrote:

On 5 February 2014 13:14, Zane Bitter zbit...@redhat.com wrote:



That's not a great example, because one DB server depends on the other,
forcing them into updating serially anyway.

I have to say that even in general, this whole idea about applying update
policies to non-grouped resources doesn't make a whole lot of sense to me.
For non-grouped resources you control the resource definitions individually
- if you don't want them to update at a particular time, you have the option
of just not updating them.


Well, I don't particularly like the idea of doing thousands of
discrete heat stack-update calls, which would seem to be what you're
proposing.


I'm not proposing you do it by hand if that's any help ;)

Ideally a workflow service would exist that could do the messy parts for 
you, but at the end of the day it's just a for-loop in your code. From 
what you say below, I think you started down the path of managing a lot 
of complexity yourself when you were forced to generate templates for 
server groups rather than use autoscaling. I think it would be better 
for _everyone_ for us to put resources into helping TripleO get off that 
path rather than it would for us to put resources into making it less 
inconvenient to stay on it.



On groups: autoscale groups are a problem for secure minded
deployments because every server has identical resources (today) and
we very much want discrete credentials per server - at least this is
my understanding of the reason we're not using scaling groups in
TripleO.


OK, I wasn't aware that y'all are not using scaling groups. It sounds 
like this is the real problem we should be addressing, because everyone 
wants secure-minded deployments and nobody wants to have to manually 
define the configs for their 1000 all-but-identical servers. If we had a 
mechanism to ensure that every server in a scaling group could obtain 
its own credentials then it seems to me that the issue of whether to 
apply autoscaling-style rolling upgrades to manually-defined groups of 
resources becomes moot.


(Note: if anybody read that paragraph and started thinking hey, we 
could make Turing-complete programmable template templates using the 
JSON equivalent of XSLT, please just stop right now kthx.)



Where you _do_ need it is for scaling groups where every server is based on
the same launch config, so you need a way to control the members
individually - by batching up operations (done), adding delays (done) or,
even better, notifications and callbacks.

So it seems like doing 'rolling' updates for any random subset of resources
is effectively turning Heat into something of a poor-man's workflow service,
and IMHO that is probably a mistake.


I mean to reply to the other thread, but here is just as good :) -
heat as a way to describe the intended state, and heat takes care of
transitions, is a brilliant model. It absolutely implies a bunch of
workflows - the AWS update policy is probably the key example.


Absolutely. Orchestration works by building a workflow internally, which 
Heat then also executes. No disagreement there.



Being able to gracefully, *automatically* work through a transition
between two defined states, allowing the nodes in question to take
care of their own needs along the way seems like a pretty core
function to fit inside Heat itself. Its not at all the same as 'allow
users to define arbitrary workflows'.


That's fair and, I like to think, consistent with what I was suggesting 
below.



What we do need for all resources (not just scaling groups) is a way for the
user to say for this particular resource, notify me when it has updated
(but, if possible, before we have taken any destructive actions on it), give
me a chance to test it and accept or reject the update. For example, when
you resize a server, give the user a chance to confirm or reject the change
at the VERIFY_RESIZE step (Trove requires this). Or when you replace a
server during an update, give the user a chance to test the new server and
either keep it (continue on and delete the old one) or not (roll back). Or
when you replace a server in a scaling group, notify the load balancer _or
some other thing_ (e.g. OpenShift broker node) that a replacement has been
created and wait for it to switch over to the new one before deleting the
old one. Or, of course, when you update a server to some new config, give
the user a chance to test it out and make sure it works before continuing
with the stack update. All of these use cases can, I think, be solved with a
single feature.

The open questions for me are:
1) How do we notify the user that it's time to check on a resource?
(Marconi?)


This is the graceful update stuff I referred to in my mail to Clint -
the proposal from hallway discussions in HK was to do this by
notifying the server itself (that way we don't create a centralised
point of fail). I can see though that in a general sense not all
resources are servers. 

Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Mark McLoughlin
On Wed, 2014-02-05 at 16:29 +, Greg Hill wrote:
 I'm new, so I'm sure there's some history I'm missing, but I find it
 bizarre that we have to put the same license into every single file of
 source code in our projects.  In my past experience, a single LICENSE
 file at the root-level of the project has been sufficient to declare
 the license chosen for a project.  Github even has the capacity to
 choose a license and generate that file for you, it's neat. 

Take a look at this thread on legal-discuss last month:

  http://lists.openstack.org/pipermail/legal-discuss/2014-January/thread.html

But yeah, as others say - per-file license headers help make the license
explicit when it is copied to other projects.

Mark.




Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-05 Thread Daniel P. Berrange
On Wed, Feb 05, 2014 at 11:56:35AM -0500, Doug Hellmann wrote:
 On Wed, Feb 5, 2014 at 11:40 AM, Chmouel Boudjnah chmo...@enovance.com wrote:
 
 
  On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann doug.hellm...@dreamhost.com
   wrote:
 
  Including the config file in either the developer documentation or the
  packaging build makes more sense. I'm still worried that adding it to the
  sdist generation means you would have to have a lot of tools installed just
  to make the sdist. However, we could
 
 
 
  I think that may slightly complicate devstack, since we rely
  heavily on config samples to set up the services.
 
 
 Good point, we would need to add a step to generate a sample config for
 each app instead of just copying the one in the source repository.

Which is what 'python setup.py build' for an app would take care of.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Tiwari, Arvind
Hi Chris,

Looking at your requirements, seems my solution (see attached email) is pretty 
much aligned. What I am trying to propose is

1. One root domain as owner of virtual cloud. Logically linked to n leaf 
domains. 
2. All leaf domains fall under the admin boundary of the virtual cloud owner.
3. No sharing of resources at project level, that will keep the authorization 
model simple.
4. No sharing of resources at domain level either.
5. Hierarchy or admin boundary will be totally governed by roles. 

This way we can setup a true virtual cloud/Reseller/wholesale model.

Thoughts?

Thanks,
Arvind

-Original Message-
From: Chris Behrens [mailto:cbehr...@codestud.com] 
Sent: Wednesday, February 05, 2014 1:27 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy 
Discussion


Hi Vish,

I'm jumping in slightly late on this, but I also have an interest in this. I'm 
going to preface this by saying that I have not read this whole thread yet, so 
I apologize if I repeat things, say anything that is addressed by previous 
posts, or doesn't jive with what you're looking for. :) But what you describe 
below sounds like exactly a use case I'd come up with.

Essentially I want another level above project_id. Depending on the exact use 
case, you could name it 'wholesale_id' or 'reseller_id'...and yeah, 'org_id' 
fits in with your example. :) I think that I had decided I'd call it 'domain' 
to be more generic, especially after seeing keystone had a domain concept.

Your idea below (prefixing the project_id) is exactly one way I thought of 
doing this to be least intrusive. I, however, thought that this would not be 
efficient. So, I was thinking about proposing that we add 'domain' to all of 
our models. But that limits your hierarchy and I don't necessarily like that. 
:)  So I think that if the queries are truly indexed as you say below, you have 
a pretty good approach. The one issue that comes into mind is that if there's 
any chance of collision. For example, if project ids (or orgs) could contain a 
'.', then '.' as a delimiter won't work.
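The prefix-matching idea and the delimiter collision worry can both be shown in a few lines of Python. This is a toy sketch, not the actual Nova or Keystone code:

```python
def in_scope(project_id, scope, delimiter="."):
    """Toy version of the prefix check: is project_id inside scope?"""
    return project_id == scope or project_id.startswith(scope + delimiter)

# Normal case: 'orga.projecta' is inside the 'orga' scope.
print(in_scope("orga.projecta", "orga"))   # True
print(in_scope("orgb.projecta", "orga"))   # False

# Collision case: a flat project whose id happens to contain the
# delimiter is indistinguishable from a genuine child of 'my' -- which
# is why ids must be forbidden from containing the delimiter character.
print(in_scope("my.company", "my"))        # True, even though 'my.company' is one flat id
```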

My requirements could be summed up pretty well by thinking of this as 'virtual 
clouds within a cloud'. Deploy a single cloud infrastructure that could look 
like many multiple clouds. 'domain' would be the key into each different 
virtual cloud. Accessing one virtual cloud doesn't reveal any details about 
another virtual cloud.

What this means is:

1) domain 'a' cannot see instances (or resources in general) in domain 'b'. It 
doesn't matter if domain 'a' and domain 'b' share the same tenant ID. If you 
act with the API on behalf of domain 'a', you cannot see your instances in 
domain 'b'.
2) Flavors per domain. domain 'a' can have different flavors than domain 'b'.
3) Images per domain. domain 'a' could see different images than domain 'b'.
4) Quotas and quota limits per domain. your instances in domain 'a' don't count 
against quotas in domain 'b'.
5) Go as far as using different config values depending on what domain you're 
using. This one is fun. :)

etc.

I'm not sure if you were looking to go that far or not. :) But I think that our 
ideas are close enough, if not exact, that we can achieve both of our goals 
with the same implementation.

I'd love to be involved with this. I am not sure that I currently have the time 
to help with implementation, however.

- Chris



On Feb 3, 2014, at 1:58 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Hello Again!
 
 At the meeting last week we discussed some options around getting true 
 multitenancy in nova. The use case that we are trying to support can be 
 described as follows:
 
 Martha, the owner of ProductionIT provides it services to multiple 
 Enterprise clients. She would like to offer cloud services to Joe at 
 WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for 
 WidgetMaster and he has multiple QA and Development teams with many users. 
 Joe needs the ability to create users, projects, and quotas, as well as the 
 ability to list and delete resources across WidgetMaster. Martha needs to be 
 able to set the quotas for both WidgetMaster and SuperDevShop; manage users, 
 projects, and objects across the entire system; and set quotas for the client 
 companies as a whole. She also needs to ensure that Joe can't see or mess 
 with anything owned by Sam.
 
 As per the plan I outlined in the meeting I have implemented a 
 Proof-of-Concept that would allow me to see what changes were required in 
 nova to get scoped tenancy working. I used a simple approach of faking out 
 hierarchy by prepending the id of the larger scope to the id of the smaller 
 scope. Keystone uses uuids internally, but for ease of explanation I will 
 pretend like it is using the name. I think we can all agree that 
 'orga.projecta' is more readable than 
 'b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8'.
 
 The 

Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-05 Thread Jonathan Bryce
On Feb 5, 2014, at 10:18 AM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
 From: Andreas Jaeger a...@suse.com
 To: Mark McLoughlin mar...@redhat.com, OpenStack Development Mailing 
 List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: Jonathan Bryce jonat...@openstack.org
 Sent: Wednesday, February 5, 2014 9:17:39 AM
 Subject: Re: [openstack-dev] [Openstack-docs] Conventions on naming
 
 On 02/05/2014 01:09 PM, Mark McLoughlin wrote:
 On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
 Steve Gordon wrote:
 From: Anne Gentle anne.gen...@rackspace.com
 Based on today's Technical Committee meeting and conversations with the
 OpenStack board members, I need to change our Conventions for service
 names
 at
 https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
 .
 
 Previously we have indicated that Ceilometer could be named OpenStack
 Telemetry and Heat could be named OpenStack Orchestration. That's not
 the
 case, and we need to change those names.
 
 To quote the TC meeting, ceilometer and heat are other modules (second
 sentence from 4.1 in
 http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
 distributed with the Core OpenStack Project.
 
 Here's what I intend to change the wiki page to:
 Here's the list of project and module names and their official names
 and
 capitalization:
 
 Ceilometer module
 Cinder: OpenStack Block Storage
 Glance: OpenStack Image Service
 Heat module
 Horizon: OpenStack dashboard
 Keystone: OpenStack Identity Service
 Neutron: OpenStack Networking
 Nova: OpenStack Compute
 Swift: OpenStack Object Storage
 
 Small correction. The TC had not indicated that Ceilometer could be
 named OpenStack Telemetry and Heat could be named OpenStack
 Orchestration. We formally asked[1] the board to allow (or disallow)
 that naming (or more precisely, that use of the trademark).
 
 [1]
 https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names
 
 We haven't got a formal and clear answer from the board on that request
 yet. I suspect they are waiting for progress on DefCore before deciding.
 
 If you need an answer *now* (and I suspect you do), it might make sense
 to ask foundation staff/lawyers about using those OpenStack names with
 the current state of the bylaws and trademark usage rules, rather than
 the hypothetical future state under discussion.
 
 Basically, yes - I think having the Foundation confirm that it's
 appropriate to use OpenStack Telemetry in the docs is the right thing.
 
 There's an awful lot of confusion about the subject and, ultimately,
 it's the Foundation staff who are responsible for enforcing (and giving
 advise to people on) the trademark usage rules. I've cc-ed Jonathan so
 he knows about this issue.
 
 But FWIW, the TC's request is asking for Ceilometer and Heat to be
 allowed to use their Telemetry and Orchestration names in *all* of the
 circumstances where e.g. Nova is allowed to use its Compute name.
 
 Reading again this clause in the bylaws:
 
  The other modules which are part of the OpenStack Project, but
   not the Core OpenStack Project may not be identified using the
   OpenStack trademark except when distributed with the Core OpenStack
   Project.
 
 it could well be said that this case of naming conventions in the docs
 for the entire OpenStack Project falls under the distributed with case
 and it is perfectly fine to refer to OpenStack Telemetry in the docs.
 I'd really like to see the Foundation staff give their opinion on this,
 though.

In this case, we are talking about documentation that is produced and 
distributed with the integrated release to cover the Core OpenStack Project and 
the “modules that are distributed together with the Core OpenStack Project in 
the integrated release.” This is the intended use case for the exception Mark 
quoted above from the Bylaws, and I think it is perfectly fine to refer to the 
integrated components in the OpenStack release documentation as OpenStack 
components.


 What Steve is asking IMO is whether we have to change OpenStack
 Telemetry to Ceilometer module or whether we can just say Telemetry
 without the OpenStack in front of it,
 
 Andreas
 
 Constraining myself to the topic of what we should be using in the 
 documentation, yes this is what I'm asking. This makes more sense to me than 
 switching to calling them the Heat module and Ceilometer module because:
 
 1) It resolves the issue of using the OpenStack mark where it (apparently) 
 shouldn't be used.
 2) It means we're still using the formal name for the program as defined by 
 the TC [1] (it is my understanding this remains the purview of the TC, it's 
 control of the mark that the board are exercising here).
 3) It is a more minor change/jump and therefore provides more continuity and 
 less confusion to readers (and similarly if one of them ever becomes endorsed 
 as core and we need to switch again).
 
 

Re: [openstack-dev] [Openstack-dev] [Oslo] [Fuel] [Fuel-dev] Openstack services should support SIGHUP signal

2014-02-05 Thread Ben Nemec

On 2014-02-05 10:58, Bogdan Dobrelya wrote:

Hi, stackers.
I believe OpenStack services from all projects should support SIGHUP for
effective log/config file handling without unnecessary restarts.
(See https://bugs.launchpad.net/oslo/+bug/1276694)

'Smooth reloads' (kill -HUP) are much better than 'disturbing restarts',
aren't they?
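As a rough illustration of the pattern being proposed — not oslo's actual implementation, and with an invented `Service` class and config format — a service can install a SIGHUP handler that re-reads its configuration in place instead of requiring a restart:

```python
import logging
import signal


class Service:
    """Toy long-running service that reloads its config on SIGHUP."""

    def __init__(self, conf_path):
        self.conf_path = conf_path
        self.conf = self._load_conf()
        # Re-read config on SIGHUP instead of requiring a full restart.
        signal.signal(signal.SIGHUP, self._handle_sighup)

    def _load_conf(self):
        # Parse simple "key = value" lines, skipping blanks and comments.
        conf = {}
        with open(self.conf_path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith('#'):
                    key, _, value = line.partition('=')
                    conf[key.strip()] = value.strip()
        return conf

    def _handle_sighup(self, signum, frame):
        logging.info('SIGHUP received, reloading %s', self.conf_path)
        self.conf = self._load_conf()
```

After `kill -HUP <pid>`, the running process picks up edits to the file with no downtime, which is the "smooth reload" behaviour the bug report asks for.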


I believe Oslo already has support for this: 
https://github.com/openstack/oslo-incubator/commit/825ace5581fbb416944acae62f51c489ed93b9c9


As such, I'm going to mark that bug invalid against Oslo, but please 
feel free to add other projects to it that need to start using the 
functionality (or tell me I'm completely wrong and that doesn't do what 
you want :-).


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] RFC - Suggestion for switching from Less to Sass (Bootstrap 3 Sass support)

2014-02-05 Thread Jason Rist
On Wed 05 Feb 2014 09:32:54 AM MST, Jaromir Coufal wrote:
 Dear Horizoners,

 in the last few days there were a couple of interesting discussions about
 updating to Bootstrap 3. In this e-mail, I would love to give a short
 summary and propose a solution for us.

 As Bootstrap was heavily dependent on Less, when we got rid of node.js
 we started to use lesscpy. Unfortunately, because of this change we
 were unable to update to Bootstrap 3. Fixing lesscpy looks problematic
 - there are issues with supporting all use-cases, and even if we fix
 them in time, we might face these issues again in the future.

 There is great news about Bootstrap: it has started to support Sass [0].
 (Thanks Toshi and MaxV for highlighting this news!)

 Thanks to this step forward, we might get out of our lesscpy issues by
 switching to Sass. I am very happy with this possible change, since
 Sass is more powerful than Less and we will be able to update our
 libraries without any constraints.

 There are a few downsides - we will need to convert our Horizon Less
 files to Sass, but it shouldn't be a very big deal, as far as we
 discussed it with some Horizon folks. We can actually do it as part
 of the Bootstrap update [1] (or the CSS files restructuring [2]).

 Another concern is the compiler. So far I've found these options:
 * a Rails dependency (how big a problem would it be?)
 * https://pypi.python.org/pypi/scss/0.7.1
 * https://pypi.python.org/pypi/SassPython/0.2.1
 * ... (other suggestions?)

 A nice benefit of Sass is that we can take advantage of the Compass
 framework [3], which will save us a lot of energy when writing (not
 just cross-browser) stylesheets, thanks to its mixins.

 When we discussed this on IRC with Horizoners, it seemed like a good
 way to go in order to move us forward. So I am here, bringing this
 suggestion up to the whole community.

 My proposal for Horizon is to *switch from Less to Sass*. Then we can
 unblock our already existing BPs, get Bootstrap updates and include the
 Compass framework. I believe this is all doable in the Icehouse
 timeframe if there are no problems with compilers.

 Thoughts?

 -- Jarda

 [0] http://getbootstrap.com/getting-started/
 [1] https://blueprints.launchpad.net/horizon/+spec/bootstrap-update
 [2] https://blueprints.launchpad.net/horizon/+spec/css-breakdown
 [3] http://compass-style.org/


I think this is a fantastic idea. I have no experience with Less, but 
seeing that it is troublesome, if we can use Sass/Compass I'd be much 
more comfortable with the switch. +1

--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
+1.919.754.4048
Freenode: jrist
github/identi.ca: knowncitizen



Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Thierry Carrez
Russell Bryant wrote:
 Who gets final say if there's strong disagreement between a PTL and the
 TC?  Hopefully this won't matter, but it may be useful to go ahead and
 clear this up front.

I suspect that would be as usual. PTL has final say over his project
matters. The TC can just wield the nuclear weapon of removing a project
from the integrated release... but I seriously doubt we'd engage in such
an extreme solution over that precise discussion.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Jonathan Bryce
On Feb 5, 2014, at 11:12 AM, Mark McLoughlin mar...@redhat.com wrote:

 I don't have a big issue with the way the Foundation currently enforces
 “you must use the code” - anyone who signs a trademark agreement with
 the Foundation agrees to include the entirety of Nova's code. That's
 very vague, but I assume the Foundation can terminate the agreement if
 it thinks the other party is acting in bad faith.
 
 Basically, I'm concerned about us swinging from a rather lax “you must
 include our code” rule to an overly strict “you must make no downstream
 modifications to our code”.

I tend to agree with you for the most part. As they exist today, the trademark 
licenses include a couple of components: legally agreeing to use the code in 
the projects specified (requires self certification from the licensee) and 
passing the approved test suite once it exists (which adds a component 
requiring external validation of behavior). By creating the test suite and 
selecting required capabilities that can be externally validated through the 
test suite, we would take a step in tightening up the usage and consistency 
enforceable by our existing legal framework.

I think that “designated sections” could provide a useful construct for better 
general guidance on where the extension points to the codebase are. From a 
practical standpoint, it would probably be pretty difficult to efficiently 
audit an overly strict definition of the designated sections and this would 
still be a self certifying requirement on the licensee.

Jonathan




Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Mark Washenberger
On Wed, Feb 5, 2014 at 8:22 AM, Thierry Carrez thie...@openstack.orgwrote:

 (This email is mostly directed to PTLs for programs that include one
 integrated project)

 The DefCore subcommittee from the OpenStack board of directors asked the
 Technical Committee yesterday about which code sections in each
 integrated project should be “designated sections” in the sense of [1]
 (code you actually need to run or include to be allowed to use the
 trademark). That determines where you can run alternate code (think:
 substitute your own private hypervisor driver) and still be able to call
 the result “openstack”.

 [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition

 PTLs and their teams are obviously the best placed to define this, so it
 seems like the process should be: PTLs propose designated sections to
 the TC, which blesses them, combines them and forwards the result to the
 DefCore committee. We could certainly leverage part of the governance
 repo to make sure the lists are kept up to date.

 Comments, thoughts ?


I don't have any issue defining what I think of as typical extension /
variation seams in the Glance code base. However, I'm still struggling to
understand what all this means for our projects and our ecosystem.
Basically, why do I care? What are the implications of a 0% vs 100%
designation? Are we hoping to improve interoperability, or encourage more
upstream collaboration, or what?

How many deployments do we expect to get the trademark after this core
definition process is completed?



 --
 Thierry Carrez (ttx)




Re: [openstack-dev] [PTL] Designating required use upstream code

2014-02-05 Thread Russell Bryant
On 02/05/2014 12:54 PM, Jonathan Bryce wrote:
 On Feb 5, 2014, at 11:12 AM, Mark McLoughlin mar...@redhat.com wrote:
 
 I don't have a big issue with the way the Foundation currently enforces
 “you must use the code” - anyone who signs a trademark agreement with
 the Foundation agrees to include the entirety of Nova's code. That's
 very vague, but I assume the Foundation can terminate the agreement if
 it thinks the other party is acting in bad faith.

 Basically, I'm concerned about us swinging from a rather lax “you must
 include our code” rule to an overly strict “you must make no downstream
 modifications to our code”.
 
 I tend to agree with you for the most part. As they exist today, the 
 trademark licenses include a couple of components: legally agreeing to use 
 the code in the projects specified (requires self certification from the 
 licensee) and passing the approved test suite once it exists (which adds a 
 component requiring external validation of behavior). By creating the test 
 suite and selecting required capabilities that can be externally validated 
 through the test suite, we would take a step in tightening up the usage and 
 consistency enforceable by our existing legal framework.
 
 I think that “designated sections” could provide a useful construct for 
 better general guidance on where the extension points to the codebase are. 
 From a practical standpoint, it would probably be pretty difficult to 
 efficiently audit an overly strict definition of the designated sections and 
 this would still be a self certifying requirement on the licensee.

Another thing to consider is that like many other implementation
details, this stuff is rapidly evolving.  I'm a bit worried about the
nightmare of trying to keep the definitions up to date, much less agreed
upon by all parties involved.

The vague “include the entirety of” statement is in line with what I
feel is appropriate for Nova.  I suspect that I would disagree with some
interpretations of that, though.

-- 
Russell Bryant



Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Zane Bitter

On 05/02/14 11:39, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2014-02-04 16:14:09 -0800:

On 03/02/14 17:09, Clint Byrum wrote:

UpdatePolicy in cfn is a single string, and causes very generic rolling


Huh?

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html

Not only is it not just a single string (in fact, it looks a lot like
the properties you have defined), it's even got another layer of
indirection so you can define different types of update policy (rolling
vs. canary, anybody?). It's an extremely flexible syntax.
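For readers unfamiliar with the mechanics, the batching arithmetic behind such a rolling-update policy can be sketched in a few lines of Python. This is purely illustrative — the parameter names mirror the `MaxBatchSize` and `MinInstancesInService` knobs of the AWS attribute, and the code does not reflect Heat's actual implementation:

```python
def rolling_batches(members, max_batch_size=1, min_in_service=0):
    """Yield successive batches of members to update, never taking more
    out of service at once than min_in_service allows."""
    total = len(members)
    # A batch may be at most max_batch_size, must leave at least
    # min_in_service members in service, and must always make progress.
    batch = max(1, min(max_batch_size, total - min_in_service))
    for i in range(0, total, batch):
        yield members[i:i + batch]
```

With five members, a max batch size of 2 and a minimum of 4 in service, this degrades to one-at-a-time batches; with no minimum, it updates two at a time.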



Oops, I relied a little too much on my memory and not enough on docs for
that one. O-k, I will re-evaluate given actual knowledge of how it
actually works. :-P


cheers :D


BTW, given that we already implemented this in autoscaling, it might be
helpful to talk more specifically about what we need to do in addition
in order to support the use cases you have in mind.



As Robert mentioned in his mail, autoscaling groups won't allow us to
inject individual credentials. With the ResourceGroup, we can make a
nested stack with a random string generator so that is solved. Now the


\o/ for the random string generator solving the problem!

:-( for ResourceGroup being the only way to do it.

This is exactly why I hate ResourceGroup and think it was a mistake. 
Powerful software comes from being able to combine simple concepts in 
complex ways. Right now you have to choose between an autoscaling group, 
which has rolling updates, and a ResourceGroup which allows you to scale 
stacks. That sucks. What you need is to have both at the same time, and 
the way to do that is to allow autoscaling groups to scale stacks, as 
has long been planned.


At this point it would be a mistake to add a _complicated_ feature 
solely for the purpose of working around the fact that we can't yet 
combine two other, existing, features. It would be better to fix 
autoscaling groups to allow you to inject individual credentials and 
then add a simpler feature that does not need to create ad-hoc groups.



other piece we need is to be able to directly choose machines to take
out of commission, which I think we may have a simple solution to but I
don't want to derail on that.

The one used in AutoScalingGroups is also limited to just one group,
thus it can be done all inside the resource.


update behavior. I want this resource to be able to control multiple
groups as if they are one in some cases (Such as a case where a user
has migrated part of an app to a new type of server, but not all.. so
they will want to treat the entire aggregate as one rolling update).

I'm o-k with overloading it to allow resource references, but I'd like
to hear more people take issue with depends_on before I select that
course.


Resource references in general, and depends_on in particular, feel like
very much the wrong abstraction to me. This is a policy, not a resource.


To answer your question, using it with a server instance allows
rolling updates across non-grouped resources. In the example the
rolling_update_dbs does this.


That's not a great example, because one DB server depends on the other,
forcing them into updating serially anyway.



You're right, a better example is a set of (n) resource groups which
serve the same service and thus we want to make sure we maintain the
minimum service levels as a whole.


That's interesting, and I'd like to hear more about that use case and 
why it couldn't be solved using autoscaling groups assuming the obstacle 
to using them at all were eliminated. If there's a real use case here 
beyond work around lack of stack-scaling functionality then I'm 
definitely open to being persuaded. I'd just like to make sure that it 
exists and justifies the extra complexity.



If it were an order of magnitude harder to do it this way, I'd say
sure let's just expand on the single-resource rolling update. But
I think it won't be that much harder to achieve this and then the use
case is solved.


I guess what I'm thinking is that your proposal is really two features:

1) Notifications/callbacks on update that allow the user to hook in to 
the workflow.

2) Rolling updates over ad-hoc groups (not autoscaling groups).

I think we all agree that (1) is needed; by my count ~6 really good use 
cases have been mentioned in this thread.


What I'm suggesting is that we probably don't need to do (2) at all if 
we fix autoscaling groups to be something you could use.


Having reviewed the code for rolling updates in scaling groups, I can 
report that it is painfully complicated and that you'd be doing yourself 
a big favour by not attempting to reimplement it with ad-hoc groups ;). 
(To be fair, I don't think this would be quite as bad, though clearly it 
wouldn't be as good as not having to do it at all.) More concerning than 
that, though, is the way this looks set to make the template format even 
more arcane than it already is. We might eventually be able 

[openstack-dev] [Governance] Integrated projects and new requirements

2014-02-05 Thread Russell Bryant
Greetings,

In the TC we have been going through a process to better define our
requirements for incubation and graduation to being an integrated
project.  The current version can be found in the governance repo:

http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements

Is it time that we do an analysis of the existing integrated projects
against the requirements we have set?  If not now, when?

Perhaps we should start putting each project on the TC agenda for a
review of its current standing.  For any gaps, I think we should set a
specific timeframe for when we expect these gaps to be filled.

Thoughts?

-- 
Russell Bryant



Re: [openstack-dev] [Neutron] Developer documentation - linking to slideshares?

2014-02-05 Thread Collins, Sean
On Tue, Feb 04, 2014 at 07:52:22AM -0600, Anne Gentle wrote:
 Currently the docs contributor sign the same CLA as code contributors. I'd
 encourage you to use the docs to really explain not just link to slide
 decks. There's a better chance of maintenance over time.

Agreed - I plan on writing up docs, but when I find something really
good on a slide I'd like to be able to have a reference to it in the
footnotes - I suppose a works cited section, so I'm not plagiarizing.

 I had been using a wiki page for a collection of videos at
 https://wiki.openstack.org/wiki/Demo_Videos. But it ages with time.

Awesome - I'll make sure to add that link to some kind of
further reading section.

-- 
Sean M. Collins


Re: [openstack-dev] oslo.config error on running Devstack

2014-02-05 Thread Ben Nemec
 

On 2014-02-05 10:58, Doug Hellmann wrote: 

 On Wed, Feb 5, 2014 at 11:44 AM, Ben Nemec openst...@nemebean.com wrote:
 
 On 2014-02-05 09:05, Doug Hellmann wrote: 
 
 On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec openst...@nemebean.com wrote:
 
 On 2014-01-08 12:14, Doug Hellmann wrote: 
 
 On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec openst...@nemebean.com wrote:
 
 On 2014-01-08 11:16, Sean Dague wrote:
 On 01/08/2014 12:06 PM, Doug Hellmann wrote:
 snip
 Yeah, that's what made me start thinking oslo.sphinx should be called
 something else.
 
 Sean, how strongly do you feel about not installing oslo.sphinx in
 devstack? I see your point, I'm just looking for alternatives to the
 hassle of renaming oslo.sphinx. 
 Doing the git thing is definitely not the right thing. But I guess I got
 lost somewhere along the way about what the actual problem is. Can
 someone write that up concisely? With all the things that have been
 tried/failed, why certain things fail, etc.
 The problem seems to be when we pip install -e oslo.config on the
system, then pip install oslo.sphinx in a venv. oslo.config is
unavailable in the venv, apparently because the namespace package for
o.s causes the egg-link for o.c to be ignored. Pretty much every other
combination I've tried (regular pip install of both, or pip install -e
of both, regardless of where they are) works fine, but there seem to be
other issues with all of the other options we've explored so far.

 We can't remove the pip install -e of oslo.config because it has to be
used for gating, and we can't pip install -e oslo.sphinx because it's
not a runtime dep so it doesn't belong in the gate. Changing the
toplevel package for oslo.sphinx was also mentioned, but has obvious
drawbacks too.

 I think that about covers what I know so far. 

Here's a link dstufft provided to the pip bug tracking this problem:
https://github.com/pypa/pip/issues/3 [1] 
Doug 
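To make the failure mode easier to picture, here is a self-contained sketch of how two separately-installed distributions share a single `oslo` namespace package. It uses the pkgutil style for brevity (the real oslo libraries used setuptools' `declare_namespace`), and it does not reproduce the breakage itself, which occurs specifically when one half is a develop/egg-link install:

```python
import os
import sys
import tempfile

# Build two separate install locations, each shipping one half of a
# shared 'oslo' namespace package -- the layout oslo.config and
# oslo.sphinx used (module names here are invented for the sketch).
root = tempfile.mkdtemp()
for dist, module in [('dist_config', 'config_stub'),
                     ('dist_sphinx', 'sphinx_stub')]:
    pkg_dir = os.path.join(root, dist, 'oslo')
    os.makedirs(pkg_dir)
    with open(os.path.join(pkg_dir, '__init__.py'), 'w') as f:
        # Namespace declaration: merge every oslo/ dir found on sys.path
        # into this package's __path__.
        f.write("from pkgutil import extend_path\n"
                "__path__ = extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg_dir, module + '.py'), 'w') as f:
        f.write("NAME = %r\n" % module)
    sys.path.insert(0, os.path.join(root, dist))

# Both halves import under the single 'oslo' package, even though they
# live in different directories.
from oslo import config_stub, sphinx_stub
```

The bug is that with a develop install, the egg-link entry for one half is not merged into the namespace `__path__`, so that half silently becomes unimportable.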

This just bit me again trying to run unit tests against a fresh Nova
tree. I don't think it's just me either - Matt Riedemann said he has
been disabling site-packages in tox.ini for local tox runs. We really
need to do _something_ about this, even if it's just disabling
site-packages by default in tox.ini for the affected projects. A
different option would be nice, but based on our previous discussion I'm
not sure we're going to find one. 
Thoughts? 

Is the problem isolated to oslo.sphinx? That is, do we end up with any
configurations where we have 2 oslo libraries installed in different
modes (development and regular) where one of those 2 libraries is not
oslo.sphinx? Because if the issue is really just oslo.sphinx, we can
rename that to move it out of the namespace package. 

oslo.sphinx is the only one that has triggered this for me so far. I
think it's less likely to happen with the others because they tend to be
runtime dependencies so they get installed in devstack, whereas
oslo.sphinx doesn't because it's a build dep (AIUI anyway). 

That's pretty much what I expected. 

Can we get a volunteer to work on renaming oslo.sphinx? 

I'm winding down on the parallel testing work so I could look at this
next. I don't know exactly what is going to be involved in the rename
though. 

We also need to decide what we're going to call it. I haven't come up
with any suggestions that I'm particularly in love with so far. :-/ 

-Ben 

 Doug 
 
 Doug 
 
 -Ben

 

Links:
--
[1] https://github.com/pypa/pip/issues/3


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-02-05 Thread Russell Bryant
On 01/23/2014 11:28 AM, Justin Santa Barbara wrote:
 Would appreciate feedback / opinions on this
 blueprint: 
 https://blueprints.launchpad.net/nova/+spec/first-discover-your-peers

The blueprint starts out with:

When running a clustered service on Nova, typically each node needs
to find its peers. In the physical world, this is typically done
using multicast. On the cloud, we either can't or don't want to use
multicast.
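As a purely hypothetical sketch of what the blueprint describes — a peers document served alongside the instance metadata, from which each node derives its cluster membership — the client side might look like the following. The `peers` key and the document shape are invented here; they are not part of Nova's metadata API:

```python
import json


def parse_peers(metadata_json, my_uuid):
    """Return the IPs of all peer instances, excluding ourselves.

    Assumes a hypothetical metadata document of the form
    {"peers": [{"uuid": ..., "ip": ...}, ...]}.
    """
    doc = json.loads(metadata_json)
    return [p['ip'] for p in doc.get('peers', []) if p['uuid'] != my_uuid]


sample = json.dumps({'peers': [
    {'uuid': 'a1', 'ip': '10.0.0.2'},
    {'uuid': 'b2', 'ip': '10.0.0.3'},
]})
print(parse_peers(sample, 'a1'))  # -> ['10.0.0.3']
```

A clustered service would poll this document at boot instead of multicasting, which is exactly the gap the blueprint (and Russell's Marconi counter-proposal) is about.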

So, it seems that at the root of this, you're looking for a
cloud-compatible way for instances to message each other.  I really
don't see the metadata API as the appropriate place for that.

How about using Marconi here?  If not, what's missing from Marconi's API
to solve your messaging use case to allow instances to discover each other?

-- 
Russell Bryant



Re: [openstack-dev] [Nova][Scheduler] Will the Scheuler use Nova Objects?

2014-02-05 Thread Chris Behrens

On Jan 30, 2014, at 5:55 AM, Andrew Laski andrew.la...@rackspace.com wrote:

 I'm of the opinion that the scheduler should use objects, for all the reasons 
 that Nova uses objects, but that they should not be Nova objects.  Ultimately 
 what the scheduler needs is a concept of capacity, allocations, and locality 
 of resources.  But the way those are modeled doesn't need to be tied to how 
 Nova does it, and once the scope expands to include Cinder it may quickly 
 turn out to be limiting to hold onto Nova objects.

+2! 

