Re: [openstack-dev] [Mistral] How Mistral handling long running delegate tasks

2014-04-04 Thread Renat Akhmerov

On 04 Apr 2014, at 07:33, Kirill Izotov enyk...@stackstorm.com wrote:

 Then, we can make task executor interface public and allow clients to
 provide their own task executors. It will be possible then for Mistral
 to implement its own task executor, or several, and share the
 executors between all the engine instances.
 I'm afraid that if we start to tear apart the TaskFlow engine, it would 
 quickly become a mess to support. Besides, the amount of things left to 
 integrate after we throw out the engine might be so low that it would prove the 
 whole process of integration to be just nominal, and we are back to square one. 
 Anyway, task execution is the part that least bothers me; both graph action and 
 the engine itself are where the pain will be.

Would love to see something additional (boxes & arrows) explaining this approach. 
Sorry, I'm having a hard time following the idea.

 That is part of our public API, it is stable and good enough. Basically,
 I don't think this API needs any major change.
 
 But whatever should and will be done about it, I daresay all that work
 can be done without affecting the API more than I described above.
 
 I completely agree that we should not change the public API of the sync 
 engine, especially the one in helpers. What we need is, on the contrary, a 
 low-level construct that would do the number of things I stated previously, 
 but will be a part of the public API of TaskFlow so we can be sure it would work 
 exactly the same way it worked yesterday.

I’m 99.9% sure we’ll have to change API because all we’ve been discussing 
so far made me think this is a key point going implicitly through all our 
discussions: without having a public method like “task_done” we won't build a truly 
passive/async execution model. And it doesn't matter whether it uses futures, 
callbacks or whatever else inside.

And again, I just want to repeat: if we are able to deal with all the 
challenges that the passive/async execution model exposes, then other models can be 
built trivially on top of it.
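
To make that concrete, the kind of surface I have in mind is roughly the following. This is purely an illustrative sketch, not an existing TaskFlow or Mistral API; all names are invented for the example:

    # Hypothetical passive engine surface; names are illustrative only.
    class PassiveEngine(object):
        def __init__(self, workflow_graph, storage):
            self.graph = workflow_graph      # task dependency graph
            self.storage = storage           # persisted task states/results

        def start(self):
            """Return the initial batch of tasks that are ready to run.

            The engine does not run anything itself; the caller (Mistral)
            dispatches these tasks to its own executors/workers.
            """
            return self.graph.ready_tasks(self.storage)

        def task_done(self, task_id, result):
            """Record a result and return the next batch of ready tasks.

            This is the public entry point a passive/async model needs: it
            can be called from a callback, a future's done-hook or an RPC
            handler, and the engine does not care which.
            """
            self.storage.save_result(task_id, result)
            return self.graph.ready_tasks(self.storage)

A blocking engine is then just a loop around start()/task_done(), which is the sense in which the other execution models fall out trivially on top of the passive one.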

@Ivan,

Thanks for joining the conversation. Looks like we really need your active 
participation, since you're the one who knows all the TF internals and concepts 
very well. As for what you wrote about futures and callbacks, it would be 
helpful to see some illustration of your idea.

Renat Akhmerov
@ Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Next crack at real workflows

2014-04-04 Thread Renat Akhmerov
Dmitri, nice work, will research them carefully early next week. I would ask 
other folks to do the same (especially Nikolay).

Renat Akhmerov
@ Mirantis Inc.

On 03 Apr 2014, at 06:22, Dmitri Zimine d...@stackstorm.com wrote:

 Two more workflows drafted - cloud cron, and lifecycle, version 1. 
 
 The mindset questions are: 
 1) is the DSL syntax expressive, capable, and convenient enough to handle real use 
 cases?
 2) most importantly: what are the implied workflow capabilities which make it 
 all work? 
  
 * Take a look here: 
 https://github.com/dzimine/mistral-workflows/tree/add-usecases
 
 * Leave your comments - generally, or  line-by-line, in the pull request  
 https://github.com/dzimine/mistral-workflows/pull/1/files
 
 * Fork, do your own modifications and do another pull request. 
 
 * Or just reply with your comments  in email (lazy option :)) 
 
 NOTE: please keep this thread for specific comments on DSL and workflow 
 capabilities, create another thread if changing topic. Thanks! 
 
 DZ 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [nova] How should a holistic scheduler relate to Heat?

2014-04-04 Thread Mike Spreitzer
Clint Byrum cl...@fewbar.com wrote on 04/03/2014 07:01:16 PM:

 ... The whole question raises many more
 questions, and I wonder if there's just something you haven't told us
 about this use case. :-P

Yes, I seem to have made a muddle of things by starting in one corner of a 
design space.  Let me try to reset this conversation and start from the 
beginning and go slowly enough.  I have adjusted the email subject line to 
describe the overall discussion and invite Nova people, who should also 
participate because this involves the evolution of the Nova API.

Let's start with the simple exercise of designing a resource type for the 
existing server-groups feature of Nova, and then consider how to take one 
evolutionary step forward (from sequential to holistic scheduling).  By 
scheduling here I mean simply placement, not a more sophisticated thing 
that includes time as well.

The server-groups feature of Nova (
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension) 
allows a Nova client to declare a group (just the group as a thing unto 
itself, not listing its members) and associate placement policies with it, 
and include a reference to the group in each Nova API call that creates a 
member of the group --- thereby putting those instances in that group, for 
the purpose of letting the scheduling for those instances take the group's 
policies into account.  The policies currently supported are affinity and 
anti-affinity.  This does what might be called sequential scheduling: when 
an instance is created, its placement decision can take into account its 
group's policies and the placement decisions already made for instances 
previously created, but cannot take into account the issues of placing 
instances that have yet to be created.
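
For reference, the client-side flow looks roughly like this with python-novaclient. This is a sketch only: the server-groups extension is new, the exact call signatures may differ by release, and the uppercase credentials/IDs are placeholders.

    from novaclient import client as nova_client

    # USERNAME, PASSWORD, TENANT_NAME, AUTH_URL, IMAGE_ID, FLAVOR_ID are
    # placeholders for real values.
    nova = nova_client.Client('2', USERNAME, PASSWORD, TENANT_NAME, AUTH_URL)

    # Declare the group as a thing unto itself, with its policies but no members.
    group = nova.server_groups.create(name='my-group',
                                      policies=['anti-affinity'])

    # Each member is created separately; the scheduler hint ties it to the
    # group, so placement is decided one instance at a time (sequential).
    for i in range(3):
        nova.servers.create(name='web-%d' % i,
                            image=IMAGE_ID,
                            flavor=FLAVOR_ID,
                            scheduler_hints={'group': group.id})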

We can define a Heat resource type for a server-group.  Such a resource 
would include its policy set, and not its members, among its properties. 
In the Heat snippet for an OS::Nova::Server there could be a reference to 
a server-group resource.  This directly reflects the API outlined above, 
the dependencies run in the right direction for that API, and it looks to 
me like a pretty simple and clear design. Do not ask me whether a 
server-group's attributes include its members.

If the only placement policies are anti-affinity policies and all servers 
are eligible for the same places then I think that there is no advantage 
in scheduling holistically.  But I am interested in a broader set of 
scenarios, and for those holistic scheduling can get better results than 
sequential scheduling in some cases.

Now let us consider how to evolve the Nova API so that a server-group can 
be scheduled holistically.  That is, we want to enable the scheduler to 
look at both the group's policies and its membership, all at once, and 
make a joint decision about how to place all the servers (instances) in 
the group.  There is no agreed answer here yet, but let me suggest one 
that I hope can move this discussion forward.  The key idea is to first 
associate not just the policies but also a description of the group's 
members with the group, then get the joint scheduling decision made, then 
let the client orchestrate the actual creation of the servers.  This could 
be done with a two-step API: one step creates the group, given its 
policies and member descriptions, and in the second step the client makes 
the calls that cause the individual servers to be made; as before, each 
such call includes a reference to the group --- which is now associated 
(under the covers) with a table that lists the chosen placement for each 
server.  The server descriptions needed in the first step are not as 
extensive as the descriptions needed in the second step.  For example, the 
holistic scheduler would not care about the user_data of a server.  We 
could define a new data structure for member descriptions used in the 
first step (this would probably be a pared-down version of what is used in 
the second step).
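
To illustrate the proposed flow in code form: the calls below are hypothetical, since no such API exists today, and the names and parameters are invented purely for the example.

    # Step 1: give the scheduler the policies *and* pared-down member
    # descriptions so it can make one joint placement decision up front.
    member_descriptions = [{'name': 'db-%d' % i, 'flavor': FLAVOR_ID}
                           for i in range(3)]
    group = nova.server_groups.create(name='db-group',
                                      policies=['anti-affinity'],
                                      members=member_descriptions)
    # Under the covers the group now carries a placement table,
    # e.g. {'db-0': 'host-a', 'db-1': 'host-b', 'db-2': 'host-c'}.

    # Step 2: the client orchestrates creation as before; each call
    # references the group and the precomputed placement is applied.
    for desc in member_descriptions:
        nova.servers.create(name=desc['name'],
                            image=IMAGE_ID,
                            flavor=desc['flavor'],
                            # the full server detail (user_data etc.) is only
                            # needed at this second step
                            scheduler_hints={'group': group.id})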

Now let us consider how to expose this through Heat.  We could take a 
direct approach: modify our original server-group resource type so that 
its properties include not only the policy set but also the list of member 
descriptions, and the rest remains unchanged.  That would work, but it 
would be awkward for template authors.  They now have to write two 
descriptions of each server --- with no help at authoring time for 
ensuring the requisite consistency between the two descriptions.  Of 
course, the Nova API is no better regarding consistency; it can (at best) 
check for consistency when it sees the second description of a given 
server.  But the Nova API is imperative, while a Heat template is intended 
to be declarative.  I do not like double description because it adds bulk 
and creates additional opportunities for mistakes (compared to single 
description).

How can we avoid double-description?  A few ideas come to mind.

One approach involves a change in the 

Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Ladislav Smola

+1
On 04/03/2014 01:02 PM, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

My approach to this caused some confusion a while back, so I'm keeping
the boilerplate :) - I'm
going to talk about stats here, but they are only part of the picture
: folk that aren't really being /felt/ as effective reviewers won't be
asked to take on -core responsibility, and folk who are less active
than needed but still very connected to the project may still keep
them : it's not pure numbers.

Also, it's a vote: that is direct representation by the existing -core
reviewers as to whether they are ready to accept a new reviewer as
core or not. This mail from me merely kicks off the proposal for any
changes.

But, the metrics provide an easy fingerprint - they are a useful tool
to avoid bias (e.g. remembering folk who are just short-term active) -
human memory can be particularly treacherous - see 'Thinking, Fast and
Slow'.

With that prelude out of the way:

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up so they
aren't caught by surprise.

90 day active-enough stats:

+------------------+---------------------------------------------+----------------+
| Reviewer         | Reviews   -2   -1   +1   +2   +A    +/- %   | Disagreements* |
+------------------+---------------------------------------------+----------------+
| slagle **        |     655    0  145    7  503  154    77.9%  |   36 (  5.5%)  |
| clint-fewbar **  |     549    4  120   11  414  115    77.4%  |   32 (  5.8%)  |
| lifeless **      |     518   34  203    2  279  113    54.2%  |   21 (  4.1%)  |
| rbrady           |     453    0   14  439    0    0    96.9%  |   60 ( 13.2%)  |
| cmsj **          |     322    0   24    1  297  136    92.5%  |   22 (  6.8%)  |
| derekh **        |     261    0   50    1  210   90    80.8%  |   12 (  4.6%)  |
| dan-prince       |     257    0   67  157   33   16    73.9%  |   15 (  5.8%)  |
| jprovazn **      |     190    0   21    2  167   43    88.9%  |   13 (  6.8%)  |
| ifarkas **       |     186    0   28   18  140   82    84.9%  |    6 (  3.2%)  |
+==================+=============================================+================+
| jistr **         |     177    0   31   16  130   28    82.5%  |    4 (  2.3%)  |
| ghe.rivero **    |     176    1   21   25  129   55    87.5%  |    7 (  4.0%)  |
| lsmola **        |     172    2   12   55  103   63    91.9%  |   21 ( 12.2%)  |
| jdob             |     166    0   31  135    0    0    81.3%  |    9 (  5.4%)  |
| bnemec           |     138    0   38  100    0    0    72.5%  |   17 ( 12.3%)  |
| greghaynes       |     126    0   21  105    0    0    83.3%  |   22 ( 17.5%)  |
| dougal           |     125    0   26   99    0    0    79.2%  |   13 ( 10.4%)  |
| tzumainn **      |     119    0   30   69   20   17    74.8%  |    2 (  1.7%)  |
| rpodolyaka       |     115    0   15  100    0    0    87.0%  |   15 ( 13.0%)  |
| ftcjeff          |     103    0    3  100    0    0    97.1%  |    9 (  8.7%)  |
| thesheep         |      93    0   26   31   36   21    72.0%  |    3 (  3.2%)  |
| pblaho **        |      88    1    8   37   42   22    89.8%  |    3 (  3.4%)  |
| jonpaul-sullivan |      80    0   33   47    0    0    58.8%  |   17 ( 21.2%)  |
| tomas-8c8 **     |      78    0   15    4   59   27    80.8%  |    4 (  5.1%)  |
| marios **        |      75    0    7   53   15   10    90.7%  |   14 ( 18.7%)  |
| stevenk          |      75    0   15   60    0    0    80.0%  |    9 ( 12.0%)  |
| rwsu             |      74    0    3   71    0    0    95.9%  |   11 ( 14.9%)  |
| mkerrin          |      70    0   14   56    0    0    80.0%  |   14 ( 20.0%)  |

The === line is set at the just-voted-on minimum expected of core: 3
reviews per work day, 60 work days in a 90 day period (64 - fudge for
holidays), 180 reviews.
I cut the full report out at the point we had been previously - with
the commitment to 3 reviews per day, next month's report will have a
much higher minimum. In future reviews, we'll set the 

Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-04 Thread Thomas Spatzier
Hi Steve,

your indexing idea sounds interesting, but I am not sure it would work
reliably. The kind of matching based on names of parameters and outputs and
internal get_attr uses rests on very strong assumptions, and I think there is a
not-so-low risk of false positives. What if the template includes some
internal details that would not affect the matching but still change the
behavior in a way that would break the composition? Or what if a user by
chance built a template that by pure coincidence uses the same parameter
and output names as one of those abstract types that was mentioned, but the
template is simply not built for composition?

I think it would be much cleaner to have an explicit attribute in the
template that says this template can be used as a realization of type
My::SomeType, and use that for presenting the user choice and building the
environment.

Regards,
Thomas

Steve Baker sba...@redhat.com wrote on 04/04/2014 06:12:38:
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 04/04/2014 06:14
 Subject: Re: [openstack-dev] [Heat] [Murano] [Solum] applications inthe
cloud

 On 03/04/14 13:04, Georgy Okrokvertskhov wrote:
 Hi Steve,

 I think this is exactly the place where we have a boundary between
 Murano catalog and HOT.

 In your example one can use abstract resource type and specify a
 correct implementation via environment file. This is how it will be
 done on the final stage in Murano too.

 Murano will solve another issue. In your example the user should know
 what template to use as a provider template. In Murano this will be
 done in the following way:
 1) User selects an app which requires a DB
 2) Murano sees this requirement for a DB and does a search in the app
 catalog to find all apps which expose this functionality. Murano
 uses app package definitions for that.
 3) User selects in the UI the specific DB implementation he wants to use.

 As you see, in the Murano case the user has no preliminary knowledge of
 available apps\templates and uses the catalog to find them. The search
 criteria can be quite complex, using different application
 attributes. If we think about moving the application definition to HOT
 format, it should provide all necessary information for the catalog.

 In order to search for apps in a catalog which uses HOT format we need
 something like this:

 One needs to define an abstract resource like
 OS:HOT:DataBase

 Then in each DB implementation of the DB resource one has to somehow
 refer to this abstract resource as a parent, like

 Resource OS:HOT:MySQLDB
   Parent: OS:HOT:DataBase

 Then the catalog part can use this information and build a list of all
 apps\HOTs with resources whose parent is OS:HOT:DataBase.

 That is what we are looking for. As you see, in this example I am
 not talking about version and other attributes which might be
 required for the catalog.


 This sounds like a vision for Murano that I could get behind. It
 would be a tool which allows fully running applications to be
 assembled and launched from a catalog of Heat templates (plus some
 app lifecycle workflow beyond the scope of this email).

 We could add type interfaces to HOT but I still think duck typing
 would be worth considering. To demonstrate, let's assume that when a
 template gets cataloged, metadata is also indexed about what
 parameters and outputs the template has. So for the case above:
 1) User selects an app to launch from the catalog
 2) Murano performs a heat resource-type-list and compares that with
 the types in the template. The resource-type list is missing
 My::App::Database for a resource named my_db
 3) Murano analyses the template and finds that My::App::Database is
 assigned 2 properties (db_username, db_password) and elsewhere in
 the template is a {get_attr: [my_db, db_url]} attribute access.
 4) Murano queries glance for templates, filtering by templates which
 have parameters [db_username, db_password] and outputs [db_url]
 (plus whatever appropriate metadata filters)
 5) Glance returns 2 matches. Murano prompts the user for a choice
 6) Murano constructs an environment based on the chosen template,
 mapping My::App::Database to the chosen template
 7) Murano launches the stack

 Sure, there could be a type interface called My::App::Database which
 declares db_username, db_password and db_url, but since a heat
 template is in a readily parsable declarative format, all required
 information is available to analyze, both during glance indexing and
 app launching.
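
 A rough sketch of the matching step being described, just to show how little machinery it needs. This is simplified: real HOT templates have richer structure, and the parameter/output metadata would be indexed by glance at upload time rather than parsed on the fly.

    import yaml

    def provider_signature(template_body):
        """Parameter and output names a candidate provider template exposes."""
        tpl = yaml.safe_load(template_body)
        return set(tpl.get('parameters', {})), set(tpl.get('outputs', {}))

    def required_signature(parent_body, resource_name):
        """Properties assigned to, and attributes read from, the missing type."""
        tpl = yaml.safe_load(parent_body)
        needed_params = set(tpl['resources'][resource_name].get('properties', {}))
        needed_outputs = set()

        def walk(node):
            if isinstance(node, dict):
                ga = node.get('get_attr')
                if isinstance(ga, list) and len(ga) > 1 and ga[0] == resource_name:
                    needed_outputs.add(ga[1])
                for value in node.values():
                    walk(value)
            elif isinstance(node, list):
                for value in node:
                    walk(value)

        walk(tpl)
        return needed_params, needed_outputs

    def is_match(candidate_body, needed_params, needed_outputs):
        params, outputs = provider_signature(candidate_body)
        return needed_params <= params and needed_outputs <= outputs

 For the my_db example above this yields needed parameters {db_username, db_password} and needed outputs {db_url}, which is exactly the filter passed to glance in step 4.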




 On Wed, Apr 2, 2014 at 3:30 PM, Steve Baker sba...@redhat.com wrote:
 On 03/04/14 10:39, Ruslan Kamaldinov wrote:
  This is a continuation of the MuranoPL questions thread.
 
  As a result of ongoing discussions, we figured out that the definition of
layers
  which each project operates on and has responsibility for is not yet
agreed
  and discussed between projects and teams (Heat, Murano, Solum (in
  alphabetical order)).
 
  Our suggestion and expectation from this working group is to have such
  a definition of layers, 

Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread mar...@redhat.com
On 03/04/14 14:02, Robert Collins wrote:
 Getting back in the swing of things...
 
 Hi,
 like most OpenStack projects we need to keep the core team up to
 date: folk who are not regularly reviewing will lose context over
 time, and new folk who have been reviewing regularly should be trusted
 with -core responsibilities.
 
 In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

+1

 
 Existing -core members are eligible to vote - please indicate your
 opinion on each of the three changes above in reply to this email.
 

---snip


 
 -core that are not keeping up recently... :
 
 |   tomas-8c8 **  |   31    0    4    2   25    8    87.1% |    1 (  3.2%)  |
 |   marios **     |   27    0    1   17    9    7    96.3% |    3 ( 11.1%)  |

thanks for the heads up - after some time away, I've been keeping the '3
a day' for the last couple weeks so hopefully this will improve.
However, my reviews are mainly in tripleo-heat-templates and tuskar-ui;
I guess the latter no longer counts towards these statistics (under
horizon?) and I'm not sure how to reconcile this ...? Should I just drop
the tuskar-ui reviews altogether ( I am trying to become more active in
neutron too, so something has to give somewhere)...

thanks! marios


 |   tzumainn **   |   27    0    3   23    1    4    88.9% |    0 (  0.0%)  |
 |   pblaho **     |   17    0    0    4   13    4   100.0% |    1 (  5.9%)  |
 |   jomara **     |    0    0    0    0    0    1     0.0% |    0 (  0.0%)  |
 
 
 Please remember - the stats are just an entry point to a more detailed
 discussion about each individual, and I know we all have a bunch of
 work stuff, on an ongoing basis :)
 
 I'm using the fairly simple metric we agreed on - 'average at least
 three reviews a
 day' as a proxy for 'sees enough of the code and enough discussion of
 the code to be an effective reviewer'. The three review a day thing we
 derived based
 on the need for consistent volume of reviews to handle current
 contributors - we may
 lower that once we're ahead (which may happen quickly if we get more cores... 
 :)
 But even so:
  - reading three patches a day is a pretty low commitment to ask for
  - if you don't have time to do that, you will get stale quickly -
 you'll only see under
33% of the code changes going on (we're doing about 10 commits
a day - twice as many since december - and hopefully not slowing down!)
 
 Cheers,
 Rob
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Imre Farkas

On 04/03/2014 01:02 PM, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.


ACK for all proposed changes.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-04 Thread Yingjun Li
Glad to see this; I will be glad to contribute to it if the project moves 
forward.

On Apr 4, 2014, at 10:01, Cazzolato, Sergio J sergio.j.cazzol...@intel.com 
wrote:

 
 Glad to see that, for sure I'll participate in this session.
 
 Thanks
 
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com] 
 Sent: Thursday, April 03, 2014 7:21 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Quota Management
 
 On Thu, 2014-04-03 at 14:41 -0500, Kevin L. Mitchell wrote:
 On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
 Jay, thanks for taking ownership on this idea, we are really 
 interested to contribute to this, so what do you think are the next 
 steps to move on?
 
 Perhaps a summit session on quota management would be in order?
 
 Done:
 
 http://summit.openstack.org/cfp/details/221
 
 Best,
 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Icehouse RC1 available

2014-04-04 Thread Thierry Carrez
Hello everyone,

Last but not least, Swift just published its first Icehouse release
candidate. You can find the tarball for 1.13.1-rc1 at:

https://launchpad.net/swift/icehouse/1.13.1-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released together with all
other OpenStack integrated components as the Swift 1.13.1 final version
on April 17. You are therefore strongly encouraged to test and validate
this tarball.

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/swift/tree/milestone-proposed

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/swift/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-04 Thread Sylvain Bauza
Hi all,



2014-04-03 18:47 GMT+02:00 Meghal Gosalia meg...@yahoo-inc.com:

  Hello folks,

  Here is the bug [1] which is currently not allowing a host to be part of
 two availability zones.
 This bug was targeted for havana.

  The fix in the bug was made because it was assumed
 that openstack does not support adding hosts to two zones by design.

  The assumption was based on the fact that ---
 if hostX is added to zoneA as well as zoneB,
 and if you boot a vm vmY passing zoneB in boot params,
 nova show vmY still returns zoneA.

  In my opinion, we should fix the case of nova show
 rather than changing aggregate api to not allow addition of hosts to
 multiple zones.

  I have added my comments in comments #7 and #9 on that bug.

  Thanks,
 Meghal


 [1] Bug - https://bugs.launchpad.net/nova/+bug/1196893





Thanks for the pointer, now I see why the API is preventing a host from being
added to a 2nd aggregate if there is a different AZ. Unfortunately, this
patch missed the fact that aggregate metadata can be modified once the
aggregate is created, so we should add a check when updating metadata in
order to cover all corner cases.

So, IMHO, it's worth providing a patch for API consistency so that we enforce
the fact that a host should be in only one AZ (but possibly 2 or more
aggregates) and see how we can propose to users the ability to provide 2
distinct AZs when booting.

Does everyone agree?

-Sylvain


   On Apr 3, 2014, at 9:05 AM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -

 Currently host aggregates are quite general, but the only ways for an
 end-user to make use of them are:

 1) By making the host aggregate an availability zone (where each host
 is only supposed to be in one availability zone) and selecting it at
 instance creation time.

 2) By booting the instance using a flavor with appropriate metadata
 (which can only be set up by admin).


 I would like to see more flexibility available to the end-user, so I
 think we should either:

 A) Allow hosts to be part of more than one availability zone (and allow
 selection of multiple availability zones when booting an instance), or


 While changing to allow hosts to be in multiple AZs changes the concept
 from an operator/user point of view I do think the idea of being able to
 specify multiple AZs when booting an instance makes sense and would be a
 nice enhancement for users working with multi-AZ environments - I'm OK
 with this instance running in AZ1 and AZ2, but not AZ*.

 -Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Whats the way to do cleanup during service shutdown / restart ?

2014-04-04 Thread Deepak Shetty
Resending it with the correct cinder prefix in the subject.

thanx,
deepak


On Thu, Apr 3, 2014 at 7:44 PM, Deepak Shetty dpkshe...@gmail.com wrote:


 Hi,
 I am looking to umount the glusterfs shares that are mounted as part
 of the gluster driver when c-vol is being restarted or Ctrl-C'ed (as in a
 devstack env) or when the c-vol service is being shut down.

 I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it
 didn't work

     def __del__(self):
         LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s") %
                  self._mounted_shares)
         for share in self._mounted_shares:
             mount_path = self._get_mount_point_for_share(share)
             command = ['umount', mount_path]
             self._do_umount(command, True, share)

 self._mounted_shares is defined in the base class (RemoteFsDriver)

1. ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-]
Caught SIGINT, stopping children
2. 2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-]
Caught SIGTERM, exiting
3. 2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-]
Caught SIGTERM, exiting
4. 2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-]
Waiting on 2 children to exit
5. 2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-]
Child 30185 exited with status 1
6. 2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-]
DPKS: Inside __del__ Hurray!, shares=[]
7. 2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-]
Child 30186 exited with status 1
8. Exception TypeError: 'NoneType' object is not callable in bound
method GlusterfsDriver.__del__ of
cinder.volume.drivers.glusterfs.GlusterfsDriver object at 0x2777ed0
ignored
9. [stack@devstack-vm tempest]$

 So _mounted_shares is empty ([]), which isn't true since I have 2
 glusterfs shares mounted, and when I print _mounted_shares in other parts of
 the code it does show me the right thing, as below...

 From volume/drivers/glusterfs.py @ line 1062:
 LOG.debug(_('Available shares: %s') % self._mounted_shares)

 which dumps the debug print as below...

 2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs
 [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares:
 [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1'] from 
 (pid=30185) _ensure_shares_mounted
 /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
  This brings up a few questions (I am using a devstack env)...

 1) Is __del__ the right way to do cleanup for a cinder driver? I have 2
 gluster backends set up, hence 2 cinder-volume instances, but I see __del__
 being called only once (as per the above debug prints)
 2) I tried atexit and registering a function to do the cleanup. Ctrl-C'ing
 c-vol (from screen) gives the same issue: shares is empty ([]), but this
 time I see that my atexit handler is called twice (once for each backend)
 3) In general, what's the right way to do cleanup inside a cinder volume
 driver when a service is going down or being restarted?
 4) The solution should work in both devstack (Ctrl-C to shut down the c-vol
 service) and production (where we do service restart c-vol)

 Would appreciate a response

 thanx,
 deepak
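
 For reference, one pattern that sidesteps __del__ entirely is to register an explicit teardown hook on the signals the service is stopped with. A minimal sketch in plain Python, illustrative only; note that the oslo service launcher installs its own SIGINT/SIGTERM handlers (visible in the log above), so a real driver would have to cooperate with that rather than blindly overriding it:

    import signal
    import sys

    def install_cleanup_handler(driver):
        """Unmount the driver's shares when the process is told to stop."""
        def _cleanup(signum, frame):
            for share in driver._mounted_shares:
                mount_path = driver._get_mount_point_for_share(share)
                driver._do_umount(['umount', mount_path], True, share)
            sys.exit(0)

        signal.signal(signal.SIGTERM, _cleanup)
        signal.signal(signal.SIGINT, _cleanup)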


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-04 Thread Simon Pasquier
Hi Salvatore,

On 03/04/2014 14:56, Salvatore Orlando wrote:
 Hi Simon,
 
snip
 
 I hope stricter criteria will be enforced for Juno; I personally think
 every CI should run at least the smoketest suite for L2/L3 services (eg:
 load balancer scenario will stay optional).

I thought about this a little, and I feel like it might not have
caught _immediately_ the issue Kyle talked about [1].

Let's rewind the time line:
1/ Change to *Nova* adding external events API is merged
https://review.openstack.org/#/c/76388/
2/ Change to *Neutron* notifying Nova when ports are ready is merged
https://review.openstack.org/#/c/75253/
3/ Change to *Nova* making libvirt wait for Neutron notifications is merged
https://review.openstack.org/#/c/74832/

At this point and assuming that the external ODL CI system were running
the L2/L3 smoke tests, change #3 could have passed since external
Neutron CI aren't voting for Nova. Instead it would have voted against
any subsequent change to Neutron.

Simon

[1] https://bugs.launchpad.net/neutron/+bug/1301449

 
 Salvatore
 
 [1] https://review.openstack.org/#/c/75304/
 
 
 
  On 3 April 2014 12:28, Simon Pasquier simon.pasqu...@bull.net wrote:
 
 Hi,
 
 I'm looking at [1] but I see no requirement of which Tempest tests
 should be executed.
 
 In particular, I'm a bit puzzled that it is not mandatory to boot an
 instance and check that it gets connected to the network. To me, this is
 the very minimum for asserting that your plugin or driver is working
 with Neutron *and* Nova (I'm not even talking about security groups). I
 had a quick look at the existing 3rd party CI systems and I found none
 running this kind of check (correct me if I'm wrong).
 
 Thoughts?
 
 [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
 --
 Simon Pasquier
 Software Engineer (OpenStack Expertise Center)
 Bull, Architect of an Open World
  Phone: + 33 4 76 29 71 49
 http://www.bull.com
 
 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift ring building..

2014-04-04 Thread Shyam Prasad N
Hi,

I have a question regarding the ring building process in a swift cluster.
Many sources online suggest building the rings using the ring-builder and
scp'ing the generated ring files to all the nodes in the cluster.
What I'm trying to understand is whether the scp step is just to simplify
things, or whether it is absolutely necessary that the ring files on all the
nodes are exactly the same.
Can I instead build the rings on each node individually?
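
For context on why the files are normally copied verbatim: object placement is computed locally from the ring itself, so all nodes have to load the same ring data or their lookups will point at different devices. A toy illustration of the idea follows; it is not Swift's actual ring code, which also handles replicas, regions, zones and weights.

    import hashlib
    import struct

    def toy_lookup(devices, part_power, account, container, obj):
        """Map an object path to a device purely from the 'ring' contents."""
        path = '/%s/%s/%s' % (account, container, obj)
        digest = hashlib.md5(path.encode('utf-8')).digest()
        part = struct.unpack('>I', digest[:4])[0] >> (32 - part_power)
        return devices[part % len(devices)]

    # Two nodes that built their device lists independently, even on the same
    # hardware, can end up ordering or weighting devices differently and will
    # then disagree about where '/AUTH_test/photos/cat.jpg' lives.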

-- 
-Shyam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Devstack] add support for ceph

2014-04-04 Thread Chmouel Boudjnah
Hello,

We had quite a lengthy discussion on this review :

https://review.openstack.org/#/c/65113/

about a patch that seb has sent to add ceph support to devstack.

The main issue seems to revolve around the fact that in devstack we
support only packages that are in the distros, and we don't want to have to
add external apt sources for that.

In devstack we are also moving toward a nice and solid plugin system
where people can hook in externally without needing to submit a patch to add
a feature that changes the 'core' of devstack.

I think the best way to go forward with this would be to:

* Split the patch mentioned above to get the generic bits into
their own patch, i.e. the storage file:

https://review.openstack.org/#/c/65113/19/lib/storage

and the create_disk (which would need to be used by lib/swift as well) :

https://review.openstack.org/#/c/65113/19/functions

* Get the existing drivers converted to that new storage format.

* Add new hooks to the plugin system to be able to do what we want
for this:

https://review.openstack.org/#/c/65113/19/lib/cinder

and for injecting things in libvirt :

https://review.openstack.org/#/c/65113/19/lib/nova

Hopefully, for folks to use devstack with ceph, all that would be needed is:

$ git clone devstack 
$ curl -O lib/storages/ceph http:///ceph_devstack
(and maybe another file for extras.d)

Am I missing a step?

Cheers,
Chmouel.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 1302376 : bug or feature

2014-04-04 Thread Shweta shweta
Hi all,

I've logged a bug in trove. I'm a little unsure if this is a bug or
feature. Please have a look at the bug @
https://bugs.launchpad.net/trove/+bug/1302376 and suggest if it is valid.


Thanks,
Shweta | Consultant Engineering
GlobalLogic
www.globallogic.com
http://www.globallogic.com/
http://www.globallogic.com/email_disclaimer.txt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 1302376 : bug or feature

2014-04-04 Thread Denis Makogon
Good day Shweta, it's definitely a bug; thanks for registering the
bug report.



Best regards,
Denis Makogon


On Fri, Apr 4, 2014 at 1:04 PM, Shweta shweta shw...@globallogic.com wrote:

 Hi all,

 I've logged a bug in trove. I'm a little unsure if this is a bug or
 feature. Please have a look at the bug @
 https://bugs.launchpad.net/trove/+bug/1302376 and suggest if it is valid.


 Thanks,
 Shweta | Consultant Engineering
 GlobalLogic
 www.globallogic.com
  http://www.globallogic.com/
 http://www.globallogic.com/email_disclaimer.txt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest]:Please updated etherpad before adding tempest tests

2014-04-04 Thread Kekane, Abhishek
Hello everyone,

This is regarding implementation of blueprint 
https://blueprints.launchpad.net/tempest/+spec/testcases-expansion-icehouse.

As mentioned in the etherpads for this blueprint, please add your name if you 
are working on any of the items mentioned in the list.
Otherwise efforts will get duplicated.


Thanks & Regards,

Abhishek Kekane

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-04 Thread Julie Pichon
On 03/04/14 23:20, Jay Pipes wrote:
 On Thu, 2014-04-03 at 14:41 -0500, Kevin L. Mitchell wrote:
 On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
 Jay, thanks for taking ownership on this idea, we are really
 interested to contribute to this, so what do you think are the next
 steps to move on?

 Perhaps a summit session on quota management would be in order?
 
 Done:
 
 http://summit.openstack.org/cfp/details/221

Thank you for proposing the session; I'm hopeful that having this in the new
cross-project track will have a positive impact on the discussion. I'm
under the impression that this comes back regularly as a session topic
but keeps hitting barriers when it comes to actual implementation
(perhaps because important stakeholders were missing from the session
before).

I'd like to bring up the cross-project discussion from last time this
was discussed in December [1] as a reference, since the same
questions/objections will likely come back again. One of the main issues
was that this shouldn't live in Keystone, which could be resolved by
using Boson, but the rest shows a reluctance from the projects to
delegate quota management, and uncertainty around the use cases. Oslo
was also mentioned as a possible place to help with improving the
consistency.

I'd love a more consistent way to handle and manage quotas across
multiple projects as this would help Horizon too, for very similar
reasons than are mentioned here.

Thanks,

Julie

[1]
http://eavesdrop.openstack.org/meetings/project/2013/project.2013-12-10-21.02.log.html
from 21:10

 Best,
 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Tempest without branches

2014-04-04 Thread David Kranz

On 04/04/2014 07:37 AM, Sean Dague wrote:

An interesting conversation has cropped up over the last few days in -qa
and -infra which I want to bring to the wider OpenStack community. When
discussing the use of Tempest as part of the Defcore validation we came
to an interesting question:

Why does Tempest have stable/* branches? Does it need them?

Historically the Tempest project has created a stable/foo tag the week
of release to lock the version of Tempest that will be tested against
stable branches. The reason we did that is until this cycle we had
really limited knobs in tempest to control which features were tested.
stable/havana means - test everything we know how to test in havana. So
when, for instance, a new API extension landed upstream in icehouse,
we'd just add the tests to Tempest. It wouldn't impact stable/havana,
because we wouldn't backport changes.

But is this really required?

For instance, we don't branch openstack clients. They are supposed to
work against multiple server versions. Tempest, at some level, is
another client. So there is some sense there.

Tempest now also has flags on features, and tests are skippable if
services, or even extensions, aren't enabled (all explicitly settable in
the tempest.conf). This is a much better control mechanism than the
coarse-grained selection of stable/foo.
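
As a concrete illustration of what those knobs look like from a test's point of view, here is a sketch of the pattern; the class name is made up and the exact option names vary across tempest releases.

    import testtools

    from tempest import config

    CONF = config.CONF


    class VolumesSnapshotTest(testtools.TestCase):

        @testtools.skipUnless(CONF.service_available.cinder,
                              "Cinder is not available")
        @testtools.skipUnless(CONF.volume_feature_enabled.snapshot,
                              "Volume snapshots are disabled in tempest.conf")
        def test_snapshot_create_delete(self):
            pass  # the test body is irrelevant to the skip mechanics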


If we decided not to set a stable/icehouse branch in 2 weeks, the gate
would change as follows:

Project masters: no change
Project stable/icehouse: would be gated against Tempest master
Tempest master: would double the gate jobs, gate on project master and
project stable/icehouse on every commit.

(That last one needs infra changes to work right, those are all in
flight right now to assess doability.)

Some interesting effects this would have:

  * Tempest test enhancements would immediately apply on stable/icehouse *

... giving us more confidence. A large portion of the tests added to master
in every release are enhanced checks of existing functionality.

  * Tempest test changes would need server changes in master and
stable/icehouse *

In trying tempest master against stable/havana we found a number of
behavior changes in projects that had needed a 2-step change in the
Tempest tests to support. But this actually means that stable/havana and
stable/icehouse for the same API version are different. Going forward
this would require master + stable changes on the projects + Tempest
changes, which would add much more friction against changing these sorts
of things by accident.

  * Much more stable testing *

If every Tempest change is gating on stable/icehouse, the week-long
periods where stable/havana can't pass tests won't happen. There will be much
more urgency to keep stable branches functioning.


If we got rid of branches in Tempest the path would be:
  * infrastructure to support this in infra - in process, probably
landing today
  * don't set stable/icehouse - decision needed by Apr 17th
  * changes to d-g/devstack to be extra explicit about what features
stable/icehouse should support in tempest.conf
  * see if we can make master work with stable/havana to remove the
stable/havana Tempest branch (if this is doable in a month, great, if
not just wait for havana to age out).


I think we would still want to declare Tempest versions from time to
time. I'd honestly suggest a quarterly timebox. The events that are
actually important to Tempest are less the release itself than the EOL
of branches, as that would mean features which have been removed completely
from any supported tree could be removed.


My current leaning is that this approach would be a good thing, and
provide a better experience for both the community and the defcore
process. However it's a big enough change that we're still collecting
data, and it would be interesting to hear other thoughts from the
community at large on this approach.

-Sean


With regard to havana, the problems with DefCore using stable/havana are 
the same as many of us have felt with testing real deployments of havana.
Master (now icehouse) has many more tests, is more robust to individual 
test failures, and is more configurable. But the work to backport 
improvements is difficult or impossible due to many refactorings on 
master, api changes, and the tempest backport policy that we don't want 
to spend our review time looking backwards. The reality is that almost 
nothing has been backported to stable/havana tempest, and we don't want 
to start doing that now. As defcore/refstack becomes a reality, more 
bugs and desired features in tempest will be found and it would be good 
if issues could be addressed on master.


The approach advocated here would prevent this from happening again with 
icehouse and going forward. That still leaves havana as an important 
case for many folks. I did an initial run of master tempest against 
havana using nova-network but no heat/ceilo/swift). 148 out of 2009 
tests failed. The failures seemed to be in these categories:


1. An api 

[openstack-dev] [sahara] Icehouse RC1 available

2014-04-04 Thread Sergey Lukjanov
Hello everyone,

Sahara published its first Icehouse release candidate today. The list of
bugs fixed since feature freeze and the RC1 tarball are available at:

https://launchpad.net/sahara/icehouse/icehouse-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released as the 2014.1 final
version on April 17. You are therefore strongly encouraged to test and
validate this tarball.

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/sahara/tree/milestone-proposed

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/sahara/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Note that the master branch of Sahara is now open for Juno
development, and feature freeze restrictions no longer apply there.

P.S. Thanks to Thierry for release management and this cool
announcement template.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Marconi PTL Candidacy

2014-04-04 Thread Flavio Percoco

On 03/04/14 17:53 +, Kurt Griffiths wrote:
[snip]


If elected, my priorities during Juno will include:

1. Operational Maturity: Marconi is already production-ready, but we still
have work to do to get to world-class reliability, monitoring, logging,
and efficiency.
2. Documentation: During Icehouse, Marconi made a good start on user and
operator manuals, and I would like to see those docs fleshed out, as well
as reworking the program wiki to make it much more informative and
engaging.
3. Security: During Juno I want to start doing per-milestone threat
modeling, and build out a suite of security tests.
4. Integration: I have heard from several other OpenStack programs who
would like to use Marconi, and so I look forward to working with them to
understand their needs and to assist them however we can.
5. Notifications: Beginning the work on the missing pieces needed to build
a notifications service on top of the Marconi messaging platform, that can
be used to surface events to end-users via SMS, email, web hooks, etc.
6. Graduation: Completing all remaining graduation requirements so that
Marconi can become integrated in the K cycle, which will allow other
programs to be more confident about taking dependencies on the service for
features they are planning.
7. Growth: I'd like to welcome several more contributors to the Marconi
core team, continue on-boarding new contributors and interns, and see
several more large deployments of Marconi in production.



All of the above sounds amazing to me! You've done amazing work so
far!

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with software config and Heat standalone configurations

2014-04-04 Thread Michael Elder
No problem. 

Filed here: https://bugs.launchpad.net/heat/+bug/1302578 for continued 
discussion. 

-M


Kind Regards,

Michael D. Elder

STSM | Master Inventor
mdel...@us.ibm.com  | linkedin.com/in/mdelder

"Success is not delivering a feature; success is learning how to solve the 
customer's problem." -Mark Cook



From:   Steve Baker sba...@redhat.com
To: openstack-dev@lists.openstack.org
Date:   04/03/2014 10:13 PM
Subject:Re: [openstack-dev] [heat] Problems with software config 
and Heat standalone configurations



On 04/04/14 14:26, Michael Elder wrote:
Hello, 

While adopting the latest software configuration work in Icehouse, we 
discovered an issue with the new software configuration type and its 
assumptions about using the heat client to do its work. 

The change was introduced in: 

commit 21f60b155e4b65396ebf77e05a0ef300e7c3c1cf 
Author: Steve Baker sba...@redhat.com 
Change: https://review.openstack.org/#/c/67621/ 

The net is that the software config type in software_config.py lines 
147-152 relies on the heat client to create/clone software configuration 
resources in the heat database: 

    def handle_create(self):
        props = dict(self.properties)
        props[self.NAME] = self.physical_resource_name()

        # HERE THE HEAT CLIENT IS CREATING A NEW SOFTWARE_CONFIG
        # TO MAKE EACH ONE IMMUTABLE
        sc = self.heat().software_configs.create(**props)
        self.resource_id_set(sc.id)

My concerns with this approach: 

When used in standalone mode, the Heat engine receives headers which are 
used to drive authentication (X-Auth-Url, X-Auth-User, X-Auth-Key, ..): 

curl -i -X POST -H 'X-Auth-Key: password' -H 'Accept: application/json' -H 
'Content-Type: application/json' -H 'X-Auth-Url: http://[host]:5000/v2.0' 
-H 'X-Auth-User: admin' -H 'User-Agent: python-heatclient' -d '{...}' 
http://10.0.2.15:8004/v1/{tenant_id} 

In this mode, the heat config file indicates standalone mode and can also 
indicate multicloud support: 

# /etc/heat/heat.conf 
[paste_deploy] 
flavor = standalone 

[auth_password] 
allowed_auth_uris = http://[host1]:5000/v2.0,http://[host2]:5000/v2.0 
multi_cloud = true 

Any keystone URL which is referenced is unaware of the orchestration 
engine which is interacting with it. Herein lies the design flaw. 
It's not so much a design flaw; it's a bug where a new piece of code 
interacts poorly with a mode that currently has few users and no 
integration test coverage.


When software_config calls self.heat(), it resolves clients.py's heat 
client: 

    def heat(self):
        if self._heat:
            return self._heat

        con = self.context
        if self.auth_token is None:
            logger.error(_("Heat connection failed, no auth_token!"))
            return None
        # try the token
        args = {
            'auth_url': con.auth_url,
            'token': self.auth_token,
            'username': None,
            'password': None,
            'ca_file': self._get_client_option('heat', 'ca_file'),
            'cert_file': self._get_client_option('heat', 'cert_file'),
            'key_file': self._get_client_option('heat', 'key_file'),
            'insecure': self._get_client_option('heat', 'insecure')
        }

        endpoint_type = self._get_client_option('heat', 'endpoint_type')
        endpoint = self._get_heat_url()
        if not endpoint:
            endpoint = self.url_for(service_type='orchestration',
                                    endpoint_type=endpoint_type)
        self._heat = heatclient.Client('1', endpoint, **args)

        return self._heat

Here, an attempt to look up the orchestration URL (which is already 
executing in the context of the heat engine) comes up wrong because 
Keystone doesn't know about this remote standalone Heat engine. 

If you look at self._get_heat_url() you'll see that the heat.conf 
[clients_heat] url will be used for the heat endpoint if it is set. I 
would recommend setting that for standalone mode. A devstack change for 
HEAT_STANDALONE would be helpful here.

Further, at this point, the username and password are null, and when the 
auth_password stanza is applied in the config file, Heat will deny any 
attempt at authorization which only provides a token. As I understand it 
today, that's because it doesn't have individual keystone admin users for 
all remote keystone services in the list of allowed_auth_uris. Hence, if 
only provided with a token, I don't think the heat engine can validate the 
token against the remote keystone. 

One workaround that I've implemented locally is to change the logic to 
check for standalone mode and send the username and password. 

        flavor = 'default'
        try:
            logger.info("Configuration is %s" % str(cfg.CONF))
            flavor = cfg.CONF.paste_deploy.flavor
        except cfg.NoSuchOptError as nsoe:
            flavor = 

Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

2014-04-04 Thread Ling Gao
Hello Vladimir,
     I would prefer an agent-less node, meaning the agent is only used 
under the ramdisk OS to collect hw info, to do firmware updates and to 
install nodes, etc. In this sense, the agent running as root is fine. Once 
the node is installed, the agent should be out of the picture. I have been 
working with HPC customers; in that environment they prefer as small a memory 
footprint as possible. Even as an ordinary tenant, I would not feel secure 
having agents running on my node. As for firmware updates on the fly, I 
do not know how many customers will trust us doing that while their critical 
application is running. Even if they do and are ready for it, Ironic can then 
send an agent to the node through scp/wget as admin/root, quickly do the update, 
and then kill the agent on the node. Just my 2 cents.

Ling Gao




From:   Vladimir Kozhukalov vkozhuka...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   04/04/2014 08:24 AM
Subject:[openstack-dev] [Ironic][Agent]



Hello, everyone,

I'd like to involve more people to express their opinions about the way 
we are going to run Ironic-python-agent; I mean, should we run it with 
root privileges or not?

From the very beginning the agent is supposed to run under a ramdisk OS, and it 
is intended to do disk partitioning, RAID configuration, firmware updates 
and other work related to installing the OS. It looks like we will always run 
the agent with root privileges, right? There are no reasons to limit the agent's 
permissions there.

On the other hand, it is easy to imagine a situation where you want to run 
the agent on every node of your cluster after installing the OS. It could be 
useful to keep hardware info consistent (for example, many hardware 
configurations allow one to add hard drives at run time). It also could be 
useful for on-the-fly firmware updates. It could be useful for on-the-fly 
manipulations with lvm groups/volumes and so on. 

Frankly, I am not even sure that we need to run the agent with root privileges 
even in the ramdisk OS because, for example, there are some system default 
limitations, such as the number of connections, number of open files, etc., 
which are different for root and an ordinary user and can potentially 
influence agent behaviour. Besides, it is possible that some 
vulnerabilities will be found in the future, and they potentially could be 
used to compromise the agent and damage the hardware configuration. 

Consequently, it is better to run the agent as an ordinary user even under the 
ramdisk OS and use rootwrap if the agent needs to run commands with root 
privileges. I know that rootwrap has some performance issues 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html 
but it is still pretty suitable for the ironic agent use case.
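
To illustrate the shape of that, here is a generic sketch of the rootwrap-style pattern, not the actual ironic-python-agent code; the helper command shown in the comment is only an example of what a deployment might configure.

    import subprocess

    # The root helper is configuration: an empty list when the process is
    # already privileged (the ramdisk case), otherwise something like
    # ['sudo', 'some-rootwrap-helper', '/etc/agent/rootwrap.conf'].
    ROOT_HELPER = ['sudo']

    def execute(cmd, run_as_root=False):
        """Run a command, escalating through the root helper only when asked."""
        if run_as_root:
            cmd = ROOT_HELPER + cmd
        return subprocess.check_output(cmd)

    # The agent stays unprivileged by default; only the calls that really
    # need root are escalated (and can be whitelisted in rootwrap filters).
    execute(['lsblk', '--bytes'])
    execute(['parted', '/dev/sda', 'mklabel', 'gpt'], run_as_root=True)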

It would be great to hear as many opinions as possible on this 
case.


Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Nova][Heat] Sample config generator issue

2014-04-04 Thread Doug Hellmann
On Thu, Apr 3, 2014 at 5:42 PM, Zane Bitter zbit...@redhat.com wrote:
 On 03/04/14 08:48, Doug Hellmann wrote:

 On Wed, Apr 2, 2014 at 9:55 PM, Zane Bitter zbit...@redhat.com wrote:

 We have an issue in Heat where the sample config generator from Oslo is
 currently broken (see bug #1288586). Unfortunately it turns out that
 there
 is no fix to the generator script itself that can do the Right Thing for
 both Heat and Nova.

 A brief recap on how the sample config generator works: it goes through
 all
 of the files specified and finds all the ConfigOpt objects at the top
 level.
 It then searches for them in the registered options, and returns the name
 of
 the group in which they are registered. Previously it looked for the
 identical object being registered, but now it just looks for any
 equivalent
 ones. When you register two or more equivalent options, the second and
 subsequent ones are just ignored by oslo.config.

 The situation in Heat is that we have a bunch of equivalent options
 registered in multiple groups. This is because we have a set of options
 for
 each client library (i.e. python-novaclient, python-cinderclient, etc.),
 with
 each set containing equivalent options (e.g. every client has an
 endpoint_type option for looking up the keystone catalog). This used to
 work, but now that equivalent options (and not just identical options)
 match
 when searching for them in a group, we just end up with multiple copies
 of
 each option in the first group to be searched, and none in any of the
 other
 groups, in the generated sample config.

 Nova, on the other hand, has the opposite problem (see bug #1262148).
 Nova
 adds the auth middleware from python-keystoneclient to its list of files
 to
 search for options. That middleware imports a file from oslo-incubator
 that
 registers the option in the default group - a registration that is *not*
 wanted by the keystone middleware, because it registers an equivalent
 option
 in a different group instead (or, as it turns out, as well). Just to make
 it
 interesting, Nova uses the same oslo-incubator module and relies on the
 option being registered in the default group. Of course, oslo-incubator
 is
 not a real library, so it gets registered a second time but ignored
 (since
 an equivalent one is already present). Crucially, the oslo-incubator file
 from python-keystoneclient is not on the list of extra modules to search
 in
 Nova, so when the generator script was looking for options identical to
 the
 ones it found in modules, it didn't see this option at all. Hence the
 change
 to looking for equivalent options, which broke Heat.

 Neither comparing for equivalence nor for identity in the generator
 script
 can solve both use cases. It's hard to see what Heat could or should be
 doing differently. I think it follows that the fix needs to be in either
 Nova or python-keystoneclient in the first instance.

 One option I suggested was for the auth middleware to immediately
 deregister
 the extra option that had accidentally been registered upon importing a
 module from oslo-incubator. I put up patches to do this, but it seemed to
 be
 generally agreed by Oslo folks that this was a Bad Idea.

 Another option would be to specifically include the relevant module from
 keystoneclient.openstack.common when generating the sample config. This
 seems quite brittle to me.

 We could fix it by splitting the oslo-incubator module into one that
 provides the code needed by the auth middleware and one that does the
 registration of options, but this will likely result in cascading changes
 to
 a whole bunch of projects.

 Does anybody have any thoughts on what the right fix looks like here?
 Currently, verification of the sample config is disabled in the Heat gate
 because of this issue, so it would be good to get it resolved.

 cheers,
 Zane.


 We've seen some similar issues in other projects where the guessing
 done by the generator is not matching the newer ways we use
 configuration options. In those cases, I suggested that projects use
 the new entry-point feature that allows them to explicitly list
 options within groups, instead of scanning a set of files. This
 feature was originally added so apps can include the options from
 libraries that use oslo.config (such as oslo.messaging), but it can be
 used for options define by the applications as well.

 To define an option discovery entry point, create a function that
 returns a sequence of (group name, option list) pairs. For an example,
 see list_opts() in oslo.messaging [1]. Then define the entry point in
 your setup.cfg under the oslo.config.opts namespace [2]. If you need
 more than one function, register them separately.
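
A minimal sketch of what such a list_opts() hook might look like (the module
path, group names and options below are only illustrative, not Heat's actual
ones):

    # heat/opts.py (hypothetical module)
    from oslo.config import cfg

    client_opts = [
        cfg.StrOpt('endpoint_type',
                   default='publicURL',
                   help='Endpoint type to look up in the keystone catalog.'),
    ]


    def list_opts():
        # Return (group name, option list) pairs; the generator uses these
        # directly instead of guessing by scanning source files.
        return [
            ('clients_nova', client_opts),
            ('clients_cinder', client_opts),
        ]

    # The matching setup.cfg stanza would then register the hook under the
    # oslo.config.opts entry point namespace, e.g.:
    #   oslo.config.opts =
    #       heat = heat.opts:list_opts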

 Then change the way generate_sample.sh is called for your project so
 it passes the -l option [3] once for each name you have given to the
 entry points. So if you have just heat you would pass -l heat and
 if you have heat-core and heat-some-driver you would pass -l
 heat-core -l 

Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-04 Thread Michael Elder
Opened in Launchpad: https://bugs.launchpad.net/heat/+bug/1302624

I still have concerns though about the design approach of creating a new 
project for every stack and new users for every resource. 

If I provision 1000 patterns a day with an average of 10 resources per 
pattern, you're looking at 10,000 users per day. How can that scale? 

How can we ensure that all stale projects and users are cleaned up as 
instances are destroyed? When users choose to go through horizon or nova to 
tear down instances, what cleans up the projects and users associated with 
that heat stack? 

Keystone defines the notion of tokens to support authentication; why 
doesn't the design provision and store a token for the stack and its 
equivalent management? 

-M


Kind Regards,

Michael D. Elder

STSM | Master Inventor
mdel...@us.ibm.com  | linkedin.com/in/mdelder

"Success is not delivering a feature; success is learning how to solve the 
customer's problem." -Mark Cook



From:   Steve Baker sba...@redhat.com
To: openstack-dev@lists.openstack.org
Date:   04/03/2014 10:13 PM
Subject:Re: [openstack-dev] [heat] Problems with Heat software 
configurations and KeystoneV2



On 04/04/14 14:05, Michael Elder wrote:
Hello, 

I'm looking for insights about the interaction between keystone and the 
software configuration work that's gone into Icehouse in the last month or 
so. 

I've found that when using software configuration, KeystoneV2 is 
broken because server.py's _create_transport_credentials() explicitly 
depends on KeystoneV3 methods. 

Here's what I've come across: 

In the following commit, the introduction of 
_create_transport_credentials() on server.py begins to create a user for 
each OS::Nova::Server resource in the template: 

commit b776949ae94649b4a1eebd72fabeaac61b404e0f 
Author: Steve Baker sba...@redhat.com 
Date:   Mon Mar 3 16:39:57 2014 +1300 
Change: https://review.openstack.org/#/c/77798/ 

server.py lines 470-471: 

    if self.user_data_software_config():
        self._create_transport_credentials()

With the introduction of this change, each server resource which is 
provisioned results in the creation of a new user ID. The call delegates 
through to stack_user.py lines 40-54: 


    def _create_user(self):
        # Check for stack user project, create if not yet set
        if not self.stack.stack_user_project_id:
            project_id = self.keystone().create_stack_domain_project(
                self.stack.id)
            self.stack.set_stack_user_project_id(project_id)

        # Create a keystone user in the stack domain project
        user_id = self.keystone().create_stack_domain_user(
            username=self.physical_resource_name(),  ## HERE THE USERNAME IS SET TO THE RESOURCE NAME
            password=self.password,
            project_id=self.stack.stack_user_project_id)

        # Store the ID in resource data, for compatibility with SignalResponder
        db_api.resource_data_set(self, 'user_id', user_id)

My concerns with this approach: 

- Each resource is going to result in the creation of a unique user in 
Keystone. That design point seems hardly tenable if you're provisioning a 
large number of templates by an organization every day. 
Compared to the resources consumed by creating a new nova server (or a 
keystone token!), I don't think creating new users will present a 
significant overhead.

As for creating users bound to resources, this is something heat has done 
previously but we're doing it with more resources now. With havana heat 
(or KeystoneV2) those users will be created in the same project as the 
stack launching user, and the stack launching user needs admin permissions 
to create these users.
- If you attempt to set your resource names to some human-readable string 
(like web_server), you get one shot to provision the template, wherein 
future attempts to provision it will result in exceptions due to duplicate 
user ids. 
This needs a bug raised. This isn't an issue on KeystoneV3 since the users 
are created in a project which is specific to the stack. Also for v3 
operations the username is ignored as the user_id is used exclusively.

- The change prevents compatibility between Heat on Icehouse and 
KeystoneV2. 
Please continue to test this with KeystoneV2. However any typical icehouse 
OpenStack should really have the keystone v3 API enabled. Can you explain 
the reasons why yours isn't?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Donald Stufft

On Apr 4, 2014, at 10:41 AM, Chuck Thier cth...@gmail.com wrote:

 Howdy,
 
 Now that swift has aligned with the other projects to use requests in 
 python-swiftclient, we have lost a couple of features.
 
 1.  Requests doesn't support expect: 100-continue.  This is very useful for 
 services like swift or glance where you want to make sure a request can 
 continue before you start uploading GBs of data (for example find out that 
 you need to auth).
 
 2.  Requests doesn't play nicely with eventlet or other async frameworks [1]. 
  I noticed this when suddenly swift-bench (which uses swiftclient) wasn't 
 performing as well as before.  This also means that, for example, if you are 
 using keystone with swift, the auth requests to keystone will block the proxy 
 server until they complete, which is also not desirable.

requests should work fine if you use the eventlet monkey patch on the socket 
module prior to importing requests.
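
For illustration, the ordering that matters is roughly this (the URL is just
a placeholder):

    # Patch the stdlib (including socket) *before* requests is imported,
    # so its connections become cooperative under eventlet.
    import eventlet
    eventlet.monkey_patch()

    import requests  # noqa: E402

    resp = requests.get('http://127.0.0.1:8080/healthcheck')
    print(resp.status_code)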

 
 Does anyone know if these issues are being addressed, or begun working on 
 them?
 
 Thanks,
 
 --
 Chuck
 
 [1] 
 http://docs.python-requests.org/en/latest/user/advanced/#blocking-or-non-blocking
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ironic] BaremetalHostManager unused?

2014-04-04 Thread Matthew Booth
Whilst looking at something unrelated in HostManager, I noticed that
HostManager.service_states appears to be unused, and decided to remove
it. This seems to have a number of implications:

1. capabilities in HostManager.get_all_host_states will always be None.
2. capabilities passed to host_state_cls() will always be None
(host_state_cls doesn't appear to be used anywhere else)
3. baremetal_host_manager.new_host_state() capabilities will always be None.
4. cap will always be {}, so will never contain 'baremetal_driver'
5. BaremetalNodeState will never be instantiated
6. BaremetalHostManager is a no-op

possibly resulting in

7. The filter scheduler could try to put multiple instances on a single
bare metal host

This was going to be a 3 line cleanup, but it looks like a can of worms
so I'm going to drop it. It's entirely possible that I've missed another
entry point into this code, but it might be worth a quick look.
Incidentally, the tests seem to populate service_states in fake, so the
behaviour of the automated tests probably isn't reliable.

Matt
-- 
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team

GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Chuck Thier
On Fri, Apr 4, 2014 at 9:44 AM, Donald Stufft don...@stufft.io wrote:

 requests should work fine if you used the event let monkey patch the
 socket module prior to import requests.


That's what I had hoped as well (and is what swift-bench did already), but
it performs the same if I monkey patch or not.

--
Chuck
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Release Notes for Icehouse

2014-04-04 Thread Yaguang Tang
Hi all,

I think it's important for our developers to publish an official Release
Note, as other core OpenStack projects do, at the end of the Icehouse
development cycle; it contains the new features added and the upgrade issues
users should be aware of. Would anyone like to volunteer to help accomplish it?
https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse

-- 
Tang Yaguang

Canonical Ltd. | www.ubuntu.com | www.canonical.com
Mobile:  +86 152 1094 6968
gpg key: 0x187F664F
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

2014-04-04 Thread Dickson, Mike (HP Servers)
+1

From: Ling Gao [mailto:ling...@us.ibm.com]
Sent: Friday, April 04, 2014 10:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

Hello Vladimir,
 I would prefer an agent-less node, meaning the agent is only used under 
the ramdisk OS to collect hardware info, do firmware updates, install nodes, 
etc. In this sense, the agent running as root is fine. Once the node is 
installed, the agent should be out of the picture. I have been working with HPC 
customers, and in that environment they prefer as small a memory footprint as 
possible. Even as an ordinary tenant, I would not feel secure having agents 
running on my node. As for firmware updates on the fly, I do not know how many 
customers will trust us to do them while their critical application is running. 
Even if they do and are ready for it, Ironic can then send an agent to the node 
through scp/wget as admin/root, quickly do it, and then kill the agent on the 
node. Just my 2 cents.

Ling Gao




From:   Vladimir Kozhukalov vkozhuka...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,
Date:04/04/2014 08:24 AM
Subject:[openstack-dev] [Ironic][Agent]




Hello, everyone,

I'd like to involve more people to express their opinions about the way how we 
are going to run Ironic-python-agent. I mean should we run it with root 
privileges or not.

From the very beginning agent is supposed to run under ramdisk OS and it is 
intended to make disk partitioning, RAID configuring, firmware updates and 
other stuff according to installing OS. Looks like we always will run agent 
with root privileges. Right? There are no reasons to limit agent permissions.

On the other hand, it is easy to imagine a situation when you want to run agent 
on every node of your cluster after installing OS. It could be useful to keep 
hardware info consistent (for example, many hardware configurations allow one 
to add hard drives in run time). It also could be useful for on the fly 
firmware updates. It could be useful for on the fly manipulations with lvm 
groups/volumes and so on.

Frankly, I am not even sure that we need to run agent with root privileges even 
in ramdisk OS, because, for example, there are some system default limitations 
such as number of connections, number of open files, etc. which are different 
for root and ordinary user and potentially can influence agent behaviour. 
Besides, it is possible that some vulnerabilities will be found in the future 
and they potentially could be used to compromise agent and damage hardware 
configuration.

Consequently, it is better to run agent under ordinary user even under ramdisk 
OS and use rootwrap if agent needs to run commands with root privileges. I know 
that rootwrap has some performance issues 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html but 
it is still pretty suitable for ironic agent use case.

It would be great to hear as many opinions as possible according to this case.


Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help in re-running openstack

2014-04-04 Thread Dean Troyer
On Fri, Apr 4, 2014 at 3:43 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 Shiva,
   Can u tell what exactly u r trying to change in /opt/stack/ ?
 My guess is that u might be running into stack.sh re-pulling the sources
 hence overriding ur changes ? Try with OFFLINE=True in localrc (create a
 localrc file in /opt/stack/ and put OFFLINE=True) and redo stack.sh


FWIW, RECLONE controls the 'pull sources every time' behaviour without
cutting off the rest of your net access.  OFFLINE short-circuits functions
that attempt network access to avoid waiting on the timeouts when you know
they will fail.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

2014-04-04 Thread Jay Faulkner

+1

   The agent is a tool Ironic is using to take the place of a
   hypervisor to discover and prepare nodes to receive workloads. For
   hardware, this includes more work -- such as firmware flashing, bios
   configuration, and disk imaging -- all of which must be done in an
   OOB manner. (This is also why deploy drivers that interact directly
   with the hardware, where supported - such as Seamicro or the
   proposed HP iLo driver - are good alternative approaches.)


-Jay Faulkner

On 4/4/2014 7:10 AM, Ling Gao wrote:

Hello Vladimir,
 I would prefer an agent-less node, meaning the agent is only used 
under the ramdisk OS to collect hw info, to do firmware updates and to 
install nodes etc. In this sense, the agent running as root is fine. 
Once the node is installed, the agent should be out of the picture. I 
have been working with HPC customers, in that environment they prefer 
as less memory prints as possible. Even as a ordinary tenant, I do not 
feel secure to have some agents running on my node. For the firmware 
update on the fly, I do not know how many customers will trust us 
doing it while their critical application is running. Even they do and 
ready to do it, Ironic can then send an agent to the node through 
scp/wget as admin/root and quickly do it and then kill the agent on 
the node. Just my 2 cents.


Ling Gao




From: Vladimir Kozhukalov vkozhuka...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,

Date: 04/04/2014 08:24 AM
Subject: [openstack-dev] [Ironic][Agent]




Hello, everyone,

I'd like to involve more people to express their opinions about the 
way how we are going to run Ironic-python-agent. I mean should we run 
it with root privileges or not.


From the very beginning agent is supposed to run under ramdisk OS and 
it is intended to make disk partitioning, RAID configuring, firmware 
updates and other stuff according to installing OS. Looks like we 
always will run agent with root privileges. Right? There are no 
reasons to limit agent permissions.


On the other hand, it is easy to imagine a situation when you want to 
run agent on every node of your cluster after installing OS. It could 
be useful to keep hardware info consistent (for example, many hardware 
configurations allow one to add hard drives in run time). It also 
could be useful for on the fly firmware updates. It could be useful 
for on the fly manipulations with lvm groups/volumes and so on.


Frankly, I am not even sure that we need to run agent with root 
privileges even in ramdisk OS, because, for example, there are some 
system default limitations such as number of connections, number of 
open files, etc. which are different for root and ordinary user and 
potentially can influence agent behaviour. Besides, it is possible 
that some vulnerabilities will be found in the future and they 
potentially could be used to compromise agent and damage hardware 
configuration.


Consequently, it is better to run agent under ordinary user even under 
ramdisk OS and use rootwrap if agent needs to run commands with root 
privileges. I know that rootwrap has some performance issues 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html but 
it is still pretty suitable for ironic agent use case.


It would be great to hear as many opinions as possible according to 
this case.



Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Tenant quotas can now be updated during a benchmark

2014-04-04 Thread Bruno Semperlotti
Hi Joshua,

Quotas will not be expanded during the scenario; they will be updated
*prior to* the scenario with the requested values as the context of the
scenario. If the values are too low, the scenario will still fail.
This update does not make it possible to benchmark quota update time.

Regards,


--
Bruno Semperlotti


2014-04-04 0:45 GMT+02:00 Joshua Harlow harlo...@yahoo-inc.com:

  Cool, so would that mean that once a quota is reached (for whatever
 reason) and the scenario wants to continue running (instead of failing due
 to quota issues) that it can expand that quota automatically (for cases
 where this is needed/necessary). Or is this also useful for benchmarking
 how fast quotas can be  changed, or is it maybe a combination of both?

   From: Boris Pavlovic bo...@pavlovic.me
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, April 3, 2014 at 1:43 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [rally] Tenant quotas can now be updated
 during a benchmark

   Bruno,

  Well done. Finally we have this feature in Rally!


  Best regards,
 Boris Pavlovic


 On Thu, Apr 3, 2014 at 11:37 PM, Bruno Semperlotti 
 bruno.semperlo...@gmail.com wrote:

 Hi Rally users,

  I would like to inform you that the feature allowing to update tenant's
 quotas during a benchmark is available with the implementation of this
 blueprint:
 https://blueprints.launchpad.net/rally/+spec/benchmark-context-tenant-quotas

  Currently, only Nova and Cinder quotas are supported (Neutron coming
 soon).

  Here a small sample of how to do it:

  In the json file describing the benchmark scenario, use the context
 section to indicate quotas for each service. Quotas will be applied for
 each generated tenants.

   {
       "NovaServers.boot_server": [
           {
               "args": {
                   "flavor_id": 1,
                   "image_id": "6e25e859-2015-4c6b-9940-aa21b2ab8ab2"
               },
               "runner": {
                   "type": "continuous",
                   "times": 100,
                   "active_users": 10
               },
               "context": {
                   "users": {
                       "tenants": 1,
                       "users_per_tenant": 1
                   },
                   "quotas": {
                       "nova": {
                           "instances": 150,
                           "cores": 150,
                           "ram": -1
                       }
                   }
               }
           }
       ]
   }


  Following is the list of supported quotas:

  nova:
  instances, cores, ram, floating-ips, fixed-ips, metadata-items,
  injected-files, injected-file-content-bytes, injected-file-path-bytes,
  key-pairs, security-groups, security-group-rules

  cinder:
  gigabytes, snapshots, volumes

  neutron (coming soon):
  network, subnet, port, router, floatingip, security-group,
  security-group-rule


  Regards,

 --
 Bruno Semperlotti

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-04 Thread Armando M.
Hi Simon,

You are absolutely right in your train of thoughts: unless the
third-party CI monitors and vets all the potential changes it cares
about there's always a chance something might break. This is why I
think it's important that each Neutron third party CI should not only
test Neutron changes, but also Nova's, DevStack's and Tempest's.
Filters may be added to test only the relevant subtrees.

For instance, the VMware CI runs the full suite of tempest smoke
tests, as they come from upstream and it vets all the changes that go
in Tempest made to API and scenario tests as well as configuration
changes. As for Nova, we test changes to the vif parts, and for
DevStack, we validate changes made to lib/neutron*.

Vetting all the changes coming in vs. only the ones that can
potentially break third-party support is a balancing act when you
don't have infinite resources at your disposal, or you're just ramping
up the CI infrastructure.

Cheers,
Armando

On 4 April 2014 02:00, Simon Pasquier simon.pasqu...@bull.net wrote:
 Hi Salvatore,

 On 03/04/2014 14:56, Salvatore Orlando wrote:
 Hi Simon,

 snip

 I hope stricter criteria will be enforced for Juno; I personally think
 every CI should run at least the smoketest suite for L2/L3 services (eg:
 load balancer scenario will stay optional).

 I thought about this a little, and I feel it might not have
 caught the issue Kyle talked about [1] _immediately_.

 Let's rewind the time line:
 1/ Change to *Nova* adding external events API is merged
 https://review.openstack.org/#/c/76388/
 2/ Change to *Neutron* notifying Nova when ports are ready is merged
 https://review.openstack.org/#/c/75253/
 3/ Change to *Nova* making libvirt wait for Neutron notifications is merged
 https://review.openstack.org/#/c/74832/

 At this point, and assuming that the external ODL CI system were running
 the L2/L3 smoke tests, change #3 could have passed since external
 Neutron CIs aren't voting on Nova. Instead, it would have voted against
 any subsequent change to Neutron.

 Simon

 [1] https://bugs.launchpad.net/neutron/+bug/1301449


 Salvatore

 [1] https://review.openstack.org/#/c/75304/



 On 3 April 2014 12:28, Simon Pasquier simon.pasqu...@bull.net wrote:

 Hi,

 I'm looking at [1] but I see no requirement of which Tempest tests
 should be executed.

 In particular, I'm a bit puzzled that it is not mandatory to boot an
 instance and check that it gets connected to the network. To me, this is
 the very minimum for asserting that your plugin or driver is working
 with Neutron *and* Nova (I'm not even talking about security groups). I
 had a quick look at the existing 3rd party CI systems and I found none
 running this kind of check (correct me if I'm wrong).

 Thoughts?

 [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
 --
 Simon Pasquier
 Software Engineer (OpenStack Expertise Center)
 Bull, Architect of an Open World
 Phone: + 33 4 76 29 71 49
 http://www.bull.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Release Notes for Icehouse

2014-04-04 Thread Mark McClain

On Apr 4, 2014, at 11:03 AM, Yaguang Tang yaguang.t...@canonical.com wrote:

I think it's important for our developers to publish an official Release Note 
as other core openstack projects does at the end of Icehouse development cycle, 
it contains the new features added and upgrade issue to be noticed by the 
users. any one like to be volunteer to help accomplish it?
https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse

I’ve typically waited until after we publish the second RC to add release notes 
to the Wiki. I’ll update the release notes for Neutron when we close out RC2.

mark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest]:Please updated etherpad before adding tempest tests

2014-04-04 Thread Mike Spreitzer
Kekane, Abhishek abhishek.kek...@nttdata.com wrote on 04/04/2014 
06:26:58 AM:

 This is regarding implementation of blueprint https://
 blueprints.launchpad.net/tempest/+spec/testcases-expansion-icehouse.
 
 As per mentioned in etherpads for this blueprint, please add your 
 name if you are working on any of the items mentioned in the list.
 Otherwise efforts will get duplicated.

Why are only four projects listed?

I see that those etherpads have Icehouse in their names.  What happens as 
we work on Juno?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-04 Thread Tomas Sedovic
Hi All,

I was wondering if the time has come to document what exactly are we
doing with tripleo-heat-templates and merge.py[1], figure out what needs
to happen to move away and raise the necessary blueprints on Heat and
TripleO side.

(merge.py is a script we use to build the final TripleO Heat templates
from smaller chunks)

There probably isn't an immediate need for us to drop merge.py, but its
existence either indicates deficiencies within Heat or our unfamiliarity
with some of Heat's features (possibly both).

I worry that the longer we stay with merge.py the harder it will be to
move forward. We're still adding new features and fixing bugs in it (at
a slow pace but still).

Below is my understanding of the main marge.py functionality and a rough
plan of what I think might be a good direction to move to. It is almost
certainly incomplete -- please do poke holes in this. I'm hoping we'll
get to a point where everyone's clear on what exactly merge.py does and
why. We can then document that and raise the appropriate blueprints.


## merge.py features ##


1. Merging parameters and resources

Any uniquely-named parameters and resources from multiple templates are
put together into the final template.

If a resource of the same name is in multiple templates, an error is
raised, unless it's of a whitelisted type (nova server, launch
configuration, etc.), in which case they're all merged into a single
resource.

For example: merge.py overcloud-source.yaml swift-source.yaml

The final template has all the parameters from both. Moreover, these two
resources will be joined together:

 overcloud-source.yaml 

  notCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  ImageId: '0'
  InstanceType: '0'
Metadata:
  admin-password: {Ref: AdminPassword}
  admin-token: {Ref: AdminToken}
  bootstack:
public_interface_ip:
  Ref: NeutronPublicInterfaceIP


 swift-source.yaml 

  notCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Metadata:
  swift:
devices:
  ...
hash: {Ref: SwiftHashSuffix}
service-password: {Ref: SwiftPassword}


The final template will contain:

  notCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  ImageId: '0'
  InstanceType: '0'
Metadata:
  admin-password: {Ref: AdminPassword}
  admin-token: {Ref: AdminToken}
  bootstack:
public_interface_ip:
  Ref: NeutronPublicInterfaceIP
  swift:
devices:
  ...
hash: {Ref: SwiftHashSuffix}
service-password: {Ref: SwiftPassword}


We use this to keep the templates more manageable (instead of having one
huge file) and also to be able to pick the components we want: instead
of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
uses the VirtualPowerManager driver) or `ironic-vm-source`.
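
Roughly speaking (this is only an illustrative sketch, not the real merge.py
code), the merge rule above amounts to something like:

    WHITELISTED_TYPES = ('AWS::AutoScaling::LaunchConfiguration',)

    def deep_merge(target, source):
        # Recursively fold `source` into `target`, preferring `source`
        # for scalar values.
        for key, value in source.items():
            if isinstance(value, dict) and isinstance(target.get(key), dict):
                deep_merge(target[key], value)
            else:
                target[key] = value

    def merge_resources(parsed_templates):
        merged = {}
        for template in parsed_templates:
            for name, resource in template.get('Resources', {}).items():
                if name not in merged:
                    merged[name] = resource
                elif resource.get('Type') in WHITELISTED_TYPES:
                    deep_merge(merged[name], resource)
                else:
                    raise ValueError('Duplicate resource: %s' % name)
        return merged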



2. FileInclude

If you have a pseudo resource with the type of `FileInclude`, we will
look at the specified Path and SubKey and put the resulting dictionary in:

 overcloud-source.yaml 

  NovaCompute0Config:
Type: FileInclude
Path: nova-compute-instance.yaml
SubKey: Resources.NovaCompute0Config
Parameters:
  NeutronNetworkType: gre
  NeutronEnableTunnelling: True


 nova-compute-instance.yaml 

  NovaCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  InstanceType: '0'
  ImageId: '0'
Metadata:
  keystone:
host: {Ref: KeystoneHost}
  neutron:
host: {Ref: NeutronHost}
  tenant_network_type: {Ref: NeutronNetworkType}
  network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
  bridge_mappings: {Ref: NeutronBridgeMappings}
  enable_tunneling: {Ref: NeutronEnableTunnelling}
  physical_bridge: {Ref: NeutronPhysicalBridge}
  public_interface: {Ref: NeutronPublicInterface}
service-password:
  Ref: NeutronPassword
  admin-password: {Ref: AdminPassword}

The result:

  NovaCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  InstanceType: '0'
  ImageId: '0'
Metadata:
  keystone:
host: {Ref: KeystoneHost}
  neutron:
host: {Ref: NeutronHost}
  tenant_network_type: gre
  network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
  bridge_mappings: {Ref: NeutronBridgeMappings}
  enable_tunneling: True
  physical_bridge: {Ref: NeutronPhysicalBridge}
  public_interface: {Ref: NeutronPublicInterface}
service-password:
  Ref: NeutronPassword
  admin-password: {Ref: AdminPassword}

Note the `NeutronNetworkType` and `NeutronEnableTunneling` parameter
substitution.

This is useful when you want to pick only bits and pieces of an existing
template. In the example above, `nova-compute-instance.yaml` is a
standalone template you can launch on its own. 
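
Again just as an illustration of the behaviour (not the actual
implementation), the FileInclude expansion boils down to something like:

    import yaml

    def substitute_refs(node, params):
        # Replace {Ref: X} with the literal value when X is listed in the
        # include's Parameters; leave other Refs for Heat to resolve.
        if isinstance(node, dict):
            if list(node) == ['Ref'] and node['Ref'] in params:
                return params[node['Ref']]
            return dict((k, substitute_refs(v, params)) for k, v in node.items())
        if isinstance(node, list):
            return [substitute_refs(v, params) for v in node]
        return node

    def expand_file_include(include):
        with open(include['Path']) as f:
            source = yaml.safe_load(f)
        node = source
        for key in include['SubKey'].split('.'):
            node = node[key]
        return substitute_refs(node, include.get('Parameters', {}))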

[openstack-dev] PTL Voting is now open

2014-04-04 Thread Anita Kuno
Elections are underway and will remain open for you to cast your vote
until at least 1300 utc April 11, 2014.

We are having elections for Nova, Neutron, Cinder, Ceilometer,  Heat and
TripleO.

If you are a Foundation individual member and had a commit in one of the
program's projects[0] over the Havana-Icehouse timeframe (April 4, 2013
06:00 UTC to April 4, 2014 05:59 UTC) then you are eligible to vote. You
should find your email with a link to the Condorcet page to cast your
vote in the inbox of your gerrit preferred email[1].

What to do if you don't see the email and have a commit in at least one
of the programs having an election:
 * check the trash of your gerrit Preferred Email address, in case
it went into trash or spam
 * wait a bit and check again, in case your email server is a bit slow
 * find the sha of at least one commit from the program project
repos[0] and email me and Tristan[2] at the above email address. If we
can confirm that you are entitled to vote, we will add you to the voters
list for the appropriate election.

Our democratic process is important to the health of OpenStack, please
exercise your right to vote.

Candidate statements/platforms can be found linked to Candidate names on
this page:
https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014#Candidates

Happy voting,
Anita. (anteaya)

[0] The list of the program projects eligible for electoral status:
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
[1] Sign into review.openstack.org: Go to Settings > Contact
Information. Look at the email listed as your Preferred Email. That is
where the ballot has been sent.
[2] Anita's email: anteaya at anteaya dot info Tristan's email: tristan
dot cacqueray at enovance dot com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Devananda van der Veen
On Fri, Apr 4, 2014 at 5:19 AM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 On the other hand, it is easy to imagine a situation when you want to run
 agent on every node of your cluster after installing OS. It could be useful
 to keep hardware info consistent (for example, many hardware configurations
 allow one to add hard drives in run time). It also could be useful for on
 the fly firmware updates. It could be useful for on the fly
 manipulations with lvm groups/volumes and so on.


There are lots of configuration management agents already out there (chef?
puppet? salt? ansible? ... the list is pretty long these days...) which you
can bake into the images that you deploy with Ironic, but I'd like to be
clear that, in my opinion, Ironic's responsibility ends where the host OS
begins. Ironic is a bare metal provisioning service, not a configuration
management service.

What you're suggesting is similar to saying "we want to run an agent in
every KVM VM in our cloud", except most customers would clearly object to
this. The only difference here is that you (and tripleo) are the deployer
*and* the user of Ironic; that's a special case, but not the only use case
which Ironic is servicing.

Regards,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] Using oslo.cache in keystoneclient.middleware.auth_token

2014-04-04 Thread Dean Troyer
On Fri, Apr 4, 2014 at 10:51 AM, Kurt Griffiths 
kurt.griffi...@rackspace.com wrote:

  It appears the current version of oslo.cache is going to bring in quite
 a few oslo libraries that we would not want keystone client to depend on
 [1]. Moving the middleware to a separate library would solve that.


+++


 I think it makes a lot of sense to separate out the middleware. Would this
 be a new project under Identity or would it go to Oslo since it would be a
 shared library among the other programs?


I think it really just needs to be a separate repo, similar to how
keystoneclient is a separate repo but still part of the Keystone project.
 The primary problem being addressed is dependencies and packaging, not
governance.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] Using oslo.cache in keystoneclient.middleware.auth_token

2014-04-04 Thread Doug Hellmann
On Fri, Apr 4, 2014 at 12:22 PM, Dean Troyer dtro...@gmail.com wrote:
 On Fri, Apr 4, 2014 at 10:51 AM, Kurt Griffiths
 kurt.griffi...@rackspace.com wrote:

  It appears the current version of oslo.cache is going to bring in quite
 a few oslo libraries that we would not want keystone client to depend on
 [1]. Moving the middleware to a separate library would solve that.


 +++


 I think it makes a lot of sense to separate out the middleware. Would this
 be a new project under Identity or would it go to Oslo since it would be a
 shared library among the other programs?


 I think it really just needs to be a separate repo, similar to how
 keystoneclient is a separate repo but still part of the Keystone project.
 The primary problem being addressed is dependencies and packaging, not
 governance.

Right, that's what I meant.

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Tenant quotas can now be updated during a benchmark

2014-04-04 Thread Boris Pavlovic
Bruno,

Btw great idea add benchmark scenarios for quotas as well!

Best regards,
Boris Pavlovic


On Fri, Apr 4, 2014 at 7:28 PM, Bruno Semperlotti 
bruno.semperlo...@gmail.com wrote:

 Hi Joshua,

 Quotas will not be expanded during the scenario, they will be updated
 *prior* the scenario with the requested values as context of this
 scenario. If values are too low, the scenario will continue to fail.
 This update does not allow to benchmark quotas update modification time.

 Regards,


 --
 Bruno Semperlotti


 2014-04-04 0:45 GMT+02:00 Joshua Harlow harlo...@yahoo-inc.com:

  Cool, so would that mean that once a quota is reached (for whatever
 reason) and the scenario wants to continue running (instead of failing due
 to quota issues) that it can expand that quota automatically (for cases
 where this is needed/necessary). Or is this also useful for benchmarking
 how fast quotas can be  changed, or is it maybe a combination of both?

   From: Boris Pavlovic bo...@pavlovic.me
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Thursday, April 3, 2014 at 1:43 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [rally] Tenant quotas can now be updated
 during a benchmark

   Bruno,

  Well done. Finally we have this feature in Rally!


  Best regards,
 Boris Pavlovic


 On Thu, Apr 3, 2014 at 11:37 PM, Bruno Semperlotti 
 bruno.semperlo...@gmail.com wrote:

 Hi Rally users,

  I would like to inform you that the feature allowing to update
 tenant's quotas during a benchmark is available with the implementation of
 this blueprint:
 https://blueprints.launchpad.net/rally/+spec/benchmark-context-tenant-quotas

  Currently, only Nova and Cinder quotas are supported (Neutron coming
 soon).

  Here a small sample of how to do it:

  In the json file describing the benchmark scenario, use the context
 section to indicate quotas for each service. Quotas will be applied for
 each generated tenants.

   {
  NovaServers.boot_server: [
  {
  args: {
  flavor_id: 1,
  image_id: 6e25e859-2015-4c6b-9940-aa21b2ab8ab2
  },
  runner: {
  type: continuous,
  times:100,
  active_users: 10
  },
  context: {
  users: {
  tenants: 1,
  users_per_tenant: 1
  },
  *quotas: {*
  *nova: {*
  *instances: 150,*
  *cores: 150,*
  *ram: -1*
  *}*
  *}*
  }
  }
  ]
  }


  Following, the list of supported quotas:
 *nova:*
  instances, cores, ram, floating-ips, fixed-ips, metadata-items,
 injected-files, injected-file-content-bytes, injected-file-path-bytes,
 key-pairs, security-groups, security-group-rules

  *cinder:*
  gigabytes, snapshots, volumes

  *neutron (coming soon):*
  network, subnet, port, router, floatingip, security-group,
 security-group-rule


  Regards,

 --
 Bruno Semperlotti

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-04 Thread Clint Byrum
Excerpts from Stan Lagun's message of 2014-04-04 02:54:05 -0700:
 Hi Steve, Thomas
 
 I'm glad the discussion is so constructive!
 
 If we add type interfaces to HOT this may do the job.
 Applications in AppCatalog need to be portable across OpenStack clouds.
 Thus if we use some globally-unique type naming system applications could
 identify their dependencies in unambiguous way.
 
 We also would need to establish relations between between interfaces.
 Suppose there is My::Something::Database interface and 7 compatible
 materializations:
 My::Something::TroveMySQL
 My::Something::GaleraMySQL
 My::Something::PostgreSQL
 My::Something::OracleDB
 My::Something::MariaDB
 My::Something::MongoDB
 My::Something::HBase
 
 There are apps (say, SQLAlchemy-based apps) that are fine with any
 relational DB. In that case all templates except for MongoDB and HBase
 should be matched. There are apps that are designed to work with MySQL only. In
 that case only TroveMySQL, GaleraMySQL and MariaDB should be matched. There
 are applications which use PL/SQL and thus require OracleDB (there can be
 several Oracle implementations as well). There are also applications
 (Marconi and Ceilometer are good examples) that can use both some SQL and
 NoSQL databases. So conformance to a Database interface is not enough, and
 some sort of interface hierarchy is required.

IMO that is not really true and trying to stick all these databases into
one SQL database interface is not a use case I'm interested in
pursuing.

Far more interesting is having each one be its own interface which apps
can assert that they support or not.

 
 Another thing that we need to consider is that even compatible
 implementations may have different sets of parameters. For example, a
 clustered-HA PostgreSQL implementation may require additional parameters
 besides those needed for the plain single-instance variant. A template that
 consumes *any* PostgreSQL will not be aware of those additional parameters.
 Thus they need to be dynamically added to the environment's input parameters,
 and the resource consumer needs to be patched to pass those parameters to the
 actual implementation.
 

I think this is a middleware pipe-dream and the devil is in the details.

Just give users the ability to be specific, and then generic patterns
will arise from those later on.

I'd rather see a focus on namespacing and relative composition, so that
providers of the same type that actually do use the same interface but
are alternate implementations will be able to be consumed. So for instance
there is the non-Neutron LBaaS and the Neutron LBaaS, and both have their
merits for operators, but are basically identical from an application
standpoint. That seems a better guiding use case than different databases.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Jim Rollenhagen
On April 4, 2014 at 9:12:56 AM, Devananda van der Veen 
(devananda@gmail.com) wrote:
Ironic's responsibility ends where the host OS begins. Ironic is a bare metal 
provisioning service, not a configuration management service.
+1

// jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Lucas Alvares Gomes
 There are lots of configuration management agents already out there (chef?
 puppet? salt? ansible? ... the list is pretty long these days...) which you
 can bake into the images that you deploy with Ironic, but I'd like to be
 clear that, in my opinion, Ironic's responsibility ends where the host OS
 begins. Ironic is a bare metal provisioning service, not a configuration
 management service.

 What you're suggesting is similar to saying, we want to run an agent in
 every KVM VM in our cloud, except most customers would clearly object to
 this. The only difference here is that you (and tripleo) are the deployer
 *and* the user of Ironic; that's a special case, but not the only use case
 which Ironic is servicing.


+1 (already agreed with something similar in another thread[1], seems that
this discussion is splitted in 2 threads)

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-April/031896.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Clint Byrum
Excerpts from Vladimir Kozhukalov's message of 2014-04-04 05:19:41 -0700:
 Hello, everyone,
 
 I'd like to involve more people to express their opinions about the way how
 we are going to run Ironic-python-agent. I mean should we run it with root
 privileges or not.
 
 From the very beginning agent is supposed to run under ramdisk OS and it is
 intended to make disk partitioning, RAID configuring, firmware updates and
 other stuff according to installing OS. Looks like we always will run agent
 with root privileges. Right? There are no reasons to limit agent
 permissions.
 
 On the other hand, it is easy to imagine a situation when you want to run
 agent on every node of your cluster after installing OS. It could be useful
 to keep hardware info consistent (for example, many hardware configurations
 allow one to add hard drives in run time). It also could be useful for on
 the fly firmware updates. It could be useful for on the fly
 manipulations with lvm groups/volumes and so on.
 
 Frankly, I am not even sure that we need to run agent with root privileges
 even in ramdisk OS, because, for example, there are some system default
 limitations such as number of connections, number of open files, etc. which
 are different for root and ordinary user and potentially can influence
 agent behaviour. Besides, it is possible that some vulnerabilities will be
 found in the future and they potentially could be used to compromise agent
 and damage hardware configuration.
 
 Consequently, it is better to run agent under ordinary user even under
 ramdisk OS and use rootwrap if agent needs to run commands with root
 privileges. I know that rootwrap has some performance issues
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.htmlbut
 it is still pretty suitable for ironic agent use case.
 

My opinion: If you are going to listen for connections, do so on a low
port as root, but then drop privs immediately thereafter. Run things
with sudo, not rootwrap, as the flexibility will just become a burden
if you ever do need to squeeze more performance out, and it won't be
much of a boon to what are likely to be very straightforward commands.
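
As a bare-bones sketch of that bind-then-drop-privileges pattern (the port
and user name are made up for the example):

    import os
    import pwd
    import socket

    def bind_low_port_then_drop_privs(port=443, user='ironic'):
        # Bind while still root; ports below 1024 need the privilege.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('0.0.0.0', port))
        sock.listen(128)

        # Then drop to an unprivileged user immediately afterwards.
        pw = pwd.getpwnam(user)
        os.setgroups([])
        os.setgid(pw.pw_gid)
        os.setuid(pw.pw_uid)
        return sock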

Finally, as others have said, this is for the deploy ramdisk only. For
the case where you want to do an on-the-fly firmware update, there are
a bazillion options to do remote execution. Ironic is for the case where
you don't have on-the-fly capabilities.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Ling Gao
seems that this discussion is splitted in 2 threads
Lucas,
 That's because I added a subject when I responded. :-)

Ling Gao



From:   Lucas Alvares Gomes lucasago...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   04/04/2014 01:16 PM
Subject:Re: [openstack-dev] [Ironic][Agent]




There are lots of configuration management agents already out there (chef? 
puppet? salt? ansible? ... the list is pretty long these days...) which 
you can bake into the images that you deploy with Ironic, but I'd like to 
be clear that, in my opinion, Ironic's responsibility ends where the host 
OS begins. Ironic is a bare metal provisioning service, not a 
configuration management service.

What you're suggesting is similar to saying, we want to run an agent in 
every KVM VM in our cloud, except most customers would clearly object to 
this. The only difference here is that you (and tripleo) are the deployer 
*and* the user of Ironic; that's a special case, but not the only use case 
which Ironic is servicing.


+1 (already agreed with something similar in another thread[1], seems that 
this discussion is splitted in 2 threads)

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-April/031896.html 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Ezra Silvera
 Ironic's responsibility ends where the host OS begins. Ironic is a bare 
metal provisioning service, not a configuration management service.

I agree with the above, but just to clarify, I would say that Ironic 
shouldn't *interact* with the host OS once it has booted. Obviously it can 
still perform BM tasks underneath the OS (while it's up and running) if 
needed (e.g., force shutdown through IPMI, etc.).





Ezra


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-04 Thread Clint Byrum
Excerpts from Tomas Sedovic's message of 2014-04-04 08:47:46 -0700:
 Hi All,
 
 I was wondering if the time has come to document what exactly are we
 doing with tripleo-heat-templates and merge.py[1], figure out what needs
 to happen to move away and raise the necessary blueprints on Heat and
 TripleO side.
 

Yes indeed, it is time.

 (merge.py is a script we use to build the final TripleO Heat templates
 from smaller chunks)
 
 There probably isn't an immediate need for us to drop merge.py, but its
 existence either indicates deficiencies within Heat or our unfamiliarity
 with some of Heat's features (possibly both).
 
 I worry that the longer we stay with merge.py the harder it will be to
 move forward. We're still adding new features and fixing bugs in it (at
 a slow pace but still).


Merge.py is where we've amassed our debt. We'll pay it back by moving
features into Heat. A huge debt payment is coming in the form of
software config migration, which you mention at the bottom of this
message.

 Below is my understanding of the main marge.py functionality and a rough
 plan of what I think might be a good direction to move to. It is almost
 certainly incomplete -- please do poke holes in this. I'm hoping we'll
 get to a point where everyone's clear on what exactly merge.py does and
 why. We can then document that and raise the appropriate blueprints.
 
 
 ## merge.py features ##
 
 
 1. Merging parameters and resources
 
 Any uniquely-named parameters and resources from multiple templates are
 put together into the final template.
 
 If a resource of the same name is in multiple templates, an error is
 raised. Unless it's of a whitelisted type (nova server, launch
 configuration, etc.) in which case they're all merged into a single
 resource.
 
 For example: merge.py overcloud-source.yaml swift-source.yaml
 
 The final template has all the parameters from both. Moreover, these two
 resources will be joined together:
 
  overcloud-source.yaml 
 
   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP
 
 
  swift-source.yaml 
 
   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Metadata:
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}
 
 
 The final template will contain:
 
   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}
 
 
 We use this to keep the templates more manageable (instead of having one
 huge file) and also to be able to pick the components we want: instead
 of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
 uses the VirtualPowerManager driver) or `ironic-vm-source`.
 

The merging of white-listed types is superseded entirely by
OS::Heat::StructuredConfig and OS::Heat::StructuredDeployment. I would
move that we replace all uses of it with those, and deprecate the
feature.

 
 
 2. FileInclude
 
 If you have a pseudo resource with the type of `FileInclude`, we will
 look at the specified Path and SubKey and put the resulting dictionary in:
 
  overcloud-source.yaml 
 
  NovaCompute0Config:
    Type: FileInclude
    Path: nova-compute-instance.yaml
    SubKey: Resources.NovaCompute0Config
    Parameters:
      NeutronNetworkType: gre
      NeutronEnableTunnelling: True
 
 
  nova-compute-instance.yaml 
 
  NovaCompute0Config:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      InstanceType: '0'
      ImageId: '0'
    Metadata:
      keystone:
        host: {Ref: KeystoneHost}
      neutron:
        host: {Ref: NeutronHost}
        tenant_network_type: {Ref: NeutronNetworkType}
        network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
        bridge_mappings: {Ref: NeutronBridgeMappings}
        enable_tunneling: {Ref: NeutronEnableTunnelling}
        physical_bridge: {Ref: NeutronPhysicalBridge}
        public_interface: {Ref: NeutronPublicInterface}
        service-password:
          Ref: NeutronPassword
      admin-password: {Ref: AdminPassword}
 
 The result:
 
  NovaCompute0Config:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      InstanceType: '0'
      ImageId: '0'
    Metadata:
      keystone:
        host: {Ref: KeystoneHost}
      neutron:
        host: {Ref: NeutronHost}
   

Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-04 Thread Stan Lagun
On Fri, Apr 4, 2014 at 9:05 PM, Clint Byrum cl...@fewbar.com wrote:

 IMO that is not really true and trying to stick all these databases into
 one SQL database interface is not a use case I'm interested in
 pursuing.


Indeed. "Any SQL database" is a useless interface. What I was trying to say
is that some apps may work on just any MySQL while others require some
specific version or variation or even impose some constraints on license or
underlying operating system. One possible solution was to have some sort of
interface hierarchy for that. An even better solution would be for all such
properties to be declared somewhere in HOT so that the consumer could say not
just "I require a MySQL-compatible template" but "I require a MySQL-compatible
template with version >= 5.0 and clustered = True". Probably you can come up
with a better example for this. Though the interface alone is a good starting
point.
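
To illustrate what I mean (purely hypothetical syntax - nothing like this
exists in HOT today), a producer template could advertise something like:

  capabilities:
    - interface: mysql
      properties:
        version: 5.5
        clustered: true
        license: GPLv2

and a consumer could then ask the catalog for "interface mysql, version >= 5.0,
clustered = true" and let the matching happen there. The exact schema matters
less than having an agreed place to declare such properties.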

So for instance there is the non-Neutron LBaaS and the Neutron LBaaS, and
 both have their
 merits for operators, but are basically identical from an application
 standpoint.


While conforming to the same interface from the consumer's point of view,
different load-balancers have many configuration options (template
parameters), and many of them are specific to a particular implementation
(otherwise they would all be functionally equal). This needs to be addressed
somehow.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Jiří Stránský

On 3.4.2014 13:02, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.


+1 to all.

Jirka


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][glance][heat][neutron][nova] Quick fix for upstream-translation-update Jenkins job failing (bug 1299349)

2014-04-04 Thread Andreas Jaeger
On 04/04/2014 05:57 PM, Dolph Mathews wrote:
 tl;dr:
 
   $ python clean_po.py PROJECT/locale/
   $ git commit
 
 The comments on bug 1299349 are already quite long, so apparently this
 got lost. To save everyone some time, the fix is as easy as above. So
 what's clean_po.py?
 
 Devananda van der Veen (devananda) posted a handy script to fix the
 issue in a related bug [1], to which I added a bit more automation [2]
 because I'm lazy and stuck it into a gist [3]. Grab a copy, and run it
 on milestone-proposed (or wherever necessary). It'll delete a bunch of
 lines from *.po files, and you can commit the result as Closes-Bug.
 
 Thanks, Devananda!
 
 [1] https://bugs.launchpad.net/ironic/+bug/1298645/comments/2
 [2] https://bugs.launchpad.net/keystone/+bug/1299349/comments/25
 [3] https://gist.github.com/dolph/9915293

thanks, Dolph.

I've been working on a check for this (with Clark Boylan's help) so that
this kind of duplicate cannot get merged in again. During the review, I
came upon quite a few situations that I did not anticipate before.

The IRC discussion we had led me to create a patch that enhances the pep8
pipeline - instead of creating a new separate job - since the pep8 job already
contains some smaller checks and we don't need to start another VM.

Looking at the reviews, I've improved the patch so that it should work
now on Mac OS X (David Stanek updated 84211 so that it works there).

The patch now runs msgfmt and we thus need to require it. It's available
in our CI infrastructure but not on some users' machines. In
https://review.openstack.org/#/c/85123/3/tox.ini it was suggested to
document the requirement on msgfmt in nova's doc/source/devref.

The patch adds the following command to tox.ini (plus some tox sugar):
bash -c "find nova -type f -regex '.*\.pot?' -print0 | \
 xargs -0 -n 1 msgfmt --check-format -o /dev/null"

This really needs to use bash -c, otherwise tox will not execute it.

There was a concern by Sean Dague about using a pipeline (see
https://review.openstack.org/#/c/83961/3/tox.ini). Sean, do you have a
pointer on how you would like to see this done?

Could I get some advice on how to move this forward in the same way for
all projects, please? Also, testing on OS X would be appreciated.

Basically my questions are:
* Should we run this as part of pep8 or is there a better place?
* Is there a better way to implement the above command?

I'll try to implement a solution if nobody beats me to it ;)

thanks,
Andreas


Patches:
https://review.openstack.org/#/c/85123/
https://review.openstack.org/#/c/84239/
https://review.openstack.org/#/c/84236/
https://review.openstack.org/#/c/84211/
https://review.openstack.org/#/c/84207/
https://review.openstack.org/#/c/85135/
https://review.openstack.org/#/c/84233/
https://review.openstack.org/#/c/83954/
https://review.openstack.org/#/c/84226/


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-04 Thread Meghal Gosalia
I am fine with taking the approach of the user passing multiple availability
zones (AZ1,AZ2) if he wants the VM to be in the intersection of AZ1 and AZ2.
It will be cleaner.

But a similar approach should also be used when setting the
default_scheduling_zone.

Since we will not be able to add a host to multiple zones, the only way to
guarantee even distribution across zones when the user does not pass any zone
is to allow multiple zones in the default_scheduling_zone param.

Thanks,
Meghal

On Apr 4, 2014, at 2:38 AM, Sylvain Bauza sylvain.ba...@gmail.com wrote:




2014-04-04 10:30 GMT+02:00 Sylvain Bauza sylvain.ba...@gmail.com:
Hi all,



2014-04-03 18:47 GMT+02:00 Meghal Gosalia meg...@yahoo-inc.com:

Hello folks,

Here is the bug [1] which is currently not allowing a host to be part of two 
availability zones.
This bug was targeted for havana.

The fix in the bug was made because it was assumed
that openstack does not support adding hosts to two zones by design.

The assumption was based on the fact that ---
if hostX is added to zoneA as well as zoneB,
and if you boot a vm vmY passing zoneB in boot params,
nova show vmY still returns zoneA.

In my opinion, we should fix the case of nova show
rather than changing aggregate api to not allow addition of hosts to multiple 
zones.

I have added my comments in comments #7 and #9 on that bug.

Thanks,
Meghal

[1] Bug - https://bugs.launchpad.net/nova/+bug/1196893





Thanks for the pointer, now I see why the API is preventing a host from being
added to a 2nd aggregate if there is a different AZ. Unfortunately, this patch
missed the fact that aggregate metadata can be modified once the aggregate is
created, so we should add a check when updating metadata in order to cover all
corner cases.

So, IMHO, it's worth providing a patch for API consistency so that we enforce
the fact that a host should be in only one AZ (but possibly 2 or more
aggregates) and see how we can propose to users the ability to provide 2
distinct AZs when booting.

Does everyone agree ?




Well, I'm replying to myself. The corner case is even trickier. I missed this 
patch [1] which already checks that when updating an aggregate to set an AZ, 
its hosts are not already part of another AZ. So, indeed, the coverage is 
already there... except for one thing :

If an operator creates an aggregate with an AZ set to the default AZ
defined in nova.conf and adds a host to this aggregate, nova
availability-zone-list does show the host being part of this default AZ (normal
behaviour). If we create an aggregate 'foo' without an AZ, then add the same
host to that aggregate, and then update the metadata of the aggregate to set
an AZ 'foo', the AZ check won't notice that the host is already part of an
AZ and will allow the host to be part of two distinct AZs.

Proof here : http://paste.openstack.org/show/75066/

I'm on that bug.
-Sylvain

[1] : https://review.openstack.org/#/c/36786
-Sylvain

On Apr 3, 2014, at 9:05 AM, Steve Gordon sgor...@redhat.com wrote:

- Original Message -

Currently host aggregates are quite general, but the only ways for an
end-user to make use of them are:

1) By making the host aggregate an availability zones (where each host
is only supposed to be in one availability zone) and selecting it at
instance creation time.

2) By booting the instance using a flavor with appropriate metadata
(which can only be set up by admin).


I would like to see more flexibility available to the end-user, so I
think we should either:

A) Allow hosts to be part of more than one availability zone (and allow
selection of multiple availability zones when booting an instance), or

While changing to allow hosts to be in multiple AZs changes the concept from an
operator/user point of view, I do think the idea of being able to specify
multiple AZs when booting an instance makes sense and would be a nice
enhancement for users working with multi-AZ environments - I'm OK with this
instance running in AZ1 and AZ2, but not AZ*.

-Steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Clint Byrum
Excerpts from Robert Collins's message of 2014-04-03 04:02:20 -0700:
 Getting back in the swing of things...
 
 Hi,
 like most OpenStack projects we need to keep the core team up to
 date: folk who are not regularly reviewing will lose context over
 time, and new folk who have been reviewing regularly should be trusted
 with -core responsibilities.
 
 In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core
 


+1 for all changes.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Jan Provazník

On 04/03/2014 01:02 PM, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core


+1 to all

Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-04 Thread Clint Byrum
Excerpts from Michael Elder's message of 2014-04-04 07:16:55 -0700:
 Opened in Launchpad: https://bugs.launchpad.net/heat/+bug/1302624
 
 I still have concerns though about the design approach of creating a new 
 project for every stack and new users for every resource. 
 
 If I provision 1000 patterns a day with an average of 10 resources per 
 pattern, you're looking at 10,000 users per day. How can that scale? 
 

If that can't scale, then keystone is not viable at all. I like to think
we can scale keystone to the many millions of users level.

 How can we ensure that all stale projects and users are cleaned up as
 instances are destroyed? When users choose to go through horizon or nova to
 tear down instances, what cleans up the project and users associated with
 that heat stack?
 

So, they created these things via Heat, but have now left the dangling
references in Heat, and expect things to work properly?

If they create it via Heat, they need to delete it via Heat.

 Keystone defines the notion of tokens to support authentication, why 
 doesn't the design provision and store a token for the stack and its 
 equivalent management? 
 

Tokens are _authentication_, not _authorization_. For the latter, we
need to have a way to lock down access to an individual resource in
Heat. This allows putting secrets in deployments and knowing that only
the instance which has been deployed to will have access to the secrets.
I do see an optimization possible, which is to just create a user for the
box that is given access to any deployments on the box. That would make
sense if users are going to create many many deployments per server. But
even at 10 per server, having 10 users is simpler than trying to manage
shared users and edit their authorization rules.

Now, I actually think that OAUTH tokens _are_ intended to be authorization
as well as authentication, so that is probably where the focus should
be long term. But really, you're talking about the same thing: a single
key lookup in keystone.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Chuck Thier
I think I have worked out the performance issues with eventlet and Requests,
with most of it being that swiftclient needs to make use of
requests.session to re-use connections, and there are likely other areas
where we can make improvements.

Now on to expect: 100-continue support - has anyone else looked into that?

--
Chuck


On Fri, Apr 4, 2014 at 9:41 AM, Chuck Thier cth...@gmail.com wrote:

 Howdy,

 Now that swift has aligned with the other projects to use requests in
 python-swiftclient, we have lost a couple of features.

 1.  Requests doesn't support expect: 100-continue.  This is very useful
 for services like swift or glance where you want to make sure a request can
 continue before you start uploading GBs of data (for example find out that
 you need to auth).

 2.  Requests doesn't play nicely with eventlet or other async frameworks
 [1].  I noticed this when suddenly swift-bench (which uses swiftclient)
 wasn't performing as well as before.  This also means that, for example, if
 you are using keystone with swift, the auth requests to keystone will block
 the proxy server until they complete, which is also not desirable.

 Does anyone know if these issues are being addressed, or begun working on
 them?

 Thanks,

 --
 Chuck

 [1]
 http://docs.python-requests.org/en/latest/user/advanced/#blocking-or-non-blocking

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Better handling of lists in Heat - a proposal to add a map function

2014-04-04 Thread Zane Bitter

On 19/02/14 02:48, Clint Byrum wrote:

Since picking up Heat and trying to think about how to express clusters
of things, I've been troubled by how poorly the CFN language supports
using lists. There has always been the Fn::Select function for
dereferencing arrays and maps, and recently we added a nice enhancement
to HOT to allow referencing these directly in get_attr and get_param.

However, this does not help us when we want to do something with all of
the members of a list.

In many applications I suspect the template authors will want to do what
we want to do now in TripleO. We have a list of identical servers and
we'd like to fetch the same attribute from them all, join it with other
attributes, and return that as a string.

The specific case is that we need to have all of the hosts in a cluster
of machines addressable in /etc/hosts (please, Designate, save us,
eventually. ;). The way to do this if we had just explicit resources
named NovaCompute0, NovaCompute1, would be:

   str_join:
   - "\n"
   - - str_join:
       - ' '
       - get_attr:
         - NovaCompute0
         - networks.ctlplane.0
       - get_attr:
         - NovaCompute0
         - name
     - str_join:
       - ' '
       - get_attr:
         - NovaCompute1
         - networks.ctlplane.0
       - get_attr:
         - NovaCompute1
         - name

Now, what I'd really like to do is this:

map:
  - str_join:
    - "\n"
    - - str_join:
        - ' '
        - get_attr:
          - $1
          - networks.ctlplane.0
        - get_attr:
          - $1
          - name
  - - NovaCompute0
    - NovaCompute1

This would be helpful for the instances of resource groups too, as we
can make sure they return a list. The above then becomes:


map:
  - str_join:
    - "\n"
    - - str_join:
        - ' '
        - get_attr:
          - $1
          - networks.ctlplane.0
        - get_attr:
          - $1
          - name
  - get_attr:
    - NovaComputeGroup
    - member_resources

Thoughts on this idea? I will throw together an implementation soon but
wanted to get this idea out there into the hive mind ASAP.


Apparently I read this at the time, but completely forgot about it. 
Sorry about that! Since it has come up again in the context of the 
TripleO Heat templates and merge.py thread, allow me to contribute my 2c.


Without expressing an opinion on this proposal specifically, consensus 
within the Heat core team has been heavily -1 on any sort of for-each 
functionality. I'm happy to have the debate again (and TBH I don't 
really know what the right answer is), but I wouldn't consider the lack 
of comment on this as a reliable indicator of lazy consensus in favour; 
equivalent proposals have been considered and rejected on multiple 
occasions.


Since it looks like TripleO will soon be able to move over to using 
AutoscalingGroups (or ResourceGroups, or something) for groups of 
similar servers, maybe we could consider baking this functionality into 
Autoscaling groups instead of as an intrinsic function.


For example, when you do get_attr on an autoscaling resource it could 
fetch the corresponding attribute from each member of the group and 
return them as a list. (It might be wise to prepend Output. or 
something similar - maybe Members. - to the attribute names, as 
AWS::CloudFormation::Stack does, so that attributes of the autoscaling 
group itself can remain in a separate namespace.)


Since members of your NovaComputeGroup will be nested stacks anyway 
(using ResourceGroup or some equivalent feature - preferably autoscaling 
with rolling updates), in the case above you'd define in the scaled 
template:


  outputs:
    hosts_entry:
      description: An /etc/hosts entry for the NovaComputeServer
      value:
        str_join:
          - ' '
          - - get_attr:
              - NovaComputeServer
              - networks
              - ctlplane
              - 0
            - get_attr:
              - NovaComputeServer
              - name

And then in the main template (containing the autoscaling group):

str_join:
  - "\n"
  - get_attr:
    - NovaComputeGroup
    - Members.hosts_entry

would give the same output as your example would.

IMHO we should do something like this regardless of whether it solves 
your use case, because it's fairly easy, requires no changes to the 
template format, and users have been asking for ways to access e.g. a 
list of IP addresses from a scaling group. That said, it seems very 
likely that making the other changes required for TripleO to get rid of 
merge.py (i.e. switching to scaling groups of templates instead of by 
multiplying resources in templates) will make this a viable solution for 
TripleO's use case as well.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Joshua Harlow
I found https://github.com/kennethreitz/requests/issues/713


Lukasa (https://github.com/Lukasa) commented a month ago
(https://github.com/kennethreitz/requests/issues/713#issuecomment-35594520):

There's been no progress on this, and it's not high on the list of priorities 
for any of the core development team. This is only likely to happen any time 
soon if someone else develops it. =)



So maybe someone from openstack (or other) just needs to finish the above up?

From: Chuck Thier cth...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
Date: Friday, April 4, 2014 at 11:50 AM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Issues with Python Requests

I think I have worked out the performance issues with eventlet and Requests,
with most of it being that swiftclient needs to make use of requests.session to
re-use connections, and there are likely other areas where we can make
improvements.

Now on to expect: 100-continue support - has anyone else looked into that?

--
Chuck


On Fri, Apr 4, 2014 at 9:41 AM, Chuck Thier cth...@gmail.com wrote:
Howdy,

Now that swift has aligned with the other projects to use requests in 
python-swiftclient, we have lost a couple of features.

1.  Requests doesn't support expect: 100-continue.  This is very useful for 
services like swift or glance where you want to make sure a request can 
continue before you start uploading GBs of data (for example find out that you 
need to auth).

2.  Requests doesn't play nicely with eventlet or other async frameworks [1].  
I noticed this when suddenly swift-bench (which uses swiftclient) wasn't 
performing as well as before.  This also means that, for example, if you are 
using keystone with swift, the auth requests to keystone will block the proxy 
server until they complete, which is also not desirable.

Does anyone know if these issues are being addressed, or begun working on them?

Thanks,

--
Chuck

[1] 
http://docs.python-requests.org/en/latest/user/advanced/#blocking-or-non-blocking

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Tempest without branches

2014-04-04 Thread Rochelle.RochelleGrober
(easier to insert my questions at top of discussion as they are more general)


How would test deprecations work in a branchless Tempest?  Right now, there is 
the discussion on removing the XML tests from Tempest, yet they are still valid 
for Havana and Icehouse.  If they get removed, will they still be accessible 
and runnable for Havana version tests?  I can see running from a tagged version 
for Havana, but if you are *not* running from the tag, then the files would be 
gone.  So, I'm wondering how this would work for Refstack, testing backported 
bugfixes, etc.

Another related question arises from the discussion of Nova API versions.
Tempest tests are being enhanced to do validation, and the newer API versions
(2.1, 3.n, etc., when the approach is decided) will do validation, etc. How
will these backward-incompatible tests be handled if a test that works for
Havana gets modified to work for Juno and starts failing against the Havana
code base?

With the discussion of project functional tests that could be maintained in one
place but run in two (maintenance location undecided; run both locally and in
the Tempest/integrated gate), how would this cross-project effort be affected
by a branchless Tempest?

Maybe we need some use cases to ferret out the corner cases of a branchless
Tempest implementation? I think we need to get more into some of the details
to understand what would need to be added/modified/removed to make this
design proposal work.

--Rocky



From: David Kranz [mailto:dkr...@redhat.com]
Sent: Friday, April 04, 2014 6:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [RFC] Tempest without branches

On 04/04/2014 07:37 AM, Sean Dague wrote:

An interesting conversation has cropped up over the last few days in -qa

and -infra which I want to bring to the wider OpenStack community. When

discussing the use of Tempest as part of the Defcore validation we came

to an interesting question:



Why does Tempest have stable/* branches? Does it need them?



Historically the Tempest project has created a stable/foo tag the week

of release to lock the version of Tempest that will be tested against

stable branches. The reason we did that is until this cycle we had

really limited knobs in tempest to control which features were tested.

stable/havana means - test everything we know how to test in havana. So

when, for instance, a new API extension landed upstream in icehouse,

we'd just add the tests to Tempest. It wouldn't impact stable/havana,

because we wouldn't backport changes.



But is this really required?



For instance, we don't branch openstack clients. They are supposed to

work against multiple server versions. Tempest, at some level, is

another client. So there is some sense there.



Tempest now also has flags on features, and tests are skippable if

services, or even extensions, aren't enabled (all explicitly settable in

the tempest.conf). This is a much better control mechanism than the

coarse-grained selection of stable/foo.





If we decided not to set a stable/icehouse branch in 2 weeks, the gate

would change as follows:



Project masters: no change

Project stable/icehouse: would be gated against Tempest master

Tempest master: would double the gate jobs, gate on project master and

project stable/icehouse on every commit.



(That last one needs infra changes to work right, those are all in

flight right now to assess doability.)



Some interesting effects this would have:



 * Tempest test enhancements would immediately apply on stable/icehouse *



... giving us more confidence. A large number of the tests added to master

in every release are enhanced checks of existing functionality.



 * Tempest test changes would need server changes in master and

stable/icehouse *



In trying tempest master against stable/havana we found a number of

behavior changes in projects that had required a 2-step change in the

Tempest tests to support. But this actually means that stable/havana and

stable/icehouse for the same API version are different. Going forward

this would require master + stable changes on the projects + Tempest

changes. Which would provide much more friction in changing these sorts

of things by accident.



 * Much more stable testing *



If every Tempest change is gating on stable/icehouse, the week-long

"stable/havana can't pass tests" situation won't happen. There will be much more

urgency to keep stable branches functioning.





If we got rid of branches in Tempest the path would be:

 * infrastructure to support this in infra - in process, probably

landing today

 * don't set stable/icehouse - decision needed by Apr 17th

 * changes to d-g/devstack to be extra explicit about what features

stable/icehouse should support in tempest.conf

 * see if we can make master work with stable/havana to remove the

stable/havana Tempest branch (if this is doable in a month, great, if

not just wait for havana to age out).




Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-04 Thread Zane Bitter

On 04/04/14 13:58, Clint Byrum wrote:

We could keep roughly the same structure: a separate template for each
OpenStack service (compute, block storage, object storage, ironic, nova
baremetal). We would then use Heat environments to treat each of these
templates as a custom resource (e.g. OS::TripleO::Nova,
OS::TripleO::Swift, etc.).


I've never fully embraced providers for composition. Perhaps I've missed
that as a key feature. An example of this would be helpful. I think if
we deprecated all of merge.py except the merge unique params and
resources into one template part, we could probably just start using
nested stacks for that and drop merge.py. However, I'm not a huge fan of
nested stacks as they are a bit clunky. Maybe providers would make that
better?

Anyway, I think I need to see how this would actually work before I can
really grasp it.


AIUI this use case is pretty much a canonical example of where you'd 
want to use providers. You have a server that you would like to treat as 
just a server, but can't because it comes with a WaitCondition and a 
random string generator (or whatever) into the bargain. So you group 
those resources together into a provider template behind a server-like 
facade, and just treat them like the single server you'd prefer them to be.
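
To make that concrete, a rough sketch (the OS::TripleO::NovaCompute type name
is invented here; nova-compute-instance.yaml is the chunk from earlier in the
thread): the environment maps a facade type to the provider template,

  resource_registry:
    OS::TripleO::NovaCompute: nova-compute-instance.yaml

and the parent template then instantiates it as though it were a single
server, passing what used to be FileInclude Parameters as properties:

  NovaCompute0:
    type: OS::TripleO::NovaCompute
    properties:
      NeutronNetworkType: gre
      NeutronEnableTunnelling: True

The provider template holds the real server, its wait condition and so on,
and exposes whatever attributes the parent needs through its outputs.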


This could actually be a big win where you're creating multiple ones 
with a similar configuration, because you can parametrise it and move it 
inside the template and then you only need to specify the custom parts 
rather than repeat the whole declaration when you add more resources in 
the same group.


From there moving into scaling groups when the time comes should be 
trivial. I'm actually pushing for the autoscaling code to literally use 
the providers mechanism to implement scaling of stacks, but the 
ResourceGroup does something basically equivalent too - splitting your 
scaling unit into a separate template is in all cases the first step.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack] [nova] admin user create instance for another user/tenant

2014-04-04 Thread Xu (Simon) Chen
I wonder if there is a way to do the following. I have a user A with admin
role in tenant A, and I want to create a VM in/for tenant B as user A.
Obviously, I can use A's admin privilege to add itself to tenant B, but I
want to avoid that.

Based on the policy.json file, it seems doable:
https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L8

I read this as: as long as a user is an admin, it can create an instance,
just like an admin user can remove an instance from another tenant.

But in here, it looks like as long as the context project ID and target
project ID don't match, an action would be rejected:
https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L968

Indeed, when I try to use user A's token to create a VM (POST to
v2/tenant_b/servers), I got the exception from the above link.

On the other hand, according to here, VM's project_id only comes from the
context:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L767

I wonder if it makes sense to allow admin users to specify a project_id
field (which overrides context.project_id) when creating a VM. This
probably requires non-trivial code change.

Or maybe there is another way of doing what I want?

Thanks.
-Simon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-04 Thread Clint Byrum
Excerpts from Adam Young's message of 2014-04-04 18:48:40 -0700:
 On 04/04/2014 02:46 PM, Clint Byrum wrote:
  Excerpts from Michael Elder's message of 2014-04-04 07:16:55 -0700:
  Opened in Launchpad: https://bugs.launchpad.net/heat/+bug/1302624
 
  I still have concerns though about the design approach of creating a new
  project for every stack and new users for every resource.
 
  If I provision 1000 patterns a day with an average of 10 resources per
  pattern, you're looking at 10,000 users per day. How can that scale?
 
  If that can't scale, then keystone is not viable at all. I like to think
  we can scale keystone to the many millions of users level.
 
  How can we ensure that all stale projects and users are cleaned up as
  instances are destroyed? When users choose to go through horizon or nova to
  tear down instances, what cleans up the project and users associated with
  that heat stack?
 
  So, they created these things via Heat, but have now left the dangling
  references in Heat, and expect things to work properly?
 
  If they create it via Heat, they need to delete it via Heat.
 
  Keystone defines the notion of tokens to support authentication, why
  doesn't the design provision and store a token for the stack and its
  equivalent management?
 
  Tokens are _authentication_, not _authorization_.
 
 Tokens are authorization, not authentication.  For Authentication you 
 should be using a real cryptographically secure authentication 
 mechanism:  either Kerberos or X509.
 

Indeed, I may have used the terms incorrectly.

Unless I'm mistaken, a token is valid wherever it is presented. It is
simply proving that you authenticated yourself to keystone and that you
have xyz roles.

Perhaps the roles are authorization. But those roles aren't scoped to
a token, they're scoped to a user, so it still remains that it serves
as authentication for what you have and what you're authorized to do as
a whole user.

That is why I suggest OAUTH, because that is a scheme which offers
tokens with limited scope. We kind of have the same thing with trusts,
but that also doesn't really offer the kind of isolation that we want,
nor does it really offer advantages over user-per-deployment.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev