Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Thomas Spatzier
Excerpts from Clint Byrum's message of 12.11.2013 19:32:50:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 12.11.2013 19:35
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
 
  Hi all,
 
  I have just posted the following wiki page to reflect a refined
proposal
snip
 Hi Thomas, thanks for spelling this out clearly.

 I am still -1 on anything that specifies the place a configuration is
 hosted inside the configuration definition itself. Because configurations
 are encapsulated by servers, it makes more sense to me that the servers
 (or server groups) would specify their configurations. If changing to a

IMO the current proposal does _not_ hardcode the concrete hosting inside the
component definition. The component definition is in an external template file,
and all we do is give it a pointer to the server at deploy time so that the
implementation can do whatever is needed at that time.
The resource in the actual template file is like the intermediate
association resource you are suggesting below (similar to what
VolumeAttachment does), so this is the place where you say which component
gets deployed where. This represents a concrete use of a software
component. Again, all we do is pass in a pointer to the server where _this
use_ of the software component shall be installed.
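
To make this concrete, here is a rough sketch of what I have in mind (type and
property names are placeholders, not a finalized syntax). The component itself
would live in its own template file (say, mysql.yaml), and the resource in the
using template only wires that definition to a server:

  resources:
    db_server:
      type: OS::Nova::Server
      properties: {image: fedora, flavor: m1.small}
    mysql:
      type: My::Software::MySQL            # backed by the external mysql.yaml file
      properties:
        server: {get_resource: db_server}  # pointer passed in at deploy time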

 more logical model is just too hard for TOSCA to adapt to, then I suggest
 this be an area that TOSCA differs from Heat. We don't need two models

The current proposal was done completely unrelated to TOSCA; it is really
just an attempt at a pragmatic approach for solving the use cases we talked
about. I don't really care in which direction the relations point. Both
ways can easily be mapped to TOSCA. I just think the current proposal is
intuitive, at least to me. And you could see it as a kind of short notation
that avoids another association class.

 for communicating configurations to servers, and I'd prefer Heat stay
 focused on making HOT template authors' and users' lives better.

 I have seen an alternative approach which separates a configuration
 definition from a configuration deployer. This at least makes it clear
 that the configuration is a part of a server. In pseudo-HOT:

 resources:
   WebConfig:
 type: OS::Heat::ChefCookbook
 properties:
   cookbook_url: https://some.test/foo
   parameters:
 endpoint_host:
   type: string
   WebServer:
 type: OS::Nova::Server
 properties:
   image: webserver
   flavor: 100
   DeployWebConfig:
 type: OS::Heat::ConfigDeployer
 properties:
   configuration: {get_resource: WebConfig}
   on_server: {get_resource: WebServer}
   parameters:
 endpoint_host: {get_attribute: [ WebServer, first_ip]}

The DeployWebConfig association class actually is the 'mysql' resource in
the template on the wiki page. See the Design alternatives section I put in.
That would be fine with me as well.


snip




Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Thomas Spatzier
Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 12.11.2013
21:27:13:
 From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 12.11.2013 21:29
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 Hi,

 I agree with Clint that component placement specified inside
 component configuration is not a right thing. I remember that mostly
 everyone agreed that hosted_on should not be in HOT templates.
 When one specifies placement explicitly inside a component definition,
 it prevents the following:
 1. Reusability - you can't reuse a component without creating a copy of
 its definition with another placement parameter.

See my reply to Clint's mail. The deployment location in the form of the
server reference is _not_ hardcoded in the component definition. All we
do is provide a pointer to the server where the software shall be deployed
at deploy time. You can use a component definition in many places, and in
each place where you use it you provide it a pointer to the target server.
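
As a sketch of the reuse aspect (resource type and property names are again
just placeholders), the same external component definition can be used twice,
each use simply getting a different server pointer:

  resources:
    server_a:
      type: OS::Nova::Server
      properties: {image: webserver, flavor: 100}
    server_b:
      type: OS::Nova::Server
      properties: {image: webserver, flavor: 100}
    mysql_on_a:
      type: My::Software::MySQL            # same component definition file
      properties:
        server: {get_resource: server_a}
    mysql_on_b:
      type: My::Software::MySQL            # reused, nothing copied
      properties:
        server: {get_resource: server_b}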

 2. Composability - there will be no clear way to express composable
 configurations. There was a clear way in a template shown during the
 design session where a server had a list of components to be placed.

I think we have full composability with the deployment resources that
mark uses of software component definitions.

 3. Deployment order - some components should be placed in a strict
 order, and it will be much easier to just make an ordered list of
 components than to express artificial dependencies between them just
 for ordering.

With the deployment resources and Heat's normal way of handling
dependencies between any resources, we should be able to have proper ordering.
I agree that strict ordering is probably the easiest way of doing it, but
we have implementations that do deployment in a more flexible manner
without any problems.
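
Just to illustrate what I mean by relying on Heat's dependency handling (the
resource type names are placeholders, and depends_on stands for whatever
dependency mechanism we end up with), two components on the same server could
be ordered like this:

  resources:
    web_server:
      type: OS::Nova::Server
      properties: {image: webserver, flavor: 100}
    install_mysql:
      type: My::Software::MySQL
      properties:
        server: {get_resource: web_server}
    install_wordpress:
      type: My::Software::WordPress
      depends_on: install_mysql            # deployed only after mysql has finished
      properties:
        server: {get_resource: web_server}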


 Thanks
 Georgy


 On Tue, Nov 12, 2013 at 10:32 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
 
  Hi all,
 
  I have just posted the following wiki page to reflect a refined
proposal
  for HOT software configuration based on discussions at the design
summit
  last week. Angus also put a sample up in an etherpad last week, but we
did
  not have enough time to go thru it in the design session. My write-up
is
  based on Angus' sample, actually a refinement, and on discussions we
had in
  breaks, plus it is trying to reflect all the good input from ML
discussions
  and Steve Baker's initial proposal.
 
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 
  Please review and provide feedback.

 Hi Thomas, thanks for spelling this out clearly.

 I am still -1 on anything that specifies the place a configuration is
 hosted inside the configuration definition itself. Because configurations
 are encapsulated by servers, it makes more sense to me that the servers
 (or server groups) would specify their configurations. If changing to a
 more logical model is just too hard for TOSCA to adapt to, then I suggest
 this be an area that TOSCA differs from Heat. We don't need two models
 for communicating configurations to servers, and I'd prefer Heat stay
 focused on making HOT template authors' and users' lives better.

 I have seen an alternative approach which separates a configuration
 definition from a configuration deployer. This at least makes it clear
 that the configuration is a part of a server. In pseudo-HOT:

 resources:
   WebConfig:
     type: OS::Heat::ChefCookbook
     properties:
       cookbook_url: https://some.test/foo
       parameters:
         endpoint_host:
           type: string
   WebServer:
     type: OS::Nova::Server
     properties:
       image: webserver
       flavor: 100
   DeployWebConfig:
     type: OS::Heat::ConfigDeployer
     properties:
       configuration: {get_resource: WebConfig}
       on_server: {get_resource: WebServer}
       parameters:
         endpoint_host: {get_attribute: [ WebServer, first_ip]}

 I have implementation questions about both of these approaches though,
 as it appears they'd have to reach backward in the graph to insert
 their configuration, or have a generic bucket for all configuration
 to be inserted. IMO that would look a lot like the method I proposed,
 which was to just have a list of components attached directly to the
 server like this:

 components:
   WebConfig:
     type: Chef::Cookbook
     properties:
       cookbook_url: https://some.test/foo
       parameters:
          endpoint_host:
           type: string
 resources:
   WebServer:
     type: OS::Nova::Server
     properties:
       image: webserver
       flavor: 100
     components:
       - webconfig:
         component: {get_component: WebConfig}
         parameters:
           endpoint_host: {get_attribute: [ WebServer, first_ip ]}

 Of course, 

Re: [openstack-dev] [Mistral] really simple workflow for Heat configuration tasks

2013-11-13 Thread Thomas Spatzier
Angus Salkeld asalk...@redhat.com wrote on 12.11.2013 23:05:57:
 From: Angus Salkeld asalk...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 12.11.2013 23:09
 Subject: Re: [openstack-dev] [Mistral] really simple workflow for
 Heat configuration tasks

 On 12/11/13 13:04 +0100, Thomas Spatzier wrote:
 Hi Angus,
 
 that is an interesting idea. Since you mentioned the software config
 proposal in the beginning as a related item, I guess you are trying to
 solve some software config related issues with Mistral. So a few
questions,
 looking at this purely from a software config perspective:
 
 Are you thinking about doing the infrastructure orchestration (VMs,
 volumes, network etc.) with Heat's current capabilities and then letting
 the complete software orchestration be handled by Mistral tasks? I.e.
 bootstrap the workers on each VM and have the definition of when which
 agent does something defined in a flow?

 Well, we either add an API to Heat to do install_config or we use
 a service that is designed to do tasks. Clint convinced me quite
 easily that install/apply_config is just a task.

 
 If yes, is there a way for passing data around - e.g. output produced by
 one software config step is input for another software config step?
 
 Again, if my above assumption is true, couldn't there be problems when we
 have two ways of doing orchestration, where the software layer thing would
 take the Heat engine out of some processing and take away some control? Or
 are you thinking about using Mistral as a general mechanism for task
 execution in Heat, which would then probably resolve the conflict?
 
 At this point we really do not need a flow, just a task concept
 from Mistral. Perhaps ways of grouping them and targeting them
 at a particular server.

 I'd see the config_deployer resource posting a task to Mistral
 and we have an agent in the server that can consume tasks and
 pass them to sub-agents that understand the particular format.

Ok, makes sense to me. And I don't see a conflict with the software config
proposal, but this is one of the implementation details we said need to
be figured out :-)
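
Just to make the idea more tangible: the kind of task record such a
config_deployer resource could post to Mistral might look roughly like the
following (purely illustrative - there is no agreed format or field naming
yet):

  task:
    name: deploy_web_config
    target_server: <nova server uuid>      # consumed by the agent on that server
    config:
      type: chef_cookbook
      url: https://some.test/foo
      parameters:
        endpoint_host: 10.0.0.5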


 If we do this then Heat is in charge of the orchestration and
 there are not two workflows fighting for control. I do agree
 that there should just be one.

 I think once Mistral is more mature we can decide whether to pass
 full workflow control over to it, but for now the task functionality
 is all we need. (and a time based one would be neat too btw).

 -Angus

 Regards,
 Thomas
 
 Angus Salkeld asalk...@redhat.com wrote on 12.11.2013 02:15:15:
  From: Angus Salkeld asalk...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 12.11.2013 02:18
  Subject: [openstack-dev] [Mistral] really simple workflow for Heat
  configuration tasks
 
  Hi all
 
  I think some of you were at the Software Config session at summit,
  but I'll link the ideas that were discussed:
 
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
 
  To me the basics of it are:
  1. we need an entity/resource to place the configuration (in Heat)
  2. we need a resource to install the configuration
  (basically a task in Mistral)
 
 
  A big issue to me is the conflict between heat's taskflow and the new
  external one. What I mean by conflict is that it will become tricky
  to manage two parallel taskflow instances in one stack.
 
  This could be solved by:
  1: totally using mistral (only use mistral workflow)
  2: use a very simple model of just asking mistral to run tasks (no
  workflow) this allows us to use heat's workflow
  but mistral's task runner.
 
  Given that mistral has no real implementation yet, 2 would seem
  reasonable to me. (I think Heat developers are open to 1 when
  Mistral is more mature.)
 
  How could we use Mistral for config installation?
  -
  1. We have a resource type in Heat that creates tasks in a Mistral
  workflow (manual workflow).
  2. Heat pre-configures the server to have a Mistral worker
  installed.
  3. the Mistral worker pulls tasks from the workflow and passes them
  to an agent that can run it. (the normal security issues jump up
  here - giving access to the taskflow from a guest).
 
  To do this we need an api that can add tasks to a workflow
dynamically.
  like this:
  - create a simple workflow
  - create and run task A [run on server X]
  - create and run task B [run on server Y]
  - create and run task C [run on server X]
 
  (note: the task is run and completes before the next is added if there
  is a dependency; if tasks can be run in parallel then we add multiple
  tasks)
 
  The api could be something like:
  CRUD mistral/workflows/
  CRUD mistral/workflows/wf/tasks
 
 
  One thing that I am not sure of is how a server (worker) would know if
  a task was for it or not.
  - perhaps we have a 

Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2013-11-13 00:28:59 -0800:
 Angus Salkeld asalk...@redhat.com wrote on 13.11.2013 00:22:44:
  From: Angus Salkeld asalk...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 13.11.2013 00:25
  Subject: Re: [openstack-dev] [Heat] HOT software configuration
  refined after design summit discussions
 
  On 12/11/13 10:32 -0800, Clint Byrum wrote:
  Excerpts from Thomas Spatzier's message of 2013-11-11 08:57:58 -0800:
  
   Hi all,
  
   I have just posted the following wiki page to reflect a refined
 proposal
   for HOT software configuration based on discussions at the design
 summit
   last week. Angus also put a sample up in an etherpad last week, but we
 did
   not have enough time to go thru it in the design session. My write-up
 is
   based on Angus' sample, actually a refinement, and on discussions we
 had in
   breaks, plus it is trying to reflect all the good input from ML
 discussions
   and Steve Baker's initial proposal.
  
  
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
  
   Please review and provide feedback.
  
  Hi Thomas, thanks for spelling this out clearly.
  
  I am still -1 on anything that specifies the place a configuration is
  hosted inside the configuration definition itself. Because
 configurations
  are encapsulated by servers, it makes more sense to me that the servers
  (or server groups) would specify their configurations. If changing to a
  more logical model is just too hard for TOSCA to adapt to, then I
 suggest
  this be an area that TOSCA differs from Heat. We don't need two models
  for communicating configurations to servers, and I'd prefer Heat stay
  focused on making HOT template authors' and users' lives better.
  
  I have seen an alternative approach which separates a configuration
  definition from a configuration deployer. This at least makes it clear
  that the configuration is a part of a server. In pseudo-HOT:
  
  resources:
WebConfig:
  type: OS::Heat::ChefCookbook
  properties:
cookbook_url: https://some.test/foo
parameters:
  endpoint_host:
type: string
WebServer:
  type: OS::Nova::Server
  properties:
image: webserver
flavor: 100
DeployWebConfig:
  type: OS::Heat::ConfigDeployer
  properties:
configuration: {get_resource: WebConfig}
on_server: {get_resource: WebServer}
parameters:
  endpoint_host: {get_attribute: [ WebServer, first_ip]}
 
 
  This is what Thomas defined, with one optimisation.
  - The webconfig is a yaml template.
 
  As you say the component is static - if so why even put it inline in
  the template (well that was my thinking, it seems like a template not
  really a resource).
 
 Yes, exactly. Our idea was to put it in its own file since it is really
 static and having it in its own file makes it much more reusable.
 With 'WebConfig' defined inline in the template as in the snippet above,
 you will have to update many template files where you use the component,
 whereas you will only have to touch one place when it is in its own file.
 Ok, the example above looks simple, but in reality we will see more complex
 sets of parameters etc.
 Maybe for very simple use cases, we can allow a shortcut of inlining it in
 the template (I mentioned this in the wiki) and avoid the need for a
 separate file.
 

I think I understand now, and we're all basically on the same page. As
usual, I was confused by the subtleties.

I think the in-line capability is critical to have in the near-term
plan, but would +2 an implementation that left it out at the beginning.

Before we ratify this and people run off and write code, I'd like to
present my problems in TripleO and try to see if I can express them
using the spec you've laid out. Will try and do that in the next couple
of days.



[openstack-dev] Climate IRC meeting exceptionally today at 1000 UTC

2013-11-13 Thread Sylvain Bauza

Hi,

As previously agreed, the regular weekly IRC meeting will exceptionally take
place today at 1000 UTC.

We'll switch back to Mondays 1000 UTC starting next week.

Save the date, don't forget it.

Thanks,
-Sylvain



Re: [openstack-dev] [Horizon] Introduction of AngularJS in membership workflow

2013-11-13 Thread Jiri Tomasek

Hi,

I'd like to point out that our main intent should be to mostly use
AngularJS's Directives feature.
As Jordan mentions, it is a self-contained reusable item that is initialized
on the HTML element (see line 6 in [2]); you can pass it variables that the
Django template has available. Then Angular takes over and replaces the HTML
element with the template that belongs to the directive. The business logic
is taken care of by a controller that is also assigned to the directive. The
directive can get its data either from the variables passed to the HTML
element or, better, through a service injected into the controller.

This service brings data asynchronously from our API.

In our patch we are getting data using the current membership code, which
brings data from a hidden form. Maintaining the synchronization between the
directive and the form involves quite a lot of code. Once we have an API on
the Django side that serves the data for the membership component in JSON,
the membership directive code would shrink by a good amount.


Reading back on yesterday's Horizon meeting, there was some confusion about
the compile phase. The compile phase in Angular does not have much to do with
JavaScript compilation/minification. It is a phase in AngularJS when the
compiler parses the template and instantiates directives and expressions.
(http://www.benlesh.com/2013/08/angular-compile-how-it-works-how-to-use.html)


Jirka

On 11/11/2013 08:21 PM, Jordan OMara wrote:

Hello Horizon!

On November 11th, we submitted a patch to introduce AngularJS into
Horizon [1]. We believe AngularJS adds a lot of value to Horizon.

First, AngularJS allows us to write HTML templates for interactive
elements instead of doing jQuery-based DOM manipulation. This allows
the JavaScript layer to focus on business logic, provides easy to
write JavaScript testing that focuses on the concern (e.g. business
logic, template, DOM manipulation), and eases the on-boarding for new
developers working with the JavaScript libraries.
Second, AngularJS is not an all or nothing solution and integrates
with the existing Django templates. For each feature that requires
JavaScript, we can write a self-contained directive to handle the DOM,
a template to define our view and a controller to contain the business
logic. Then, we can add this directive to the existing template. To
see an example in action look at _workflow_step_update_member.html
[2]. It can also be done incrementally - this isn't an all-or-nothing
approach with a massive front-end time investment, as the Angular
components can be introduced over time.

Finally, the initial work to bring AngularJS to Horizon provides a
springboard to remove the DOM Database (i.e. hidden-divs) used on
the membership page (and others). Instead of abusing the DOM, we can
instead expose an API for membership data and add an AngularJS resource
(i.e. a reusable representation of API entities) for that API. The data
can then be loaded asynchronously, allowing the HTML to focus on
expressing a semantic representation of the data to the user.
Please give our patch a try! You can find the interactions on
Domains/Groups, Flavors/Access (this form does not seem to work in
current master or on my patch) and Projects/Users & Groups. You should
notice that it behaves... exactly the same!
We look forward to your feedback. Jordan O'Mara & Jirka Tomasek

[1] [https://review.openstack.org/#/c/55901/] [2] 
[https://github.com/jsomara/horizon/blob/angular2/horizon/templates/horizon/common/_workflow_step_update_members.html]





Re: [openstack-dev] [Climate] Climate IRC meeting exceptionally today at 1000 UTC

2013-11-13 Thread Sylvain Bauza

Thanks all for your presence.
Please find the logs of the meeting here:
http://eavesdrop.openstack.org/meetings/climate/2013/climate.2013-11-13-09.59.html


Next meeting will be held Monday 18th Nov 1000 UTC.

Cheers,
-Sylvain

On 13/11/2013 10:11, Sylvain Bauza wrote:

Hi,

As previously agreed, the regular weekly IRC meeting will exceptionally take
place today at 1000 UTC.

We'll switch back to Mondays 1000 UTC starting next week.

Save the date, don't forget it.

Thanks,
-Sylvain



[openstack-dev] [Climate] Climate meeting minutes

2013-11-13 Thread Nikolay Starodubtsev
Hi all,
You can see Climate meeting minutes here
http://eavesdrop.openstack.org/meetings/climate/2013/climate.2013-11-13-09.59.html

Nikolay Starodubtsev

Software Engineer

Mirantis Inc.

Skype: dark_harlequine1


[openstack-dev] [Ceilometer][Horizon] The future of pagination

2013-11-13 Thread Julien Danjou
Hi,

We've been discussing and working for a while on support for pagination
on our API v2 in Ceilometer. A large amount of work has already been
done, but it is now stalled because we are not sure about the
consensus.

There are mainly two approaches to pagination as far as I know, one
being limit/offset based and the other one being marker based. As of
today, we have no clue which one we should pick, in case we actually
have a technical choice between the two.

I've added the Horizon tag in the subject because I think it may concern
Horizon, since it will someday be one of the main consumers of the
Ceilometer API.

I'd also be happy to learn what other projects do in this regard, and
what has been said and discussed during the summit.

To a certain extent, we in Ceilometer would also be happy to find common
technical ground on this, so _maybe_ we can generalise it into WSME
itself for consumption by other projects.

Cheers,
-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info




Re: [openstack-dev] [Ceilometer][qa]Tempest tests for Ceilometer

2013-11-13 Thread Julien Danjou
On Tue, Nov 12 2013, Nadya Privalova wrote:

Hi Nadya,

 Here is a list of ceilometer-related change requests in tempest (just a reminder):

1. https://review.openstack.org/#/c/39237/
2. https://review.openstack.org/#/c/55276/

39237 seems to be on a good road to go in at this stage.

 And even more, but they were abandoned due to reviewers' inactivity (take a
 look at the whiteboard):
 https://blueprints.launchpad.net/tempest/+spec/add-basic-ceilometer-tests .
 Are there any reasons why the change requests were not reviewed?

Many patches expired because it was impossible for them to get a +1 from
Jenkins, due to a bug in devstack. Now that it has been fixed, it's
possible for a patch such as 39237 to pass.

 I guess the first step to be done is test plan. I've created a doc
 https://etherpad.openstack.org/p/ceilometer-test-plan and plan to start
 working on it. If you have any thoughts about the plan - you are welcome!

I'd suggest using the blueprint whiteboard at:

  https://blueprints.launchpad.net/tempest/+spec/add-basic-ceilometer-tests

I'm afraid that otherwise this Etherpad will be lost. :-(

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info




Re: [openstack-dev] [heat][mistral] EventScheduler vs Mistral scheduling

2013-11-13 Thread Renat Akhmerov

On Nov 13, 2013, at 6:39, Angus Salkeld asalk...@redhat.com wrote:

 On 12/11/13 15:13 -0800, Christopher Armstrong wrote:
 Given the recent discussion of scheduled autoscaling at the summit session
 on autoscaling, I looked into the state of scheduling-as-a-service in and
 around OpenStack. I found two relevant wiki pages:
 
 https://wiki.openstack.org/wiki/EventScheduler
 
 https://wiki.openstack.org/wiki/Mistral/Cloud_Cron_details
 
 The first one proposes and describes in some detail a new service and API
 strictly for scheduling the invocation of webhooks.
 
 The second one describes a part of Mistral (in less detail) to basically do
 the same, except executing taskflows directly.
 
 Here's the first question: should scalable cloud scheduling exist strictly
 as a feature of Mistral, or should it be a separate API that only does
 event scheduling? Mistral could potentially make use of the event
 scheduling API (or just rely on users using that API directly to get it to
 execute their task flows).
 

Good point. We have changed our opinion on that several times by now. We need
to have a closer look at this API in order to understand what the best
distribution of responsibilities would be here. But basically yes, Mistral
might not contain that functionality if this API provides value when used
somewhere else.


 Second question: if the proposed EventScheduler becomes a real project,
 which OpenStack Program should it live under?
 
 Third question: Is anyone actively working on this stuff? :)
 

Yes, we started actively working on this. And you’re very welcome to join :)

https://etherpad.openstack.org/p/TaskServiceDesign
https://etherpad.openstack.org/p/TaskFlowAndMistral
https://etherpad.openstack.org/p/MistralQuestionsBeforeImplementation
https://etherpad.openstack.org/p/MistralRoadmap
https://etherpad.openstack.org/p/MistralAPISpecification
https://etherpad.openstack.org/p/MistralDSLSpecification

And we have a meeting at #openstack-meeting on Mondays at 16.00 UTC.

 Your work mates ;) https://github.com/rackerlabs/qonos
 
 How about merging qonos into mistral, or at least putting it into stackforge?

Worth considering, we need to think it over.

 -Angus
 
 
 -- 
 IRC: radix
 Christopher Armstrong
 Rackspace
 


[openstack-dev] [RFC] Straw man to start the incubation / graduation requirements discussion

2013-11-13 Thread Sean Dague
(Apologies, this started on the TC list, and really should have started
on -dev, correctly posting here now for open discussion)

There were a few chats at summit about this, mostly on the infra /
devstack / qa side of the house. Consider the following a straw man to
explain the current state of the world, and what I'd like to see change here
I call out projects by name here, not to
make fun of them, but that I think concrete examples bring the point
home much more than abstract arguments (at least for me).

This is looking at raising the bar quite a bit along the way. However,
as someone that spends a lot of time trying to keep the whole ball of
wax holding together, and is spending a lot of time retroactively trying
to get projects into our integrated gate (and huge pain to everyone, as
their gate commits get bounced by racey projects), I think we really
need to up this bar if we want a 20 integrated project version of
OpenStack to hold together.


=
 Proposed new Incubation and Graduation Requirements
=


The Current State of the World
==
The integrated gate in OpenStack is a set of devstack / tempest tests
which run symmetrically between projects. I.e. we run all the tempest
tests (nova, keystone, swift, etc.), on a change to cinder. That means
that nova can ensure that a cinder change that would break them will
be prevented from landing. An example being the
gate-tempest-devstack-vm-full job.

OpenStack is currently 9 integrated projects in Havana, 10 once we get
to Icehouse (+Trove).

Heat added tests to our integrated gate during the Havana cycle,
though the deeper guest testing is not part of our gate. Ceilometer
has been integrated for 2 releases, and doesn't have integrated gate
testing. Trove is just now exploring how they'd get into the gate for
Icehouse release.

Upgrade testing situation is worse. Only 5 projects currently are part
of upgrade testing: Nova, Cinder, Swift, Glance, Keystone. The rest
are not set up in that model at all, so we have no infrastructure to
ensure the other projects don't break upgrades of themselves.

Heat, Trove, and Savanna represent an interesting challenge for
current gate models in that real validation requires something beyond
a trivial guest, as they care about the contents inside compute
resources. This desire is likely to grow with other Layer 4 projects
coming into the OpenStack ecosystem [1].

Ironic as an incubated project provides yet another set of
challenges, as not only do we not have a gating approach, but we also
really should have an approach to test an upgrade from nova-baremetal
=> ironic to ensure there is a migration path for people in the
future.

Ceilometer once relied on a version of MongoDB that worked in the
gate, but they were never gating, so now they rely on a version of
MongoDB that doesn't work in the gate. They made a technical decision
to upgrade requirements with no gate feedback because they weren't
actually integration testing on a regular basis.



Proposed Incubation requirements

Once something becomes an integrated project, it's important that they
are able to run in the gate.

Both devstack and devstack-gate now support hooks, so with a couple of
days of work any project in stackforge could build a gate job which
sets up a devstack of their configuration, including their code,
running some project specific test they feel is appropriate to ensure
they could run in the gate environment.

This would ensure an incubated project works with OpenStack global
requirements, or if it requires something new, that's known very
clearly before incubation.


Proposed Graduation requirements

All integrated projects should be in the integrated gate, as this is
the only way we provably know that they can all work together, at the
same level of requirements, in a consistent way.

During incubation, landing appropriate tests in Tempest is fair
game. So the expectation would be that once a project is incubated,
it would be able to land tests in tempest. Before integration we'd
need to ensure the project had tests which could take part in the
integrated gate, so that as soon as a project is voted integrated, it has
some working integrated gate tests. (Note: there is actually a
symmetric complexity here, to be worked out later.)


Proposed Stable Release requirements

We have this automatic transition that happens when a project that's
integrated for a release actually releases as part of
it, e.g. Trove and Icehouse. There is no additional TC decision
about whether or not Trove is part of the stable release; once
integrated, it just is. Nothing that it does over that cycle will kick
it out of the stable release. This is one of the reasons it needs to
be in the integrated gate **before** graduation.
Additionally, 

Re: [openstack-dev] [RFC] Straw man to start the incubation / graduation requirements discussion

2013-11-13 Thread Thierry Carrez

Sean Dague wrote:
 [...] Proposed Incubation requirements 
  Once something becomes an
 integrated project, it's important that they are able to run in the
 gate.
 
 Both devstack and devstack-gate now support hooks, so with a couple
 of days of work any project in stackforge could build a gate job
 which sets up a devstack of their configuration, including their
 code, running some project specific test they feel is appropriate
 to ensure they could run in the gate environment.
 
 This would ensure an incubated project works with OpenStack global 
 requirements, or if it requires something new, that's known very 
 clearly before incubation.

That makes sense, my only concern with it is, how much support from
QA/Infra would actually be needed *before* incubation can even be
requested. One of the ideas behind the incubation status is to allow
incubated projects to tap into common resources (QA, infra, release
management...) as they cover the necessary ground before being fully
integrated. Your proposal sounds like they would also need some
support even before being incubated.

Also, does it place a requirement that all projects wanting to request
incubation be placed in stackforge? That sounds like a harsh
requirement if we were to reject them.

(sidenote: I'm planning to suggest we create an emerging technology
label for projects that (1) are in stackforge, and (2) applied for
incubation but got rejected purely for community maturity reasons.
Projects under this label would potentially get some limited space at
summits to gain more visibility. Designate belongs to that category,
but without a clear label it seems to fall into the vast bucket of
openstack-related projects and is not gaining more traction. Not sure we
can leverage it to solve the issue here though).

 Proposed Graduation requirements  
 All integrated projects should be in the integrated gate, as this
 is the only way we provably know that they can all work together,
 at the same level of requirements, in a consistent way.
 
 During incubation landing appropriate tests in Tempest is fair 
 game. So the expectation would be that once a project is incubated 
 they would be able to land tests in tempest. Before integrated
 we'd need to ensure the project had tests which could take part in
 the integrated gate, so as soon as a project is voted integrated,
 it has some working integrated gate tests. (Note: there is actually
 a symmetric complexity here, to be worked out later).

+1 -- I think we already made that decision for any future graduation.

 Proposed Stable Release requirements 
  We have this automatic
 transition that happens when a project that's integrated for a
 release, actually releases as part of that. I.e. Trove and
 Icehouse. There is no additional TC decision about whether or not
 Trove is part of the stable release, once integrated, it just is.
 Nothing that it does over that cycle will kick it out of the stable
 release. This is one of the reasons it needs to be in the
 integrated gate **before** graduation.
 
 Additionally, upgrade path is critically important to our users,
 and the number one piece of feedback we received from the User
 Survey. It was also important enough to our developers that it was
 scattered all over the Icehouse Design Summit. All integrated
 projects should be included in upgrade testing the moment they are
 in a stable release. (ex: when Icehouse is released, Trove should
 be in master grenade, and upgrade testing from Icehouse - master
 for the J cycle from day one).

I agree with you, but I don't see how we can enforce this one. Like
you say, integrated projects get commonly released and get a stable
branch in all cases. We can strongly encourage them to get their
grenade act together before the final release, but there is nothing we
can do (short of kicking them out of the integrated release
altogether) to ensure it happens.

 [...] Raised Questions  - what about existing
 incubated projects, what would be their time frame to get with this
 new program - what about existing integrated projects that
 currently don't exist with either an upgrade or gate story? - what
 about an upgrade deprecation path (i.e. nova-network => neutron,
 nova-baremetal => ironic)

The transition for existing incubated/integrated projects is an
interesting question. I think it's fine to require that
currently-incubated projects get into the integrated gate before they
can graduate. For currently-integrated projects that are not up to
snuff, I think we should strongly suggest that they fix it before the
icehouse release, otherwise the next TC might be driven to make
unpleasant decisions.

-- 
Thierry Carrez (ttx)


Re: [openstack-dev] [PTL] Proposed Icehouse release schedule

2013-11-13 Thread Russell Bryant
On 11/13/2013 08:15 AM, Thierry Carrez wrote:
 Two options are possible for that off week:
 
 * Week of April 21 - this one is just after release, and some people
 still have a lot to do during that week. On the plus side it's
 conveniently placed next to the Easter weekend.
 * Week of April 28 - that's the middle week, which sounds a bit weird...
 but for me that would be the less active week, so I have a slight
 preference for it.
 
 What would be your preference, if any ? I'm especially interested in
 opinions from people who have a hard time taking some time off (PTLs,
 infra and release management people).

I think my preference is the second week.  Easter makes the first week
tempting, but as you point out, realistically there is still going to be
some amount of looking out for and potentially dealing with release
aftermath.

The second week everyone really should be able to relax.

-- 
Russell Bryant



Re: [openstack-dev] [PTL] Proposed Icehouse release schedule

2013-11-13 Thread Flavio Percoco

On 13/11/13 14:15 +0100, Thierry Carrez wrote:

Hi PTLs and everyone else,

Last week we had the traditional release schedule session at the
Design Summit where we discussed options for the Icehouse schedule.

Here is the proposal we selected:
https://wiki.openstack.org/wiki/Icehouse_Release_Schedule

Please let me know if something is obviously wrong with it. I'd like to
officialize it at the project/release status meeting next week, so that
we can quickly start moving towards icehouse-1 (which will happen fast).

One particularity in this schedule is that we have three full weeks
between release and design summit, which is a long time (we usually only
have 2). We discussed declaring one of those weeks an off week, where
everyone is encouraged to take a well-deserved vacation. Activity would
be reduced during this week, and nobody should expect anyone to read
their email. For those of us who find it difficult to let go and stop
constantly checking dev activity (that includes me), pre-designating a
specific off week will really help us to take a break.

Two options are possible for that off week:

* Week of April 21 - this one is just after release, and some people
still have a lot to do during that week. On the plus side it's
conveniently placed next to the Easter weekend.
* Week of April 28 - that's the middle week, which sounds a bit weird...
but for me that would be the less active week, so I have a slight
preference for it.


I personally prefer the second one. Taking some time off right after
the release will raise Murphy's attention.



What would be your preference, if any ? I'm especially interested in
opinions from people who have a hard time taking some time off (PTLs,
infra and release management people).


And me! ;)

Cheers,
FF

--
@flaper87
Flavio Percoco



[openstack-dev] [Ceilometer] Alembic or SA Migrate (again)

2013-11-13 Thread Herndon, John Luke
Hi Folks!

Sorry to dig up a really old topic [1][2], but I'd like to know the status
of ceilometer db migrations.

Rehash: I'd like to submit two branches to modify the Event and Trait
tables. If I were to do that now, I would need to write SQLAlchemy scripts
to do the database migration [3]. Since the unit tests use db migrations to
build up the db schema, there's currently no way to get the unit tests to
run if your new code uses an alembic migration and needs to alter columns.

A couple of questions:
1) What is the progress of creating the schema from the models for unit
tests?
2) What is the time frame for requiring alembic migrations?
3) Should I push these branches up now, or wait and use an alembic
migration?
4) Is there anything I can do to help with 1 or 2?


Thanks,
-john

1: 
http://lists.openstack.org/pipermail/openstack-dev/2013-August/014214.html
2: 
http://lists.openstack.org/pipermail/openstack-dev/2013-September/014593.ht
ml
3: 
https://bitbucket.org/zzzeek/alembic/issue/21/column-renames-not-supported-
on-sqlite




[openstack-dev] [neutron] [ipv6] IPv6 meeting - Thursdays 21:00UTC - #openstack-meeting-alt

2013-11-13 Thread Collins, Sean (Contractor)
I haven't heard any negative response to the proposed time,
so I'd like to put a stake in the ground and utilize that time slot.

We will have our first meeting on Nov 21st.

-- 
Sean M. Collins


Re: [openstack-dev] [Horizon] Introduction of AngularJS in membership workflow

2013-11-13 Thread Jordan OMara

On 13/11/13 11:47 +0100, Jiri Tomasek wrote:

On 11/13/2013 11:20 AM, Maxime Vidori wrote:

Hi all,

I was wondering how we can continue to maintain a no-js version of Horizon with
the integration of Angular; it seems to be a lot of work on top of it.


I would favor not having to maintain the non-js functionality, as IMHO
most current modern UIs depend on JavaScript, and the command line
interface should take over where JavaScript is not available.
Though, if we want to maintain non-js functionality, directives are not
a blocker. The directive's HTML element can include the non-js code, which
is replaced by the directive template when JS and Angular kick in.

If not, the original content of the directive's element is available.

Maintaining non-js functionality becomes problematic when we need to
serve multiple types of responses in the controller - correct me if I am
wrong, please.




I agree that a command line utility seems like the most sensible
non-js implementation of Horizon features. Additionally, we can
write javascript with AngularJS that is friendly to various
accessibility needs, like screen readers. I mentioned this in the chat
last night and promised some examples. Here's an excellent walkthrough
of using ARIA tags with javascript:

http://stackoverflow.com/questions/15318661/accessibility-in-single-page-applications-aria-etc 


And a little more:

http://webaim.org/techniques/javascript/eventhandlers
http://stackoverflow.com/questions/18853183/what-are-the-accessibility-implications-of-using-a-framework-like-angularjs

Basically, if you can make an HTML page friendly to screen readers,
you can make a javascript-built app friendly to screen readers.


In addition, do we know the performance of AngularJS and where the limits are?
It could be good to check some documentation and make some POCs. I have tried
the asynchronous API and I encountered some issues with the two-way data binding.
Do people have any feedback?


I didn't hit any performance issues while using Angular; could you
elaborate on the issues you had? I will try to search for some
performance-related topics.




In my experience, the performance has always been excellent, but
there could certainly be use cases where it's not.

Thanks!
--
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh 




[openstack-dev] Problems with devstack installation

2013-11-13 Thread Telles Nobrega
Hi, I'm trying to set up a dev environment, but I'm getting this error on
nova: http://paste.openstack.org/show/52392/ - can anyone give me a hint on
how to work this out?

thanks

-- 
--
Telles Mota Vidal Nobrega
Developer at PulsarOpenStack Project - HP/LSD-UFCG


Re: [openstack-dev] [Savanna] Definition of template

2013-11-13 Thread Alexander Ignatov
Hi, Andrew

Agreed with your opinion. Initially, Savanna's templates approach is the
option 1 you are talking about.
This was designed at the start of the Savanna 0.2 release cycle. It was also
documented here: https://wiki.openstack.org/wiki/Savanna/Templates .
Maybe some points are outdated, but the idea is the same as option 1: a user
can create a cluster template and doesn't need to specify all fields, for
example the 'node_groups' field. And these fields, both required and optional,
can be overwritten in the cluster object even if it contains 'cluster_template_id'.
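
Just to illustrate option 1 (the field names below are only a rough sketch,
not the exact API schema), a cluster template can leave fields out and the
cluster object can fill them in or overwrite them:

  cluster_template:
    name: hadoop-base
    plugin_name: vanilla
    hadoop_version: 1.2.1
    node_groups: []                  # may stay empty in the template

  cluster:
    name: my-cluster
    cluster_template_id: <template id>
    node_groups:                     # provided or overwritten at cluster creation
      - name: worker
        count: 3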

I see you raised this question because of patch
https://review.openstack.org/#/c/56060/. I think it's just a bug at the
validation level, not in the API.

I also agree that we should change the UI part accordingly, at least adding
the ability for users to override fields set in cluster and node group
templates during cluster creation.

Regards,
Alexander Ignatov



On 12 Nov 2013, at 23:20, Andrey Lazarev alaza...@mirantis.com wrote:

 Hi all,
 
 I want to raise the question of what a template is. The answer to this question
 could influence UI, validation and user experience significantly. I see two
 possible answers:
 1. A template is a simplification for object creation. It allows keeping common
 params in one place instead of specifying them each time.
 2. A template is a full description of an object. A user should be able to create
 an object from a template without specifying any params.
 
 As I see it, the current approach is option 1, but the UI is done mostly for
 option 2. This leads to situations where a user creates an incomplete template
 (the backend allows it because of option 1), but can't use it later (the UI
 doesn't allow working with incomplete templates).
 
 Let's define a common vision on how we will treat templates and document this
 somehow.
 
 My opinion is that we should proceed with option 1 and change the UI
 accordingly.
 
 Thanks,
 Andrew


Re: [openstack-dev] Problems with devstack installation

2013-11-13 Thread Gary Kotton
Please delete the file - /usr/bin/nova-rootwrap (this code was updated to use 
the openstack common root wrap code).
I also hit the same issue yesterday
Thanks
Gary

From: Sullivan, Jon Paul jonpaul.sulli...@hp.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, November 13, 2013 5:02 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Problems with devstack installation


From: Telles Nobrega [mailto:tellesnobr...@gmail.com]


Hi, I'm trying to set up a dev environment, but I'm getting this error on nova:
http://paste.openstack.org/show/52392/ - can anyone give me a hint on how to
work this out?

I just saw the same problem.  I found that in my case the 2012.1 (Essex!) 
packages were installed from the Ubuntu apt repositories, which contained an 
outdated nova-rootwrap in /usr/bin/ that was using the wrong import statement.  
The nova-rootwrap built by devstack was in /usr/local/bin/ and so I copied that 
in place of the incorrect one in /usr/bin/.

Yes, this is a massive hack, but it did work for me for a similar error in the 
scheduler.

thanks

--
--
Telles Mota Vidal Nobrega
Developer at PulsarOpenStack Project - HP/LSD-UFCG

Thanks,
Jon-Paul Sullivan :) Cloud Services - @hpcloud



Re: [openstack-dev] [neutron] [ipv6] IPv6 meeting - Thursdays 21:00UTC - #openstack-meeting-alt

2013-11-13 Thread Shixiong Shang
Hi, Sean:

Thanks a bunch for finalizing the time! Sorry for my ignorance….how do we 
usually run the meeting? On Webex or IRC channel? 

Look forward to it!

Shixiong


On Nov 13, 2013, at 9:32 AM, Collins, Sean (Contractor) 
sean_colli...@cable.comcast.com wrote:

 I haven't heard any negative response to the proposed time,
 so I'd like to put a stake in the ground and utilize that time slot.
 
 We will have our first meeting on Nov 21st.
 
 -- 
 Sean M. Collins


Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

2013-11-13 Thread Koderer, Marc
Hi,

see below.

Regards
Marc

From: Kenichi Oomichi [oomi...@mxs.nes.nec.co.jp]
Sent: Wednesday, November 13, 2013 7:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

 -Original Message-
 From: David Kranz [mailto:dkr...@redhat.com]
 Sent: Wednesday, November 13, 2013 4:33 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

 So that is the starting point. Comments and suggestions welcome! Marc
 and I just started working on an etherpad
 https://etherpad.openstack.org/p/bp_negative_tests but any one is
 welcome to contribute there.

Negative tests based on yaml would be nice because they clean the code up
and make the tests more readable.
Just one question:
 On the etherpad, there are some invalid_uuids.
 Does that mean an invalid string (ex. a utf-8 string, not ascii)?
 Or an invalid uuid format (ex. uuid.uuid4() + foo)?

Great that you already had a look!
So my idea is that we have a battery of functions which can create erroneous 
input.
My intention for invalid_uuid was just something like uuid.uuid4() - but the 
name is a bit misleading.
We can use additional functions that create the input that you are suggesting. 
I think all of them make sense.
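
Just to sketch what such a yaml-described negative test could look like (the
field names below are completely made up, not a proposal for the final format):

  - name: get-server-with-invalid-uuid
    service: compute
    method: GET
    url: /servers/%(invalid_uuid)s
    generators:
      invalid_uuid: uuid           # e.g. produced via uuid.uuid4(), unknown to the server
    expected_result: 404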

 IIUC, in the negative test session we discussed that tests passing a utf-8
 string as an API parameter should be negative tests, and the server should
 return a BadRequest response.
 I guess we need to implement such API negative tests. After that, if we find
 unfavorable behavior in some server, we need to implement API validation
 for the server.
 (Example of unfavorable behavior: when a client sends a utf-8 request, the
 server returns a NotFound response, not a BadRequest one.)

+1


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread John Griffith
On Wed, Nov 13, 2013 at 7:21 AM, Andrew Laski
andrew.la...@rackspace.com wrote:
 On 11/13/13 at 05:48am, Gary Kotton wrote:

 I recall a few cycles ago having str(uuid.uuid4()) replaced by
 generate_uuid(). There was actually a helper function in neutron (back when
 it was called quantum) and it was replaced. So now we are going back…
 I am not in favor of this change.


 I'm also not really in favor of it.  Though it is a trivial method, having it
 in oslo implies that this is what uuids should look like across OpenStack
 projects.  And I'm in favor of consistency for uuids across the projects
 because the same parsers and checkers can then be used for input validation
 or log parsing.


 From: Zhongyue Luo zhongyue@intel.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org

 Date: Wednesday, November 13, 2013 8:07 AM
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org

 Subject: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

 Hi all,

 We had a discussion of the modules that are incubated in Oslo.


  https://etherpad.openstack.org/p/icehouse-oslo-status


 One of the conclusions we came to was to deprecate/remove uuidutils in
 this cycle.

 The first step into this change should be to remove generate_uuid() from
 uuidutils.

 The reason is that 1) generating the UUID string seems trivial enough to
 not need a function and 2) string representation of uuid4 is not what we
 want in all projects.

  To address this, a patch is now on gerrit:
  https://review.openstack.org/#/c/56152/


 Each project should directly use the standard uuid module or implement its
 own helper function to generate uuids if this patch gets in.

 Any thoughts on this change? Thanks.

 --
 Intel SSG/STO/DCST/CIT
 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
 China
 +862161166500


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Trivial or not, people use it and frankly I don't see any value at all
in removing it.  As far as "some projects want a different format of UUID"
goes, that doesn't make a lot of sense to me, but if that's what somebody
wants they should write their own method.  I strongly agree with others
with respect to the comments around code-churn.  I see little value in this.
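
For reference, the helper being debated amounts to roughly the following
(going from memory of the oslo-incubator code, so treat it as a sketch rather
than the verbatim source):

  import uuid


  def generate_uuid():
      # The whole helper: the canonical dashed string form of a random UUID.
      return str(uuid.uuid4())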

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Straw man to start the incubation / graduation requirements discussion

2013-11-13 Thread Doug Hellmann
On Wed, Nov 13, 2013 at 7:49 AM, Thierry Carrez thie...@openstack.org wrote:


 Sean Dague wrote:
  [...] Proposed Incubation requirements
   Once something becomes an
  integrated project, it's important that they are able to run in the
  gate.
 
  Both devstack and devstack-gate now support hooks, so with a couple
  of days of work any project in stackforge could build a gate job
  which sets up a devstack of their configuration, including their
  code, running some project specific test they feel is appropriate
  to ensure they could run in the gate environment.
 
  This would ensure an incubated project works with OpenStack global
  requirements, or if it requires something new, that's known very
  clearly before incubation.

 That makes sense, my only concern with it is, how much support from
 QA/Infra would actually be needed *before* incubation can even be
 requested. One of the ideas behind the incubation status is to allow
 incubated projects to tap into common resources (QA, infra, release
 management...) as they cover the necessary ground before being fully
 integrated. Your proposal sounds like they would also need some
 support even before being incubated.


This was my main concern for making this an incubation requirement, too.
It's not that I think the new project will have a lot of trouble with it,
but any questions they do have will need to be answered by a team that is
already operating pretty close to capacity. If you think the teams in
question can take on the extra potential load, then I like the idea.



  Also does it place a requirement that all projects wanting to request
  incubation be placed in stackforge? That sounds like a harsh
 requirement if we were to reject them.

 (sidenote: I'm planning to suggest we create an emerging technology
 label for projects that are (1) in stackforge, (2) applied for
 incubation but got rejected purely for community maturity reasons.
 Projects under this label would potentially get some limited space at
 summits to gain more visibility. Designate belongs to that category,
 but without a clear label it seems to fall in the vast bucket of
 openstack-related projects and not gaining more traction. Not sure we
 can leverage it to solve the issue here though).

  Proposed Graduation requirements 
  All integrated projects should be in the integrated gate, as this
  is the only way we provably know that they can all work together,
  at the same level of requirements, in a consistent way.
 
  During incubation landing appropriate tests in Tempest is fair
  game. So the expectation would be that once a project is incubated
  they would be able to land tests in tempest. Before integrated
  we'd need to ensure the project had tests which could take part in
  the integrated gate, so as soon as a project is voted integrated,
  it has some working integrated gate tests. (Note: there is actually
  a symmetric complexity here, to be worked out later).

 +1 -- I think we already made that decision for any future graduation.

  Proposed Stable Release requirements
   We have this automatic
  transition that happens when a project that's integrated for a
  release, actually releases as part of that. I.e. Trove and
  Icehouse. There is no additional TC decision about whether or not
  Trove is part of the stable release, once integrated, it just is.
  Nothing that it does over that cycle will kick it out of the stable
  release. This is one of the reasons it needs to be in the
  integrated gate **before** graduation.
 
  Additionally, upgrade path is critically important to our users,
  and the number one piece of feedback we received from the User
  Survey. It was also important enough to our developers that it was
  scattered all over the Icehouse Design Summit. All integrated
  projects should be included in upgrade testing the moment they are
  in a stable release. (ex: when Icehouse is released, Trove should
  be in master grenade, and upgrade testing from Icehouse - master
  for the J cycle from day one).

 I agree with you, but I don't see how we can enforce this one. Like
 you say, integrated projects get commonly released and get a stable
 branch in all cases. We can strongly encourage them to get their
 grenade act together before the final release, but there is nothing we
 can do (short of kicking them out of the integrated release
 altogether) to ensure it happens.


Can we be more clear about documenting which projects are doing upgrade
testing, so users of projects who are not won't be surprised (and can
potentially apply pressure to the developers)?



  [...] Raised Questions
  - what about existing incubated projects, what would be their time frame
    to get with this new program
  - what about existing integrated projects that currently don't exist with
    either an upgrade or gate 

Re: [openstack-dev] [Ceilometer][qa]Tempest tests for Ceilometer

2013-11-13 Thread Mehdi Abaakouk
Hi, 


On Wed, Nov 13, 2013 at 05:25:51PM +0400, Nadya Privalova wrote:
 Eoghan,
 
 I've updated the agenda. Actually, I'm ready to start working on tasks'
 coordination (division) but need some time to get acquainted with
 Ceilometer infra (gating, devstack problems). Anyway, we will discuss it on
 irc. So, Zhi Kun Liu, please join us :)

I have updated the blueprint with all pending issues/reviews around
devstack/gate/ceilometer to have ceilometer running in gate without errors in
the log files.

  https://blueprints.launchpad.net/tempest/+spec/add-basic-ceilometer-tests
 
  I'm afraid that otherwise this Etherpad will be lost. :-(
+1



Cheers, 

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Ceilometer installation problem

2013-11-13 Thread Shixiong Shang
Hi, guys:

I am trying to install Ceilometer on Ubuntu 13.10 Cloud version (64-bit) and 
encounter the following error. Would you please help?

Thanks!

Shixiong



root@net-meter2:/opt/stack/ceilometer# sudo python setup.py install

snipped

creating build/temp.linux-x86_64-2.7/src

creating build/temp.linux-x86_64-2.7/src/lxml

x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 
-Wall -Wstrict-prototypes -fPIC -I/tmp/pip_build_root/lxml/src/lxml/includes 
-I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o 
build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o

src/lxml/lxml.etree.c:8:22: fatal error: pyconfig.h: No such file or directory

 #include "pyconfig.h"

  ^

compilation terminated.

error: command 'x86_64-linux-gnu-gcc' failed with exit status 1


Cleaning up...
Command /usr/bin/python -c import 
setuptools;__file__='/tmp/pip_build_root/lxml/setup.py';exec(compile(open(__file__).read().replace('\r\n',
 '\n'), __file__, 'exec')) install --record 
/tmp/pip-AaopVj-record/install-record.txt --single-version-externally-managed 
failed with error code 1 in /tmp/pip_build_root/lxml
Traceback (most recent call last):
  File /usr/lib/python2.7/runpy.py, line 162, in _run_module_as_main
__main__, fname, loader, pkg_name)
  File /usr/lib/python2.7/runpy.py, line 72, in _run_code
exec code in run_globals
  File /usr/lib/python2.7/dist-packages/pip/__init__.py, line 233, in module
exit = main()
  File /usr/lib/python2.7/dist-packages/pip/__init__.py, line 148, in main
return command.main(args[1:], options)
  File /usr/lib/python2.7/dist-packages/pip/basecommand.py, line 169, in main
text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 42: 
ordinal not in range(128)
error: ['/usr/bin/python', '-m', 'pip.__init__', 'install', 'pbr>=0.5.21,<1.0', 
'WebOb>=1.2.3,<1.3', 'kombu>=2.4.8', 'iso8601>=0.1.8', 
'SQLAlchemy>=0.7.8,<=0.7.99', 'sqlalchemy-migrate>=0.7.2', 'alembic>=0.4.1', 
'netaddr>=0.7.6', 'pymongo>=2.4', 'eventlet>=0.13.0', 'anyjson>=0.3.3', 
'Flask>=0.10,<1.0', 'pecan>=0.2.0', 'stevedore>=0.10', 'msgpack-python', 
'python-glanceclient>=0.9.0', 'python-novaclient>=2.15.0', 
'python-keystoneclient>=0.4.1', 'python-ceilometerclient>=1.0.6', 
'python-swiftclient>=1.5', 'lxml>=2.3', 'requests>=1.1', 'six>=1.4.1', 
'WSME>=0.5b6', 'PyYAML>=3.1.0', 'oslo.config>=1.2.0', 'happybase>=0.4'] 
returned 1
root@net-meter2:/opt/stack/ceilometer#
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-11-13 Thread Anita Kuno

Neutron Tempest code sprint

In the second week of January in Montreal, Quebec, Canada there will be a 
Neutron Tempest code sprint to improve the status of Neutron tests in 
Tempest and to add new tests.
It will be a 3 day event. Right now there are 14 people who came forward 
when it was announced on the Friday at the summit. We need to know how 
many additional people are interested in attending.


This is an impromptu event based on my assessment of the need for this 
to happen, so don't feel left out if you didn't know about it in advance.


We picked Montreal for two main reasons:
1. All 4 people whose attendance is critical (markmclain, salv-orlando, 
sdague and mtrenish) can get there. It was New York or Montreal.
2. I can't think in New York, love it, can't compose a thought, so 
Montreal it is.


It turns out this location choice has some resultant effects:
1. People who wouldn't have time to get a visa to attend an event in the 
States have an easier time entering Canada.
US requires visa applications filed 2 months in advance of travel 
and we are inside that timeframe.

2. Montreal is cheaper than NYC.
3. Being Canadian it is going to be easier for me to produce this event 
in Canada since I am in Canada.
4. It will be cold. We had few choices on the timing and this event 
can't wait on good weather.


There is no location that will make everyone happy, so people will be 
disappointed by this choice and I accept that. It is my hope that this 
event is a success and we can create a schedule of some sort so that 
people who have a high possibility of attending can vote on the 
location. So that is the future vision.


I have a tentative hold on a venue and am working on getting a rate on a 
block of rooms at a hotel.


I am preparing a budget to submit to the Foundation in the hopes they 
will sponsor the event. Since this was planned with no warning, the 
Foundation has no budget for it. Mark is supportive of the event 
happening and if I can come up with some reasonable numbers, I hope that 
the money can come from the Foundation.


The event will be vendor neutral. We will talk to each other based on 
who we are and our interests, not based on who signs our paycheque. If 
folks arrive with logoed shirts (I don't know which logos are work logos 
and which aren't, so I will request no logos please) I will issue you a 
white T-shirt to wear. We need to work collaboratively to effectively 
make progress during the code sprint.


Someone at the summit chose not to wear footwear at the event. If you 
want to come to the code sprint please plan on wearing appropriate 
footwear in the public areas at the code sprint. For two reasons:

1. It will be cold.
2. The event is meant to facilitate mutual respect between us to 
increase communication, both at the event and afterwards. I feel wearing 
appropriate footwear supports this goal.


Please indicate your interest by sending an email to 
ante...@anteaya.info, subject Neutron Tempest code sprint. Don't worry 
about the body of the email, I just need addresses. We will send out 
subsequent emails to this group to gather specific details like shirt 
size, dietary requirements. If you came forward at the summit, no need 
to email again.


If you want to come, but don't feel your employer will fund the trip, 
please include that information in the email. It will depend on what we 
can do for accommodation and travel but hopefully we will have a little 
bit for a few folks. Of course please talk to your manager now to work 
on getting approval to attend, and hopefully your employer will fund 
your travel and accommodation.


Additional questions? Hit me up on irc in #openstack-neutron nick 
anteaya. I read the neutron logs: 
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/ So I will 
get back to you if I am not around when you ask.


Also rossella_s has come forward to help, thank you rossella_s!

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] design summit outcomes

2013-11-13 Thread Dolph Mathews
I guarantee there's a few things I'm forgetting, but this is my collection
of things we discussed at the summit and determined to be good things to
pursue during the icehouse timeframe. The contents represent a high level
mix of etherpad conclusions and hallway meetings.

https://gist.github.com/dolph/7366031

Corrections and amendments appreciated - thanks!

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [Ironic] scheduling flow with Ironic?

2013-11-13 Thread Alex Glikson
Hi, 

Is there any documentation somewhere on the scheduling flow with Ironic? 

The reason I am asking is because we would like to get virtualized and 
bare-metal workloads running in the same cloud (ideally with the ability 
to repurpose physical machines between bare-metal workloads and 
virtualized workloads), and would like to better understand where the gaps 
are (and potentially help bridge them). 

Thanks, 
Alex 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Problems with devstack installation

2013-11-13 Thread Telles Nobrega
Thanks Gary, it worked


On Wed, Nov 13, 2013 at 12:11 PM, Gary Kotton gkot...@vmware.com wrote:

 Please delete the file - /usr/bin/nova-rootwrap (this code was updated to
 use the openstack common root wrap code).
 I also hit the same issue yesterday
 Thanks
 Gary

 From: Sullivan, Jon Paul jonpaul.sulli...@hp.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Wednesday, November 13, 2013 5:02 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Problems with devstack installation



 *From:* Telles Nobrega 
 [mailto:tellesnobr...@gmail.com]




 Hi, I'm trying to setup a dev environment, but I'm getting this error on
 nova 
  http://paste.openstack.org/show/52392/ can
  anyone give me a hint on how to work this out?



 I just saw the same problem.  I found that in my case the 2012.1 (Essex!)
 packages were installed from the Ubuntu apt repositories, which contained
 an outdated nova-rootwrap in /usr/bin/ that was using the wrong import
 statement.  The nova-rootwrap built by devstack was in /usr/local/bin/ and
 so I copied that in place of the incorrect one in /usr/bin/.



 Yes, this is a massive hack, but it did work for me for a similar error in
 the scheduler.



 thanks



 --

 --
 Telles Mota Vidal Nobrega
 Developer at PulsarOpenStack Project - HP/LSD-UFCG



 Thanks,
 Jon-Paul Sullivan - Cloud Services - @hpcloud



 Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park,
 Galway.

 Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John
 Rogerson's Quay, Dublin 2.

 Registered Number: 361933



 The contents of this message and any attachments to it are confidential
 and may be legally privileged. If you have received this message in error
 you should delete it from your system immediately and advise the sender.



 To any recipient of this message within HP, unless otherwise stated, you
 should consider this message and attachments as HP CONFIDENTIAL.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
--
Telles Mota Vidal Nobrega
Developer at PulsarOpenStack Project - HP/LSD-UFCG
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Horizon] The future or pagination

2013-11-13 Thread John Dickinson
Swift uses marker+limit for pagination when listing containers or objects (with 
additional support for prefix, delimiters, and end markers). This is done 
because the total size of the listing may be rather large, and going to a 
correct page based on an offset gets expensive and doesn't allow for 
repeatable queries.

Pagination implies some sort of ordering, and I'm guessing (assuming+hoping) 
that your listings are based around something more meaningful than an 
incrementing id. By itself, metric number 32592 doesn't mean anything, and 
listings like "go to metric 4200 and give me the next 768 items" don't 
tell the consumer anything and probably aren't even very repeatable queries. 
Therefore, using a marker+prefix+limit style pagination system is very useful 
(eg "give me up to 1000 metrics that start with 'nova/instance_id/42/'"). Also, 
end_marker queries are very nice (half-closed ranges).

One thing I would suggest (and I hope we change in Swift whenever we update the 
API version) is that you don't promise to return the full page in a response. 
Instead, you should return a "no matches" or "end of listing" token. This 
allows you the flexibility to return responses quickly without consuming too 
many resources on the server side. Clients can then continue to iterate over 
subsequent pages as they are needed.
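
As a rough illustration of how a client consumes such a marker-based listing
(the fetch_page callable and its parameters are made up for this sketch, not a
real Swift or Ceilometer API):

  def iterate_listing(fetch_page, limit=1000):
      # fetch_page(marker, limit) is assumed to return an ordered list of item
      # names; the suggestion above is an explicit "end of listing" token, but
      # this sketch simply stops when a page comes back empty.
      marker = None
      while True:
          page = fetch_page(marker=marker, limit=limit)
          if not page:
              break
          for item in page:
              yield item
          marker = page[-1]  # resume after the last item we saw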

Something else that I'd like to see in Swift (it was almost added once) is the 
ability to reverse the order of the listings so you can iterate backwards over 
pages.

--John




On Nov 13, 2013, at 2:58 AM, Julien Danjou jul...@danjou.info wrote:

 Hi,
 
 We've been discussing and working for a while on support for pagination
 on our API v2 in Ceilometer. There's a large amount that already been
 done, but that is now stalled because we are not sure about the
 consensus.
 
 There's mainly two approaches around pagination as far as I know, one
 being using limit/offset and the other one being marker based. As of
 today, we have no clue of which one we should pick, in the case we would
 have a technical choice doable between these two.
 
 I've added the Horizon tag in the subject because I think it may concern
 Horizon, since it shall be someday in the future one of the main
 consumer of the Ceilometer API.
 
 I'd be also happy to learn what other projects do in this regard, and
 what has been said and discussed during the summit.
 
 To a certain extent, we Ceilometer would also be happy to find common
 technical ground on this to some extent so _maybe_ we can generalise
 this into WSME itself for consumption from other projects.
 
 Cheers,
 -- 
 Julien Danjou
 ;; Free Software hacker ; independent consultant
 ;; http://julien.danjou.info
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Command Line Interface for Solum

2013-11-13 Thread Doug Hellmann
On Sun, Nov 10, 2013 at 10:15 AM, Noorul Islam K M noo...@noorul.com wrote:


 Hello all,

 I registered a new blueprint [1] for command line client interface for
 Solum. We need to decide whether we should have a separate repository
 for this or go with new unified CLI framework [2]. Since Solum is not
 part of OpenStack I think it is not the right time to go with the
 unified CLI.


One of the key features of the cliff framework used for the unified command
line app is that the subcommands can be installed independently of the main
program. So you can write plugins that work with the openstack client, but
put them in the solum client library package (and source repository). That
would let you, for example:

  $ pip install python-solumclient
  $ pip install python-openstackclient
  $ openstack solum make me a paas
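
To sketch the mechanics (the entry point group and command names below are
illustrative, not the ones openstackclient actually reserves), a plugin
package roughly looks like:

  # setup.py (excerpt) of the hypothetical python-solumclient package
  from setuptools import setup

  setup(
      name='python-solumclient',
      packages=['solumclient'],
      entry_points={
          # the unified client discovers subcommands through entry points;
          # the group name here is illustrative only
          'openstack.cli': [
              # solumclient.cli.CreateApp would be a cliff.command.Command
              # subclass implementing take_action()
              'app_create = solumclient.cli:CreateApp',
          ],
      },
  )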

Dean has done a lot of work to design a consistent noun-followed-by-verb
command structure, so please look at that work when picking subcommand
names (for example, you shouldn't use solum as a prefix as I did in my
example above, since we are removing the project names from the commands).

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread Mark Washenberger
On Wed, Nov 13, 2013 at 8:02 AM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Nov 13 2013, John Griffith wrote:

  Trivial or not, people use it and frankly I don't see any value at all
  in removing it.  As far as the some projects want a different format
  of UUID that doesn't make a lot of sense to me but if that's what
  somebody wants they should write their own method.  I strongly agree
  with others with respect to the comments around code-churn.  I see
  little value in this.

 The thing is that code in oslo-incubator is supposed to be graduated to
 standalone Python library.

 We see little value in a library providing a library for a helper doing
 str(uuid.uuid4()).


For the currently remaining function in uuidutils, is_uuid_like, could we
potentially just add this functionality to the standard library?
Something like:

>>> uuid.UUID('----')
UUID('----')
>>> uuid.UUID('----'.replace('-', ''))
UUID('----')
>>> uuid.UUID('----'.replace('-', ''),
...           strict=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/uuid.py", line 134, in __init__
    raise ValueError('badly formed hexadecimal UUID string')
ValueError: badly formed hexadecimal UUID string

I've had a few situations where UUID's liberal treatment of what it
consumes has seemed a bit excessive, anyway. Not sure if this approach is a
bit too naive, however.
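
For comparison, the one function that would remain in uuidutils,
is_uuid_like(), is also tiny; roughly the following, going from memory of the
oslo-incubator code (a sketch, not the verbatim source):

  import uuid


  def is_uuid_like(val):
      # True only if val round-trips through uuid.UUID unchanged, i.e. it is
      # already in the canonical dashed, lower-case form.
      try:
          return str(uuid.UUID(val)) == val
      except (TypeError, ValueError, AttributeError):
          return False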




 --
 Julien Danjou
 /* Free Software hacker * independent consultant
http://julien.danjou.info */

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Thomas Spatzier
Zane Bitter zbit...@redhat.com wrote on 13.11.2013 18:11:18:
 From: Zane Bitter zbit...@redhat.com
 To: openstack-dev@lists.openstack.org,
 Date: 13.11.2013 18:14
 Subject: Re: [openstack-dev] [Heat] HOT software configuration
 refined after design summit discussions

 On 11/11/13 17:57, Thomas Spatzier wrote:
snip
 
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-WIP
 
  Please review and provide feedback.

 I believe there's an error in the Explicit dependency section, where it
 says that depends_on is a property. In cfn DependsOn actually exists at
 the same level as Type, Properties, &c.

 resources:
client:
  type: My::Software::SomeClient
  properties:
server: { get_resource: my_server }
params:
  # params ...
  depends_on:
- get_resource: server_process1
- get_resource: server_process2

Good point. I think the reason was tied too much to the provider template
concept where all properties get passed automatically to the provider
template and in there you can basically do anything that is necessary,
including handling dependencies. But I was missing the fact that this is a
generic concept for all resources.
I'll fix it in the wiki.


 And conceptually this seems correct, because it applies to any kind of
 resource, whereas properties are defined per-resource-type.

 Don't be fooled by our implementation:
 https://review.openstack.org/#/c/44733/

 It also doesn't support a list, but I think we can and should fix that
 in HOT.

Doesn't DependsOn already support lists? I quickly checked the code and it
seems it does:
https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L288


 cheers,
 Zane.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread John Griffith
On Wed, Nov 13, 2013 at 9:02 AM, Julien Danjou jul...@danjou.info wrote:
 On Wed, Nov 13 2013, John Griffith wrote:

 Trivial or not, people use it and frankly I don't see any value at all
 in removing it.  As far as the some projects want a different format
 of UUID that doesn't make a lot of sense to me but if that's what
 somebody wants they should write their own method.  I strongly agree
 with others with respect to the comments around code-churn.  I see
 little value in this.

 The thing is that code in oslo-incubator is supposed to be graduated to
 standalone Python library.

 We see little value in a library providing a library for a helper doing
 str(uuid.uuid4()).

Well I see your point, probably should've never been there in the
first place :)  Although I suppose it is good to have some form of
standarization for something no matter how trivial.  Anyway, my
opinion is it seems like unnecessary churn but I do see your point.  I
can modify it in Cinder easy enough and won't complain (too much
more), but I'm also wondering how many *other* things might fall in to
this category.


 --
 Julien Danjou
 /* Free Software hacker * independent consultant
http://julien.danjou.info */

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Core pinning

2013-11-13 Thread Jiang, Yunhong


 -Original Message-
 From: Tuomas Paappanen [mailto:tuomas.paappa...@tieto.com]
 Sent: Wednesday, November 13, 2013 4:46 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [nova] Core pinning
 
 Hi all,
 
 I would like to hear your thoughts about core pinning in Openstack.
 Currently nova (with qemu-kvm) supports usage of a cpu set of PCPUs that
 can be used by instances. I didn't find a blueprint, but I think this
 feature is for isolating cpus used by the host from cpus used by
 instances (VCPUs).
 
 But, from performance point of view it is better to exclusively dedicate
 PCPUs for VCPUs and emulator. In some cases you may want to guarantee
 that only one instance(and its VCPUs) is using certain PCPUs.  By using
 core pinning you can optimize instance performance based on e.g. cache
 sharing, NUMA topology, interrupt handling, pci pass through(SR-IOV) in
 multi socket hosts etc.

My 2 cents.
When you talk about "performance point of view", are you talking about 
guest performance, or overall performance? Pinning PCPUs is sure to benefit guest 
performance, but possibly not overall performance, especially if the vCPU 
does not consume 100% of the CPU resources. 

I think CPU pinning is common in data center virtualization, but I am not sure if it's 
in scope for cloud, which provides computing power, not hardware resources.

And I think part of your purpose can be achieved through 
https://wiki.openstack.org/wiki/CPUEntitlement and 
https://wiki.openstack.org/wiki/InstanceResourceQuota . Especially I hope a 
well implemented hypervisor will avoid needless vcpu migration if the vcpu is 
very busy and requires most of the pCPU's computing capability (I know Xen used 
to have some issue in the scheduler to cause frequent vCPU migration long 
before).

--jyh


 
 We have already implemented feature like this(PoC with limitations) to
 Nova Grizzly version and would like to hear your opinion about it.
 
 The current implementation consists of three main parts:
 - Definition of pcpu-vcpu maps for instances and instance spawning
 - (optional) Compute resource and capability advertising including free
 pcpus and NUMA topology.
 - (optional) Scheduling based on free cpus and NUMA topology.
 
 The implementation is quite simple:
 
 (additional/optional parts)
 Nova-computes are advertising free pcpus and NUMA topology in same
 manner than host capabilities. Instances are scheduled based on this
 information.
 
 (core pinning)
 admin can set PCPUs for VCPUs and for emulator process, or select NUMA
 cell for instance vcpus, by adding key:value pairs to flavor's extra specs.
 
 EXAMPLE:
 instance has 4 vcpus
 key:value
 vcpus:1,2,3,4 -- vcpu0 pinned to pcpu1, vcpu1 pinned to pcpu2...
 emulator:5 -- emulator pinned to pcpu5
 or
 numacell:0 -- all vcpus are pinned to pcpus in numa cell 0.
 
 In nova-compute, core pinning information is read from extra specs and
 added to domain xml same way as cpu quota values(cputune).
 
  <cputune>
     <vcpupin vcpu='0' cpuset='1'/>
     <vcpupin vcpu='1' cpuset='2'/>
     <vcpupin vcpu='2' cpuset='3'/>
     <vcpupin vcpu='3' cpuset='4'/>
     <emulatorpin cpuset='5'/>
  </cputune>
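
As a rough sketch only (helper and key names made up for illustration, not
code from the PoC), the mapping from those extra specs to the cputune element
could look like:

  def extra_specs_to_cputune(extra_specs):
      # extra_specs is e.g. {'vcpus': '1,2,3,4', 'emulator': '5'}; vCPU n is
      # pinned to the n-th pCPU in the list, as in the example above.
      lines = ['<cputune>']
      pcpus = extra_specs.get('vcpus')
      if pcpus:
          for vcpu, pcpu in enumerate(pcpus.split(',')):
              lines.append("   <vcpupin vcpu='%d' cpuset='%s'/>"
                           % (vcpu, pcpu.strip()))
      if 'emulator' in extra_specs:
          lines.append("   <emulatorpin cpuset='%s'/>" % extra_specs['emulator'])
      lines.append('</cputune>')
      return '\n'.join(lines)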
 
 What do you think? Implementation alternatives? Is this worth of
 blueprint? All related comments are welcome!
 
 Regards,
 Tuomas
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Configuration validation

2013-11-13 Thread Doug Hellmann
On Mon, Nov 11, 2013 at 6:08 PM, Mark McLoughlin mar...@redhat.com wrote:

 Hi Nikola,

 On Mon, 2013-11-11 at 12:44 +0100, Nikola Đipanov wrote:
  Hey all,
 
  During the summit session on the the VMWare driver roadmap, a topic of
  validating the passed configuration prior to starting services came up
  (see [1] for more detail on how it's connected to that specific topic).
 
  Several ideas were thrown around during the session mostly documented in
  [1].
 
  There are a few more cases when something like this could be useful (see
  bug [2] and related patch [3]), and I was wondering if a slightly
  different approach might be useful. For example use an already existing
  validation hook in the service class [4] to call into a validation
  framework that will potentially stop the service with proper
  logging/notifications. The obvious benefit would be that there is no
  pre-run required from the user, and the danger of running a
  misconfigured stack is smaller.

 One thing worth trying would be to encode the validation rules in the
 config option declaration.

 Some rules could be straightforward, like:

 opts = [
   StrOpt('foo_url',
  validate_rule=cfg.MatchesRegexp('(git|http)://')),
 ]

 but the rule you describe is more complex e.g.

 def validate_proxy_url(conf, group, key, value):
     if not conf.vnc_enabled:
         return
     if conf.ssl_only and value.startswith("http://"):
         raise ValueError('ssl_only option detected, but ...')

 opts = [
   StrOpt('novncproxy_base_url',
  validate_rule=validate_proxy_url),
   ...
 ]

 I'm not sure I love this yet, but it's worth experimenting with.


One thing to keep in mind with the move to calling register_opt() at
runtime instead of import time is the service may run for a little while
before it reaches the point in the code where the option validation code is
triggered. So I like the idea, but we may want a shortcut for validation.

We could add a small app to oslo.config that will load the options in the
same way the conf generator and doc tool will, but then also read the
configuration file and perform the validation. Another benefit of a
separate tool is it could produce a full list of warnings and errors,
rather than having the service stop on each bad value.
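
A rough sketch of such a standalone checker, assuming options carried an
optional validate_rule callable as in Mark's example (none of this exists in
oslo.config today):

  import sys

  from oslo.config import cfg


  def validate_config(opts, config_files):
      # Load the declared options plus the deployer's config file(s), then run
      # every option's (hypothetical) validate_rule and collect all failures
      # instead of stopping at the first one.
      conf = cfg.ConfigOpts()
      conf.register_opts(opts)
      conf(['--config-file=%s' % f for f in config_files])

      errors = []
      for opt in opts:
          rule = getattr(opt, 'validate_rule', None)
          if rule is None:
              continue
          try:
              rule(conf, None, opt.name, conf[opt.name])
          except ValueError as exc:
              errors.append('%s: %s' % (opt.name, exc))
      return errors


  if __name__ == '__main__':
      # usage sketch: python validate_config.py /etc/nova/nova.conf
      failures = validate_config([], sys.argv[1:])
      print('\n'.join(failures) or 'configuration looks sane')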

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipv6] IPv6 meeting - Thursdays 21:00UTC - #openstack-meeting-alt

2013-11-13 Thread Collins, Sean (Contractor)
On Wed, Nov 13, 2013 at 10:20:55AM -0500, Shixiong Shang wrote:
 Thanks a bunch for finalizing the time! Sorry for my ignorance… how do we 
 usually run the meeting? On Webex or IRC channel? 

IRC.

I'm not opposed to Webex (other teams have used it before) - but it
would involve more set-up. We'd need to publish recordings,
so that there is a way for those that couldn't attend to review,
similar to how the IRC meetings are logged.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Horizon] The future or pagination

2013-11-13 Thread Lyle, David
From a purely UI perspective, the limit/offset is a lot more useful.  Then we 
can show links to previous page, next page and display the current page number.

Past mailing list conversations have indicated that limit/offset can be less 
efficient on the server side.  The marker/limit approach works for paginating 
UI side, just in a more primitive way.  With that approach, we are generally 
limited to a next page link only.

David 

 -Original Message-
 From: John Dickinson [mailto:m...@not.mn]
 Sent: Wednesday, November 13, 2013 10:09 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer][Horizon] The future or
 pagination
 
 Swift uses marker+limit for pagination when listing containers or objects
 (with additional support for prefix, delimiters, and end markers). This is
 done because the total size of the listing may be rather large, and going to a
 correct page based on an offset gets expensive and doesn't allow for
 repeatable queries.
 
 Pagination implies some sort of ordering, and I'm guessing
 (assuming+hoping) that your listings are based around something more
 meaningful that an incrementing id. By itself, metric number 32592
 doesn't mean anything, and listings like go to metric 4200 and give me
 the next 768 items doesn't tell the consumer anything and probably isn't
 even a very repeatable query. Therefore, using a marker+prefix+limit style
 pagination system is very useful (eg give me up to 1000 metrics that start
 with 'nova/instance_id/42/'). Also, end_marker queries are very nice (half-
 closed ranges).
 
 One thing I would suggest (and I hope we change in Swift whenever we
 update the API version) is that you don't promise to return the full page in a
 response. Instead, you should return a no matches or end of listing
 token. This allows you the flexibility to return responses quickly without
 consuming too many resources on the server side. Clients can then continue
 to iterate over subsequent pages as they are needed.
 
 Something else that I'd like to see in Swift (it was almost added once) is the
 ability to reverse the order of the listings so you can iterate backwards over
 pages.
 
 --John
 
 
 
 
 On Nov 13, 2013, at 2:58 AM, Julien Danjou jul...@danjou.info wrote:
 
  Hi,
 
  We've been discussing and working for a while on support for
  pagination on our API v2 in Ceilometer. There's a large amount that
  has already been done, but that is now stalled because we are not sure
  about the consensus.
 
  There's mainly two approaches around pagination as far as I know, one
  being using limit/offset and the other one being marker based. As of
  today, we have no clue of which one we should pick, in the case we
  would have a technical choice doable between these two.
 
  I've added the Horizon tag in the subject because I think it may
  concern Horizon, since it shall be someday in the future one of the
  main consumer of the Ceilometer API.
 
  I'd be also happy to learn what other projects do in this regard, and
  what has been said and discussed during the summit.
 
  To a certain extent, we Ceilometer would also be happy to find common
  technical ground on this to some extent so _maybe_ we can generalise
  this into WSME itself for consumption from other projects.
 
  Cheers,
  --
  Julien Danjou
  ;; Free Software hacker ; independent consultant ;;
  http://julien.danjou.info
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Core pinning

2013-11-13 Thread Chris Friesen

On 11/13/2013 11:40 AM, Jiang, Yunhong wrote:


But, from performance point of view it is better to exclusively
dedicate PCPUs for VCPUs and emulator. In some cases you may want
to guarantee that only one instance(and its VCPUs) is using certain
PCPUs.  By using core pinning you can optimize instance performance
based on e.g. cache sharing, NUMA topology, interrupt handling, pci
pass through(SR-IOV) in multi socket hosts etc.


My 2 cents. When you talking about  performance point of view, are
you talking about guest performance, or overall performance? Pin PCPU
is sure to benefit guest performance, but possibly not for overall
performance, especially if the vCPU is not consume 100% of the CPU
resources.


It can actually be both.  If a guest has several virtual cores that both 
access the same memory, it can be highly beneficial all around if all 
the memory/cpus for that guest come from a single NUMA node on the host. 
 That way you reduce the cross-NUMA-node memory traffic, increasing 
overall efficiency.  Alternately, if a guest has several cores that use 
lots of memory bandwidth but don't access the same data, you might want 
to ensure that the cores are on different NUMA nodes to equalize 
utilization of the different NUMA nodes.


Similarly, once you start talking about doing SR-IOV networking I/O 
passthrough into a guest (for SDN/NFV stuff) for optimum efficiency it 
is beneficial to be able to steer interrupts on the physical host to the 
specific cpus on which the guest will be running.  This implies some 
form of pinning.



I think pin CPU is common to data center virtualization, but not sure
if it's in scope of cloud, which provide computing power, not
hardware resources.

And I think part of your purpose can be achieved through
https://wiki.openstack.org/wiki/CPUEntitlement and
https://wiki.openstack.org/wiki/InstanceResourceQuota . Especially I
hope a well implemented hypervisor will avoid needless vcpu migration
if the vcpu is very busy and required most of the pCPU's computing
capability (I knew Xen used to have some issue in the scheduler to
cause frequent vCPU migration long before).


I'm not sure the above stuff can be done with those.  It's not just 
about quantity of resources, but also about which specific resources 
will be used so that other things can be done based on that knowledge.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Group-based Policy Sub-team Meetings

2013-11-13 Thread Kyle Mestery (kmestery)
On Nov 13, 2013, at 10:36 AM, Stephen Wong s3w...@midokura.com
 wrote:

 Hi Kyle,
 
So no meeting this Thursday?
 
I am inclined to skip this week's meeting due to the fact I haven't heard many
replies yet. I think a good starting point for people would be to review the
BP [1] and Design Document [2] and provide feedback where appropriate.
We should start to formalize what the APIs will look like at next week's 
meeting,
and the Design Document has a first pass at this.

Thanks,
Kyle

[1] 
https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
[2] 
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?usp=sharing

 Thanks,
 - Stephen
 
 On Wed, Nov 13, 2013 at 7:11 AM, Kyle Mestery (kmestery)
 kmest...@cisco.com wrote:
 On Nov 13, 2013, at 8:58 AM, Stein, Manuel (Manuel) 
 manuel.st...@alcatel-lucent.com wrote:
 
 Kyle,
 
 I'm afraid your meeting vanished from the Meetings page [2] when user 
 amotiki reworked neutron meetings ^.^
 Is the meeting for Thu 1600 UTC still on?
 
 Ack, thanks for the heads up here! I have re-added the meeting. I only heard
 back from one other person other than yourself, so at this point I'm inclined
 to wait until next week to hold our first meeting unless I hear back from 
 others.
 
 A few heads-up questions (couldn't attend the HK design summit Friday 
 meeting):
 
 1) In the summit session Etherpad [3], ML2 implementation mentions 
 insertion of arbitrary metadata to hint to underlying implementation. Is 
 that (a) the plug-ing reporting its policy-bound realization? (b) the user 
 further specifying what should be used? (c) both? Or (d) none of that but 
 just some arbitrary message of the day?
 
 I believe that would be (a).
 
 2) Would policies _always_ map to the old Neutron entities?
 E.g. when I have policies in place, can I query related network/port, 
 subnet/address, router elements on the API or are there no equivalents 
 created? Would the logical topology created under the policies be exposed 
 otherwise? for e.g. monitoring/wysiwyg/troubleshoot purposes.
 
 No, this is up to the plugin/MechanismDriver implementation.
 
 3) Does the chain identifier in your policy rule actions match the Service 
 Chain UUID in Service Insertion, Chaining and API [4]?
 
 That's one way to look at this, yes.
 
 4) Are you going to describe L2 services the way group policies work? I 
 mean, why would I need a LoadBalancer or Firewall instance before I can 
 insert it between two groups when all that load balancing/firewalling 
 requires is nothing but a policy for group communication itself? - 
 regardless of the service instance used to carry out the service.
 
 These are things I'd like to discuss at the IRC meeting each week. The goal
 would be to try and come up with some actionable items we can drive towards
 in both Icehouse-1 and Icehouse-2. Given how close the closing of Icehouse-1
 is, we need to focus on this very fast if we want to have a measurable 
 impact in
 Icehouse-1.
 
 Thanks,
 Kyle
 
 
 Best, Manuel
 
 [2] 
 https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_Meeting
 [3] 
 https://etherpad.openstack.org/p/Group_Based_Policy_Abstraction_for_Neutron
 [4] 
 https://docs.google.com/document/d/1fmCWpCxAN4g5txmCJVmBDt02GYew2kvyRsh0Wl3YF2U/edit#
 
 -Original Message-
 From: Kyle Mestery (kmestery) [mailto:kmest...@cisco.com]
 Sent: Montag, 11. November 2013 19:41
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [neutron] Group-based Policy
 Sub-team Meetings
 
 Hi folks! Hope everyone had a safe trip back from Hong Kong.
 Friday afternoon in the Neutron sessions we discussed the
 Group-based Policy Abstraction BP [1]. It was decided we
 would try to have a weekly IRC meeting to drive out further
 requirements with the hope of coming up with a list of
 actionable tasks to begin working on by December.
 I've tentatively set the meeting [2] for Thursdays at 1600
 UTC on the #openstack-meeting-alt IRC channel. If there are
 serious conflicts with this day and time, please speak up
 soon. Otherwise, we'll host our first meeting on Thursday this week.
 
 Thanks!
 Kyle
 
 [1]
 https://blueprints.launchpad.net/neutron/+spec/group-based-pol
 icy-abstraction
 [2]
 https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_
 Sub-Team_Meeting
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread Dolph Mathews
On Wed, Nov 13, 2013 at 9:47 AM, John Griffith
john.griff...@solidfire.com wrote:

 On Wed, Nov 13, 2013 at 7:21 AM, Andrew Laski
 andrew.la...@rackspace.com wrote:
  On 11/13/13 at 05:48am, Gary Kotton wrote:
 
  I recall a few cycles ago having str(uuid.uuid4()) replaced by
  generate_uuid(). There was actually a helper function in neutron (back
 when
  it was called quantum) and it was replaced. So now we are going back…
  I am not in favor of this change.
 
 
  I'm also not really in favor of it.  Though it is a trivial method
 having it
  in oslo implies that this is what uuids should look like across OpenStack
  projects.

And I'm in favor of consistency for uuids across the projects
  because the same parsers and checkers can then be used for input
 validation
  or log parsing.


Parsers? UUID's should be treated as opaque strings once they're generated.


 
 
  From: Zhongyue Luo zhongyue@intel.com
  Reply-To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
 
  Date: Wednesday, November 13, 2013 8:07 AM
  To: OpenStack Development Mailing List
  openstack-dev@lists.openstack.org
 
  Subject: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils
 
  Hi all,
 
  We had a discussion of the modules that are incubated in Oslo.
 
 
  https://etherpad.openstack.org/p/icehouse-oslo-status
 
 
 
  One of the conclusions we came to was to deprecate/remove uuidutils in
  this cycle.
 
  The first step into this change should be to remove generate_uuid() from
  uuidutils.
 
  The reason is that 1) generating the UUID string seems trivial enough to
  not need a function and 2) string representation of uuid4 is not what we
  want in all projects.


There's room for long term improvement such as decreasing string length,
increasing entropy, linearly distributed output, etc. I agree that the
current implementation is useless/trivial, but the work to build upon it
should happen in oslo to benefit all projects.
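
For instance (purely illustrative, not an existing or proposed oslo API), a
shorter string form carrying the same 128 bits could look like:

  import base64
  import uuid


  def short_id():
      # 22 URL-safe characters instead of the 36-character dashed form.
      return base64.urlsafe_b64encode(uuid.uuid4().bytes).rstrip('=')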


 
  To address this, a patch is now on gerrit.
  https://review.openstack.org/#/c/56152/
 
 
 
  Each project should directly use the standard uuid module or implement
 its
  own helper function to generate uuids if this patch gets in.
 
  Any thoughts on this change? Thanks.
 
  --
  Intel SSG/STO/DCST/CIT
  880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
  China
  +862161166500
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Trivial or not, people use it and frankly I don't see any value at all
 in removing it.  As far as the some projects want a different format
 of UUID that doesn't make a lot of sense to me but if that's what
 somebody wants they should write their own method.  I strongly agree
 with others with respect to the comments around code-churn.  I see
 little value in this.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Group-based Policy Sub-team Meetings

2013-11-13 Thread Edgar Magana
I do agree with the need to take some time to recover from HK and get the meetings
started next week!

Edgar

On 11/13/13 9:57 AM, Kyle Mestery (kmestery) kmest...@cisco.com wrote:

On Nov 13, 2013, at 10:36 AM, Stephen Wong s3w...@midokura.com
 wrote:

 Hi Kyle,
 
So no meeting this Thursday?
 
I am inclined to skip this week's meeting due to the fact I haven't heard
many
replies yet. I think a good starting point for people would be to review
the
BP [1] and Design Document [2] and provide feedback where appropriate.
We should start to formalize what the APIs will look like at next week's
meeting,
and the Design Document has a first pass at this.

Thanks,
Kyle

[1] 
https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
[2] 
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?usp=sharing

 Thanks,
 - Stephen
 
 On Wed, Nov 13, 2013 at 7:11 AM, Kyle Mestery (kmestery)
 kmest...@cisco.com wrote:
 On Nov 13, 2013, at 8:58 AM, Stein, Manuel (Manuel)
manuel.st...@alcatel-lucent.com wrote:
 
 Kyle,
 
 I'm afraid your meeting vanished from the Meetings page [2] when user
amotiki reworked neutron meetings ^.^
 Is the meeting for Thu 1600 UTC still on?
 
 Ack, thanks for the heads up here! I have re-added the meeting. I only
heard
 back from one other person other than yourself, so at this point I'm
inclined
 to wait until next week to hold our first meeting unless I hear back
from others.
 
 A few heads-up questions (couldn't attend the HK design summit Friday
meeting):
 
 1) In the summit session Etherpad [3], ML2 implementation mentions
insertion of arbitrary metadata to hint to underlying implementation.
Is that (a) the plug-ing reporting its policy-bound realization? (b)
the user further specifying what should be used? (c) both? Or (d) none
of that but just some arbitrary message of the day?
 
 I believe that would be (a).
 
 2) Would policies _always_ map to the old Neutron entities?
 E.g. when I have policies in place, can I query related network/port,
subnet/address, router elements on the API or are there no equivalents
created? Would the logical topology created under the policies be
exposed otherwise? for e.g. monitoring/wysiwyg/troubleshoot purposes.
 
 No, this is up to the plugin/MechanismDriver implementation.
 
 3) Do the chain identifier in your policy rule actions match to
Service Chain UUID in Service Insertion, Chaining and API [4]
 
 That's one way to look at this, yes.
 
 4) Are you going to describe L2 services the way group policies work?
I mean, why would I need a LoadBalancer or Firewall instance before I
can insert it between two groups when all that load
balancing/firewalling requires is nothing but a policy for group
communication itself? - regardless the service instance used to carry
out the service.
 
 These are things I'd like to discuss at the IRC meeting each week. The
goal
 would be to try and come up with some actionable items we can drive
towards
 in both Icehouse-1 and Icehouse-2. Given how close the closing of
Icehouse-1
 is, we need to focus on this very fast if we want to have a measurable
impact in
 Icehouse-1.
 
 Thanks,
 Kyle
 
 
 Best, Manuel
 
 [2] 
https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_
Meeting
 [3] 
https://etherpad.openstack.org/p/Group_Based_Policy_Abstraction_for_Neu
tron
 [4] 
https://docs.google.com/document/d/1fmCWpCxAN4g5txmCJVmBDt02GYew2kvyRsh
0Wl3YF2U/edit#
 
 -Original Message-
 From: Kyle Mestery (kmestery) [mailto:kmest...@cisco.com]
 Sent: Montag, 11. November 2013 19:41
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [neutron] Group-based Policy
 Sub-team Meetings
 
 Hi folks! Hope everyone had a safe trip back from Hong Kong.
 Friday afternoon in the Neutron sessions we discussed the
 Group-based Policy Abstraction BP [1]. It was decided we
 would try to have a weekly IRC meeting to drive out further
 requirements with the hope of coming up with a list of
 actionable tasks to begin working on by December.
 I've tentatively set the meeting [2] for Thursdays at 1600
 UTC on the #openstack-meeting-alt IRC channel. If there are
 serious conflicts with this day and time, please speak up
 soon. Otherwise, we'll host our first meeting on Thursday this week.
 
 Thanks!
 Kyle
 
 [1]
 https://blueprints.launchpad.net/neutron/+spec/group-based-pol
 icy-abstraction
 [2]
 https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_
 Sub-Team_Meeting
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [keystone] design summit outcomes

2013-11-13 Thread Dolph Mathews
On Wed, Nov 13, 2013 at 11:00 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:

 Hi Dolph

 I have one comment concerning Refactoring:
 role assignment tables in SQL backend should be unified into a SQL table
 which lacks referential integrity

 I am not quite sure what this means, but I did suggest creating an
 attribute definition table that would include the definitions of all
 Keystone authz attributes such as role, domain, project as well as identity
 attributes such as location, group, email address etc. Definitions would
 include such things as: delegatable or not, for keystone authz or not (see
 below). Once this table has been defined, then all user role assignments
 can be collapsed into one table (no need for separate role, domain tables
 etc) with the assigned attribute pointing to the entry in the definition
 table.

 Was this what your bullet point was referring to, or was it something
 different?


I intended to leave the specific implementation details out of this
document (they're already captured in the relevant etherpad), but yes --
that would be an improvement on the current table that fits the one liner
in the gist. The additional features (such as delegatable) would require
a subsequent discussion / change.



 Here is a strawman proposal

 Table name = Attribute
 id: the unique primary key
 Attribute name: (user friendly name e.g. role, domain, etc.)
 Attribute Ref: (global id of attribute such as OID or URL, as used by SAML
 and LDAP)
 Type: (authz or identity)
 Delegatable: [Yes|No]
 Values: [string|integer|Boolean]

 Now when you assign a role, domain, location, email address or any other
 attribute to a user a single assignment table can be used such as:

 Table name: Attribute Assignment
 id: the unique primary key of this assignment
 UserID: id of user this attribute is assigned to
 AttributeID: id of attribute from above table
 Value: the value of the assigned attribute

 you don't need to change the existing APIs and procedure calls, as they can
 be rewritten to access the new tables.
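
Purely as an illustration (not an agreed keystone schema), the two strawman
tables could be sketched in SQLAlchemy roughly as follows; all table, column
and type names are placeholders:

import sqlalchemy as sql
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Attribute(Base):
    """Definition of an authz/identity attribute (role, domain, ...)."""
    __tablename__ = 'attribute'
    id = sql.Column(sql.String(64), primary_key=True)
    name = sql.Column(sql.String(255), nullable=False)    # e.g. 'role'
    ref = sql.Column(sql.String(255))                      # OID or URL
    type = sql.Column(sql.Enum('authz', 'identity'))
    delegatable = sql.Column(sql.Boolean, default=False)
    value_type = sql.Column(sql.Enum('string', 'integer', 'boolean'))


class AttributeAssignment(Base):
    """A single attribute value assigned to a user."""
    __tablename__ = 'attribute_assignment'
    id = sql.Column(sql.String(64), primary_key=True)
    user_id = sql.Column(sql.String(64), nullable=False)
    # deliberately no ForeignKey, matching the "lacks referential
    # integrity" wording in the summit notes
    attribute_id = sql.Column(sql.String(64), nullable=False)
    value = sql.Column(sql.Text())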

 regards

 David


 On 13/11/2013 16:04, Dolph Mathews wrote:

 I guarantee there's a few things I'm forgetting, but this is my
 collection of things we discussed at the summit and determined to be
 good things to pursue during the icehouse timeframe. The contents
 represent a high level mix of etherpad conclusions and hallway meetings.

 https://gist.github.com/dolph/7366031

 Corrections and amendments appreciated - thanks!

 -Dolph


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-11-13 Thread Edgar Magana
Anita,

Could you prepare the invitation letters for those who will require a
visa? That is my case.

Thanks,

Edgar

On 11/13/13 8:10 AM, Anita Kuno ante...@anteaya.info wrote:

Neutron Tempest code sprint

In the second week of January in Montreal, Quebec, Canada there will be a
Neutron Tempest code sprint to improve the status of Neutron tests in
Tempest and to add new tests.
It will be a 3 day event. Right now there are 14 people who came forward
when it was announced on the Friday at the summit. We need to know how
many additional people are interested in attending.

This is an impromptu event based on my assessment of the need for this
to happen, so don't feel left out if you didn't know about it in advance.

We picked Montreal for two main reasons:
1. All 4 people whose attendance is critical (markmcclain, salv-orlando,
sdague and mtreinish) can get there. It was New York or Montreal.
2. I can't think in New York, love it, can't compose a thought, so
Montreal it is.

It turns out this location choice has some resultant effects:
1. People who wouldn't have time to get a visa to attend an event in the
States have an easier time entering Canada.
 US requires visa applications filed 2 months in advance of travel
and we are inside that timeframe.
2. Montreal is cheaper than NYC.
3. Being Canadian it is going to be easier for me to produce this event
in Canada since I am in Canada.
4. It will be cold. We had few choices on the timing and this event
can't wait on good weather.

There is no location that will make everyone happy, so people will be
disappointed by this choice and I accept that. It is my hope that this
event is a success and we can create a schedule of some sort so that
people who have a high possibility of attending can vote on the
location. So that is the future vision.

I have a tentative hold on a venue and am working on getting a rate on a
block of rooms at a hotel.

I am preparing a budget to submit to the Foundation in the hopes they
will sponsor the event. Since this was planned with no warning, the
Foundation has no budget for it. Mark is supportive of the event
happening and if I can come up with some reasonable numbers, I hope that
the money can come from the Foundation.

The event will be vendor neutral. We will talk to each other based on
who we are and our interests, not based on who signs our paycheque. If
folks arrive with logoed shirts (I don't know which logos are work logos
and which aren't, so I will request no logos please) I will issue you a
white T-shirt to wear. We need to work collaboratively to effectively
make progress during the code sprint.

Someone at the summit chose not to wear footwear at the event. If you
want to come to the code sprint please plan on wearing appropriate
footwear in the public areas at the code sprint. For two reasons:
1. It will be cold.
2. The event is meant to facilitate mutual respect between us to
increase communication, both at the event and afterwards. I feel wearing
appropriate footwear supports this goal.

Please indicate your interest by sending an email to
ante...@anteaya.info, subject Neutron Tempest code sprint. Don't worry
about the body of the email, I just need addresses. We will send out
subsequent emails to this group to gather specific details like shirt
size, dietary requirements. If you came forward at the summit, no need
to email again.

If you want to come, but don't feel your employer will fund the trip,
please include that information in the email. It will depend on what we
can do for accommodation and travel but hopefully we will have a little
bit for a few folks. Of course please talk to your manager now to work
on getting approval to attend, and hopefully your employer will fund
your travel and accommodation.

Additional questions? Hit me up on irc in #openstack-neutron nick
anteaya. I read the neutron logs:
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/ So I will
get back to you if I am not around when you ask.

Also rossella_s has come forward to help, thank you rossella_s!

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Proposed Icehouse release schedule

2013-11-13 Thread Dolph Mathews
On Wed, Nov 13, 2013 at 7:58 AM, Russell Bryant rbry...@redhat.com wrote:

 On 11/13/2013 08:15 AM, Thierry Carrez wrote:
  Two options are possible for that off week:
 
  * Week of April 21 - this one is just after release, and some people
  still have a lot to do during that week. On the plus side it's
  conveniently placed next to the Easter weekend.
  * Week of April 28 - that's the middle week, which sounds a bit weird...
  but for me that would be the less active week, so I have a slight
  preference for it.
 
  What would be your preference, if any ? I'm especially interested in
  opinions from people who have a hard time taking some time off (PTLs,
  infra and release management people).

 I think my preference is the second week.  Easter makes the first week
 tempting, but as you point out, realistically there is still going to be
 some amount of looking out for and potentially dealing with release
 aftermath.


Conversely, there's likely to be less immediate feedback about the release
since it occurs just before Easter weekend.

(I don't have a preference between the two weeks... yet)



 The second week everyone really should be able to relax.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] API Deep Dive today @2100 UTC

2013-11-13 Thread Adrian Otto
Team,

This is a reminder that we will conduct our API deep dive discussion on IRC in 
#solum at 2100 UTC today (1:00 PM US/Pacific).

Details: https://wiki.openstack.org/wiki/Solum/BreakoutMeetings

Thanks,

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Core pinning

2013-11-13 Thread Jiang, Yunhong


 -Original Message-
 From: Chris Friesen [mailto:chris.frie...@windriver.com]
 Sent: Wednesday, November 13, 2013 9:57 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Core pinning
 
 On 11/13/2013 11:40 AM, Jiang, Yunhong wrote:
 
  But, from performance point of view it is better to exclusively
  dedicate PCPUs for VCPUs and emulator. In some cases you may want
  to guarantee that only one instance(and its VCPUs) is using certain
  PCPUs.  By using core pinning you can optimize instance performance
  based on e.g. cache sharing, NUMA topology, interrupt handling, pci
  pass through(SR-IOV) in multi socket hosts etc.
 
  My 2 cents. When you talk about the performance point of view, are
  you talking about guest performance, or overall performance? Pin PCPU
  is sure to benefit guest performance, but possibly not for overall
  performance, especially if the vCPU does not consume 100% of the CPU
  resources.
 
 It can actually be both.  If a guest has several virtual cores that both
 access the same memory, it can be highly beneficial all around if all
 the memory/cpus for that guest come from a single NUMA node on the
 host.
   That way you reduce the cross-NUMA-node memory traffic, increasing
 overall efficiency.  Alternately, if a guest has several cores that use
 lots of memory bandwidth but don't access the same data, you might want
 to ensure that the cores are on different NUMA nodes to equalize
 utilization of the different NUMA nodes.

I think Tuomas is talking about exclusively dedicating pCPUs to vCPUs; in
that situation, the pCPU can't be shared by any other vCPU anymore. If that
vCPU only consumes, say, 50% of the pCPU, it is sure to be a waste of overall
capacity.

As for cross-NUMA-node access, I'd let the hypervisor, rather than the cloud
OS, reduce cross-NUMA access as much as possible.

I'm not against such usage; it will surely be used in data center virtualization.
I just question whether it belongs in the cloud.


 
 Similarly, once you start talking about doing SR-IOV networking I/O
 passthrough into a guest (for SDN/NFV stuff) for optimum efficiency it
 is beneficial to be able to steer interrupts on the physical host to the
 specific cpus on which the guest will be running.  This implies some
 form of pinning.

Still, I think the hypervisor should achieve this, rather than OpenStack.


 
  I think pin CPU is common to data center virtualization, but not sure
  if it's in scope of cloud, which provide computing power, not
  hardware resources.
 
  And I think part of your purpose can be achieved through
  https://wiki.openstack.org/wiki/CPUEntitlement and
  https://wiki.openstack.org/wiki/InstanceResourceQuota . Especially I
  hope a well implemented hypervisor will avoid needless vcpu migration
  if the vcpu is very busy and required most of the pCPU's computing
  capability (I knew Xen used to have some issue in the scheduler to
  cause frequent vCPU migration long before).
 
 I'm not sure the above stuff can be done with those.  It's not just
 about quantity of resources, but also about which specific resources
 will be used so that other things can be done based on that knowledge.

With the above, the QoS and the compute capability for the guest are ensured,
I think.
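
For reference, the InstanceResourceQuota approach above is driven purely
through flavor extra specs with the libvirt driver; a rough python-novaclient
sketch (credentials, flavor name and numbers are made up, and the extra-spec
keys are the ones listed on that wiki page):

from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin',
                     'http://keystone.example.com:5000/v2.0/')

flavor = nova.flavors.find(name='m1.medium')
flavor.set_keys({
    'quota:cpu_shares': '2048',    # relative weight against other guests
    'quota:cpu_period': '100000',  # enforcement period, in microseconds
    'quota:cpu_quota': '50000',    # cap each vCPU at 50% of a pCPU
})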

--jyh
 
 
 Chris
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-11-13 Thread Anita Kuno

Hi Edgar:

I hadn't thought of that, I guess I am going to have to do that.

For others, check if you need a visa to enter Canada: 
http://www.cic.gc.ca/ENGLISH/visit/visas.asp
Please notify me ASAP if you do, even if you aren't sure you can attend 
so I can get the paperwork you need.


Edgar, can you email me your address to my personal email anteaya at 
anteaya dot info so I can work on this for you.


Thanks,
Anita.

On 11/13/2013 01:12 PM, Edgar Magana wrote:

Anita,

Could you prepare the invitation letters for those who will require a
visa? That is my case.

Thanks,

Edgar

On 11/13/13 8:10 AM, Anita Kuno ante...@anteaya.info wrote:


Neutron Tempest code sprint

In the second week of January in Montreal, Quebec, Canada there will be a
Neutron Tempest code sprint to improve the status of Neutron tests in
Tempest and to add new tests.
It will be a 3 day event. Right now there are 14 people who came forward
when it was announced on the Friday at the summit. We need to know how
many additional people are interested in attending.

This is an impromptu event based on my assessment of the need for this
to happen, so don't feel left out if you didn't know about it in advance.

We picked Montreal for two main reasons:
1. All 4 people whose attendance is critical (markmcclain, salv-orlando,
sdague and mtreinish) can get there. It was New York or Montreal.
2. I can't think in New York, love it, can't compose a thought, so
Montreal it is.

It turns out this location choice has some resultant effects:
1. People who wouldn't have time to get a visa to attend an event in the
States have an easier time entering Canada.
 US requires visa applications filed 2 months in advance of travel
and we are inside that timeframe.
2. Montreal is cheaper than NYC.
3. Being Canadian it is going to be easier for me to produce this event
in Canada since I am in Canada.
4. It will be cold. We had few choices on the timing and this event
can't wait on good weather.

There is no location that will make everyone happy, so people will be
disappointed by this choice and I accept that. It is my hope that this
event is a success and we can create a schedule of some sort so that
people who have a high possibility of attending can vote on the
location. So that is the future vision.

I have a tentative hold on a venue and am working on getting a rate on a
block of rooms at a hotel.

I am preparing a budget to submit to the Foundation in the hopes they
will sponsor the event. Since this was planned with no warning, the
Foundation has no budget for it. Mark is supportive of the event
happening and if I can come up with some reasonable numbers, I hope that
the money can come from the Foundation.

The event will be vendor neutral. We will talk to each other based on
who we are and our interests, not based on who signs our paycheque. If
folks arrive with logoed shirts (I don't know which logos are work logos
and which aren't, so I will request no logos please) I will issue you a
white T-shirt to wear. We need to work collaboratively to effectively
make progress during the code sprint.

Someone at the summit chose not to wear footwear at the event. If you
want to come to the code sprint please plan on wearing appropriate
footwear in the public areas at the code sprint. For two reasons:
1. It will be cold.
2. The event is meant to facilitate mutual respect between us to
increase communication, both at the event and afterwards. I feel wearing
appropriate footwear supports this goal.

Please indicate your interest by sending an email to
ante...@anteaya.info, subject Neutron Tempest code sprint. Don't worry
about the body of the email, I just need addresses. We will send out
subsequent emails to this group to gather specific details like shirt
size, dietary requirements. If you came forward at the summit, no need
to email again.

If you want to come, but don't feel your employer will fund the trip,
please include that information in the email. It will depend on what we
can do for accommodation and travel but hopefully we will have a little
bit for a few folks. Of course please talk to your manager now to work
on getting approval to attend, and hopefully your employer will fund
your travel and accommodation.

Additional questions? Hit me up on irc in #openstack-neutron nick
anteaya. I read the neutron logs:
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/ So I will
get back to you if I am not around when you ask.

Also rossella_s has come forward to help, thank you rossella_s!

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [trove] Configuration API BP

2013-11-13 Thread Craig Vyvial
I have updated the reviews related to configuration groups.

Trove - https://review.openstack.org/#/c/53168/
Python-TroveClient - https://review.openstack.org/#/c/53169/

Please review at your leisure.

TODOs:
* add pagination support for configuration on instances
* mark configuration groups as deleted instead of doing a hard delete in
the db.


Thanks,
Craig Vyvial


On Thu, Oct 3, 2013 at 3:48 PM, Craig Vyvial cp16...@gmail.com wrote:

 Oops forgot the link on BP for versioning templates.


 https://blueprints.launchpad.net/trove/+spec/configuration-templates-versionable


 On Thu, Oct 3, 2013 at 3:47 PM, Craig Vyvial cp16...@gmail.com wrote:

 I have been trying to figure out where a call for the default
 configuration should go. I just finished adding a method to get the
 [mysqld] section via an api call but not sure where this should go yet.

 Currently i made it:
 GET - /instance/{id}/configuration

 This kinda only half fits in the path here because it doesn't really
 describe that this is the default configuration on the instance. On the
 other hand, it shows that it is coupled to the instance because we need the
 instance flavor to give what the current values are in the template applied
 to the instance.

 Maybe other options could be:
 GET - /instance/{id}/configuration/default
 GET - /instance/{id}/defaultconfiguration
 GET - /instance/{id}/default-configuration
 GET - /configuration/default/instance/{id}

 Suggestions welcome on the path.

 There is some wonkiness showing this information to the user because of
 the difference in the values used. [1] This example shows that the template
 uses 50M as a value applied and the configuration-group would apply the
 value equivalent to 52428800. I don't think we should worry about this now
 but this could lead to confusion by a user. If they are a power-user type
 then they might understand, compared to an entry-level user.

 [1] https://gist.github.com/cp16net/6816691
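
For illustration, the 50M vs. 52428800 mismatch above is only a matter of
normalising MySQL-style size suffixes; a minimal sketch:

UNITS = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3}


def to_bytes(value):
    """Normalise values like '50M' (or plain integers) to bytes."""
    value = str(value).strip()
    if value and value[-1].upper() in UNITS:
        return int(value[:-1]) * UNITS[value[-1].upper()]
    return int(value)


assert to_bytes('50M') == 52428800 == to_bytes(52428800)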



  On Thu, Oct 3, 2013 at 2:36 PM, McReynolds, Auston amcreyno...@ebay.com
  wrote:

 If User X's existing instance is isolated from the change, but there's
 no snapshot/clone/versioning of the current settings on X's instance
 (via the trove database or jinja template), then how will
 GET /configurations/:id return the correct/current settings? Unless
 you're planning on communicating with the guest? There's nothing
 wrong with that approach, it's just not explicitly noted anywhere in
 the blueprint. For some reason I inferred that it would be handled
 like trove security-groups.

 So this is a great point. There are talks about making the templating
 versioned in some form or fashion. ekonetzk(irc) said he would write up a
 BP around versioning.



 On a slightly different note: If the default template will not be
 represented as a default configuration-group from an api standpoint,
 then how will you support the ability for a user to enumerate the list
 of default configuration-group values for a service-type?
 GET /configurations/:id won't be applicable, so will it be
 something like GET /configurations/default?

 see above paragraph.





 From:  Craig Vyvial cp16...@gmail.com
 Reply-To:  OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date:  Thursday, October 3, 2013 11:17 AM
 To:  OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [trove] Configuration API BP


 inline.


 On Wed, Oct 2, 2013 at 1:03 PM, McReynolds, Auston
 amcreyno...@ebay.com wrote:

 Awesome! I only have one follow-up question:

 Regarding #6 & #7, how will the clone behavior work given that the
 defaults are hydrated by a non-versioned jinja template?


 I am not sure I understand clone behavior because there is not really a
 concept of cloning here. The jinja template is created and passed in the
 prepare call to the guest to write to the default my.cnf file.

 When a configuration-group is removed the instance will return to the
 default state. This does not exactly act as a clone behavior.



 Scenario Timeline:

 T1) Cloud provider begins with the default jinja template, but changes
the values for properties 'a' and 'b'. (Template Version #1)
 T2) User X deploys a database instance
 T3) Cloud provider decides to update the existing template by modifying
property 'c'. (Template Version #2)
 T4) User Z deploys a database instance

 I think it goes without saying that User Z's instance gets Template
 Version #2 (w/ changes to a & b & c), but does User X?


 No User X does not get the changes. For User X to get the changes a
 maintenance may need to be scheduled.



 If it's a true clone, User X should be isolated from a change in
 defaults, no?


 User X will not see these default changes until a new instance is
 created.



 Come to think about it, this is eerily similar to security-groups:
 administratively, it can be beneficial to share a
 configuration/security-group across multiple instances, but it can
 also be a 

[openstack-dev] [Nova] question about DB migration difficulty

2013-11-13 Thread Mike Spreitzer
This is a follow-up to the design summit discussion about DB migrations. 
There was concern about the undo-ability of some migrations.  The specific 
example cited was removal of a column.  Could that be done with the 
following three migrations, each undo-able?  First, change the code to 
keep writing the column but no longer read the column.  Second migration 
changes the code to neither read nor write the column.  Third migration 
physically removes the column.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
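
For the column-removal example in the message above, only the third step needs
an actual schema migration; a hedged sqlalchemy-migrate sketch (table and
column names invented for illustration), where the downgrade can restore the
column but not the data it held:

from migrate.changeset import schema  # noqa: enables drop_column/create_column
from sqlalchemy import Column, MetaData, String, Table


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table('instances', meta, autoload=True)
    instances.drop_column('obsolete_field')


def downgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table('instances', meta, autoload=True)
    # data written before the upgrade is lost; only the column comes back
    instances.create_column(Column('obsolete_field', String(255)))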


Re: [openstack-dev] Nova SSL Apache2 Question

2013-11-13 Thread Adam Young
On 11/06/2013 07:20 PM, Miller, Mark M (EB SW Cloud - RD - Corvallis) 
wrote:

Hello,

I am trying to front all of the Grizzly OpenStack services with Apache2 in 
order to enable SSL. I've got Horizon and Keystone working but am struggling 
with Nova. The only documentation I have been able to find is at URL 
http://www.rackspace.com/blog/enabling-ssl-for-the-openstack-api/

However, the Nova sample osapi.wsgi and osapi files are not working with 
Grizzly. Does anyone have a set of these files for Nova?

Thanks,

Mark Miller

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


This was on my To Do list, but for Icehouse.  What are you seeing as the 
failure?


The original article was written a while ago, so I am not surprised 
things have changed out from underneath it.  In particular, there are 
some cases where Eventlet code gets monkey-patched in ways you won't 
want when running under HTTPD.  In Keystone, we isolated the monkey 
patching into a single function, to ensure the same logic was used both 
when starting the app and in the unit tests.  I suspect we'll need to do 
something comparable in Nova.


There are also potential SELinux issues.  I'd run with SELinux in 
Permissive mode until you get things sorted.
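
For anyone following along, the general shape such a script takes for a
paste.deploy-based API is roughly the following (untested sketch; the
Grizzly-specific config parsing and eventlet monkey patching discussed above
are exactly the parts that still need sorting out, so they are omitted here):

from paste import deploy

# /etc/nova/api-paste.ini and the 'osapi_compute' composite section are the
# stock names; adjust to your installation.
application = deploy.loadapp('config:/etc/nova/api-paste.ini',
                             name='osapi_compute')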






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] configuration groups using overrides file

2013-11-13 Thread Craig Vyvial
We need to decide whether we should stop using a separate file for the overrides
config, since separate include files might not be supported by all the databases
Trove supports (it works well for MySQL, but might not for Cassandra, as we
discussed in the IRC channel).

To support this for all databases we could set up the Jinja templates to append
the configs to the end of the main config file (my.cnf in the MySQL case).
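
A rough sketch (not Trove code) of that append-to-the-main-config idea,
rendering the overrides with Jinja2 and tacking them onto the end of the file;
template text and paths are illustrative only:

from jinja2 import Template

OVERRIDES_TEMPLATE = Template(
    "\n# --- begin configuration-group overrides ---\n"
    "{% for key, value in overrides.items() %}"
    "{{ key }} = {{ value }}\n"
    "{% endfor %}"
    "# --- end configuration-group overrides ---\n")


def append_overrides(config_path, overrides):
    rendered = OVERRIDES_TEMPLATE.render(overrides=overrides)
    with open(config_path, 'a') as config_file:
        config_file.write(rendered)


append_overrides('my.cnf', {'max_connections': 200, 'wait_timeout': 120})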


-Craig Vyvial
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Using of oslo.config options in openstack.common modules

2013-11-13 Thread Doug Hellmann
On Wed, Nov 13, 2013 at 4:13 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:

 Hi Flavio,

 Thanks for sharing this! I attended that session, but haven't seen the
 corresponding blueprint before.

 Nevertheless, I'm not sure that implementing this entirely solves the
 original problem. Removing import side-effects is definitely the right
 thing to do, but options will be eventually registered at runtime
 anyway, so they could possibly conflict (e.g. Ironic uses oslo.db and
 lockutils, oslo.db uses lockutils too, but with newer definitions of
 the same options, so even if we moved registration of options to
 lockutils.synchronized() function, they would conflict when the
 function would be called).


This came up at one point during the summit and I believe the consensus was
that it made sense to try to push all of the Oslo libraries to not rely on
oslo.config as much as possible. That gets trickier with driver-based
libraries like messaging (where the options aren't known to the core
library), and for maintaining backwards compatibility for upgrades (since
we already have all of these configuration options). We also want to
minimize duplicate option definitions in the applications, which could end
up with different names or defaults.

I'm interested in whether anyone has suggestions for solving the issue
while addressing all of these cases.
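
As a rough illustration of that direction (not an agreed API), the shared
helper would take plain arguments, and only the application would own and
register the oslo.config option whose value gets passed down; the
'synchronized' helper here is a toy stand-in for something like lockutils:

import functools

from oslo.config import cfg


def synchronized(name, lock_path=None, external=False):
    """Toy library-level decorator: behaviour set purely by its arguments."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # real locking (a file lock under lock_path, etc.) omitted here
            return func(*args, **kwargs)
        return wrapper
    return decorator


# Application side: the app defines the option, so two copies of the library
# code can never register conflicting option definitions.
CONF = cfg.CONF
CONF.register_opts([cfg.StrOpt('lock_path', default='/var/lock/myapp')])


@synchronized('db-migration', lock_path=CONF.lock_path, external=True)
def migrate_database():
    pass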

Doug



 Thanks,
 Roman

 On Wed, Nov 13, 2013 at 10:11 AM, Flavio Percoco fla...@redhat.com
 wrote:
  On 12/11/13 17:21 +0200, Roman Podoliaka wrote:
 
  Hi all,
 
  Currently, many modules from openstack.common package register
  oslo.config options. And this is completely OK while these modules are
  copied to target projects using update.py script.
 
  But consider the situation, when we decide to split a new library from
  oslo-incubator - oslo.spam - and this library uses module
  openstack.common.eggs, just because we don't want to reinvent the
  wheel and this module is really useful. Lets say module eggs defines
  config option 'foo' and this module is also used in Nova. Now we want
  to use oslo.spam in Nova too.
 
  So here is the tricky part: if the versions of openstack.common.eggs
  in oslo.spam and openstack.common.eggs in Nova define config option
  'foo' differently (e.g. the version in Nova is outdated and doesn't
  provide the help string), oslo.config will raise DuplicateOptError.
 
  There are at least two ways to solve this problem:
  1) don't use openstack.common code in olso.* libraries
  2) don't register config options in openstack.common modules
 
  The former is totally doable, but it means that we will end up
  repeating ourselves, because we already have a set of very useful
  modules (e.g. lockutils) and there is little sense in rewriting them
  from scratch within oslo.* libraries.
 
  The latter means that we should refactor the existing code in
  openstack.common package. As these modules are meant to be libraries,
  it's strange that they rely on config values to control their behavior
  instead of using the traditional approach of passing
  function/method/class constructor arguments.
 
  ...or I might be missing something :)
 
  Thoughts?
 
 
  FWIW, We had a session about removing side-effects at the summit [0].
  I can see the cases you mention being fixed as part of the work for
  that blueprint.
 
  [0] http://summit.openstack.org/cfp/details/125
 
  --
  @flaper87
  Flavio Percoco
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-11-13 Thread Anita Kuno
So if you are requesting a letter of invitation to enter Canada, here is 
what I have to do to provide it:

http://www.cic.gc.ca/english/visit/letter.asp

It appears that letters of invitation are only required sometimes:
"Sometimes, when you apply for a visa to visit Canada, we ask you to
give us a letter of invitation from someone in Canada."
So I will encourage you to contact your closest Canadian embassy or
consulate outside of Canada to assess if you need one. Hopefully if you 
are planning on staying less than 1 week, you won't need one. But 
contact the embassy and be sure.


Thanks,
Anita.

On 11/13/2013 01:25 PM, Edgar Magana wrote:

Will do ASAP.

Thanks,

Edgar

On 11/13/13 10:20 AM, Anita Kuno ante...@anteaya.info wrote:


Hi Edgar:

I hadn't thought of that, I guess I am going to have to do that.

For others, check if you need a visa to enter Canada:
http://www.cic.gc.ca/ENGLISH/visit/visas.asp
Please notify me ASAP if you do, even if you aren't sure you can attend
so I can get the paperwork you need.

Edgar, can you email me your address to my personal email anteaya at
anteaya dot info so I can work on this for you.

Thanks,
Anita.

On 11/13/2013 01:12 PM, Edgar Magana wrote:

Anita,

Could you prepare the invitation letters for those who will require a
visa? That is my case.

Thanks,

Edgar

On 11/13/13 8:10 AM, Anita Kuno ante...@anteaya.info wrote:


Neutron Tempest code sprint

In the second week of January in Montreal, Quebec, Canada there will be
a
Neutron Tempest code sprint to improve the status of Neutron tests in
Tempest and to add new tests.
It will be a 3 day event. Right now there are 14 people who came forward
when it was announced on the Friday at the summit. We need to know how
many additional people are interested in attending.

This is an impromptu event based on my assessment of the need for this
to happen, so don't feel left out if you didn't know about it in
advance.

We picked Montreal for two main reasons:
1. All 4 people whose attendance is critical (markmcclain, salv-orlando,
sdague and mtreinish) can get there. It was New York or Montreal.
2. I can't think in New York, love it, can't compose a thought, so
Montreal it is.

It turns out this location choice has some resultant effects:
1. People who wouldn't have time to get a visa to attend an event in
the
States have an easier time entering Canada.
  US requires visa applications filed 2 months in advance of travel
and we are inside that timeframe.
2. Montreal is cheaper than NYC.
3. Being Canadian it is going to be easier for me to produce this event
in Canada since I am in Canada.
4. It will be cold. We had few choices on the timing and this event
can't wait on good weather.

There is no location that will make everyone happy, so people will be
disappointed by this choice and I accept that. It is my hope that this
event is a success and we can create a schedule of some sort so that
people who have a high possibility of attending can vote on the
location. So that is the future vision.

I have a tentative hold on a venue and am working on getting a rate on a
block of rooms at a hotel.

I am preparing a budget to submit to the Foundation in the hopes they
will sponsor the event. Since this was planned with no warning, the
Foundation has no budget for it. Mark is supportive of the event
happening and if I can come up with some reasonable numbers, I hope
that
the money can come from the Foundation.

The event will be vendor neutral. We will talk to each other based on
who we are and our interests, not based on who signs our paycheque. If
folks arrive with logoed shirts (I don't know which logos are work
logos
and which aren't, so I will request no logos please) I will issue you a
white T-shirt to wear. We need to work collaboratively to effectively
make progress during the code sprint.

Someone at the summit chose not to wear footwear at the event. If you
want to come to the code sprint please plan on wearing appropriate
footwear in the public areas at the code sprint. For two reasons:
1. It will be cold.
2. The event is meant to facilitate mutual respect between us to
increase communication, both at the event and afterwards. I feel
wearing
appropriate footwear supports this goal.

Please indicate your interest by sending an email to
ante...@anteaya.info, subject Neutron Tempest code sprint. Don't
worry
about the body of the email, I just need addresses. We will send out
subsequent emails to this group to gather specific details like shirt
size, dietary requirements. If you came forward at the summit, no need
to email again.

If you want to come, but don't feel your employer will fund the trip,
please include that information in the email. It will depend on what we
can do for accommodation and travel but hopefully we will have a little
bit for a few folks. Of course please talk to your manager now to work
on getting approval to attend, and hopefully your employer will fund
your travel and accommodation.

Additional 

Re: [openstack-dev] [keystone] design summit outcomes

2013-11-13 Thread David Chadwick

Hi Henry

I don't think the two proposals are incompatible. One table defines the 
attributes, similar to LDAP schema, whilst the other stores the actual 
assignments. I was talking about the former and you the latter in our 
emails. But we did discuss the former in the dev lounge as well


regards

david

On 13/11/2013 17:57, Henry Nash wrote:

Hi David,

I think that's the wrong table (remembering our conversation!). The role 
assignment table(s) in keystone today are part of what you would call a mapper 
table; they don't define the attributes.  What I think we agreed was that, rather 
than taking one giant leap to the all-encompassing mapper table, we would:

- refactor the current 4 tables into 1 that stores assignments (i.e. actor X 
has attribute Y on target Z), where today:

actor can be user or group
attribute is a role
target can be project or domain

- create a first version of the true mapper table, whose sole job is to map 
IdP groups into something keystone understands (usually a keystone group, I 
would suggest)

I think that's what we decided... or is the memory fading already?

Henry
On 13 Nov 2013, at 17:00, David Chadwick d.w.chadw...@kent.ac.uk wrote:


Hi Dolph

I have one comment concerning Refactoring:
role assignment tables in SQL backend should be unified into a SQL table which 
lacks referential integrity

I am not quite sure what this means, but I did suggest creating an attribute 
definition table that would include the definitions of all Keystone authz 
attributes such as role, domain, project as well as identity attributes such as 
location, group, email address etc. Definitions would include such things as: 
delegatable or not, for keystone authz or not (see below). Once this table has 
been defined, then all user role assignments can be collapsed into one table 
(no need for separate role, domain tables etc) with the assigned attribute 
pointing to the entry in the definition table.

Was this what your bullet point was referring to, or was it something different?

Here is a strawman proposal

Table name = Attribute
id: the unique primary key
Attribute name: (user friendly name e.g. role, domain, etc.)
Attribute Ref: (global id of attribute such as OID or URL, as used by SAML and 
LDAP)
Type: (authz or identity)
Delegatable: [Yes|No]
Values: [string|integer|Boolean]

Now when you assign a role, domain, location, email address or any other 
attribute to a user a single assignment table can be used such as:

Table name: Attribute Assignment
id: the unique primary key of this assignment
UserID: id of user this attribute is assigned to
AttributeID: id of attribute from above table
Value: the value of the assigned attribute

you don't need to change the existing APIs and procedure calls, as they can be 
rewritten to access the new tables.
re-written to access the new tables.

regards

David

On 13/11/2013 16:04, Dolph Mathews wrote:

I guarantee there's a few things I'm forgetting, but this is my
collection of things we discussed at the summit and determined to be
good things to pursue during the icehouse timeframe. The contents
represent a high level mix of etherpad conclusions and hallway meetings.

https://gist.github.com/dolph/7366031

Corrections and amendments appreciated - thanks!

-Dolph


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Group-based Policy Sub-team Meetings

2013-11-13 Thread Tim Hinrichs
Are there plans for a concrete policy language (e.g. a grammar and semantics) 
to be part of the proposal, or does each plugin to Neutron supply its own 
policy language?

I'm trying to envision how Heat would utilize the policy API.  If there's a 
concrete policy language, then Heat can take an app template, extract the 
networking-relevant policy, and express that policy in the concrete language.  
Then whatever plugin we're using for Neutron can implement that policy in any 
way it sees fit as long as it obeys the policy's semantics (according to the 
language--the semantics Heat intended).

But if there's no concrete policy language, how does Heat know which policy 
statements to send?  It doesn't know which plugin is being used for Neutron.  
So it doesn't even know which strings are valid policy statements.  Or are we 
assuming that Heat knows which plugin is being used for Neutron?  Or am I 
missing something?

Thanks,
Tim

- Original Message -
| From: Kyle Mestery (kmestery) kmest...@cisco.com
| To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
| Sent: Wednesday, November 13, 2013 9:57:54 AM
| Subject: Re: [openstack-dev] [neutron] Group-based Policy Sub-team Meetings
| 
| On Nov 13, 2013, at 10:36 AM, Stephen Wong s3w...@midokura.com
|  wrote:
| 
|  Hi Kyle,
|  
| So no meeting this Thursday?
|  
| I am inclined to skip this week's meeting due to the fact I haven't
| heard many
| replies yet. I think a good starting point for people would be to
| review the
| BP [1] and Design Document [2] and provide feedback where
| appropriate.
| We should start to formalize what the APIs will look like at next
| week's meeting,
| and the Design Document has a first pass at this.
| 
| Thanks,
| Kyle
| 
| [1]
| https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
| [2]
| 
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?usp=sharing
| 
|  Thanks,
|  - Stephen
|  
|  On Wed, Nov 13, 2013 at 7:11 AM, Kyle Mestery (kmestery)
|  kmest...@cisco.com wrote:
|  On Nov 13, 2013, at 8:58 AM, Stein, Manuel (Manuel)
|  manuel.st...@alcatel-lucent.com wrote:
|  
|  Kyle,
|  
|  I'm afraid your meeting vanished from the Meetings page [2] when
|  user amotiki reworked neutron meetings ^.^
|  Is the meeting for Thu 1600 UTC still on?
|  
|  Ack, thanks for the heads up here! I have re-added the meeting. I
|  only heard
|  back from one other person other than yourself, so at this point
|  I'm inclined
|  to wait until next week to hold our first meeting unless I hear
|  back from others.
|  
|  A few heads-up questions (couldn't attend the HK design summit
|  Friday meeting):
|  
|  1) In the summit session Etherpad [3], ML2 implementation
|  mentions insertion of arbitrary metadata to hint to underlying
|  implementation. Is that (a) the plug-ing reporting its
|  policy-bound realization? (b) the user further specifying what
|  should be used? (c) both? Or (d) none of that but just some
|  arbitrary message of the day?
|  
|  I believe that would be (a).
|  
|  2) Would policies _always_ map to the old Neutron entities?
|  E.g. when I have policies in place, can I query related
|  network/port, subnet/address, router elements on the API or are
|  there no equivalents created? Would the logical topology created
|  under the policies be exposed otherwise? for e.g.
|  monitoring/wysiwyg/troubleshoot purposes.
|  
|  No, this is up to the plugin/MechanismDriver implementation.
|  
|  3) Do the chain identifier in your policy rule actions match to
|  Service Chain UUID in Service Insertion, Chaining and API [4]
|  
|  That's one way to look at this, yes.
|  
|  4) Are you going to describe L2 services the way group policies
|  work? I mean, why would I need a LoadBalancer or Firewall
|  instance before I can insert it between two groups when all that
|  load balancing/firewalling requires is nothing but a policy for
|  group communication itself? - regardless the service instance
|  used to carry out the service.
|  
|  These are things I'd like to discuss at the IRC meeting each week.
|  The goal
|  would be to try and come up with some actionable items we can
|  drive towards
|  in both Icehouse-1 and Icehouse-2. Given how close the closing of
|  Icehouse-1
|  is, we need to focus on this very fast if we want to have a
|  measurable impact in
|  Icehouse-1.
|  
|  Thanks,
|  Kyle
|  
|  
|  Best, Manuel
|  
|  [2]
|  
https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_Meeting
|  [3]
|  
https://etherpad.openstack.org/p/Group_Based_Policy_Abstraction_for_Neutron
|  [4]
|  
https://docs.google.com/document/d/1fmCWpCxAN4g5txmCJVmBDt02GYew2kvyRsh0Wl3YF2U/edit#
|  
|  -Original Message-
|  From: Kyle Mestery (kmestery) [mailto:kmest...@cisco.com]
|  Sent: Montag, 11. November 2013 19:41
|  To: OpenStack Development Mailing List (not for usage questions)
|  Subject: [openstack-dev] 

Re: [openstack-dev] [Neutron][LBaaS] LBaaS subteam meeting Thursday, 14, at 14-00 UTC

2013-11-13 Thread Samuel Bercovici
Hi,

I will not be able to join the meeting this time.
For item 1. We are starting to work on SSL termination and L7 based routing.

Regards,
 -Sam.

On Nov 12, 2013, at 9:30 PM, Eugene Nikanorov 
enikano...@mirantis.com wrote:

Hi folks,

LBaaS subteam meeting will be held on Thursday, 14 at 14-00 UTC on 
#openstack-meeting irc channel on freenode, as specified in 
https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting

The agenda is the following:
1. Blueprint list to be proposed for the icehouse-1
2. QA & third-party testing
3. dev resources evaluation
4. Additional features requested by users.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disable/enable services and agent tests

2013-11-13 Thread Sean Dague
On 11/13/2013 02:06 PM, Joe Gordon wrote:
 
 
 
 On Wed, Nov 13, 2013 at 8:18 PM, Sean Dague s...@dague.net wrote:
 
 On 11/12/2013 10:25 PM, Robert Collins wrote:
  We shouldn't really be changing the config of the cloud we're testing
  - that stops us being run against actual prod clouds.
 
  Instead we should have some set of profiles we test against - and then
  run different jobs for different profiles, avoiding the problem
  entirely.
 
 The issue here is we've got an API that's exposed to the admin to do
 this, which means for some folks, it would be nice to test this
 functionality is working. Per our community mantra If it's not tested,
 it's assumed broken. -
 https://twitter.com/russellbryant/status/396889282155008000
 
 That being said, in this case in particular, we should probably be more
 careful about disabling our only nova-compute during the tests, because,
 you know, that might be bad. Today the calls happen so quickly, I think
 we've just avoided the race entirely (the enable_disable test has been
 in the gate since H2). I've honestly never seen a race that we can track
 down to this in the gate.
 
 For now, I'd agree that enable_disable should probably be removed from a
 normal tempest run. I think we've started to find that we have a class
 of APIs that impact our runtime in a dramatic enough way, that we need
 to be really careful about how we tickle them. I'd be interested in
 ideas about how we do that. Remember we also have locking in our bag of
 tricks, which is what we need to do around all the aggregate tests, as
 those are admin tests on global state.
 
 
 Why not just use the lock in this case?

Well... the only issue is we'd actually have to also lock every compute
call that would get to n-cpu, because the resource we are making go away
is n-cpu. Which is a ton of lock metering for one test.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Problems with glance when using token v3

2013-11-13 Thread Telles Nobrega
Hi, I'm trying to start an instance using a token v3 but I'm getting this
error
http://paste.openstack.org/show/52504/
Has anyone experienced this before, or does anyone have any ideas on how to solve it?

-- 
--
Telles Mota Vidal Nobrega
Developer at PulsarOpenStack Project - HP/LSD-UFCG
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-13 Thread Clayton Coleman


- Original Message -
 +1 for ease of iteration till we decide on the model
 (and so not worry about remotability right now :)).
 
 Just to verify that I understand the proposed strawman commit,
 it will use nova/object/* like hierarchy to define Solum specific domain
 objects
 in which specific methods would be exposed whose implementation will use
 sqlalchemy ORM calls internally.

Yeah, base abstract class / interface / field definitions, with a subclass 
implementation that is accessed via a factory/lookup table.  I had started on 
that track but wanted to get the remotability feedback early (I don't like 
mucking around with the sqlalchemy metaclasses early on unless it's something 
we view as critical).  The remotable path would separate the orm model and then 
do the same translation that happens today in nova/ironic with _from_db_object 
where we create two objects, then copy them back and forth a bunch.  The first 
implementation could be converted to the second, I don't see the second being 
something we'd convert to the first since it's more work.  I've also included 
examples of live schema update with the three states (old schema, new schema 
but write-only, new schema and no access to the old schema) for various types 
of changes (rename, split columns, add subtable relationship).

I could do both to compare but figured we could argue out remotability via ML.
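
For readers who haven't looked at the nova/ironic pattern being referenced, a
rough, Solum-agnostic sketch of the second (remotable) shape: a domain object
with declared fields, backed by an ORM model that never escapes the db layer
(all names here are illustrative):

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class ApplicationModel(Base):
    """ORM model: stays inside the db layer."""
    __tablename__ = 'application'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(255))


class Application(object):
    """Domain object handed to the rest of the service (and, later, RPC)."""
    fields = ('id', 'name')

    @classmethod
    def _from_db_object(cls, obj, db_obj):
        # copy field-by-field so no ORM/session state leaks out, which is
        # what keeps the object serializable/remotable later on
        for field in cls.fields:
            setattr(obj, field, getattr(db_obj, field))
        return obj

    @classmethod
    def get_by_id(cls, session, app_id):
        db_obj = session.query(ApplicationModel).filter_by(id=app_id).one()
        return cls._from_db_object(cls(), db_obj)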

 
 Sounds good to me.
 
 As long as we are able to discuss and debate the perceived advantages of the
 object approach
 (that it makes handling versioned data easier, and that it allows using sql
 and non-sql
  backends) we should be good.
 
 Btw, thanks for sending across link to the F1 paper.
 
 Regards,
 - Devdatta
 
 
 -Original Message-
 From: Clayton Coleman ccole...@redhat.com
 Sent: Wednesday, November 13, 2013 12:29pm
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: devdatta kulkarni devdatta.kulka...@rackspace.com
 Subject: Re: [openstack-dev] [Solum] Some initial code copying for
 db/migration
 
 - Original Message -
  
  The abstraction will probably always leak something, but in general it
  comes
  down to isolating a full transaction behind a coarse method.  Was going to
  try to demonstrate that as I went.
  
 
 I've almost got something ready for feedback and review - before I do that I
 wanted to follow up with remotability and it's relative importance:
 
 Is remotability of DB interactions a prime requirement for all OpenStack
 services? Judging by what I've seen so far, it's primarily to ensure that DB
 passwords are isolated from the API code, with a tiny amount of being able
 to scale business logic independently.  Are there other reasons?
 
 For DB password separation, it's never been a huge concern to us
 operationally - do others have strong enough opinions either way to say that
 it continues to be important vs. not?
 
 For the separated scale behavior, at the control plane scale outs we suspect
 we'll have (2-20?), does separating the api and a background layer provide
 benefit?

 The other items that have been mentioned that loosely couple with
 remotability are versioned upgrades, but we can solve those in a combined
 layer as well with an appropriately chosen API abstraction.
 
 If remotability of DB calls is not a short term or medium term objective,
 then I was going to put together a strawman commit that binds domain object
 implementation subclasses that are tied to sqlalchemy ORM, but with the
 granular create/save/update calls called out and enforced.  If it is a
 short/medium objective, we'd use the object field list to copy from the ORM,
 with the duplicate object creation that it entails.  The former is easier to
 iterate on as we develop to decide on the model, the latter is easier to
 make remoteable (can't have the SQL orm state inside the object that is
 being serialized easily).  There's an argument the latter enforces stronger
 code guarantees as well (there's no way for people to use tricky orm magic
 methods added to the object, although that's a bit less of an issue with
 sqlalchemy than other models).
 
 Thoughts?
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] configuration groups using overrides file

2013-11-13 Thread Denis Makogon
I would like to see this functionality work the following way:
1. Create the parameters group.
2. Validate and save it.
3. Send those parameters to the instance in dict representation.
4. Merge them into the main config (see the sketch below).

PS: #4 is database-specific, so it should be handled by the manager.
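
A sketch of what steps 3-4 could look like on the guest for a MySQL-style
config, assuming the overrides arrive as a flat dict; other datastores would
need their own merge strategy, which is why this belongs in the per-datastore
manager:

import ConfigParser


def merge_overrides(config_path, overrides, section='mysqld'):
    parser = ConfigParser.RawConfigParser(allow_no_value=True)
    parser.read(config_path)
    if not parser.has_section(section):
        parser.add_section(section)
    for key, value in overrides.items():
        parser.set(section, key, str(value))
    with open(config_path, 'w') as config_file:
        parser.write(config_file)


merge_overrides('my.cnf', {'max_connections': 200,
                           'innodb_buffer_pool_size': 52428800})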


2013/11/13 Craig Vyvial cp16...@gmail.com

 We need to determine if we should not use a separate file for the
 overrides config as it might not be supported by all dbs trove supports.
 (works well for mysql but might not for cassandra as we discussed in the
 irc channel)

 To support this for all dbs we could setup the jinja templates to add the
 configs to the end of the main config file (my.cnf for mysql example).


 -Craig Vyvial

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Configuration API BP

2013-11-13 Thread Denis Makogon
Craig, thanks for update. I'm going to re-review it tomorrow.

Best regards, Denis Makogon.


2013/11/13 Craig Vyvial cp16...@gmail.com

 I have updated the reviews related to configuration groups.

 Trove - https://review.openstack.org/#/c/53168/
 Python-TroveClient - https://review.openstack.org/#/c/53169/

 Please review at your leisure.

 TODOs:
 * add pagination support for configuration on instances
 * mark configuration groups as deleted instead of doing a hard delete in
 the db.


 Thanks,
 Craig Vyvial


 On Thu, Oct 3, 2013 at 3:48 PM, Craig Vyvial cp16...@gmail.com wrote:

 Oops forgot the link on BP for versioning templates.


 https://blueprints.launchpad.net/trove/+spec/configuration-templates-versionable


 On Thu, Oct 3, 2013 at 3:47 PM, Craig Vyvial cp16...@gmail.com wrote:

 I have been trying to figure out where a call for the default
 configuration should go. I just finished adding a method to get the
 [mysqld] section via an api call but not sure where this should go yet.

 Currently i made it:
 GET - /instance/{id}/configuration

 This kinda only half fits in the path here because it doesn't really
 describe that this is the default configuration on the instance. On the
 other hand, it shows that it is coupled to the instance because we need the
 instance flavor to give what the current values are in the template applied
 to the instance.

 Maybe other options could be:
 GET - /instance/{id}/configuration/default
 GET - /instance/{id}/defaultconfiguration
 GET - /instance/{id}/default-configuration
 GET - /configuration/default/instance/{id}

 Suggestions welcome on the path.

 There is some wonkiness showing this information to the user because of
 the difference in the values used. [1] This example shows that the template
 uses 50M as a value applied and the configuration-group would apply the
 value equivalent to 52428800. I don't think we should worry about this now
 but this could lead to confusion by a user. If they are a power-user type
 then they might understand, compared to an entry-level user.

 [1] https://gist.github.com/cp16net/6816691



  On Thu, Oct 3, 2013 at 2:36 PM, McReynolds, Auston 
 amcreyno...@ebay.com wrote:

 If User X's existing instance is isolated from the change, but there's
 no snapshot/clone/versioning of the current settings on X's instance
 (via the trove database or jinja template), then how will
 GET /configurations/:id return the correct/current settings? Unless
 you're planning on communicating with the guest? There's nothing
 wrong with that approach, it's just not explicitly noted anywhere in
 the blueprint. For some reason I inferred that it would be handled
 like trove security-groups.

 So this is a great point. There are talks about making the templating
 versioned in some form or fashion. ekonetzk(irc) said he would write up a
 BP around versioning.



 On a slightly different note: If the default template will not be
 represented as a default configuration-group from an api standpoint,
 then how will you support the ability for a user to enumerate the list
 of default configuration-group values for a service-type?
 GET /configurations/:id won't be applicable, so will it be
 something like GET /configurations/default?

 see above paragraph.





 From:  Craig Vyvial cp16...@gmail.com
 Reply-To:  OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date:  Thursday, October 3, 2013 11:17 AM
 To:  OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [trove] Configuration API BP


 inline.


 On Wed, Oct 2, 2013 at 1:03 PM, McReynolds, Auston
 amcreyno...@ebay.com wrote:

 Awesome! I only have one follow-up question:

 Regarding #6 & #7, how will the clone behavior work given that the
 defaults are hydrated by a non-versioned jinja template?


 I am not sure I understand clone behavior because there is not really
 a
 concept of cloning here. The jinja template is created and passed in the
 prepare call to the guest to write to the default my.cnf file.

 When a configuration-group is removed the instance will return to the
 default state. This does not exactly act as a clone behavior.



 Scenario Timeline:

 T1) Cloud provider begins with the default jinja template, but changes
the values for properties 'a' and 'b'. (Template Version #1)
 T2) User X deploys a database instance
 T3) Cloud provider decides to update the existing template by modifying
property 'c'. (Template Version #2)
 T4) User Z deploys a database instance

 I think it goes without saying that User Z's instance gets Template
 Version #2 (w/ changes to a & b & c), but does User X?


 No User X does not get the changes. For User X to get the changes a
 maintenance may need to be scheduled.



 If it's a true clone, User X should be isolated from a change in
 defaults, no?


 User X will not see these default changes until a new instance is
 created.



 Come to think about it, this is eerily similar to 

Re: [openstack-dev] Using AD for keystone authentication only

2013-11-13 Thread Dolph Mathews
Yes, that's the preferred approach in Havana: Users and Groups via LDAP,
and everything else via SQL.

On Wednesday, November 13, 2013, Avi L wrote:

 Hi,

 I understand that the LDAP provider in keystone can be used for
 authenticating a user (i.e. validating username and password), and it can also
 authorize the user against roles and tenants. However, this requires AD schema
 modification. Is it possible to use AD only for authentication and then use
 keystone's native database for roles and tenant lookup? The advantage is
 that then we don't need to touch the enterprise AD installation.

 Thanks
 Al



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-11-13 Thread Matt Riedemann



On 11/12/2013 5:04 PM, Chuck Short wrote:




On Tue, Nov 12, 2013 at 4:49 PM, Mark McLoughlin mar...@redhat.com
mailto:mar...@redhat.com wrote:

On Tue, 2013-11-12 at 16:42 -0500, Chuck Short wrote:
 
  Hi
 
 
  On Tue, Nov 12, 2013 at 4:24 PM, Mark McLoughlin
mar...@redhat.com mailto:mar...@redhat.com
  wrote:
  On Tue, 2013-11-12 at 13:11 -0800, Shawn Hartsock wrote:
   Maybe we should have some 60% rule... that is: If you
change
  more than
   half of a test... you should *probably* rewrite the test in
  Mock.
 
 
  A rule needs a reasoning attached to it :)
 
  Why do we want people to use mock?
 
  Is it really for Python3? If so, I assume that means we've
  ruled out the
  python3 port of mox? (Ok by me, but would be good to hear
why)
  And, if
  that's the case, then we should encourage whoever wants to
  port mox
  based tests to mock.
 
 
 
  The upstream maintainer is not going to port mox to python3 so we
have
  a fork of mox called mox3. Ideally, we would drop the usage of mox in
  favour of mock so we don't have to carry a forked mox.

Isn't that the opposite conclusion you came to here:

http://lists.openstack.org/pipermail/openstack-dev/2013-July/012474.html

i.e. using mox3 results in less code churn?

Mark.



Yes, that was my original position, but I thought we agreed in the thread
(further on) that we would use mox3 and then migrate to mock later on.

Regards
chuck


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So it sounds like we're good with using mox for new tests again? Given that 
Chuck got mox3 into global-requirements here:


https://github.com/openstack/requirements/commit/998dda263d7c7881070e3f16e4523ddcd23fc36d

we can stave off the need to transition everything from mox to mock?

I can't seem to find the nova blueprint to convert everything from mox 
to mock, maybe it was obsoleted already.


Anyway, if mox(3) is OK and we don't need to use mock, it seems like we 
could add something to the developer guide here because I think this 
question comes up frequently:


http://docs.openstack.org/developer/nova/devref/unit_tests.html

Does anyone disagree?

BTW, I care about this because I've been keeping in mind the mox/mock 
transition when doing code reviews and giving a -1 when new tests are 
using mox (since I thought that was a no-no now).
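
If we do write something down, a side-by-side sketch along these lines 
(made-up driver and test names, not real nova code) might help illustrate 
the two styles:

import unittest

import mock
import mox


class FakeDriver(object):
    # Hypothetical object under test; stands in for whatever the real
    # test would be stubbing out.
    def get_name(self, instance_id):
        raise NotImplementedError()

driver = FakeDriver()


class MoxStyleTest(unittest.TestCase):
    # mox(3): record the expected calls, replay, then verify.
    def setUp(self):
        self.mox = mox.Mox()
        self.addCleanup(self.mox.UnsetStubs)

    def test_get_name(self):
        self.mox.StubOutWithMock(driver, 'get_name')
        driver.get_name('inst-1').AndReturn('vm-1')
        self.mox.ReplayAll()
        self.assertEqual('vm-1', driver.get_name('inst-1'))
        self.mox.VerifyAll()


class MockStyleTest(unittest.TestCase):
    # mock: patch, exercise the code, then assert on the recorded calls.
    @mock.patch.object(driver, 'get_name', return_value='vm-1')
    def test_get_name(self, mock_get_name):
        self.assertEqual('vm-1', driver.get_name('inst-1'))
        mock_get_name.assert_called_once_with('inst-1')

The mox version fails at VerifyAll() if an expected call never happened; the 
mock version only fails if the assertion is made explicitly, which is the main 
behavioral difference reviewers tend to trip over.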

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS subteam meeting Thursday, 14, at 14-00 UTC

2013-11-13 Thread Eugene Nikanorov
Hi Sam,

Will Avishay be able to join?

Thanks,
Eugene.


On Wed, Nov 13, 2013 at 10:57 PM, Samuel Bercovici samu...@radware.comwrote:

  Hi,

  I will not be able to join the meeting this time.
 For item 1, we are starting to work on SSL termination and L7-based
 routing.

 Regards,
  -Sam.

 On Nov 12, 2013, at 9:30 PM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

   Hi folks,

  LBaaS subteam meeting will be held on Thursday, 14 at 14-00 UTC on
 #openstack-meeting irc channel on freenode, as specified in
 https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting

  The agenda is the following:
  1. Blueprint list to be proposed for the icehouse-1
  2. QA & third-party testing
 3. dev resources evaluation
 4. Additional features requested by users.

  Thanks,
 Eugene.

  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Congress Policy-as-a-Service Team Meeting

2013-11-13 Thread Pierre Ettori
Hi folks!

I hope everyone had a safe trip back from Hong Kong.
Thanks so much for attending the Congress unconference talk on Wednesday
afternoon.
One of the action items was to start holding a weekly IRC meeting to drive
out further definition of Congress and its requirements.
I've tentatively set the meeting for Tuesday at 1800 UTC on the
#openstack-meeting-alt IRC channel:
https://wiki.openstack.org/wiki/Meetings#Congress_Team_Meeting.
Please speak up now if you have any serious conflicts with
this time slot. Otherwise, we'll host our first meeting on Tuesday next
week, the 19th.

Thanks

Pierre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Baremetal]: pxe image on hard disk

2013-11-13 Thread Vijay


Using baremetal provisioning, I was able to provision a physical server with an 
image. However, after I disconnected the server from the OpenStack cluster and 
tried to boot the physical server from its hard disk, it could not find the image. 
Is there a way to persist the PXE image onto the hard disk so that it could be 
used later by the server to boot from its hard disk?
Thanks,
-vj___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] disable/enable services and agent tests

2013-11-13 Thread Joe Gordon
On Wed, Nov 13, 2013 at 11:11 AM, Sean Dague s...@dague.net wrote:

 On 11/13/2013 02:06 PM, Joe Gordon wrote:
 
 
 
  On Wed, Nov 13, 2013 at 8:18 PM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  On 11/12/2013 10:25 PM, Robert Collins wrote:
   We shouldn't really be changing the config of the cloud we're
 testing
   - that stops us being run against actual prod clouds.
  
   Instead we should have some set of profiles we test against - and
 then
   run different jobs for different profiles, avoiding the problem
   entirely.
 
  The issue here is we've got an API that's exposed to the admin to do
  this, which means for some folks, it would be nice to test this
  functionality is working. Per our community mantra "If it's not tested,
  it's assumed broken." -
  https://twitter.com/russellbryant/status/396889282155008000
 
  That being said, in this case in particular, we should probably be
 more
  careful about disabling our only nova-compute during the tests,
 because,
  you know, that might be bad. Today the calls happen so quickly, I
 think
  we've just avoided the race entirely (the enable_disable test has
 been
  in the gate since H2). I've honestly never seen a race that we can
 track
  down to this in the gate.
 
  For now, I'd agree that enable_disable should probably be removed
 from a
  normal tempest run. I think we've started to find that we have a
 class
  of APIs that impact our runtime in a dramatic enough way, that we
 need
  to be really careful about how we tickle them. I'd be interested in
  ideas about how we do that. Remember we also have locking in our bag
 of
  tricks, which is what we need to do around all the aggregate tests,
 as
  those are admin tests on global state.
 
 
  Why not just use the lock in this case?

 Well... the only issue is we'd actually have to also lock every compute
 call that would get to n-cpu, because the resource we are making go away
 is n-cpu. Which is a ton of lock metering for one test.


Good point.



 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread Andrew Laski

On 11/13/13 at 12:01pm, Dolph Mathews wrote:

On Wed, Nov 13, 2013 at 9:47 AM, John Griffith
john.griff...@solidfire.comwrote:


On Wed, Nov 13, 2013 at 7:21 AM, Andrew Laski
andrew.la...@rackspace.com wrote:
 On 11/13/13 at 05:48am, Gary Kotton wrote:

 I recall a few cycles ago having str(uuid.uuid4()) replaced by
 generate_uuid(). There was actually a helper function in neutron (back
when
 it was called quantum) and it was replaced. So now we are going back...
 I am not in favor of this change.


 I'm also not really in favor of it.  Though it is a trivial method, having it
 in oslo implies that this is what uuids should look like across OpenStack
 projects.


And I'm in favor of consistency for uuids across the projects

 because the same parsers and checkers can then be used for input
validation
 or log parsing.



Parsers? UUID's should be treated as opaque strings once they're generated.


Right, I meant log parsers not UUID parsers.  If they're consistently 
formatted it's easier to pick them out.
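
For instance (purely illustrative), something like this only works because
every project emits the same canonical form produced by str(uuid.uuid4()):

import re

# Canonical lowercase, hyphenated form produced by str(uuid.uuid4());
# a mixed-case or unhyphenated id would slip past this pattern.
UUID_RE = re.compile(
    r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')

def extract_uuids(log_line):
    return UUID_RE.findall(log_line)

# Made-up log line, just to show the idea.
print(extract_uuids('Rescheduling instance '
                    '1b4f0e98-2a1c-4e5d-9f3a-7c6b5a4d3e2f'))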








 From: Zhongyue Luo zhongyue@intel.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org

 Date: Wednesday, November 13, 2013 8:07 AM
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org

 Subject: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

 Hi all,

 We had a discussion of the modules that are incubated in Oslo.


 https://etherpad.openstack.org/p/icehouse-oslo-status



 One of the conclusions we came to was to deprecate/remove uuidutils in
 this cycle.

 The first step into this change should be to remove generate_uuid() from
 uuidutils.

 The reason is that 1) generating the UUID string seems trivial enough to
 not need a function and 2) string representation of uuid4 is not what we
 want in all projects.



There's room for long term improvement such as decreasing string length,
increasing entropy, linearly distributed output, etc. I agree that the
current implementation is useless/trivial, but the work to build upon it
should happen in oslo to benefit all projects.




 To address this, a patch is now on gerrit.
 https://review.openstack.org/#/c/56152/



 Each project should directly use the standard uuid module or implement
its
 own helper function to generate uuids if this patch gets in.

 Any thoughts on this change? Thanks.

 --
 Intel SSG/STO/DCST/CIT
 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
 China
 +862161166500


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Trivial or not, people use it and frankly I don't see any value at all
in removing it.  As for some projects wanting a different format of
UUID, that doesn't make a lot of sense to me, but if that's what
somebody wants they should write their own method.  I strongly agree
with others with respect to the comments around code churn.  I see
little value in this.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--

-Dolph



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Weekly Team Meeting

2013-11-13 Thread Clayton Coleman
- Original Message -
 Hello,
 
 Solum meets Tuesdays at 1600 UTC in #openstack-meeting-alt (formerly in
 #solum)
 
 
 Note: Due to the Nov 3rd change in Daylight Savings Time, this now happens at
 08:00 US/Pacific (starts in about 45 minutes from now)
 
 
 Agenda: https://wiki.openstack.org/wiki/Meetings/Solum

In the meeting yesterday there was a mention of a gated source code flow 
(where a push might go to an external system, and the gate system 
github/gerrit/etc would control when the commit goes back to the primary 
repository).  I've added that flow to 
https://wiki.openstack.org/wiki/File:Solum_r01_flow.jpeg as well as a mention 
of the DNS abstraction (a deployed assembly may or may not have an assigned DNS 
identity).

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Major change to tempest.conf.sample coming

2013-11-13 Thread David Kranz
This is a heads up that soon we will be auto-generating the 
tempest.conf.sample from the tempest code that uses oslo.config. 
Following in the footsteps of nova, this should reduce bugs around 
failures to keep the config code and the sample conf file in sync 
manually. So when you add a new item to the config code you will no 
longer have to make a corresponding change to the sample conf file. This 
change, along with some tooling, is in 
https://review.openstack.org/#/c/53870/ which is currently blocked on a 
rebase. Obviously once this merges, any pending patches with changes to 
the conf file will have to be updated to remove the changes to the conf 
file.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Post-summit status, blueprint clean-up

2013-11-13 Thread Devananda van der Veen
Hi all!

I'd like to thank everyone who attended Ironic's design track on Thursday
and added their perspective to the discussions. This was our project's
first summit and I believe that it went very well; we all put our heads
together and collectively came up with some concrete plans, highlighted
here:
https://etherpad.openstack.org/p/IcehouseIronicNextSteps
with some of the more complicated aspects written up in the other
etherpads. Those links are here:
https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Ironic

Several months ago, we tried using Launchpad blueprints to divvy up the
various chunks of work that were part of the initial project roadmap. Some
were either micro-tasks (which is terrible in launchpad) or vaguely defined
(which defeats the purpose, too). I've cleaned up the BP list, and left the
ones that I think are representative of efforts we all agreed upon.

If you proposed or volunteered for a specific feature this cycle, it should
be captured in the etherpad above, with your name on it. I would appreciate
if you could take a few minutes to file a blueprint to track that work.
Some of the topics we discussed already have blueprints filed, and not all
of them require one.

For example, Lucas' name is on the client library bullet point, but
that's more of a knowledge domain, and I don't think it requires a
blueprint. Conversely, I'd like to see a blueprint for serial console
access with a description of how it will be implemented, so that the core
reviewers can compare the implementation to the design we agreed upon. Use
your own discretion, and feel free to ping me if you're wondering whether
that thing you want to work on should have a BP or not.


Thanks!
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Configuration validation

2013-11-13 Thread Oleg Gelbukh
Doug,


On Wed, Nov 13, 2013 at 9:49 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Mon, Nov 11, 2013 at 6:08 PM, Mark McLoughlin mar...@redhat.comwrote:


 One thing worth trying would be to encode the validation rules in the
 config option declaration.

 Some rules could be straightforward, like:

 opts = [
   StrOpt('foo_url',
  validate_rule=cfg.MatchesRegexp('(git|http)://')),
 ]

 but the rule you describe is more complex e.g.

 def validate_proxy_url(conf, group, key, value):
 if not conf.vnc_enabled:
 return
 if conf.ssl_only and value.startswith('http://'):
 raise ValueError('ssl_only option detected, but ...')

 opts = [
   StrOpt('novncproxy_base_url',
  validate_rule=validate_proxy_url),
   ...
 ]

 I'm not sure I love this yet, but it's worth experimenting with.


 One thing to keep in mind with the move to calling register_opt() at
 runtime instead of import time is the service may run for a little while
 before it reaches the point in the code where the option validation code is
 triggered. So I like the idea, but we may want a shortcut for validation.

 We could add a small app to oslo.config that will load the options in the
 same way the conf generator and doc tool will, but then also read the
 configuration file and perform the validation.


We implement a similar approach in Rubick [1]. The collector script generates
a configuration schema from the code [2], while the generator script [3] allows
for different versions of the configuration schema:

[1] https://github.com/MirantisLabs/rubick/tree/master/rubick/schemas
[2]
https://github.com/MirantisLabs/rubick/blob/master/rubick/schemas/collector.py#L189
[3]
https://github.com/MirantisLabs/rubick/blob/master/rubick/schemas/generator.py

I think it would be useful to discuss pros and cons of contributing parts
of this code to oslo.config.

--
Best regards,
Oleg Gelbukh
Mirantis Labs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Straw man to start the incubation / graduation requirements discussion

2013-11-13 Thread Kurt Griffiths
 Also, does it place a requirement that all projects wanting to request
incubation be placed in stackforge? That sounds like a harsh
requirement if we were to reject them.

I think that anything that encourages projects to get used to the
OpenStack development process sooner, rather than later, is a good thing.

If becoming incubated does not require the team to have a track record of
proper formatting of commit messages, using Gerrit for reviews, use of
Launchpad for bugs and blueprints, etc., we are setting ourselves up for a
lot of pain. Being lax on incubation will only encourage teams to code in
a slapdash, "under the radar" fashion. IMO, it also makes it harder for
the TC to evaluate the team and project, since there are fewer artifacts
they can use as data points.

Perhaps it was overkill, but the incubation requirements (somewhat
self-imposed) for the Marconi team included:

1. Code against stackforge, follow the standard review process for patches
via Gerrit and at least 2 x +2s before approving patches, and require
OpenStack-standard commit messages.
2. Use Launchpad and the OpenStack wiki for specs and project management
(blueprints, bug tracking, milestones).
3. Hold regular team meetings in #openstack-meeting(-alt), following the
standard process there (e.g., using meetbot, publishing the agenda on the wiki and
mailing list before each meeting, archiving meeting notes on the wiki)
4. Create a comprehensive unit test suite.
5. Define and enforce a HACKING guide based on standards culled from
upstream OpenStack projects.
6. Demonstrate a pattern of consistent contribution (both code and
design) from multiple organizations/vendors
7. Solicit code reviews from TC members, address feedback to their
satisfaction.
8. Solicit community feedback on the project's features, code, and overall
design--both early and often.

I guess I view the pre-incubation time period first of all as a "practice
period" for the team to get used to developing things *together*,
engendering esprit de corps across company/vendor boundaries, and getting
used to the standard OpenStack tools and processes for those team members
not used to them already.

Second, pre-incubation is a time for getting the implementation up to
snuff so that during incubation you can focus on polishing off rough
edges and integrating with upstream projects. Incubation is probably not
the time to be rewriting massive amounts of code and/or redesigning your
API; otherwise you create a moving target for everyone involved.


 This is looking at raising the bar quite a bit along the way.

+1. I like the idea of an intermediary "emerging" stage to help crystallize
what teams need to do in order to prepare for incubation, and to help
smooth the transition from bootstrapped -> incubated -> integrated.

@kgriffs


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Case sensitivity backend databases

2013-11-13 Thread Mark Washenberger
Resurrecting this thread. . .


I think I'm misunderstanding where we landed on this issue. On the one
hand, it seems like there are tests to assert that uniqueness of names is
case-sensitive. On the other, some folks have identified reasons why they
would want case-insensitivity on uniqueness checks for creating new users.
Still others I think have wisely pointed out that we should probably get
out of the business of creating users.

Trying to incorporate all of these perspectives, I propose the following:

1) We add a configuration option to just the keystone sql identity driver
to force case-sensitivity on uniqueness checks. I'm pretty sure there is a
way to do this in sqlalchemy, basically whatever is equivalent to 'SELECT *
FROM user WHERE BINARY name = %s'. This config option would only affect
create_user and update_user.
2) We always force case-sensitive comparison for get_user_by_name, using a
similar mechanism as above.
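
As a rough sketch (not actual keystone code; assumes a MySQL backend and a
utf8_bin collation), (2) might look something like:

import sqlalchemy as sql
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = 'user'
    id = sql.Column(sql.String(64), primary_key=True)
    name = sql.Column(sql.String(255))


def get_user_by_name(session, name):
    # Force a binary collation for the comparison so 'Admin' and 'admin'
    # are treated as different users, regardless of the column's default
    # (often case-insensitive) collation.
    return session.query(User).filter(
        User.name.collate('utf8_bin') == name).one()

The config option in (1) would then just toggle whether create_user and
update_user use the same collated comparison for their uniqueness checks.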

By focusing on changes to queries we needn't bother with a migration and
can make the behavior a deployer choice.

Is this a bad goal or approach?

IANADBA,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][qa]Tempest tests for Ceilometer

2013-11-13 Thread Doug Hellmann
On Tue, Nov 12, 2013 at 1:52 PM, Nadya Privalova nprival...@mirantis.comwrote:

 Hello, guys!

 I hope everybody has eventually got home after the summit and feeling ok
 :) So it's time to proceed thinking about integration, unit and performance
 testing in Ceilometer. First of all I'd like to appreciate your help in
 composing etherpad
 https://etherpad.openstack.org/p/icehouse-summit-ceilometer-integration-tests.
  If you didn't participate in design session about integration tests but
 have thoughts about it please add your comments.

 Here is a list of ceilometer-regarding cr in tempest (just a reminder):

1. https://review.openstack.org/#/c/39237/
2. https://review.openstack.org/#/c/55276/

 And even more but they are abandoned due to reviewers' inactivity (take a
 look in whiteboard):
 https://blueprints.launchpad.net/tempest/+spec/add-basic-ceilometer-tests. Is 
 there any reason why the CRs were not reviewed?


I'm not sure how many of the ceilometer core reviewers also follow all of
the tempest repository. If you add ceilometer-core to the reviews in
gerrit, we will see them all. After we have established the right paths for
the files related to ceilometer, it will be possible for us to watch just
the relevant subdirectories so you won't have to add us explicitly.

Doug



 I guess the first step to be done is test plan. I've created a doc
 https://etherpad.openstack.org/p/ceilometer-test-plan and plan to start
 working on it. If you have any thoughts about the plan - you are welcome!

 Thanks,
 Nadya



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT software configuration refined after design summit discussions

2013-11-13 Thread Zane Bitter

On 13/11/13 18:29, Thomas Spatzier wrote:

It also doesn't support a list, but I think we can and should fix that
in HOT.

Doesn't DependsOn already support lists? I quickly checked the code and it
seems it does:
https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L288


Oh, cool. Looks like Angus added that last month. Thanks, I missed that 
one :)


- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Definition of template

2013-11-13 Thread John Speidel
I strongly agree that we should try to keep templates as flexible as
possible by allowing some values to be omitted and provided at a later
time.  But, in this case, we are talking about cluster templates without
any node groups being specified.  I think that at a minimum a cluster
template would contain node groups but could omit the node group counts
which could be provided at launch time.  This makes a lot of sense.  But,
in my opinion, without at least specifying the set of node groups in a
cluster template, configuration really wouldn't make sense and therefore
the template would not be of much/any value.


On Wed, Nov 13, 2013 at 10:08 AM, Alexander Ignatov
aigna...@mirantis.comwrote:

 Hi, Andrew

 Agreed with your opinion. Initially, Savanna's templates approach was the
 option 1 you are talking about.
 This was designed at the start of the Savanna 0.2 release cycle. It was also
 documented here: https://wiki.openstack.org/wiki/Savanna/Templates .
 Maybe some points are outdated, but the idea is the same as option 1: a
 user can create a cluster template and doesn't need to specify all fields, for
 example the 'node_groups' field. And these fields, both required and optional,
 can be overwritten in the cluster object even if it contains
 'cluster_template_id'.

 I see you raised this question because of patch
 https://review.openstack.org/#/c/56060/. I think it's just a bug at the
 validation level, not in the API.

 I also agree that we should change the UI part accordingly, at least adding
 the ability for users to override fields set in cluster and node group
 templates during cluster creation.

 Regards,
 Alexander Ignatov



 On 12 Nov 2013, at 23:20, Andrey Lazarev alaza...@mirantis.com wrote:

 Hi all,

 I want to raise the question of what a template is. The answer to this question
 could influence UI, validation and user experience significantly. I see two
 possible answers:
 1. A template is a simplification for object creation. It allows keeping
 common params in one place and not specifying them each time.
 2. A template is a full description of an object. A user should be able to
 create an object from a template without specifying any params.

 As I see it, the current approach is option 1, but the UI is done mostly for
 option 2. This leads to situations where a user creates an incomplete template
 (the backend allows it because of option 1), but can't use it later (the UI
 doesn't allow working with incomplete templates).

 Let's define a common vision on how we will treat templates and document
 this somehow.

 My opinion is that we should proceed with option 1 and change the
 UI accordingly.

 Thanks,
 Andrew
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Gated Source Code Flow (was: Weekly Team Meeting)

2013-11-13 Thread Adrian Otto
Clayton,

On Nov 13, 2013, at 11:41 AM, Clayton Coleman ccole...@redhat.com
 wrote:

 - Original Message -
 Hello,
 
 Solum meets Tuesdays at 1600 UTC in #openstack-meeting-alt (formerly in
 #solum)
 
 
 Note: Due to the Nov 3rd change in Daylight Savings Time, this now happens at
 08:00 US/Pacific (starts in about 45 minutes from now)
 
 
 Agenda: https://wiki.openstack.org/wiki/Meetings/Solum
 
 In the meeting yesterday there was a mention of a gated source code flow 
 (where a push might go to an external system, and the gate system 
 github/gerritt/etc would control when the commit goes back to the primary 
 repository).  I've added that flow to 
 https://wiki.openstack.org/wiki/File:Solum_r01_flow.jpeg as well as a mention 
 of the DNS abstraction (a deployed assembly may or may not have an assigned 
 DNS identity).

Are the two source change notification abstraction flows really different?
Could we express this with two lines converging on Notify Solum API … in a
single flow with two similar entrances?

One key difference that I noticed between those two proposed flows is that the
gate type uses the Solum API to test code, and the push one does not.
Perhaps both should run unit tests in the same way, with an option to bypass
steps for those who don't want them?

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Ironic] [TripleO] scheduling flow with Ironic?

2013-11-13 Thread Devananda van der Veen
On Wed, Nov 13, 2013 at 8:02 AM, Alex Glikson glik...@il.ibm.com wrote:

 Hi,

 Is there a documentation somewhere on the scheduling flow with Ironic?

 The reason I am asking is because we would like to get virtualized and
 bare-metal workloads running in the same cloud (ideally with the ability to
 repurpose physical machines between bare-metal workloads and virtualized
 workloads), and would like to better understand where the gaps are (and
 potentially help bridging them).


Hi Alex,

Baremetal uses an alternative
scheduler, nova.scheduler.baremetal_host_manager.BaremetalHostManager, so
giving that a read may be helpful. It searches the available list of
baremetal nodes for one that matches the CPU, RAM, and disk capacity of the
requested flavor, and compares the node's extra_specs:cpu_arch to that of
the requested image, then consumes 100% of that node's available resources.
Otherwise, I believe the scheduling flow is basically the same: an HTTP
request to n-api, an RPC call to n-scheduler, which selects a node, and
calls to n-conductor & n-cpu to do the work of spawn()ing it.
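
Very roughly, the match boils down to something like this (illustrative
only; approximated field names, not the actual BaremetalHostManager code):

def node_matches(node, flavor, image_cpu_arch):
    # Whole-node fit: the flavor has to fit entirely, and the node's
    # declared architecture has to match the image. On a match the
    # node's resources are consumed 100%: no packing, no overcommit.
    return (node['cpus'] >= flavor['vcpus'] and
            node['memory_mb'] >= flavor['memory_mb'] and
            node['local_gb'] >= flavor['root_gb'] and
            node['extra_specs'].get('cpu_arch') == image_cpu_arch)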

As far as the gaps in running both baremetal and virtual -- I have been
told by several folks that it's possible to run both baremetal and virtual
hypervisors in the same cloud by using separate regions, or separate
host-aggregates, for the simple reason that these require distinct
nova-scheduler processes. A single scheduler, today, can't serve both. I
haven't seen any docs on how to do this, though.

As for moving a workload between them, the TripleO team has discussed this
and, afaik, decided to hold off working on it for now. It would be better
for them to fill in the details here -- my memory may be wrong, or things
may have changed.

Cheers,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Proposed Icehouse release schedule

2013-11-13 Thread Doug Hellmann
On Wed, Nov 13, 2013 at 1:14 PM, Dolph Mathews dolph.math...@gmail.comwrote:


 On Wed, Nov 13, 2013 at 7:58 AM, Russell Bryant rbry...@redhat.comwrote:

 On 11/13/2013 08:15 AM, Thierry Carrez wrote:
  Two options are possible for that off week:
 
  * Week of April 21 - this one is just after release, and some people
  still have a lot to do during that week. On the plus side it's
  conveniently placed next to the Easter weekend.
  * Week of April 28 - that's the middle week, which sounds a bit weird...
  but for me that would be the less active week, so I have a slight
  preference for it.
 
  What would be your preference, if any ? I'm especially interested in
  opinions from people who have a hard time taking some time off (PTLs,
  infra and release management people).

 I think my preference is the second week.  Easter makes the first week
 tempting, but as you point out, realistically there is still going to be
 some amount of looking out for and potentially dealing with release
 aftermath.


 Conversely, there's likely to be less immediate feedback about the release
 since it occurs just before Easter weekend.


Good point. It seems like postponing the off week until a time after
everyone is definitely back from any travel around the holiday will cause
cascading lags in responsiveness.

Doug



 (I don't have a preference between the two weeks... yet)



 The second week everyone really should be able to relax.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Configuration validation

2013-11-13 Thread Doug Hellmann
On Wed, Nov 13, 2013 at 2:19 PM, Oleg Gelbukh ogelb...@mirantis.com wrote:

 Doug,


 On Wed, Nov 13, 2013 at 9:49 PM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




 On Mon, Nov 11, 2013 at 6:08 PM, Mark McLoughlin mar...@redhat.comwrote:


 One thing worth trying would be to encode the validation rules in the
 config option declaration.

 Some rules could be straightforward, like:

 opts = [
   StrOpt('foo_url',
  validate_rule=cfg.MatchesRegexp('(git|http)://')),
 ]

 but the rule you describe is more complex e.g.

 def validate_proxy_url(conf, group, key, value):
 if not conf.vnc_enabled:
 return
 if conf.ssl_only and value.startswith('http://'):
 raise ValueError('ssl_only option detected, but ...')

 opts = [
   StrOpt('novncproxy_base_url',
  validate_rule=validate_proxy_url),
   ...
 ]

 I'm not sure I love this yet, but it's worth experimenting with.


 One thing to keep in mind with the move to calling register_opt() at
 runtime instead of import time is the service may run for a little while
 before it reaches the point in the code where the option validation code is
 triggered. So I like the idea, but we may want a shortcut for validation.

 We could add a small app to oslo.config that will load the options in the
 same way the conf generator and doc tool will, but then also read the
 configuration file and perform the validation.


 We implement similar approach in Rubick [1]. Collector script generates
 configuration schema from code [2], while generator script [3] allows to
 have different versions of configuration schema:

 [1] https://github.com/MirantisLabs/rubick/tree/master/rubick/schemas
 [2]
 https://github.com/MirantisLabs/rubick/blob/master/rubick/schemas/collector.py#L189
 [3]
 https://github.com/MirantisLabs/rubick/blob/master/rubick/schemas/generator.py

 I think it would be useful to discuss pros and cons of contributing parts
 of this code to oslo.config.


There's definitely some overlap with the planned work from
https://etherpad.openstack.org/p/icehouse-oslo-config-import-side-effects and
the existing sample file generator, so it would be good to coordinate.

Doug



 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Group-based Policy Sub-team Meetings

2013-11-13 Thread Kyle Mestery (kmestery)
On Nov 13, 2013, at 12:57 PM, Tim Hinrichs thinri...@vmware.com
 wrote:

 Are there plans for a concrete policy language (e.g. a grammar and semantics) 
 to be part of the proposal, or does each plugin to Neutron supply its own 
 policy language?
 
There are no concrete plans for this right now, though I suspected this would 
come up.

 I'm trying to envision how Heat would utilize the policy API.  If there's a 
 concrete policy language, then Heat can take an app template, extract the 
 networking-relevant policy, and express that policy in the concrete language. 
  Then whatever plugin we're using for Neutron can implement that policy in 
 any way it sees fit as long as it obeys the policy's semantics (according to 
 the language--the semantics Heat intended).
 
 But if there's no concrete policy language, how does Heat know which policy 
 statements to send?  It doesn't know which plugin is being used for Neutron.  
 So it doesn't even know which strings are valid policy statements.  Or are we 
 assuming that Heat knows which plugin is being used for Neutron?  Or am I 
 missing something?
 
The APIs alone provide a mechanism for utilizing the new constructs, but the 
specific policy intent is left to the underlying plugin. This would be a good 
thing to discuss at our meeting next week.

Thanks,
Kyle

 Thanks,
 Tim
 
 - Original Message -
 | From: Kyle Mestery (kmestery) kmest...@cisco.com
 | To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 | Sent: Wednesday, November 13, 2013 9:57:54 AM
 | Subject: Re: [openstack-dev] [neutron] Group-based Policy Sub-team Meetings
 | 
 | On Nov 13, 2013, at 10:36 AM, Stephen Wong s3w...@midokura.com
 |  wrote:
 | 
 |  Hi Kyle,
 |  
 | So no meeting this Thursday?
 |  
 | I am inclined to skip this week's meeting due to the fact I haven't
 | heard many
 | replies yet. I think a good starting point for people would be to
 | review the
 | BP [1] and Design Document [2] and provide feedback where
 | appropriate.
 | We should start to formalize what the APIs will look like at next
 | week's meeting,
 | and the Design Document has a first pass at this.
 | 
 | Thanks,
 | Kyle
 | 
 | [1]
 | 
 https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
 | [2]
 | 
 https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?usp=sharing
 | 
 |  Thanks,
 |  - Stephen
 |  
 |  On Wed, Nov 13, 2013 at 7:11 AM, Kyle Mestery (kmestery)
 |  kmest...@cisco.com wrote:
 |  On Nov 13, 2013, at 8:58 AM, Stein, Manuel (Manuel)
 |  manuel.st...@alcatel-lucent.com wrote:
 |  
 |  Kyle,
 |  
 |  I'm afraid your meeting vanished from the Meetings page [2] when
 |  user amotiki reworked neutron meetings ^.^
 |  Is the meeting for Thu 1600 UTC still on?
 |  
 |  Ack, thanks for the heads up here! I have re-added the meeting. I
 |  only heard
 |  back from one other person other than yourself, so at this point
 |  I'm inclined
 |  to wait until next week to hold our first meeting unless I hear
 |  back from others.
 |  
 |  A few heads-up questions (couldn't attend the HK design summit
 |  Friday meeting):
 |  
 |  1) In the summit session Etherpad [3], ML2 implementation
 |  mentions insertion of arbitrary metadata to hint to underlying
 |  implementation. Is that (a) the plug-ing reporting its
 |  policy-bound realization? (b) the user further specifying what
 |  should be used? (c) both? Or (d) none of that but just some
 |  arbitrary message of the day?
 |  
 |  I believe that would be (a).
 |  
 |  2) Would policies _always_ map to the old Neutron entities?
 |  E.g. when I have policies in place, can I query related
 |  network/port, subnet/address, router elements on the API or are
 |  there no equivalents created? Would the logical topology created
 |  under the policies be exposed otherwise? for e.g.
 |  monitoring/wysiwyg/troubleshoot purposes.
 |  
 |  No, this is up to the plugin/MechanismDriver implementation.
 |  
 |  3) Do the chain identifier in your policy rule actions match to
 |  Service Chain UUID in Service Insertion, Chaining and API [4]
 |  
 |  That's one way to look at this, yes.
 |  
 |  4) Are you going to describe L2 services the way group policies
 |  work? I mean, why would I need a LoadBalancer or Firewall
 |  instance before I can insert it between two groups when all that
 |  load balancing/firewalling requires is nothing but a policy for
 |  group communication itself? - regardless the service instance
 |  used to carry out the service.
 |  
 |  These are things I'd like to discuss at the IRC meeting each week.
 |  The goal
 |  would be to try and come up with some actionable items we can
 |  drive towards
 |  in both Icehouse-1 and Icehouse-2. Given how close the closing of
 |  Icehouse-1
 |  is, we need to focus on this very fast if we want to have a
 |  measurable impact in
 |  Icehouse-1.
 |  
 |  Thanks,
 |  Kyle
 |  
 |  
 |  Best, Manuel
 |  
 |  [2]
 

Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread Doug Hellmann
On Wed, Nov 13, 2013 at 1:07 AM, Zhongyue Luo zhongyue@intel.comwrote:

 Hi all,

 We had a discussion of the modules that are incubated in Oslo.

 https://etherpad.openstack.org/p/icehouse-oslo-status

 One of the conclusions we came to was to deprecate/remove uuidutils in
 this cycle.

 The first step into this change should be to remove generate_uuid() from
 uuidutils.

 The reason is that 1) generating the UUID string seems trivial enough to
 not need a function and 2) string representation of uuid4 is not what we
 want in all projects.

 To address this, a patch is now on gerrit.
 https://review.openstack.org/#/c/56152/

 Each project should directly use the standard uuid module or implement its
 own helper function to generate uuids if this patch gets in.

 Any thoughts on this change? Thanks.


Unfortunately it looks like that change went through before I caught up on
email. Shouldn't we have removed its use in the downstream projects (at
least integrated projects) before removing it from Oslo?

Doug




 --
 *Intel SSG/STO/DCST/CIT*
 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
 China
 +862161166500

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Gated Source Code Flow (was: Weekly Team Meeting)

2013-11-13 Thread Georgy Okrokvertskhov
Hi Adrian,

It looks like that the final stage on all pictures is a Deploy stage.
What kind of process do you have in mind for CI\CD?
When you use gate system it is typical to have multiple gates. The usual
ones are: code review\approved, smoke test \ unit test pass,
integration test pass, performance\scalability test pass, accepted for
production. Each gate might be a quite complex process for the large
application including multiple deployment to different stage environments.
Also it is typical to have one build and then promote it between different
stages.

Will Solum API support flexible CI\CD flows where user can define specific
stages and gates and actions for each of them?

Thanks
Georgy


On Wed, Nov 13, 2013 at 12:27 PM, Adrian Otto adrian.o...@rackspace.comwrote:

 Clayton,

 On Nov 13, 2013, at 11:41 AM, Clayton Coleman ccole...@redhat.com
  wrote:

  - Original Message -
  Hello,
 
  Solum meets Tuesdays at 1600 UTC in #openstack-meeting-alt (formerly in
  #solum)
 
 
  Note: Due to the Nov 3rd change in Daylight Savings Time, this now
 happens at
  08:00 US/Pacific (starts in about 45 minutes from now)
 
 
  Agenda: https://wiki.openstack.org/wiki/Meetings/Solum
 
  In the meeting yesterday there was a mention of a gated source code
 flow (where a push might go to an external system, and the gate system
 github/gerritt/etc would control when the commit goes back to the primary
 repository).  I've added that flow to
 https://wiki.openstack.org/wiki/File:Solum_r01_flow.jpeg as well as a
 mention of the DNS abstraction (a deployed assembly may or may not have an
 assigned DNS identity).

 Are the two source change notification abstraction flows really
 different? Could we express this with two lines converging on Notify Solum
 API … in a single flow with two similar entrances.

 One key difference that I noticed between those two proposed flows are
 that the gate type uses the Solum API to test code, and the push one
 does not. Perhaps both should run unit tests in the same way with an option
 to bypass steps for those who don't want them?

 Adrian
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Configuration validation

2013-11-13 Thread Lorin Hochstein
On Wed, Nov 13, 2013 at 3:38 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Wed, Nov 13, 2013 at 2:19 PM, Oleg Gelbukh ogelb...@mirantis.comwrote:

 Doug,


 On Wed, Nov 13, 2013 at 9:49 PM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




  On Mon, Nov 11, 2013 at 6:08 PM, Mark McLoughlin mar...@redhat.comwrote:


 One thing worth trying would be to encode the validation rules in the
 config option declaration.

 Some rules could be straightforward, like:

 opts = [
   StrOpt('foo_url',
  validate_rule=cfg.MatchesRegexp('(git|http)://')),
 ]

 but the rule you describe is more complex e.g.

 def validate_proxy_url(conf, group, key, value):
 if not conf.vnc_enabled:
 return
 if conf.ssl_only and value.startswith('http://'):
 raise ValueError('ssl_only option detected, but ...')

 opts = [
   StrOpt('novncproxy_base_url',
  validate_rule=validate_proxy_url),
   ...
 ]

 I'm not sure I love this yet, but it's worth experimenting with.


 One thing to keep in mind with the move to calling register_opt() at
 runtime instead of import time is the service may run for a little while
 before it reaches the point in the code where the option validation code is
 triggered. So I like the idea, but we may want a shortcut for validation.

 We could add a small app to oslo.config that will load the options in
 the same way the conf generator and doc tool will, but then also read the
 configuration file and perform the validation.


 We implement similar approach in Rubick [1]. Collector script generates
 configuration schema from code [2], while generator script [3] allows to
 have different versions of configuration schema:

 [1] https://github.com/MirantisLabs/rubick/tree/master/rubick/schemas
 [2]
 https://github.com/MirantisLabs/rubick/blob/master/rubick/schemas/collector.py#L189
 [3]
 https://github.com/MirantisLabs/rubick/blob/master/rubick/schemas/generator.py

 I think it would be useful to discuss pros and cons of contributing parts
 of this code to oslo.config.


 There's definitely some overlap with the planned work from
 https://etherpad.openstack.org/p/icehouse-oslo-config-import-side-effects and 
 the existing sample file generator, so it would be good to coordinate.


Also worth keeping in mind that we now generate documentation from config
files http://docs.openstack.org/havana/config-reference/content/. If
we're going to embed typechecking-like constraints into the options, I'd
like to have a way to transform these constraints into
(non-Python-programmer) human-readable text when generating the config
guide. Since that's tricky to do for arbitrary Python code, we may want to
come up with a scheme that can be used both for validation and that can be
easily transformed into a natural-language description.
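
For instance (hypothetical; building on the validate_rule idea upthread
rather than anything oslo.config provides today), each constraint could
carry its own plain-English description:

import re


class MatchesRegexp(object):
    """Hypothetical constraint usable for validation and for docs."""

    def __init__(self, pattern, description):
        self.pattern = re.compile(pattern)
        self.description = description

    def __call__(self, conf, group, key, value):
        # Called at validation time, mirroring the validate_rule
        # signature sketched earlier in this thread.
        if not self.pattern.match(value):
            raise ValueError('%s.%s %s' % (group, key, self.description))

    def describe(self):
        # Used by the config-reference generator instead of showing a
        # raw regular expression to non-programmer readers.
        return self.description


foo_url_rule = MatchesRegexp(r'(git|http)://',
                             'must be a git:// or http:// URL')

The sample-config and doc generators could then emit rule.describe() next to
the option's help text.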


Lorin

-- 
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread Joe Gordon
On Wed, Nov 13, 2013 at 12:44 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:




 On Wed, Nov 13, 2013 at 1:07 AM, Zhongyue Luo zhongyue@intel.comwrote:

 Hi all,

 We had a discussion of the modules that are incubated in Oslo.

 https://etherpad.openstack.org/p/icehouse-oslo-status

 One of the conclusions we came to was to deprecate/remove uuidutils in
 this cycle.

 The first step into this change should be to remove generate_uuid() from
 uuidutils.

 The reason is that 1) generating the UUID string seems trivial enough to
 not need a function and 2) string representation of uuid4 is not what we
 want in all projects.

 To address this, a patch is now on gerrit.
 https://review.openstack.org/#/c/56152/

 Each project should directly use the standard uuid module or implement
 its own helper function to generate uuids if this patch gets in.

 Any thoughts on this change? Thanks.


 Unfortunately it looks like that change went through before I caught up on
 email. Shouldn't we have removed its use in the downstream projects (at
 least integrated projects) before removing it from Oslo?


++, good thing gerrit has a revert button.


 Doug




 --
 *Intel SSG/STO/DCST/CIT*
 880 Zixing Road, Zizhu Science Park, Minhang District, 200241, Shanghai,
 China
 +862161166500

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Gated Source Code Flow (was: Weekly Team Meeting)

2013-11-13 Thread Clayton Coleman


- Original Message -
 Clayton,
 
 On Nov 13, 2013, at 11:41 AM, Clayton Coleman ccole...@redhat.com
  wrote:
 
  - Original Message -
  Hello,
  
  Solum meets Tuesdays at 1600 UTC in #openstack-meeting-alt (formerly in
  #solum)
  
  
  Note: Due to the Nov 3rd change in Daylight Savings Time, this now happens
  at
  08:00 US/Pacific (starts in about 45 minutes from now)
  
  
  Agenda: https://wiki.openstack.org/wiki/Meetings/Solum
  
  In the meeting yesterday there was a mention of a gated source code flow
  (where a push might go to an external system, and the gate system
  github/gerritt/etc would control when the commit goes back to the primary
  repository).  I've added that flow to
  https://wiki.openstack.org/wiki/File:Solum_r01_flow.jpeg as well as a
  mention of the DNS abstraction (a deployed assembly may or may not have an
  assigned DNS identity).
 
 Are the two source change notification abstraction flows really different?
 Could we express this with two lines converging on Notify Solum API … in a
 single flow with two similar entrances.

I think you hit on something fundamental - I reswizzled the diagram to show the 
gate flow moving into the normal source push flow after tests pass. 
https://wiki.openstack.org/w/images/7/72/Solum_r01_flow.jpeg

 
 One key difference that I noticed between those two proposed flows are that
 the gate type uses the Solum API to test code, and the push one does
 not. Perhaps both should run unit tests in the same way with an option to
 bypass steps for those who don't want them?

Yeah - this also highlights that an input to the build flow might be the 
desired outcome - possibly no deploy, deploy, deploy as temporary assembly 
X, or deploy as temporary assembly X without tests. There may be consumers 
who wish to make Solum the end result of a flow, but if the tools and 
abstractions Solum offers for build and deploy are compelling, we should expect 
to want to let external systems utilize Solum as much as possible.  Another 
point of discussion is whether test is part of both build and deploy, or 
just part of deploy.  If it's part of both, perhaps deploy and build need 
to have similar ways of letting someone run their tests at the right 
opportunities.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using AD for keystone authentication only

2013-11-13 Thread Avi L
Oh, OK. So in this case, how does the Active Directory user get an ID, and
how do you map the user to a role? Is there any example you can point me
to?


On Wed, Nov 13, 2013 at 11:24 AM, Dolph Mathews dolph.math...@gmail.comwrote:

 Yes, that's the preferred approach in Havana: Users and Groups via LDAP,
 and everything else via SQL.


 On Wednesday, November 13, 2013, Avi L wrote:

 Hi,

 I understand that the LDAP provider in keystone can be used for
 authenticating a user (i.e validate username and password) , and it also
 authorize it against roles and tenant. However this requires AD schema
 modification. Is it possible to use AD only for authentication and then use
 keystone's native database for roles and tenant lookup? The advantage is
 that then we don't need to touch the enterprise AD installation.

 Thanks
 Al



 --

 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread Flavio Percoco

On 13/11/13 12:49 -0800, Joe Gordon wrote:




On Wed, Nov 13, 2013 at 12:44 PM, Doug Hellmann doug.hellm...@dreamhost.com
wrote:




   On Wed, Nov 13, 2013 at 1:07 AM, Zhongyue Luo zhongyue@intel.com
   wrote:

   Hi all,

   We had a discussion of the modules that are incubated in Oslo.

   https://etherpad.openstack.org/p/icehouse-oslo-status

   One of the conclusions we came to was to deprecate/remove uuidutils in
   this cycle.

   The first step into this change should be to remove generate_uuid()
   from uuidutils.

   The reason is that 1) generating the UUID string seems trivial enough
   to not need a function and 2) string representation of uuid4 is not
   what we want in all projects.

   To address this, a patch is now on gerrit. https://review.openstack.org
   /#/c/56152/

   Each project should directly use the standard uuid module or implement
   its own helper function to generate uuids if this patch gets in.

   Any thoughts on this change? Thanks.


   Unfortunately it looks like that change went through before I caught up on
   email. Shouldn't we have removed its use in the downstream projects (at
   least integrated projects) before removing it from Oslo?
  



++, good think gerrit has a revert button.



Yeah, plus we should've let this discussion warm up a little bit more.
My bad there.

Revert patch here:

https://review.openstack.org/#/c/56286/


Cheers,
FF




   Doug


  
  
   --

   Intel SSG/STO/DCST/CIT
   880 Zixing Road, Zizhu Science Park, Minhang District, 200241,
   Shanghai, China
   +862161166500
  



--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to recognize indirect contributions to our code base

2013-11-13 Thread Stangel, Dan
On Mon, 2013-11-11 at 15:20 +0100, Nicolas Barcet wrote:

 To enable this, we are proposing that the commit text of a patch may
 include a 
sponsored-by: sponsorname
 line which could be used by various tools to report on these commits.
 Sponsored-by should not be used to report the name of the company the
 contributor is already affiliated with.
 
 We would appreciate seeing your comments on the subject and eventually
 getting your approval for its use.
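
As a purely hypothetical illustration (the change description, sponsor name,
and Change-Id below are all invented), a commit message carrying the proposed
line might read:

    Fix frobnicator quota handling

    One or two sentences describing the change.

    Sponsored-by: Example Corp
    Change-Id: I0000000000000000000000000000000000000000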

Rather than including this sponsor information directly in commit logs,
the metrics tools could attribute specific changesets to a different
organization.  This would override the normal attribution that the
metrics tools would otherwise make based solely on the committer's own
affiliation.

gitdm already special-cases some commits. For example, we do this to
completely omit changesets that should not be counted towards
contribution metrics, such as automated commits from Jenkins or
translations [1].   Stackalytics has a similar mechanism [2], and
activity.openstack.org (metrics-grimoire) may also provide similar
functionality.

This approach moves the recognition completely out of band from the git
commit, and closer to where (presumably) it will be recognized by the
community and the sponsor.  Yet it would allow special attributions to
be transparently documented and maintained by, and within, the
community.

Dan

[1]
https://github.com/openstack-infra/gitdm/blob/master/openstack-config/grizzly - 
the commit IDs listed in parentheses are omitted from metrics totals
[2]
https://github.com/stackforge/stackalytics/blob/master/etc/corrections.json - 
stackalytics provides for finer-grained corrections to specific changesets.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing generate_uuid() from uuidutils

2013-11-13 Thread Eric Windisch
 Each project should directly use the standard uuid module or implement its
 own helper function to generate uuids if this patch gets in.

 Any thoughts on this change? Thanks.


 Unfortunately it looks like that change went through before I caught up on
 email. Shouldn't we have removed its use in the downstream projects (at
 least integrated projects) before removing it from Oslo?

I don't think it is a problem to remove the code in oslo first, as
long as no other oslo-incubator code uses it. Projects don't have to
sync the code, and could always revert the sync should they do so.

However, like Mark, I'm inclined to consider the value of
is_uuid_like. While undoubtedly useful, is one method sufficient to
warrant creating a new top-level module? Waiting for it to hit the
standard library will take quite a long time...
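
For context, is_uuid_like() boils down to a validity check along these lines
(an approximation, not a verbatim copy of the oslo code):

    import uuid

    def is_uuid_like(val):
        # True if val parses as a UUID; uuid.UUID() also accepts common
        # variants such as hex strings without dashes.
        try:
            uuid.UUID(val)
            return True
        except (TypeError, ValueError, AttributeError):
            return False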

There are other components of oslo that are terse and questionable as
standalone libraries. For these, it might make sense to aggressively
consider rolling some modules together.

One clear example would be log.py and log_handler.py; another would be
periodic_task.py and loopingcall.py.

-- 
Regards,
Eric Windisch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Baremetal]: pxe image on hard disk

2013-11-13 Thread Robert Collins
Sure - in the image you can install a boot block and boot loader -
that should work today.

Installing boot loaders is an interesting problem because it's an
interaction between the OS and the hardware: the right loader to use
depends on the hardware (BIOS/UEFI) and OS (32-bit/64-bit...). I don't
think that we have a good consensus yet on how to divide up the work
to preserve abstraction layers and handle this super nicely.

-Rob

On 14 November 2013 08:34, Vijay vija...@yahoo.com wrote:

 Using baremetal provisioning, I was able to provision a physical server with
 an image. However, after I disconnected the server from the OpenStack cluster
 and tried to boot the physical server from its hard disk, it could not find the
 image. Is there a way to persist the PXE image onto the hard disk so that
 the server can use it later to boot from its hard disk?
 Thanks,
 -vj





-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] [qa][nova] The document for the changes from Nova v2 api to v3

2013-11-13 Thread David Kranz

On 11/13/2013 08:30 AM, Alex Xu wrote:

Hi, guys

This is the document for the changes from Nova v2 api to v3:
https://wiki.openstack.org/wiki/NovaAPIv2tov3
I would appreciate it if anyone could help review it.

Another problem comes up: how to keep the doc updated. Can we ask people who
change something in the v3 API to update the doc accordingly? I think that is
one way to resolve it.


Thanks
Alex



___
openstack-qa mailing list
openstack...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-qa

Thanks, this is great. I fixed a bug in the os-services section. BTW, the
openstack...@lists.openstack.org list is obsolete; openstack-dev with a
subject starting with [qa] is the current QA list. About keeping the doc
updated, I think this will have to be heavily socialized in the nova team.
The initial review should be done by those reviewing the tempest v3 API
changes. That is how I found the os-services bug.


 -David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] question about DB migration difficulty

2013-11-13 Thread Michael Still
On Thu, Nov 14, 2013 at 5:18 AM, Mike Spreitzer mspre...@us.ibm.com wrote:
 This is a follow-up to the design summit discussion about DB migrations.
 There was concern about the undo-ability of some migrations.  The specific
 example cited was removal of a column.  Could that be done with the
 following three migrations, each undo-able?  First, change the code to keep
 writing the column but no longer read the column.  Second migration changes
 the code to neither read nor write the column.  Third migration physically
 removes the column.
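
To make the third step concrete, the physical removal could be a migration
along the lines of the sketch below (written in Alembic terms for brevity;
nova's migrations used sqlalchemy-migrate at the time, and the table and column
names here are made up):

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # Step three: drop the column that nothing reads or writes anymore.
        op.drop_column('instances', 'obsolete_column')

    def downgrade():
        # Undo-able in the schema sense: the column comes back, its data does not.
        op.add_column('instances',
                      sa.Column('obsolete_column', sa.String(255), nullable=True))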

This was actually discussed in the session as an example of how other
projects handle these problems. Our concerns (IIRC) were that it would
take even more patches to land, and each of those patches is quite
hard to land in nova these days. Additionally, it increases the
complexity of our code a lot, because we have to handle databases in
all possible intermediate states, given how our continuous deployment model
works.

Objects bring us closer to being able to do this, but we need objects
finished first.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Create a source repo for the API specification?

2013-11-13 Thread Clayton Coleman


- Original Message -
 
 I like this idea. I'd also propose that the format of the specification be
 something machine-readable, such as API-Blueprint (a simple subset of
 markdown, apiblueprint.org, also what apiary uses, if you've ever seen
 that) or RAML (a more structured YAML-based syntax, raml.org).
 API-Blueprint is closer to what the keystone document uses.
 
 Making the documentation machine-readable means that it's much easier to
 verify that, in practice, the implementation of an API matches its
 specification and documentation, which is a problem that plagues many
 OpenStack projects right now.
 

At the meeting today we discussed putting together an initial seed based around
the proposed API, modeled mostly on the identity-api work. Are folks OK with
api-blueprint [1]?

[1] http://apiblueprint.org
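
For anyone who has not seen the format, a minimal API Blueprint fragment looks
roughly like this (resource and endpoint names are purely illustrative, not the
proposed Solum API):

    FORMAT: 1A

    # Example API

    ## Widgets [/v1/widgets]

    ### List Widgets [GET]
    + Response 200 (application/json)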

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] team meeting Friday 15 November @ 14:00 UTC

2013-11-13 Thread Doug Hellmann
The Oslo team will meet in #openstack-meeting this Friday at 1400 UTC to
discuss the i18n work and planning based on the outcome of the summit. See
https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting for a
more detailed agenda.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova SSL Apache2 Question

2013-11-13 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
I finally found a set of web pages by Andy Mc with a working set of configuration
files for the major OpenStack services:
http://andymc-stack.co.uk/2013/07/apache2-mod_wsgi-openstack-pt-2-nova-api-os-compute-nova-api-ec2/
I skipped ceilometer and have the rest of the services, except quantum, working
with self-signed certificates on a Grizzly-3 OpenStack instance. Now I am stuck
trying to figure out how to get quantum to accept self-signed certificates.

My goal is to harden my Grizzly-3 OpenStack instance using SSL and self-signed 
certificates. Later I will do the same for Havana bits and use real/valid 
certificates.

Mark
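
For what it's worth, the *.wsgi files these recipes revolve around are small; a
rough sketch of their general shape is below. The paste config path and pipeline
name are illustrative, and a real Nova deployment also has to initialize nova's
configuration and logging before loading the app:

    # Loaded by mod_wsgi, which looks for a module-level callable named
    # "application"; paste.deploy builds the WSGI app from the paste config.
    from paste import deploy

    application = deploy.loadapp('config:/etc/nova/api-paste.ini',
                                 name='osapi_compute')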

 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com]
 Sent: Wednesday, November 13, 2013 10:27 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Nova SSL Apache2 Question
 
 On 11/06/2013 07:20 PM, Miller, Mark M (EB SW Cloud - RD - Corvallis)
 wrote:
  Hello,
 
  I am trying to front all of the Grizzly OpenStack services with
  Apache2 in order to enable SSL. I've got Horizon and Keystone working
  but am struggling with Nova. The only documentation I have been able
  to find is at URL
  http://www.rackspace.com/blog/enabling-ssl-for-the-openstack-api/
 
  However, the Nova sample osapi.wsgi and osapi files are not working
 with Grizzly. Does anyone have a set of these files for Nova?
 
  Thanks,
 
  Mark Miller
 
 
 This was on my To Do list, but for Icehouse.  What are you seeing as the
 failure?
 
 The original article was written a while ago, so I am not surprised things 
 have
  changed out from underneath it.  In particular, there are places where
  Eventlet code gets monkey patched in that you won't want when running under
  HTTPD. In Keystone, we isolated the monkey patching into a single function,
  to ensure the same logic was done both when starting the app and in the unit
  tests. I suspect we'll need to do something comparable in Nova.
 
 There are also potential SELinux issues.  I'd run with SELinux in Permissive
 mode until you get things sorted.
 
 
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

