Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Flavio Percoco

On 05/12/13 15:36 -0800, Mark Washenberger wrote:

On Thu, Dec 5, 2013 at 3:11 PM, Randall Burt randall.b...@rackspace.com
wrote:
   On Dec 5, 2013, at 4:45 PM, Steve Baker sba...@redhat.com
wrote:
   On 12/06/2013 10:46 AM, Mark Washenberger wrote:
   On Thu, Dec 5, 2013 at 1:05 PM, Vishvananda Ishaya 
   vishvana...@gmail.com wrote:


[snip]


   This is not completely correct. Glance already supports
   something akin to templates. You can create an image with
   metadata properties that specifies a complex block device
   mapping which would allow for multiple volumes and images to
   connected to the vm at boot time. This is functionally a
   template for a single vm.

   Glance is pretty useless if it is just an image storage service,
   we already have other places that can store bits (swift,
   cinder). It is much more valuable as a searchable repository of
   bootable templates. I don't see any reason why this idea
   couldn't be extended to include more complex templates that
   could include more than one vm.


   FWIW I agree with all of this. I think Glance's real role in
   OpenStack is as a helper and optionally as a gatekeeper for the
   category of stuff Nova can boot. So any parameter that affects
   what Nova is going to boot should in my view be something Glance
   can be aware of. This list of parameters *could* grow to include
   multiple device images, attached volumes, and other things that
   currently live in the realm of flavors such as extra hardware
   requirements and networking aspects.

   Just so things don't go too crazy, I'll add that since Nova is
   generally focused on provisioning individual VMs, anything above
   the level of an individual VM should be out of scope for Glance.

   I think Glance should alter its approach to be less generally
   agnostic about the contents of the objects it hosts. Right now, we
   are just starting to do this with images, as we slowly advance on
   offering server side format conversion. We could find similar use
   cases for single vm templates.

   The average heat template would provision more than one VM, plus any
   number of other cloud resources.

   An image is required to provision a single nova server;
   a template is required to provision a single heat stack.

   Hopefully the above single vm policy could be reworded to be agnostic
   to the service which consumes the object that glance is storing.

   To add to this, is it that Glance wants to be *more* integrated and geared
   towards vm or container images or that Glance wishes to have more intimate
   knowledge of the things it's cataloging *regardless of what those things
   actually might be*? The reason I ask is that Glance supporting only single
   vm templates when Heat orchestrates the entire (or almost entire) spectrum
   of core and integrated projects means that its suitability as a candidate
   for a template repository plummets quite a bit.



Yes, I missed the boat a little bit there. I agree Glance could operate as a
repo for these kinds of templates. I don't know about expanding much further
beyond the Nova / Heat stack. But within that stack, I think the use cases are
largely the same.

It seems like heat templates likely have built-in relationships with vm
templates / images that would be really nice to track closely in the Glance data
model--for example if you wanted something like a notification when deleting an
image would invalidate a template you've created. Another advantage is the
sharing model--Glance is still aiming to become something of an image
marketplace, and that kind of sharing is something that I see being very useful
for Heat as well.

Does this response sound more in line? Sorry I'm still catching up on the
thread from before it was tagged with [Glance].


FWIW, during last week's meeting, we discussed a bit about image
templates and what they should do. The discussion was oriented to
having support for things like OVF. This sounds like a great
opportunity to expand that concept to something that won't be useful
just for nova but for Heat as well. As Mark mentioned, it's more about
making glance aware of the difference between a template and an
image and their types.

None of this is written in stone. The discussion came out at the
summit and we brought it up at one of our meetings. So, let's make sure
we can cover the required needs.

There's still no blueprint but I created this[0] etherpad where we
could start writing the needs for Heat and other templates.

I agree with Mark. I don't think Glance should expand much further
beyond the Nova / Heat stack and the like.

[0] https://etherpad.openstack.org/p/glance-templates


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Jaromir Coufal


On 2013/06/12 00:33, Robert Collins wrote:

There isn't a plan yet, it's just discussion so far. I don't have a
strong feeling of consensus. Lets discuss it more real-time at the
coming TripleO meeting; and I suggest that the Horizon meeting should
also do that, and we can loop back to email with any new ideas or
concerns that raises.

-Rob


+1, I added this topic to both agendas.

-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Ladislav Smola

On 12/05/2013 03:01 PM, James Slagle wrote:

On Wed, Dec 4, 2013 at 2:10 PM, Robert Collins
robe...@robertcollins.net wrote:

On 5 December 2013 06:55, James Slagle james.sla...@gmail.com wrote:

On Wed, Dec 4, 2013 at 2:12 AM, Robert Collins

Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
to TripleO and OpenStack, but I don't think they are tracking /
engaging in the code review discussions enough to stay in -core: I'd
be delighted if they want to rejoin as core - as we discussed last
time, after a shorter than usual ramp up period if they get stuck in.

What's the shorter than usual ramp up period?

You know, we haven't actually put numbers on it. But I'd be
comfortable with a few weeks of sustained involvement.

+1.  Sounds reasonable.


In general, I agree with your points about removing folks from core.

We do have a situation though where some folks weren't reviewing as
frequently when the Tuskar UI/API development slowed a bit post-merge.
  Since that is getting ready to pick back up, my concern with removing
this group of folks, is that it leaves less people on core who are
deeply familiar with that code base.  Maybe that's ok, especially if
the fast track process to get them back on core is reasonable.

Well, I don't think we want a situation where, when a single org
decides to tackle something else for a bit, no one can comfortably
fix bugs in e.g. Tuskar, or worse, the whole thing stalls - that's why
I've been so keen to get /everyone/ in Tripleo-core familiar with the
entire collection of codebases we're maintaining.

So I think after 3 months that other cores should be reasonably familiar too ;).

Well, it's not so much about just fixing bugs.  I'm confident our set
of cores could fix bugs in almost any OpenStack related project, and
in fact most do.  It was more just a comment around people who worked
on the initial code being removed from core.  But, if others don't
share that concern, and in fact Ladislav's comment about having
confidence in the number of tuskar-ui guys still on core pretty much
mitigates my concern :).


Well, if it is possible, I would rather keep the guys who want to be
more active in core. It's true that most of us worked on Horizon till
now, preparing libraries we will need.
And in the next couple of months, we will be implementing that in Tuskar-UI. So
having more cores who understand that will be beneficial.


Basically, tzumainn and I are working on Tuskar-UI full time. And ifarkas
and tomas-8c8 are familiar enough with the code, but will be working on
other projects. So that seems
to me like a minimal number of cores to keep us rolling (if nobody gets
sick, etc.).


We will need to get patches in at a certain cadence to keep the
deadlines (patches will also depend on each other, blocking other people),
so in certain cases a +1 from a non-core
contributor I have confidence in (regarding e.g. deep knowledge of Angular.js
or Horizon) will be enough for me to approve the patch.



That said, perhaps we should review these projects.

Tuskar as an API to drive deployment and ops clearly belongs in
TripleO - though we need to keep pushing features out of it into more
generalised tools like Heat, Nova and Solum. TuskarUI though, as far
as I know all the other programs have their web UI in Horizon itself -
perhaps TuskarUI belongs in the Horizon program as a separate code
base for now, and merge them once Tuskar begins integration?

IMO, I'd like to see Tuskar UI stay in tripleo for now, given that we
are very focused on the deployment story.  And our reviewers are
likely to have strong opinions on that :).  Not that we couldn't go
review in Horizon if we wanted to, but I don't think we need the churn
of making that change right now.

So, I'll send my votes on the other folks after giving them a little
more time to reply.

Thanks.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Jaromir Coufal


On 2013/04/12 08:12, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Ghe Rivero for -core
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

Hey there,

thanks Rob for keeping an eye on this. Speaking for myself, as a current
non-coder it was very hard to keep pace with others, especially when UI
was on hold and I was designing future views. I'll continue working on
designs much more, but I will also keep an eye on code which is going
in. I believe that UX reviews will be needed before merging so that we
ensure we keep the vision. That's why I would like to express my will to
stay within -core even though I don't deliver as big an amount of reviews
as other engineers. However, if anybody feels that I should be just +1, I
completely understand and I will give up my +2 power.


-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-06 Thread Joe Gordon
On Dec 6, 2013 9:57 AM, Maru Newby ma...@redhat.com wrote:


 On Dec 6, 2013, at 1:09 AM, John Dickinson m...@not.mn wrote:

 
  On Dec 5, 2013, at 1:36 AM, Maru Newby ma...@redhat.com wrote:
 
 
  On Dec 3, 2013, at 12:18 AM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
 
  On Sun, Dec 1, 2013 at 7:04 PM, John Dickinson m...@not.mn wrote:
  Just to add to the story, Swift uses X-Trans-Id and generates it in
the outer-most catch_errors middleware.
 
  Swift's catch errors middleware is responsible for ensuring that the
transaction id exists on each request, and that all errors previously
uncaught, anywhere in the pipeline, are caught and logged. If there is not
a common way to do this, yet, I submit it as a great template for solving
this problem. It's simple, scalable, and well-tested (ie tests and running
in prod for years).
 
 
https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py
 
  Leaving aside error handling and only focusing on the transaction id
(or request id) generation, since OpenStack services are exposed to
untrusted clients, how would you propose communicating the appropriate
transaction id to a different service? I can see great benefit to having a
glance transaction ID carry through to Swift requests (and so on), but how
should the transaction id be communicated? It's not sensitive info, but I
can imagine a pretty big problem when trying to track down errors if a
client application decides to set eg the X-Set-Transaction-Id header on
every request to the same thing.
 
  -1 to cross service request IDs, for the reasons John mentions above.
 
 
  Thanks for bringing this up, and I'd welcome a patch in Swift that
would use a common library to generate the transaction id, if it were
installed. I can see that there would be huge advantage to operators to
trace requests through multiple systems.
 
  Another option would be for each system that calls another
OpenStack system to expect and log the transaction ID for the request that
was given. This would be looser coupling and be more forgiving for a
heterogeneous cluster. E.g. when Glance makes a call to Swift, Glance could
log the transaction id that Swift used (from the Swift response). Likewise,
when Swift makes a call to Keystone, Swift could log the Keystone
transaction id. This wouldn't result in a single transaction id across all
systems, but it would provide markers so an admin could trace the request.
 
  There was a session on this at the summit, and although the notes are
a little scarce this was the conclusion we came up with.  Every time a
cross service call is made, we will log and send a notification for
ceilometer to consume, with the request-ids of both request ids.  One of
the benefits of this approach is that we can easily generate a tree of all
the API calls that are made (and clearly show when multiple calls are made
to the same service), something that just a cross service request id would
have trouble with.
 
   Is it wise to trust anything a client provides to ensure traceability?
 If a user receives a request id back from Nova, then submits that request
id in an unrelated request to Neutron, the traceability would be
effectively corrupted.  If the consensus is that we don't want to securely
deliver request ids for inter-service calls, how about requiring a service
to log its request id along with the request id returned from a call to
another service to achieve a similar result?
 
  Yes, this is what I was proposing. I think this is the best path
forward.

Nova does this for glance client today.  The glance client logs glance's
request id and that message is wrapped with nova's request id.

Ceilometer wanted notifications of these events as well so it could track
things better.
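
(For illustration only, the paired logging boils down to something like the
snippet below; the function and field names are made up for the example, not
actual nova/glance code.)

import logging

LOG = logging.getLogger(__name__)


def log_cross_service_call(local_request_id, remote_service, remote_request_id):
    # One log line carrying both request ids lets an operator (or ceilometer)
    # stitch the per-service logs back into a tree of API calls.
    LOG.info('request %(local)s called %(service)s (remote request id: %(remote)s)',
             {'local': local_request_id,
              'service': remote_service,
              'remote': remote_request_id})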


 Ok, great.  And as per your suggestion, a middleware-based error handler
will soon be proposed for oslo that will secondarily ensure that a request
id is added to the request.

 
 
  The catch is that every call point (or client instantiation?) would
have to be modified to pass the request id instead of just logging at one
place in each service.  Is that a cost worth paying?
 
  Perhaps this is my ignorance of how other projects work today, but does
this not already happen? Is it possible to get a response from an API call
to an OpenStack project that doesn't include a request id?

 We'll have it in Neutron real-soon-now, and then I think the answer will
be 'yes'.

 On reflection, it should be easy enough for a given service to ensure
that calls to other services are automatically instrumented to log request
id pairs.  Again, probably something for oslo.


 m.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Ladislav Smola

On 12/05/2013 11:40 AM, Jan Provaznik wrote:

On 12/04/2013 08:12 AM, Robert Collins wrote:

And the 90 day not-active-enough status:

|   jprovazn **|  220   5  10   7   177.3% | 2 (  9.1%)  |
|jomara ** |  210   2   4  15  1190.5% | 2 (  9.5%)  |
|mtaylor **|  173   6   0   8   847.1% | 0 (  0.0%)  |
|   jtomasek **|  100   0   2   8  10   100.0% | 1 ( 10.0%)  |
|jcoufal **|   53   1   0   1   320.0% | 0 (  0.0%)  |


Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
to TripleO and OpenStack, but I don't think they are tracking /
engaging in the code review discussions enough to stay in -core: I'd
be delighted if they want to rejoin as core - as we discussed last
time, after a shorter than usual ramp up period if they get stuck in.



I will pay more attention to reviews in future. Only a nit: it's quite
a challenge to find something to review - most mornings when I
check pending patches, everything is already reviewed ;).


Jan



Agreed. The policy of one review per day is not really what I do. I do
go through all of the reviews and review as much as I can, because I
can't be sure there will be something to review for me the
next day.
That takes me a little more time than I would like. Though reviews are
needed and I do learn new stuff.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-dev Digest, Vol 20, Issue 13

2013-12-06 Thread Abbass MAROUNI
Hi Paul,

Thanks for your answer, I read the blueprint and am aware of what it will
add to the whole resource and scheduling bits of OpenStack.

I guess I'll just continue with what I did and wait for the blueprint to
get implemented, unless there's a quick way to add it to Havana without
waiting for the next release.

Abbass.


 Hi Abbass,
 I guess you read the blueprint Russell referred to. I think you actually
 are saying the same - but please read steps below and tell me if they don't
 cover what you want.
 This is what it will do:
 1.   Add a column to the compute_nodes table for a JSON blob
 2.   Add plug-in framework for additional resources in resource_tracker
 (just like filters in filter scheduler)
 3.   Resource plugin classes will implement things like:
 a.   Claims test method
 b.  "add your data here" method (so it can populate the JSON blob)
 4.   Additional column is available in host_state at filter scheduler
 You will then be able to do any or all of the following:
 1.   Add new parameters to requests in extra_specs
 2.   Add new filter/weight classes as scheduler plugins
 a.   Will have access to filter properties (including extra_specs)
 b.  Will have access to extra resource data (from compute node)
 c.   Can generate limits
 3.   Add new resource classes as scheduler plugins
 a.   Will have access to filter properties (including extra specs)
 b.  Will have access to limits (from scheduler)
 c.   Can generate extra resource data to go to scheduler
 Does this match your needs?
 There are also plans to change how data goes from compute nodes to
 scheduler (i.e. not through the database). This will remove the database
 from the equation. But that can be kept as a separate concern.
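
 For illustration only, the resource plug-in classes described in steps 2 and 3
 above might take roughly the following shape (a sketch; the class and method
 names here are assumptions, not the blueprint's actual API):

 class ExampleResourcePlugin(object):
     """Illustrative resource plug-in for the resource tracker."""

     def write_resource_data(self, compute_node, data):
         # "add your data here" method: populate the extra-resources JSON
         # blob stored on the compute node with whatever this plug-in tracks.
         data['example_units_free'] = self._count_free_units(compute_node)
         return data

     def test_claim(self, instance, limits, data):
         # Claims test method: refuse the claim if the request would exceed
         # the limit handed down by the scheduler.
         requested = instance.get('example_units_requested', 0)
         limit = limits.get('example_units', data.get('example_units_free', 0))
         return requested <= limit

     def _count_free_units(self, compute_node):
         # Placeholder for however this resource is actually measured.
         return 0
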
 Paul.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Ladislav Smola

On 12/06/2013 09:56 AM, Jaromir Coufal wrote:


On 2013/04/12 08:12, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Ghe Rivero for -core
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

Hey there,

thanks Rob for keeping an eye on this. Speaking for myself, as a current
non-coder it was very hard to keep pace with others, especially when
UI was on hold and I was designing future views. I'll continue working
on designs much more, but I will also keep an eye on code which is
going in. I believe that UX reviews will be needed before merging so
that we ensure we keep the vision. That's why I would like to express
my will to stay within -core even though I don't deliver as big an amount
of reviews as other engineers. However, if anybody feels that I should
be just +1, I completely understand and I will give up my +2 power.




I wonder whether there can be a sort of honorary core title. jcoufal is 
contributing a lot, but not that much with code or reviews.


I vote +1 to that, if it is possible


-- Jarda


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-06 Thread David Chadwick
Another alternative is to change role name into role display name,
indicating that the string is only to be used in GUIs, is not guaranteed
to be unique, is set by the role creator, can be any string in any
character set, and is not used by the system anywhere. Only role ID is
used by the system, in policy evaluation, in user-role assignments, in
permission-role assignments etc.

regards

David

On 05/12/2013 16:21, Tiwari, Arvind wrote:
 Hi David,
 
 Let me capture these details in the etherpad. I will drop an email after adding 
 these details in etherpad.
 
 Thanks,
 Arvind
 
 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
 Sent: Thursday, December 05, 2013 4:15 AM
 To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List (not for 
 usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 Hi Arvind
 
 we are making good progress, but what I don't like about your proposal
 below is that the role name is not unique. There can be multiple roles
 with the same name, but different IDs, and different scopes. I don't like
 this, and I think it would be confusing to users/administrators. I think
 the role names should be different as well. This is not difficult to
 engineer if the names are hierarchically structured based on the name of
 the role creator. The creator might be owner of the resource that is
 being scoped, but it need not necessarily be. Assuming it was, then in
 your examples below we might have role names of NovaEast.admin and
 NovaWest.admin. Since these are strings, policies can be easily adapted
 to match on NovaWest.admin instead of admin.
 
 regards
 
 david
 
 On 04/12/2013 17:21, Tiwari, Arvind wrote:
 Hi Adam,

 I have added my comments in line. 

 As per my request yesterday and David's proposal, following role-def data 
 model is looks generic enough and seems innovative to accommodate future 
 extensions.

 {
   "role": {
     "id": "76e72a",
     "name": "admin",          (you can give whatever name you like)
     "scope": {
       "id": "---id--",        (ID should be 1-to-1 mapped with the resource
                                in "type" and must be an immutable value)
       "type": "service | file | domain etc.",  (Type can be any type of
                                resource which explains the scoping context)
       "interface": "--interface--"  (We still need to work on this field.
                                My idea of this optional field is to indicate
                                the interface of the resource (endpoint for
                                service, path for file, ...) for which the
                                role-def is created; it can be empty.)
     }
   }
 }

 Based on the above data model, two admin roles for nova for two separate regions
 would be as below:

 {
   "role": {
     "id": "76e71a",
     "name": "admin",
     "scope": {
       "id": "110",           (suppose 110 is the Nova serviceId)
       "interface": "1101",   (suppose 1101 is the Nova region East endpointId)
       "type": "service"
     }
   }
 }

 {
   "role": {
     "id": "76e72a",
     "name": "admin",
     "scope": {
       "id": "110",
       "interface": "1102",   (suppose 1102 is the Nova region West endpointId)
       "type": "service"
     }
   }
 }

 This way we can keep role-assignments abstracted from the resource on which the
 assignment is created. This also opens the door to having service and/or endpoint
 scoped tokens, as I mentioned in https://etherpad.openstack.org/p/1Uiwcbfpxq.

 David, I have updated 
 https://etherpad.openstack.org/p/service-scoped-role-definition line #118 
 explaining the rationale behind the field.
 I would also appreciate your vision on
 https://etherpad.openstack.org/p/1Uiwcbfpxq too, which supports the
 https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens BP.


 Thanks,
 Arvind

 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Tuesday, December 03, 2013 6:52 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage 
 questions)
 Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 I've been thinking about your comment that nested roles are confusing
 AT: Thanks for considering my comment about nested role-def.

 What if we backed off and said the following:


 Some role-definitions are owned by services.  If a role definition is
 owned by a service, in role assignment lists in tokens, those roles will
 be prefixed by the service name.  '/' is a reserved character and will be
 used as the divider between segments of the role definition.

 That drops arbitrary nesting, and provides a reasonable namespace.  Then 
 a role def would look like:

 glance/admin  for the admin role on the glance project.

 AT: It seems this approach is not going to help; a service rename would impact
 all the role-defs for a particular service. And we are back to the same
 problem.

 In theory, we could add the domain to the namespace, but that seems 
 unwieldy.  If we did, a role def would then look like this


 default/glance/admin  for the admin role on 

Re: [openstack-dev] How to best make User Experience a priority in every project

2013-12-06 Thread Jaromir Coufal

Hi OpenStackers,

I am replying to this thread with a small delay. I was paying very
close attention to it, but I wanted to let the discussion flow without me
interfering, so I could see the community opinion on the UX effort.


First of all, thanks Thierry for starting this thread and the whole
discussion. I believe it was/is very valuable.


From the discussion I see some hesitation about approving UX as an
independent program and a lot of (strong) opinions that it is important
for this to happen. I appreciate both, because from the concerns we can learn, and from
the positive feedback we get a lot of support and also a lot of
suggestions where we can continue helping. (BTW: Huge thanks for this,
all the listed areas are very important and I am happy that I could have
added some new items to the list of future spots where we can help.)


As for me, I am (of course) on the side which fights for UX to be an
individual program, for various reasons.


I share the opinion that we (UX) shouldn't be a completely separate
team which is 'dictating'. Our purpose is to be strongly integrated with
other projects, but at the same time be cross-project minded, with one leg
out. We should organize and prioritize our efforts as well as very
tightly communicate with related project team members (coordinate on
planning features, assigning priorities, etc). And that's what we are
doing. We started with UIs, which is the most obvious output. We have
limited resources, but by getting more contributors on board and getting
more people interested, we can spread to other areas which were
mentioned here.


We are growing. At the moment we are 4 core members and others are
coming in. But honestly, contributors are not coming to specific
projects - they reach out to the UX community in the sense of: OK, this is an awesome
effort, how can I help? What can I work on? And more and more
companies are investing in UX resources. Because it is important. We are
at a point where not just functionality matters for a project to become
successful. And showing publicly that OpenStack cares about UX will make
our message stronger - we are not delivering just features, we care about and
invest in usability as well.


Contributors who might get interested in UX can largely come from other
OpenStack projects, but on the other hand they might be experts from completely
outside the project: experts in cloud solutions, people with a
huge amount of feedback from OpenStack users, experts in
testing... usability in general. This group of people is not interested
in a particular project, but in the global effort of moving OpenStack closer
to users. If we don't have a special program for this - whom are they
going to reach? Where can they start? How can they be recognized? Their
input is as valuable as input from all other contributors. Just a bit
different.


I don't agree much with the argument that everybody should keep UX in
mind and care about it. Well, to be more accurate, I agree with it, but
this is a very ideal case which is very, very hard to achieve. We can't say
- OK folks, from now on everybody will care about UX. What should they
care about specifically? This is an area where engineers are not
specialized. It takes a lot of time for everybody to do their own search
for resources and figure out how somebody else does that, how it
should work for the user, etc. And instead of focusing on the architecture
or implementation part, people will have to invest a big amount of time in
researching other sources. Yes, it is part of the responsibility, but... if
there is anybody else helping with this effort, focusing cross-project,
thinking the user's way and proposing solutions, it's a big help and supports
others' work. Of course we can do UIs, APIs, CLIs without a specialized
group of people, but each engineer thinks a bit differently, each can
have a different perception of what is valuable for the user, and the lack of
unification will grow. And that's actually what is happening.


At the moment we are not the biggest group of people, so I understand
the concerns. Anyway, getting the blessing for UX is not a question of
us continuing the effort, but of supporting us and spreading the
message - that we as OpenStack care.


I am not trying to convince anybody here; I accept the decision 'no' (at
least for this moment). I just don't feel there was a consensus that most
people think this is nonsense. I don't see any strong reasons
why not. In time, I believe more people will see how important it is and
hopefully OpenStack will recognize UX efforts officially.


Anyway... I want to encourage anybody interested in UX (any area) -
reach out to us and help us make OpenStack more usable. Everybody's hand is
welcome.


Thanks all for contributing to this thread and expressing your opinions. 
I really appreciate that.


-- Jarda

--- Jaromir Coufal (jcoufal)
--- OpenStack User Experience
--- IRC: #openstack-ux (at FreeNode)
--- Forum: 

Re: [openstack-dev] [UX] Topics Summary

2013-12-06 Thread Jaromir Coufal


On 2013/05/12 23:14, Mark McLoughlin wrote:

On Tue, 2013-12-03 at 09:36 +0100, Jaromir Coufal wrote:

Hey OpenStackers,

based on the latest discussions, it was asked if we can try to post
regular updates of what is happening in our community (mostly on Askbot
forum: http://ask-openstackux.rhcloud.com).

In this e-mail, I'd like to summarize ongoing issues and try to cover
updates weekly (or each 14 days - based on content).

Just catching up on this now and ... wow, that's a lot of really
exciting stuff!

The full summary is great, but it could be really great to pull out a
couple of the most interesting wireframes or mockups as a highlight to
whet the appetite of casual followers like me :)

Thanks for all this,
Mark.



Hey Mark,

thanks a lot for the feedback. That's exactly what I would like to do in
the next rounds. I just wanted to list out all our efforts which have been
happening lately (as the first kick) and then regularly pop out the
most important stuff or some interesting sources.


Stay tuned, more focused info will come out.

Cheers
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stable/havana 2013.2.1 freeze tomorrow

2013-12-06 Thread Alan Pevec
2013/12/4 Alan Pevec ape...@gmail.com:
 first stable/havana release 2013.2.1 is scheduled[1] to be released
 next week on December 12th, so freeze on stable/havana goes into
 effect tomorrow EOD, one week before the release.

We're behind with reviewing, so we'll be doing a soft-freeze today:
stable-maint members can review and approve currently open stable
reviews during Friday, but any new reviews coming in will be blocked.
Remaining open reviews will get a temporary automatic -2 at EOD today,
when the call for testing will be posted.

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] UI Wireframes - close to implementation start

2013-12-06 Thread Jaromir Coufal

Hey Matt,

thanks for the comments, I'll try to reply below:

On 2013/05/12 20:32, Matt Wagner wrote:

On Tue Dec  3 06:53:04 2013, Jaromir Coufal wrote:

I've somehow overlooked the 'Node tags' previously. I'm curious what
format these would take, or if this is something we've discussed. I
remember hearing us kick around an idea for key-value pairs for storing
arbitrary information, maybe ram=64g or rack=c6. Is that what the tags
you have are intended for?
Not exactly. This is not a key-value approach but more like arbitrary
information (or grouping) that the user can enter. It can be a very
efficient way for the user to express various meta-information about the
node if he cares. For example, in the beginning, when we are missing
some functionality from Ironic (like location, rack information, etc),
we can use manual tagging instead. This might already be part of Ironic,
so we just need to check if that's the case.




One thing I notice here -- and I really hope I'm not opening a can of
worms -- is that this seems to require that you manage many nodes. I
know that's our focus, and I entirely agree with it. But with the way
things are represented in this, it doesn't seem like it would be
possible for a user to set up an all-in-one system (say, for testing)
that ran compute and storage on the same node.

I think it would be very fair to say that's just something we're not
focusing on at this point, and that to start out we're just going to
handle the simple case of wanting to install many nodes, each with only
one distinct type. But I just wanted to clarify that we are, indeed,
making that decision?
I am convinced that this was never in the scope of our efforts. We are focusing
not just on installation of the overcloud but also on monitoring and,
furthermore, easy *scaling*. Mixing compute and storage services on one
node would be very inefficient and, for the vast majority of deployments,
an unrealistic scenario.


Though, in the future, if we find out that there are multiple cases of
this being a way people set up their deployments, we might reconsider
supporting this approach. But I have never heard of that so far and I
don't think it will happen.


Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate / Check jobs fail fast mode - don't start tempest until docs/pep8/unittests pass?

2013-12-06 Thread Sean Dague
I actually don't, for the reasons Clark brought up.

All this does is optimize for people that don't run unit tests locally,
and it makes all the jobs take longer. Everything except the tempest jobs
should be easy to run locally.

So this would effectively penalize the people that do the right thing,
to advantage people that aren't.

-Sean

On 12/05/2013 07:17 PM, Michael Still wrote:
 I like this idea.
 
 Michael
 
 On Thu, Dec 5, 2013 at 4:37 PM, Peter Portante
 peter.a.porta...@gmail.com wrote:
 Has anybody considered changing how check and gate jobs work such that
 the tempest and grenade checks only run once the docs/pep8/unittests
 jobs all succeed?

 It seems like they complete so much quicker that folks can fix those
 bugs before having to wait hours for the other jobs.

 Thoughts, concerns, problems with that?

 Sincerely, -peter

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 


-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-12-06 Thread Gabriel pettier
Hi

I've just recently started getting my feet wet in openstack, so my opinion 
is mostly external, and maybe naive, but I think that "everybody should 
think about UX" is not in opposition to having a team that focuses on 
it, just like it's nice to have people who focus on security, although 
obviously everybody should care about it; the same goes for 
performance, for documentation, for community, or for any aspect of the 
project. If there are improvements to be made, it's always effective to 
have a focused effort.

UX is a broad topic, and of course people will focus on different parts 
of it anyway (subjects from logging, to web UI, to cli, to config files, 
etc have been talked about in the discussion), but thinking about it as a 
whole, as a concept, is important for the project to go forward. Of 
course a bug can (and will) ruin the user experience, no matter how much 
effort went into making a nice UI, but UX is not just a matter of being 
bug-free, so helping people design a global UX so openstack doesn't feel 
like a dozen loosely-related projects packaged together, should be an 
important target, imho, and it's not something easy for people working 
in each project to think about, because there is a lot happening in all 
of them, and the project is kind of big.

Cheers

On Fri, Dec 06, 2013 at 11:19:57AM +0100, Jaromir Coufal wrote:
 Hi OpenStackers,
 
 I am replying to this thread with a smaller delay. I was keeping
 very close attention to it but I wanted to let the discussion flow
 without me interfering, so I see the community opinion on the UX
 effort.
 
 First of all, thanks Thierry to starting this thread and the whole
 discussion. I believe it was/is very valuable.
 
 From the discussion I see some hesitations of approving UX as
 independent program and lot of (strong) opinions that this is
 important to happen. I appreciate both because from concerns we can
 learn and from the positive feedback we get a lot of support and
 also a lot of suggestions where we can continue helping. (BTW: Huge
 thanks for this, all the listed areas are very important and I am
 happy that I could have added some new items to the list of future
 spots where we can help.)
 
 As for me, I am (of course) on the side which fights for UX to be an
 individual program from various reasons.
 
 I share the same opinion that we (UX) shouldn't be completely
 separated team which is 'dictating'. Our purpose is to be strongly
 integrated with other projects. But on the same time be
 cross-project wise and one leg out. We should organize and
 prioritize our efforts as well as very tightly communicate with
 related project team members (coordinate on planning features,
 assigning priorities, etc). And that's what we are doing. We started
 with UIs which is the most obvious output. We have have limited
 resources, but getting more contributors on board, getting more
 people interested, we can spread to other areas which were mentioned
 here.
 
 We are growing. At the moment we are 4 core members and others are
 coming in. But honestly, contributors are not coming to specific
 projects - they go to reach UX community in a sense - OK this is
 awesome effort, how can I help? What can I work on? And it is more
 and more companies investing in the UX resources. Because it is
 important. We are in the time when not just functionality matters
 for project to become successful. And showing publicly that
 OpenStack cares about UX will make our message stronger - we are not
 delivering just features, we care and invest in usability as well.
 
 Contributors who might get interested in UX can be largely from
 other OpenStack projects, but on the other hand they might be
 completely outside the project experts. Experts in cloud-solutions,
 they can have huge amount of feedback from OpenStack users, they can
 be experts in testing... usability in general. This group of people
 is not interested in particular project, but in global effort of
 moving OpenStack closer to users. If we don't have special program
 about this - whom are they going to reach? Where can they start? How
 can they be recognized? Their input is as valuable as input from all
 other contributors. Just a bit different.
 
 I don't agree much with the argument that everybody should keep UX
 in mind and care about it. Well to be more accurate, I agree with
 it, but this is very ideal case which is very very hard to achieve.
 We can't say - OK folks, from now on everybody will care about UX.
 What should they care about specifically? This is area where
 engineers are not specialized. It takes a lot of time for everybody
 to do their own search for resources and figuring out how somebody
 else does that, how it should work for user, etc. And instead of
 focusing on the architecture or implementation part, people will
 have to invest big amount of time to research other sources. Yes, it
 is part of responsibility, but... If there is anybody else helping
 with this effort, 

Re: [openstack-dev] [Keystone] Store quotas in Keystone

2013-12-06 Thread Joe Gordon
I just tried to read the full spec for this blueprint

https://blueprints.launchpad.net/keystone/+spec/store-quota-data

https://wiki.openstack.org/wiki/KeystoneCentralizedQuotaManagement

And nothing explains why this blueprint is needed or what it is trying to
accomplish, all it has is a design for keystone.  Both the introduction and
user stories sections just say 'TBD'.  How can we have a proper discussion
of this blueprint without that information?

best,
Joe

sent on the go
On Dec 3, 2013 7:35 PM, Joe Gordon joe.gord...@gmail.com wrote:


 On Dec 3, 2013 6:49 PM, John Dickinson m...@not.mn wrote:
 
 
  On Dec 3, 2013, at 8:05 AM, Jay Pipes jaypi...@gmail.com wrote:
 
   On 12/03/2013 10:04 AM, John Dickinson wrote:
   How are you proposing that this integrate with Swift's account and
 container quotas (especially since there may be hundreds of thousands of
 accounts and millions (billions?) of containers in a single Swift cluster)?
 A centralized lookup for quotas doesn't really seem to be a scalable
 solution.
  
   From reading below, it does not look like a centralized lookup is what
 the design is. A push-change strategy is what is described, where the quota
 numbers themselves are stored in a canonical location in Keystone, but when
 those numbers are changed, Keystone would send a notification of that
 change to subscribing services such as Swift, which would presumably have
 one or more levels of caching for things like account and container
 quotas...
 
  Yes, I get that, and there are already methods in Swift to support that.
 The trick, though, is either (1) storing all the canonical info in Keystone
 and scaling that or (2) storing some boiled down version, if possible,
 and fanning that out to all of the resources in Swift. Both are difficult
 and require storing the information in the central Keystone store.

 If I remember correctly the motivation for using keystone for quotas is so
 there is one easy place to set quotas across all projects.  Why not hide
 this complexity with the unified client instead?  That has been the answer
 we have been using for pulling out assorted proxy APIs in nova (nova
 image-list volume-list) etc.

 
  
   Best,
   -jay
  
   --John
  
  
   On Dec 3, 2013, at 6:53 AM, Oleg Gelbukh ogelb...@mirantis.com
 wrote:
  
   Chmouel,
  
   We reviewed the design of this feature at the summit with CERN and
 HP teams. Centralized quota storage in Keystone is an anticipated feature,
 but there are concerns about adding quota enforcement logic for every
 service to Keystone. The agreed solution is to add quota numbers storage to
 Keystone, and add mechanism that will notify services about change to the
 quota. Service, in turn, will update quota cache and apply the new quota
 value according to its own enforcement rules.
  
   More detailed capture of the discussion on etherpad:
   https://etherpad.openstack.org/p/CentralizedQuotas
  
   Re this particular change, we plan to reuse this API extension code,
 but extended to support domain-level quota as well.
  
   --
   Best regards,
   Oleg Gelbukh
   Mirantis Labs
  
  
   On Mon, Dec 2, 2013 at 5:39 PM, Chmouel Boudjnah 
 chmo...@enovance.com wrote:
   Hello,
  
   I was wondering what was the status of Keystone being the central
 place across all OpenStack projects for quotas.
  
   There is already an implementation from Dmitry here :
  
   https://review.openstack.org/#/c/40568/
  
   but hasn't seen activities since october waiting for icehouse
 development to be started and a few bits to be cleaned and added (i.e: the
 sqlite migration).
  
   It would be great if we can get this rekicked to get that for
 icehouse-2.
  
   Thanks,
   Chmouel.
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-06 Thread Mark McLoughlin
Hi Julien,

On Mon, 2013-12-02 at 16:45 +0100, Julien Danjou wrote:
 On Mon, Nov 18 2013, Julien Danjou wrote:
 
https://blueprints.launchpad.net/oslo/+spec/messaging-decouple-cfg
 
 So I've gone through the code and started to write a plan on how I'd do
 things:
 
   https://wiki.openstack.org/wiki/Oslo/blueprints/messaging-decouple-cfg
 
 I don't think I missed too much, though I didn't run into all tiny
 details.
 
 Please feel free to tell me if I miss anything obvious, otherwise I'll
 try to start submitting patches, one at a time, to get this into shape
 step by step.

Thanks for writing this up, I really appreciate it.

I would like to spend more time getting to the bottom of what we're
trying to solve here.

If the goal is allow applications to use oslo.messaging without using
oslo.config, then what's driving this? I'm guessing some possible
answers:

  1) I don't want to use a global CONF object

 This is a strawman - I think we all realize the conf object you 
 pass to oslo.messaging doesn't have to be cfg.CONF. Just putting 
 it here to make sure no-one's confused about that.

  2) I don't want to have configuration files or command line options in
 order to use oslo.messaging

 But, even now, you don't actually have to parse the command line or
 any config files. See e.g. https://gist.github.com/markmc/7823230

  3) Ok, but I want to be able to specify values for things like 
 rpc_conn_pool_size without using any config files.

 We've talked about allowing the use of query parameters for stuff 
 like this, but I don't love that. I think I'd restrict query 
 parameters to those options which are fundamental to how you 
 connect to a given transport.

 We could quite easily provide any API which would allow 
 constructing a ConfigOpts object with a bunch of options set and 
 without anyone having to use config files. Here's a PoC of how
 that might look:

   https://gist.github.com/markmc/7823420

 (Now, if your reaction is "OMG, you're using temporary config
 files on disk, that's awful" then just bear with me and ignore the 
 implementation details of get_config_from_dict(). We could very 
 easily make oslo.config support a mode like this without needing
 to ever write anything to disk)

 The point of this example is that we could add an oslo.messaging
 API which takes a dict of config values and you never even know
 that oslo.config is involved.

  4) But I want the API to be explicit about what config options are 
 supported by the API

 This could be something useful to discuss, because right now the 
 API hides configuration options rather than encoding them into the 
 API. This is to give us a bit more flexibility about changing 
 these over time (e.g. keeping backwards compat for old options for 
 a shorter time than other aspects of the API).

 But actually, I'm assuming this isn't what you're thinking since 
 your patch adds a free-form executor_kwargs dict.

  5) But I want to avoid any dependency on oslo.config

 This could be fundamentally what we're talking about here, but I 
 struggle to understand it - oslo.config is pretty tiny and it only 
 requires argparse, so if it's just an implementation detail that 
 you don't even notice if you're not using config files then what 
 exactly is the problem?


Basically, my thinking is that something like this example:

  https://gist.github.com/markmc/7823420

where you can use oslo.messaging with just a dict of config values
(rather than having to parse config files) should handle any reasonable
concern that I've understood so far ... without having to change much at
all.
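
For illustration, that dict-based entry point could boil down to something
roughly like this (a sketch only; get_transport and set_override are existing
oslo.messaging / oslo.config calls, but the helper name and exact wiring here
are made up):

from oslo.config import cfg
from oslo import messaging


def get_transport_from_dict(url, options=None):
    # A private ConfigOpts that never reads a config file or the real
    # command line.
    conf = cfg.ConfigOpts()
    conf(args=[])
    transport = messaging.get_transport(conf, url)
    # Individual options (e.g. rpc_conn_pool_size) can then be overridden
    # programmatically, provided the option has already been registered
    # by the library.
    for name, value in (options or {}).items():
        conf.set_override(name, value)
    return transport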

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-06 Thread Jay Dobies
Along the same lines and while we're talking crazy ideas, one use case 
where a user might want to allocate entire nodes would be if TripleO 
were used to manage an ARM rack. The use cases aren't identical between 
cloud and ARM, but they are similar.


So for a rack of 1000 nodes, there is benefit in certain cases for a
user in taking not only an entire node, but a collection of nodes
co-located in the same rack, to take advantage of the rack fabric.


Again, crazy ideas and probably outside of the scope of things we want 
to bite off immediately. But as we're in the early stages of the Tuskar 
data and security models, it might make sense to at least keep in mind 
how we could play in this area as well.


On 12/05/2013 08:11 PM, Fox, Kevin M wrote:

I think the security issue can be handled by not actually giving the underlying 
resource to the user in the first place.
So, for example, if I wanted a bare metal node's worth of resource for my own 
containering, I'd ask for a bare metal node and use a blessed image that 
contains docker+nova bits that would hook back to the cloud. I wouldn't be able to login 
to it, but containers started on it would be able to access my tenant's networks. All 
access to it would have to be through nova suballocations. The bare resource would count 
against my quotas, but nothing run under it.

Come to think of it, this sounds somewhat similar to what is planned for 
Neutron service vm's. They count against the user's quota on one level but not 
all access is directly given to the user. Maybe some of the same implementation 
bits could be used.

Thanks,
Kevin

From: Mark McLoughlin [mar...@redhat.com]
Sent: Thursday, December 05, 2013 1:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][TripleO] Nested resources

Hi Kevin,

On Mon, 2013-12-02 at 12:39 -0800, Fox, Kevin M wrote:

Hi all,

I just want to run a crazy idea up the flag pole. TripleO has the
concept of an under and over cloud. In starting to experiment with
Docker, I see a pattern start to emerge.

  * As a User, I may want to allocate a BareMetal node so that it is
entirely mine. I may want to run multiple VM's on it to reduce my own
cost. Now I have to manage the BareMetal nodes myself or nest
OpenStack into them.
  * As a User, I may want to allocate a VM. I then want to run multiple
Docker containers on it to use it more efficiently. Now I have to
manage the VM's myself or nest OpenStack into them.
  * As a User, I may want to allocate a BareMetal node so that it is
entirely mine. I then want to run multiple Docker containers on it to
use it more efficiently. Now I have to manage the BareMetal nodes
myself or nest OpenStack into them.

I think this can then be generalized to:
As a User, I would like to ask for resources of one type (One AZ?),
and be able to delegate resources back to Nova so that I can use Nova
to subdivide and give me access to my resources as a different type.
(As a different AZ?)

I think this could potentially cover some of the TripleO stuff without
needing an over/under cloud. For that use case, all the BareMetal
nodes could be added to Nova as such, allocated by the services
tenant as running a nested VM image type resource stack, and then made
available to all tenants. Sys admins then could dynamically shift
resources from VM providing nodes to BareMetal Nodes and back as
needed.

This allows a user to allocate some raw resources as a group, then
schedule higher level services to run only in that group, all with the
existing api.

Just how crazy an idea is this?


FWIW, I don't think it's a crazy idea at all - indeed I mumbled
something similar a few times in conversation with random people over
the past few months :)

With the increasing interest in containers, it makes a tonne of sense -
you provision a number of VMs and now you want to carve them up by
allocating containers on them. Right now, you'd need to run your own
instance of Nova for that ... which is far too heavyweight.

It is a little crazy in the sense that it's a tonne of work, though.
There's not a whole lot of point in discussing it too much until someone
shows signs of wanting to implement it :)

One problem is how the API would model this nesting, another problem is
making the scheduler aware that some nodes are only available to the
tenant which owns them but maybe a bigger problem is the security model
around allowing a node managed by an untrusted tenant to become a compute node.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list

[openstack-dev] [oslo] middleware to ensure request-id (Re: request-id in API response)

2013-12-06 Thread Akihiro Motoki
Hi,

According to the request-id thread [1], there seems to be a need for
a WSGI middleware which generates a request-id for each REST
request and ensures it is present in the corresponding response.
To do this, the middleware is located outer-most in the WSGI pipeline
and needs to catch all kinds of exceptions.
This is the concept Swift already has in its catch_errors middleware.

How about adding such a middleware to oslo?
Ensuring the request-id in logs and API responses is a common requirement
across OpenStack projects, but each project implements it in different ways now.
This was suggested by Maru on IRC too.

The features of the proposed middleware are simple:
  * Catch all exceptions in the wsgi pipeline
  * Generate a request-id for each REST request
  * Set the generated request-id in the request environment
  * Add an X-OpenStack-Request-Id header to the API response to return the request-id
  * Expected to be placed outer-most (i.e., first) in the wsgi pipeline

The initial implementation proposed to neutron is available at
https://review.openstack.org/#/c/58270/
(the name of the middleware is not great)

Regarding the implementation,
one open question is whether webob can be used or not.
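
For illustration, a minimal sketch of such a filter in plain WSGI (this is
not the proposed patch; the class name and the environ key are just
illustrative):

import sys
import uuid


class RequestIdMiddleware(object):
    """Outer-most filter: tag each request with an id and catch errors."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        req_id = 'req-%s' % uuid.uuid4()
        # Make the id available to everything further down the pipeline.
        environ['openstack.request_id'] = req_id

        def wrapped_start_response(status, headers, exc_info=None):
            # Return the id to the API caller on every response.
            headers.append(('X-OpenStack-Request-Id', req_id))
            return start_response(status, headers, exc_info)

        try:
            return self.application(environ, wrapped_start_response)
        except Exception:
            # Anything left uncaught by the rest of the pipeline becomes
            # a 500 that still carries the request id.
            wrapped_start_response('500 Internal Server Error',
                                   [('Content-Type', 'text/plain')],
                                   sys.exc_info())
            return ['Internal server error\n']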


Now the request-id is generated in the Context class, and Context is instantiated
in the auth middleware in most projects like nova, cinder and neutron.
We need to change the auth middleware so that the request-id is retrieved from
the request environment.
The Context class and auth middleware are also similar but slightly different across projects.
It may be time to update/add these classes in oslo.

Regards,
Akihiro

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-November/thread.html#20683

(2013/12/02 23:47), Jay Pipes wrote:
 On 12/01/2013 10:04 PM, John Dickinson wrote:
 Just to add to the story, Swift uses X-Trans-Id and generates it in
 the outer-most catch_errors middleware.

 Swift's catch errors middleware is responsible for ensuring that the
 transaction id exists on each request, and that all errors previously
 uncaught, anywhere in the pipeline, are caught and logged. If there
 is not a common way to do this, yet, I submit it as a great template
 for solving this problem. It's simple, scalable, and well-tested (ie
 tests and running in prod for years).


 https://github.com/openstack/swift/blob/master/swift/common/middleware/cat
 ch_errors.py

 ++

 If there's prior art here, might as well use it. I'm not a huge fan of
 using the term transaction within things that do not have a
 transactional safety context... but that's just because of my background
 in RDBMS stuff. If X-Trans-Id is already in use by another OpenStack
 project, it should probably take precedence over something new unless
 there is a really good reason otherwise (and my personal opinion about
 the semantics of transactions ain't a good reason!).

   Leaving aside error handling and only focusing on the transaction id
 (or request id) generation, since OpenStack services are exposed to
 untrusted clients, how would you propose communicating the
 appropriate transaction id to a different service? I can see great
 benefit to having a glance transaction ID carry through to Swift
 requests (and so on), but how should the transaction id be
 communicated? It's not sensitive info, but I can imagine a pretty big
 problem when trying to track down errors if a client application
 decides to set eg the X-Set-Transaction-Id header on every request to
 the same thing.

 I suppose if this were really a problem (and I'm not sold on the idea
 that it is a problem), one solution might be to store a checksum
 somewhere for the transaction ID and some other piece of data. But I
 don't really see that as super useful, and it would slow things down.
 Glance already stores a checksum for important things like the data in
 an image. If a service needs to be absolutely sure that a piece of data
 hasn't been messed with, this cross-service request ID probably isn't
 the thing to use...

 Thanks for bringing this up, and I'd welcome a patch in Swift that
 would use a common library to generate the transaction id, if it were
 installed. I can see that there would be huge advantage to operators
 to trace requests through multiple systems.

 Hmm, so does that mean that you'd be open to (gradually) moving to an
 x-openstack-request-id header to replace x-trans-id?

 Another option would be for each system that calls an another
 OpenStack system to expect and log the transaction ID for the request
 that was given. This would be looser coupling and be more forgiving
 for a heterogeneous cluster. Eg when Glance makes a call to Swift,
 Glance cloud log the transaction id that Swift used (from the Swift
 response). Likewise, when Swift makes a call to Keystone, Swift could
 log the Keystone transaction id. This wouldn't result in a single
 transaction id across all systems, but it would provide markers so an
 admin could trace the request.

 Sure, this is a perfectly fine option, but doesn't really provide the
 single traceable ID value that I think we're looking for here.

 Best,
 

[openstack-dev] [TripleO] UI Wireframes for Resource Management - ready for implementation

2013-12-06 Thread Jaromir Coufal

Hey everybody,

based on feedback, I updated wireframes for resource management and 
summarized changes in Askbot tool:


http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/?answer=110#post-id-110

Feel free to follow the discussion there.

I am passing these wireframes to the devel team, so that we can start 
working on them.


Thanks all for your contributions
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-12-06 Thread Thierry Carrez
Jaromir Coufal wrote:
 [...]
 I am not trying to convince anybody here, I accept the decision 'no' (at
 least for this moment). I just feel that it was not consensus that most
 of people thinks that this is nonsense. I don't see any strong reasons
 why not. In time, I believe more people will see how important it is and
 hopefully OpenStack will recognize UX efforts officially.
 [...]

It's certainly not consensus, and I don't think anybody said this was
nonsense. It's just a delicate balance, and trying to find the most
sustainable and efficient way to bring UX concerns within projects. Like
I said, the last thing we want is a fight between UX folks on one side
asking for stuff to get done and on the other side nobody in projects
actually caring about getting it done.

That said, I think you made great arguments for keeping a leg out and
organizing in a cross-project way. After all we have other projects (like
QA) which do that very successfully, so I'm definitely willing to
consider UX as a separate program.

My main concern would be that the UX team is relatively new (the
launchpad tracker for example was created on Oct 20) and that we haven't
seen you around enough to see how you would interact with projects and
get your priorities across. There is no weekly UX team meetings listed
on https://wiki.openstack.org/wiki/Meetings, either.

Programs are about blessing existing teams and efforts (which already
obtain results), *not* to bootstrap new ones. We look into the team's
work and results and decide those are essential to the production of
OpenStack. So my advice to you would be to organize yourselves as a
team, engage with projects, deliver clear results, communicate around
those... and then apply again to be a Program if you think that's
still relevant.

Does that make sense ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-12-06 Thread Anne Gentle
On Fri, Dec 6, 2013 at 8:12 AM, Thierry Carrez thie...@openstack.orgwrote:

 Jaromir Coufal wrote:
  [...]
  I am not trying to convince anybody here, I accept the decision 'no' (at
  least for this moment). I just feel that it was not consensus that most
  of people thinks that this is nonsense. I don't see any strong reasons
  why not. In time, I believe more people will see how important it is and
  hopefully OpenStack will recognize UX efforts officially.
  [...]

 It's certainly not consensus, and I don't think anybody said this was
 nonsense. It's just a delicate balance, and trying to find the most
 sustainable and efficient way to bring UX concerns within projects. Like
 I said, the last thing we want is a fight between UX folks on one side
 asking for stuff to get done and on the other side nobody in projects
 actually caring about getting it done.

 That said, I think you made great arguments for keeping a leg out and
 organize in a cross-project way. After all we have other projects (like
 QA) which do that very successfully, so I'm definitely willing to
 consider UX as a separate program.

 My main concern would be that the UX team is relatively new (the
 launchpad tracker for example was created on Oct 20) and that we haven't
 seen you around enough to see how you would interact with projects and
 get your priorities across. There is no weekly UX team meetings listed
 on https://wiki.openstack.org/wiki/Meetings, either.

 Programs are about blessing existing teams and efforts (which already
 obtain results), *not* to bootstrap new ones. We look into the team's
 work and results and decide those are essential to the production of
 OpenStack. So my advice to you would be to organize yourselves as a
 team, engage with projects, deliver clear results, communicate around
 those... and then apply again to be a Program if you think that's
 still relevant.


I too would really like you all to organize as a team very similar to how
the Security team organizes itself - very much like what you are doing now.

It's interesting, the Security team ended up with a book sprint that
produced a book deliverable that is a very valuable piece of documentation,
and that wasn't a goal going in. So I think that the more you look for
opportunities for UX across projects the more deliverables you could find
that we haven't identified yet. So I really want to encourage you to keep
looking for those areas where we have need for good experiences.

Thanks,
Anne


 Does that make sense ?

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][tempest] Bug triage and monitoring process

2013-12-06 Thread Adalberto Medeiros

Hello all!

Yesterday, during the QA meeting, I volunteered to help the team 
handle bugs and define a better process to triage them.


Investigating the current bug list, I found we have:

* 7 critical and high bugs. Of those, 3 critical ones are unassigned:
https://bugs.launchpad.net/tempest/+bugs?search=Searchfield.importance%3Alist=CRITICALfield.importance%3Alist=HIGHassignee_option=none
* 113 new bugs
* 253 open bugs

The first step here is to triage those NEW bugs and verify, as much as 
possible, that the OPEN bugs are being addressed. One goal is to check for 
duplicates, find assignees, confirm the bugs are still valid and 
prioritize them. Another is to ensure recheck bugs are marked 
correctly (critical or high) and that they have the right people looking 
at them. Finally, it's important to revisit old bugs in order to check 
they are still valid and re-prioritize them.


To accomplish that, I would like to suggest a Bug Triage Day next 
week on Thursday the 12th (yup, before people leave for the end-of-year holidays 
:) ).


The second step, after getting a concise and triaged bug list, is to 
ensure we have a defined process to constantly revisit the list and avoid 
the issues we have now. I would like to hear suggestions here.


Please, send any thoughts about those steps and any other points you 
think we should address for monitoring the bugs. We may as well define 
in this thread what is needed for the bug triage day.


Regards,

--
Adalberto Medeiros
Linux Technology Center
Openstack and Cloud Development
IBM Brazil
Email: adal...@linux.vnet.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-06 Thread Akihiro Motoki

(2013/12/06 17:57), Joe Gordon wrote:

 On Dec 6, 2013 9:57 AM, Maru Newby ma...@redhat.com 
 mailto:ma...@redhat.com wrote:
  
  
   On Dec 6, 2013, at 1:09 AM, John Dickinson m...@not.mn 
 mailto:m...@not.mn wrote:
  
   
On Dec 5, 2013, at 1:36 AM, Maru Newby ma...@redhat.com 
 mailto:ma...@redhat.com wrote:
   
   
On Dec 3, 2013, at 12:18 AM, Joe Gordon joe.gord...@gmail.com 
 mailto:joe.gord...@gmail.com wrote:
   
   
   
   
On Sun, Dec 1, 2013 at 7:04 PM, John Dickinson m...@not.mn 
 mailto:m...@not.mn wrote:
Just to add to the story, Swift uses X-Trans-Id and generates it in 
 the outer-most catch_errors middleware.
   
Swift's catch errors middleware is responsible for ensuring that the 
 transaction id exists on each request, and that all errors previously 
 uncaught, anywhere in the pipeline, are caught and logged. If there is not a 
 common way to do this, yet, I submit it as a great template for solving this
 problem. It's simple, scalable, and well-tested (ie tests and running in prod 
 for years).
   

 https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py
   
Leaving aside error handling and only focusing on the transaction id 
 (or request id) generation, since OpenStack services are exposed to untrusted 
 clients, how would you propose communicating the appropriate transaction id 
 to a different service? I can see great benefit to having a glance
 transaction ID carry through to Swift requests (and so on), but how should 
 the transaction id be communicated? It's not sensitive info, but I can 
 imagine a pretty big problem when trying to track down errors if a client 
 application decides to set eg the X-Set-Transaction-Id header on every request
 to the same thing.
   
-1 to cross service request IDs, for the reasons John mentions above.
   
   
Thanks for bringing this up, and I'd welcome a patch in Swift that 
 would use a common library to generate the transaction id, if it were 
 installed. I can see that there would be huge advantage to operators to trace 
 requests through multiple systems.
   
Another option would be for each system that calls an another 
 OpenStack system to expect and log the transaction ID for the request that 
 was given. This would be looser coupling and be more forgiving for a 
 heterogeneous cluster. Eg when Glance makes a call to Swift, Glance cloud log 
 the
 transaction id that Swift used (from the Swift response). Likewise, when 
 Swift makes a call to Keystone, Swift could log the Keystone transaction id. 
 This wouldn't result in a single transaction id across all systems, but it 
 would provide markers so an admin could trace the request.
   
There was a session on this at the summit, and although the notes are 
 a little scarce this was the conclusion we came up with.  Every time a cross 
 service call is made, we will log and send a notification for ceilometer to 
 consume, with the request-ids of both request ids.  One of the
 benefits of this approach is that we can easily generate a tree of all the 
 API calls that are made (and clearly show when multiple calls are made to the 
 same service), something that just a cross service request id would have 
 trouble with.
   
Is wise to trust anything a client provides to ensure traceability?  If 
 a user receives a request id back from Nova, then submits that request id in 
 an unrelated request to Neutron, the traceability would be effectively 
 corrupted.  If the consensus is that we don't want to securely deliver
 request ids for inter-service calls, how about requiring a service to log its 
 request id along with the request id returned from a call to another service 
 to achieve the a similar result?
   
Yes, this is what I was proposing. I think this is the best path forward.

I think logging the returned request-id on the client side is the best way.
We can track each API call even when multiple API calls are invoked within one
API request.
 
  Nova does this for glance client today.  Glance client logs the out glances 
  request Id and that message is wrapped with novas request id.

When I ran devstack, the glance client log (with the glance request-id) was not
wrapped with the nova request-id: http://paste.openstack.org/show/54590/
This is because glanceclient uses standard logging instead of 
openstack.common.log.
This is common to all client libraries, and I am not sure why the client
libraries do not use openstack.common.log.
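
As a rough illustration of the pattern (everything here is hypothetical;
the client call returning headers and 'context.request_id' are stand-ins,
not a real client API):

import logging

LOG = logging.getLogger(__name__)


def fetch_image(context, image_client, image_id):
    # Hypothetical client call that also exposes the response headers.
    headers, image = image_client.get(image_id)
    remote_req_id = headers.get('x-openstack-request-id',
                                headers.get('x-trans-id'))
    # Log the caller's own request id next to the one returned by the
    # other service, so an operator can walk the chain of calls.
    LOG.info('request %s called out to glance, which answered with '
             'request id %s', context.request_id, remote_req_id)
    return image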

 Ceilometer wanted notifications of these events as well so it could track 
 things better.

Should this feature be part of the client libraries, or of the server side which
calls the client libraries?
At the moment client libraries do not return information about response headers,
including the request-id.
If we support it on the server side, we need a standard way to return the
request-id in a response to the caller.
I don't have a good idea on this at the moment; it is just a question.

Thanks,
Akihiro


  
   Ok, great.  And as per your suggestion, a 

Re: [openstack-dev] [heat] Heat API v2 - Removal of template_url?

2013-12-06 Thread Steven Hardy
On Fri, Dec 06, 2013 at 11:38:03AM +1100, Angus Salkeld wrote:
 On 05/12/13 17:00 +, Steven Hardy wrote:
 On Thu, Dec 05, 2013 at 04:11:37PM +, ELISHA, Moshe (Moshe) wrote:
 Hey,
 
 I really liked the v2 Heat API (as proposed in Create a new v2 Heat 
 APIhttps://blueprints.launchpad.net/heat/+spec/v2api) and I think it 
 makes a lot of sense.
 
 One of the proposed changes is to Remove template_url from the request 
 POST, so the template will be passed using the template parameter in the 
 request body.
 
 Could someone please elaborate how exactly Heat Orchestration Templates 
 written in YAML will be embedded in the body?
 
 In exactly the same way they are now, try creating a stack using a HOT yaml
 template, with --debug enabled via python heatclient and you'll see what I
 mean:
 
 wget 
 https://raw.github.com/openstack/heat-templates/master/hot/F18/WordPress_Native.yaml
 
 heat --debug stack-create -P key_name=userkey -f ./WordPress_Native.yaml 
 wp1
 
 This works fine now, so there's nothing to do to support this.
 
 Given the talks about Heater/appstore/glance-templates it would be
 good to be able to pass a references to one of those systems so
 the user doesn't have to upload to glance, then send a copy to heat.
 
 so just pass a template_id like how nova.boot() takes an image_id.

Sure, that sounds like a good idea, when the template service exists.

That's kinda orthogonal from this issue though, which is removing
arbitrary user-provided URLs; I'm perfectly happy if we add template_id in
future.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-06 Thread Julien Danjou
On Fri, Dec 06 2013, Mark McLoughlin wrote:

Hi Mark,

 If the goal is allow applications to use oslo.messaging without using
 oslo.config, then what's driving this? I'm guessing some possible
 answers:

   5) But I want to avoid any dependency on oslo.config

I think that's the more important one to me.

  This could be fundamentally what we're talking about here, but I 
  struggle to understand it - oslo.config is pretty tiny and it only 
  requires argparse, so if it's just an implementation detail that 
  you don't even notice if you're not using config files then what 
  exactly is the problem?

 Basically, my thinking is that something like this example:

   https://gist.github.com/markmc/7823420

 where you can use oslo.messaging with just a dict of config values
 (rather than having to parse config files) should handle any reasonable
 concern that I've understood so far ... without having to change much at
 all.
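
(For illustration, the pattern in that example looks roughly like the
following sketch; the exact oslo.messaging calls and the transport URL are
assumptions on my part, not Mark's gist.)

from oslo.config import cfg
from oslo import messaging

# A private ConfigOpts instance; nothing is read from config files, the
# application hands oslo.messaging everything it needs directly.
conf = cfg.ConfigOpts()
conf([], project='demo')

transport = messaging.get_transport(
    conf, url='rabbit://guest:guest@localhost:5672/')

target = messaging.Target(topic='demo-topic')
client = messaging.RPCClient(transport, target)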

I definitely agree with your arguments. There are a large number of
technical solutions that can be used to bypass the usage of oslo.config
and make it work with whatever you're using.

I just can't stop thinking that a library shouldn't impose the use of a
configuration library. I can pick any library on PyPI and, fortunately,
most of them don't come with a dependency on the favorite configuration
library of their author or related project, with its usage spread all
over the code base.

While I do respect the fact that this is a library to be consumed mainly
in OpenStack (and I don't want to break that), I think we're also trying
to not be the new Zope and contribute in a sane way to the Python
ecosystem. And I think oslo.messaging doesn't do that right.

Now if the consensus is to leave it that way, I honestly won't fight it
over and over. As Mark proved, there are a lot of ways to circumvent the
oslo.config usage anyway.

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-06 Thread Joe Gordon
On Dec 6, 2013 4:26 PM, Akihiro Motoki mot...@da.jp.nec.com wrote:


 (2013/12/06 17:57), Joe Gordon wrote:
 
  On Dec 6, 2013 9:57 AM, Maru Newby ma...@redhat.com mailto:
ma...@redhat.com wrote:
   
   
On Dec 6, 2013, at 1:09 AM, John Dickinson m...@not.mn mailto:
m...@not.mn wrote:
   

 On Dec 5, 2013, at 1:36 AM, Maru Newby ma...@redhat.com mailto:
ma...@redhat.com wrote:


 On Dec 3, 2013, at 12:18 AM, Joe Gordon joe.gord...@gmail.commailto:
joe.gord...@gmail.com wrote:




 On Sun, Dec 1, 2013 at 7:04 PM, John Dickinson m...@not.mnmailto:
m...@not.mn wrote:
 Just to add to the story, Swift uses X-Trans-Id and generates
it in the outer-most catch_errors middleware.

 Swift's catch errors middleware is responsible for ensuring that
the transaction id exists on each request, and that all errors previously
uncaught, anywhere in the pipeline, are caught and logged. If there is not
a common way to do this, yet, I submit it as a great template for solving
this
  problem. It's simple, scalable, and well-tested (ie tests and running
in prod for years).


https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py

 Leaving aside error handling and only focusing on the
transaction id (or request id) generation, since OpenStack services are
exposed to untrusted clients, how would you propose communicating the
appropriate transaction id to a different service? I can see great benefit
to having a glance
  transaction ID carry through to Swift requests (and so on), but how
should the transaction id be communicated? It's not sensitive info, but I
can imagine a pretty big problem when trying to track down errors if a
client application decides to set eg the X-Set-Transaction-Id header on
every request
  to the same thing.

 -1 to cross service request IDs, for the reasons John mentions
above.


 Thanks for bringing this up, and I'd welcome a patch in Swift
that would use a common library to generate the transaction id, if it were
installed. I can see that there would be huge advantage to operators to
trace requests through multiple systems.

 Another option would be for each system that calls an another
OpenStack system to expect and log the transaction ID for the request that
was given. This would be looser coupling and be more forgiving for a
heterogeneous cluster. Eg when Glance makes a call to Swift, Glance cloud
log the
  transaction id that Swift used (from the Swift response). Likewise,
when Swift makes a call to Keystone, Swift could log the Keystone
transaction id. This wouldn't result in a single transaction id across all
systems, but it would provide markers so an admin could trace the request.

 There was a session on this at the summit, and although the
notes are a little scarce this was the conclusion we came up with.  Every
time a cross service call is made, we will log and send a notification for
ceilometer to consume, with the request-ids of both request ids.  One of the
  benefits of this approach is that we can easily generate a tree of all
the API calls that are made (and clearly show when multiple calls are made
to the same service), something that just a cross service request id would
have trouble with.

 Is wise to trust anything a client provides to ensure
traceability?  If a user receives a request id back from Nova, then submits
that request id in an unrelated request to Neutron, the traceability would
be effectively corrupted.  If the consensus is that we don't want to
securely deliver
  request ids for inter-service calls, how about requiring a service to
log its request id along with the request id returned from a call to
another service to achieve the a similar result?

 Yes, this is what I was proposing. I think this is the best path
forward.

 I think Logging returned request-id at client side is the best way.
 We can track each API call even when multiple API calls are invoked in
one API request.
  
   Nova does this for glance client today.  Glance client logs the out
glances request Id and that message is wrapped with novas request id.

 When I ran devstack, glance client log (with glance request-id) is not
wrapped
 with nova request-id: http://paste.openstack.org/show/54590/
 It is because glanceclient uses standard logging instead of
openstack.common.log.
 It is common to all client libraries and I am not sure why client
libraries
 do not use openstack.common.log.

Whoops, you are correct; we log both request ids, just not on the same line.

 
  Ceilometer wanted notifications of these events as well so it could
track things better.

 Should this feature a part of client libraries or server side which calls
client libraries?

Server side

 At now client libraries do not return information about response headers
including request-id.
 If we support it in server side, a standard way to return request-id in a
response to a caller.
 I 

Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread James Slagle
On Wed, Dec 4, 2013 at 2:12 AM, Robert Collins
robe...@robertcollins.net wrote:
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Four responded and said they'd try to be more active in reviews, or at
least would like/try to be.

However, given that:

- Robert seems to be very consistent in reviewing core every month,
including giving folks plenty of heads up about removal.
- There is now a defined shorter ramp up period to get back on core
- the *average* of 1 review/day is a very low bar

I think it's prudent and can't really object to removing these
individuals from core, so +1 for the removal.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-12-06 Thread Jaromir Coufal


On 2013/06/12 15:16, Anne Gentle wrote:




On Fri, Dec 6, 2013 at 8:12 AM, Thierry Carrez thie...@openstack.org 
mailto:thie...@openstack.org wrote:


Jaromir Coufal wrote:
 [...]
 I am not trying to convince anybody here, I accept the decision
'no' (at
 least for this moment). I just feel that it was not consensus
that most
 of people thinks that this is nonsense. I don't see any strong
reasons
 why not. In time, I believe more people will see how important
it is and
 hopefully OpenStack will recognize UX efforts officially.
 [...]

It's certainly not consensus, and I don't think anybody said this was
nonsense. It's just a delicate balance, and trying to find the most
sustainable and efficient way to bring UX concerns within
projects. Like
I said, the last thing we want is a fight between UX folks on one side
asking for stuff to get done and on the other side nobody in projects
actually caring about getting it done.

That said, I think you made great arguments for keeping a leg out and
organize in a cross-project way. After all we have other projects
(like
QA) which do that very successfully, so I'm definitely willing to
consider UX as a separate program.

My main concern would be that the UX team is relatively new (the
launchpad tracker for example was created on Oct 20) and that we
haven't
seen you around enough to see how you would interact with projects and
get your priorities across. There is no weekly UX team meetings listed
on https://wiki.openstack.org/wiki/Meetings, either.

Programs are about blessing existing teams and efforts (which already
obtain results), *not* to bootstrap new ones. We look into the team's
work and results and decide those are essential to the production of
OpenStack. So my advice to you would be to organize yourselves as a
team, engage with projects, deliver clear results, communicate around
those... and then apply again to be a Program if you think that's
still relevant.


Sure Thierry, it all makes sense and I understand that. I am very happy 
to continue our efforts, and that's actually what I wanted to express: 
that we might ask again later.


The only thing I want to make sure of, though, is that we have enough space 
at design summits, but I will revive this topic closer to the summit itself.


I too would really like you all to organize as a team very similar to 
how the Security team organizes itself - very much like what you are 
doing now.


It's interesting, the Security team ended up with a book sprint that 
produced a book deliverable that is a very valuable piece of 
documentation, and that wasn't a goal going in. So I think that the 
more you look for opportunities for UX across projects the more 
deliverables you could find that we haven't identified yet. So I 
really want to encourage you to keep looking for those areas where we 
have need for good experiences.


Thanks,
Anne
Thanks Anne, what the security team ended up with is awesome. I am eager 
to find all the areas where we can help and jump in - hopefully we 
will get enough resources in time to cover more and more issues.


Cheers
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-06 Thread Mark McLoughlin
On Fri, 2013-12-06 at 15:41 +0100, Julien Danjou wrote:
 On Fri, Dec 06 2013, Mark McLoughlin wrote:
 
 Hi Mark,
 
  If the goal is allow applications to use oslo.messaging without using
  oslo.config, then what's driving this? I'm guessing some possible
  answers:
 
5) But I want to avoid any dependency on oslo.config
 
 I think that's the more important one to me.
 
   This could be fundamentally what we're talking about here, but I 
   struggle to understand it - oslo.config is pretty tiny and it only 
   requires argparse, so if it's just an implementation detail that 
   you don't even notice if you're not using config files then what 
   exactly is the problem?
 
  Basically, my thinking is that something like this example:
 
https://gist.github.com/markmc/7823420
 
  where you can use oslo.messaging with just a dict of config values
  (rather than having to parse config files) should handle any reasonable
  concern that I've understood so far ... without having to change much at
  all.
 
 I definitely agree with your arguments. There's a large number of
 technical solutions that can be used to bypass the usage of oslo.config
 and make it work with whatever you're using..
 
 I just can't stop thinking that a library shouldn't impose any use of a
 configuration library. I can pick any library on PyPI, and, fortunately,
 most of them don't come with a dependency on the favorite configuration
 library of their author or related project, and its usage spread all
 over the code base.
 
 While I do respect the fact that this is a library to be consumed mainly
 in OpenStack (and I don't want to break that), I think we're also trying
 to not be the new Zope and contribute in a sane way to the Python
 ecosystem. And I think oslo.messaging doesn't do that right.
 
 Now if the consensus is to leave it that way, I honestly won't fight it
 over and over. As Mark proved, there's a lot of way to circumvent the
 oslo.config usage anyway.

Ok, let's say oslo.messaging didn't use oslo.config at all and just took
a free-form dict of configuration values. Then you'd have this
separation whereby you can write code to retrieve those values from any
number of possible configuration sources and pass them down to
oslo.messaging. I think that's what you're getting at?

However, what you lose with that is a consistent way of defining a
schema for those configuration options in oslo.messaging. Should a given
option be an int, bool or a list? What should its default be? And so on.
That stuff would live in the integration layer that maps from
oslo.config to a dict, even though it's totally useful when you just
supply a dict.

I guess there are two sides to oslo.config - the option schemas and the
code to retrieve values from various sources (command line, config files
or overrides/defaults). I think the option schemas are a useful
implementation detail in oslo.messaging, even if the values don't come
from the usual oslo.config sources.
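
For example, roughly (the option names here are illustrative, not the real
oslo.messaging option list):

from oslo.config import cfg

_rabbit_opts = [
    cfg.StrOpt('rabbit_host', default='localhost',
               help='The RabbitMQ broker address.'),
    cfg.IntOpt('rabbit_port', default=5672,
               help='The RabbitMQ broker port.'),
    cfg.BoolOpt('rabbit_use_ssl', default=False,
                help='Connect over SSL.'),
    cfg.ListOpt('rabbit_hosts', default=['localhost:5672'],
                help='RabbitMQ HA cluster host:port pairs.'),
]


def register_opts(conf):
    # The typed definitions and defaults live with the library, no matter
    # where the values themselves eventually come from.
    conf.register_opts(_rabbit_opts)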

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystoneclient] [Keystone] [Solum] Last released version of keystoneclient does not work with python33

2013-12-06 Thread Dolph Mathews
On Wed, Dec 4, 2013 at 7:48 PM, David Stanek dsta...@dstanek.com wrote:

 On Wed, Dec 4, 2013 at 6:44 PM, Adrian Otto adrian.o...@rackspace.comwrote:

 Jamie,

 Thanks for the guidance here. I am checking to see if any of our
 developers might take an interest in helping with the upstream work. At the
 very least, it might be nice to have some understanding of how much work
 there is to be done in HTTPretty.


 (Dolph correct me if I am wrong, but...)

 I don't think that there is much work to be done beyond getting that pull
 request merged upstream.  Dolph ran the tests using the code from the pull
 request somewhat successfully.  The errors that we saw were just in
 keystoneclient code.


++ and the other errors I was hitting all have open patches in gerrit to
see them fixed. It didn't seem like we were far off, but I haven't tested
all these patches together yet to find out if they're just hiding even more
problems. Either way, a py33 test run for keystoneclient will look very
different very soon.


 --
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Tuskar] Questions around Development Process

2013-12-06 Thread Tzu-Mainn Chen
Hey all,

We're starting to work on the UI for tuskar based on Jarda's wireframes, and as 
we're doing so, we're realizing that
we're not quite sure what development methodology is appropriate.  Some 
questions:

a) Because we're essentially doing a tear-down and re-build of the whole 
architecture (a lot of the concepts in tuskar
will simply disappear), it's difficult to do small incremental patches that 
support existing functionality.  Is it okay
to have patches that break functionality?  Are there good alternatives?

b) In the past, we allowed parallel development of the UI and API by having 
well-documented expectations of what the API
would provide.  We would then mock those calls in the UI, replacing them with 
real API calls as they became available.  Is
this acceptable?

If there are precedents for this kind of stuff, we'd be more than happy to 
follow them!

Thanks,
Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Docker] What are the plans or thoughts about providing volumes aka folder mounts

2013-12-06 Thread Daniel Kuffner
Hi All,

At our company we are using the docker hypervisor on OpenStack for a
prototype.  We need to mount a folder inside a container.
To achieve this goal I have implemented a hack which allows a folder
mount to be specified via nova metadata. For example a heat template could
look like:

  my-container:
    Type: OS::Nova::Server
    Properties:
      flavor: m1.large
      image: my-image:latest
      metadata:
        Volumes: /host/path:/guest/path

This approach is of course not perfect and even a security risk (which
in our case is not an issue since we are not going to provide a public
cloud).
Are there any other ideas or plans for how to provide volume/folder mounts in the future?
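
For reference, the driver-side half of the hack looks roughly like this
(a sketch only; the names here are illustrative, not the actual patch):

def _binds_from_metadata(instance):
    # Turn 'Volumes: /host/a:/guest/a,/host/b:/guest/b' metadata into
    # Docker bind mounts.
    raw = instance.metadata.get('Volumes', '')
    binds = {}
    for pair in filter(None, raw.split(',')):
        host_path, guest_path = pair.split(':', 1)
        binds[host_path] = {'bind': guest_path, 'ro': False}
    return binds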

regards,
Daniel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova] The document for the changes from Nova v2 api to v3

2013-12-06 Thread David Kranz

On 11/13/2013 06:09 PM, Christopher Yeoh wrote:
On Thu, Nov 14, 2013 at 7:52 AM, David Kranz dkr...@redhat.com 
mailto:dkr...@redhat.com wrote:


On 11/13/2013 08:30 AM, Alex Xu wrote:

Hi, guys

This is the document for the changes from Nova v2 api to v3:
https://wiki.openstack.org/wiki/NovaAPIv2tov3
I will appreciate if anyone can help for review it.

Another problem comes up - how to keep the doc updated. So can we
ask people, who change
something of api v3, update the doc accordingly? I think it's a
way to resolve it.

Thanks
Alex



___
openstack-qa mailing list
openstack...@lists.openstack.org  mailto:openstack...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-qa

Thanks, this is great. I fixed a bug in the os-services section.
BTW, openstack...@lists.openstack.org
mailto:openstack...@lists.openstack.org list is obsolete.
openstack-dev with subject starting with [qa] is the current qa
list. About updating, I think this will have to be heavily
socialized in the nova team. The initial review should happen by
those reviewing the tempest v3 api changes. That is how I found
the os-services bug.


While reviewing https://review.openstack.org/#/c/59939/ I found that a 
lot of the flavors changes are missing from this doc. Hopefully someone 
closer to the code changes can update it.


 -David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Incubation Request for Barbican

2013-12-06 Thread Monty Taylor


On 12/06/2013 08:35 AM, John Wood wrote:
 Hello folks,
 
 Just an FYI that I've submitted a pull request [1] to replace Celery
 with oslo.messaging.

wow. That was quick!

/me is impressed

Since you jumped on that - I went ahead and jumped on a pbr-ification
patch for you. It may not work yet - I'm on a weird network and having
trouble installing python things into virtualenvs:

  https://review.openstack.org/60551
  https://review.openstack.org/60552


 I've tagged it as a work in progress per this note:
 
 Please review this CR, which replaces Celery with oslo.messaging
 components. I've verified that this works in my local environment,
 but I still need to add unit testing. I also need to verify that it
 works correctly with an HA Rabbit MQ cluster, as that is a hard
 requirement for Barbican.
 
 Special thanks to Mark McLoughlin and Sylvain Bauza for pointing me
 to very useful links here [2] and here [3] respectively.
 
 [1] https://review.openstack.org/#/c/60427/ [2]
 https://review.openstack.org/#/c/39929 [3]
 https://review.openstack.org/#/c/57880
 
 Thanks, John
 
  From: Monty Taylor
 [mord...@inaugust.com] Sent: Thursday, December 05, 2013 8:35 PM To:
 Mark McLoughlin; Douglas Mendizabal Cc: OpenStack Development Mailing
 List (not for usage questions); openstack...@lists.openstack.org;
 barbi...@lists.rackspace.com Subject: Re: [openstack-tc]
 [openstack-dev] Incubation Request for Barbican
 
 On 12/06/2013 01:53 AM, Mark McLoughlin wrote:
 On Thu, 2013-12-05 at 23:37 +, Douglas Mendizabal wrote:
 
 I agree that this is concerning. And that what's concerning
 isn't so much that the project did something different, but
 rather that choice was apparently made because the project
 thought it was perfectly fine for them to ignore what other
 OpenStack projects do and go off and do its own thing.
 
 We can't make this growth in the number of OpenStack projects
 work if each project goes off randomly and does its own thing
 without any concern for the difficulties that creates.
 
 Mark.
 
 Hi Mark,
 
 You may have missed it, but barbican has added a blueprint to
 change our queue to use oslo.messaging [1]
 
 I just wanted to clarify that we didn’t choose Celery because we
 thought that “it was perfectly fine to ignore what other
 OpenStack projects do”. Incubation has been one of our goals
 since the project began.  If you’ve taken the time to look at our
 code, you’ve seen that we have been using oslo.config this whole
 time.  We chose Celery because it was
 
 a) Properly packaged like any other python library, so we could
 just pip-install it. b) Well documented c) Well tested in
 production environments
 
 At the time none of those were true for oslo.messaging.  In
 fact, oslo.messgaging still cannot be pip-installed as of today.
 Obviously, had we know that using oslo.messaging is hard
 requirement in advance, we would have chosen it despite its poor
 distribution story.
 
 I do sympathise, but it's also true is that all other projects
 were using the oslo-incubator RPC code at the time you chose
 Celery.
 
 I think all the verbiage in this thread about celery is just to 
 reinforce that we need to be very sure that new projects feel a 
 responsibility to fit closely in with the rest of OpenStack. It's
 not about technical requirements so much as social responsibility.
 
 But look - I think you've reacted well to the concern and hopefully
 if it feels like there was an overreaction that you can understand
 the broader thing we're trying to get at here.
 
 I agree. I think you've done an excellent job in responding to it -
 and I appreciate that. We're trying to be clearer about expectations
 moving forward, which I hope this thread in some part helps with.
 
 Monty
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Docker] What are the plans or thoughts about providing volumes aka folder mounts

2013-12-06 Thread Russell Bryant
On 12/06/2013 10:54 AM, Daniel Kuffner wrote:
 Hi All,
 
 We are using in our company for a prototype the docker hypervisor on
 openstack.  We have the need to mount a folder inside of a container.
 To achieve this goal I have implemented a hack which allows to specify
 a folder mount via nova metadata. For example a heat template could
 look like:
 
  my-container:
 Type: OS::Nova::Server
 Properties:
   flavor: m1.large
   image: my-image:latest
   metadata:
  Volumes: /host/path:/guest/path
 
 This approach is of course not perfect and even a security risk (which
 is in our case no issue since we are not going to provide a public
 cloud).
 Any other ideas or plans how to provide the volume/folder mount in the future?

I think *directly* specifying a host path isn't very cloudy.  We don't
really have an abstraction appropriate for this yet.  Manila [1]
(filesystem aaS) seems to be the closest thing.  Perhaps some work in
Manila and some Nova+Manila integration would be the right direction here.

Thoughts?

[1] https://wiki.openstack.org/wiki/Manila_Overview

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Bugs

2013-12-06 Thread Matt Riedemann



On Wednesday, December 04, 2013 7:22:23 AM, Joe Gordon wrote:

TL;DR: Gate is failing 23% of the time due to bugs in nova, neutron
and tempest. We need help fixing these bugs.


Hi All,

Before going any further we have a bug that is affecting gate and
stable, so its getting top priority here. elastic-recheck currently
doesn't track unit tests because we don't expect them to fail very
often. Turns out that assessment was wrong, we now have a nova py27
unit test bug in gate and stable gate.

https://bugs.launchpad.net/nova/+bug/1216851
Title: nova unit tests occasionally fail migration tests for mysql and
postgres
Hits
  FAILURE: 74
The failures appear multiple times for a single job, and some of those
are due to bad patches in the check queue.  But this is being seen in
stable and trunk gate so something is definitely wrong.

===


Its time for another edition of of 'Top Gate Bugs.'  I am sending this
out now because in addition to our usual gate bugs a few new ones have
cropped up recently, and as we saw a few weeks ago it doesn't take
very many new bugs to wedge the gate.

Currently the gate has a failure rate of at least 23%! [0]

Note: this email was generated with
http://status.openstack.org/elastic-recheck/ and
'elastic-recheck-success' [1]

1) https://bugs.launchpad.net/bugs/1253896
Title: test_minimum_basic_scenario fails with SSHException: Error
reading SSH protocol banner
Projects:  neutron, nova, tempest
Hits
  FAILURE: 324
This one has been around for several weeks now and although we have
made some attempts at fixing this, we aren't any closer at resolving
this then we were a few weeks ago.

2) https://bugs.launchpad.net/bugs/1251448
Title: BadRequest: Multiple possible networks found, use a Network ID
to be more specific.
Project: neutron
Hits
  FAILURE: 141

3) https://bugs.launchpad.net/bugs/1249065
Title: Tempest failure: tempest/scenario/test_snapshot_pattern.py
Project: nova
Hits
  FAILURE: 112
This is a bug in nova's neutron code.

4) https://bugs.launchpad.net/bugs/1250168
Title: gate-tempest-devstack-vm-neutron-large-ops is failing
Projects: neutron, nova
Hits
  FAILURE: 94
This is an old bug that was fixed, but came back on December 3rd. So
this is a recent regression. This may be an infra issue.

5) https://bugs.launchpad.net/bugs/1210483
Title: ServerAddressesTestXML.test_list_server_addresses FAIL
Projects: neutron, nova
Hits
  FAILURE: 73
This has had some attempts made at fixing it but its still around.


In addition to the existing bugs, we have some new bugs on the rise:

1) https://bugs.launchpad.net/bugs/1257626
Title: Timeout while waiting on RPC response - topic: network, RPC
method: allocate_for_instance info: unknown
Project: nova
Hits
  FAILURE: 52
large-ops only bug. This has been around for at least two weeks, but
we have seen this in higher numbers starting around December 3rd. This
may  be an infrastructure issue as the neutron-large-ops started
failing more around the same time.

2) https://bugs.launchpad.net/bugs/1257641
Title: Quota exceeded for instances: Requested 1, but already used 10
of 10 instances
Projects: nova, tempest
Hits
  FAILURE: 41
Like the previous bug, this has been around for at least two weeks but
appears to be on the rise.



Raw Data: http://paste.openstack.org/show/54419/


best,
Joe


[0] failure rate = 1-(success rate gate-tempest-dsvm-neutron)*(success
rate ...) * ...

gate-tempest-dsvm-neutron = 0.00
gate-tempest-dsvm-neutron-large-ops = 11.11
gate-tempest-dsvm-full = 11.11
gate-tempest-dsvm-large-ops = 4.55
gate-tempest-dsvm-postgres-full = 10.00
gate-grenade-dsvm = 0.00

(I hope I got the math right here)

[1]
http://git.openstack.org/cgit/openstack-infra/elastic-recheck/tree/elastic_recheck/cmd/check_success.py


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Let's add bug 1257644 [1] to the list.  I'm pretty sure this is due to 
some recent code [2][3] in the nova libvirt driver that is 
automatically disabling the host when the libvirt connection drops.


Joe said there was a known issue with libvirt connection failures so 
this could be duped against that, but I'm not sure where/what that one 
is - maybe bug 1254872 [4]?


Unless I just don't understand the code, there is some funny logic 
going on in the libvirt driver when it's automatically disabling a host, 
which I've documented in bug 1257644.  It would help to have some 
libvirt-minded people helping to look at that, or the authors/approvers 
of those patches.


Also, does anyone know if libvirt will pass a 'reason' string to the 
_close_callback function?  I was digging through the libvirt code this 
morning but couldn't figure out where the callback is actually called 
and with what parameters.  The code in nova seemed to just be based on 
the patch that danpb had in libvirt [5].
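
From what I can tell from the libvirt python binding, registration looks
roughly like the sketch below, and the callback receives an integer reason
code (one of the VIR_CONNECT_CLOSE_REASON_* constants) rather than a
string - but I'd appreciate confirmation from someone closer to libvirt:

import libvirt


def _close_callback(conn, reason, opaque):
    # 'reason' is an int: ERROR, EOF, KEEPALIVE or CLIENT.
    print('connection to %s closed, reason %d' % (conn.getURI(), reason))


conn = libvirt.openReadOnly('qemu:///system')
conn.registerCloseCallback(_close_callback, None)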


This bug is going to raise a bigger long-term question 

[openstack-dev] [qa] Changes slipping through the nova v2-v3 port

2013-12-06 Thread David Kranz
I have been trying to review all of the nova v3 changes. Many of these 
patches have been around for a while and have not kept up with changes 
that were made to the v2 tests after a v2 test file was copied to v3. I 
think anyone submitting a patch to the nova v2 test code needs to file 
a bug ticket saying it needs to be ported to v3 if they have not also 
changed the existing v3 tests. We need to do that until the v3 queue is 
cleared. Reviewers should also keep this issue in mind when reviewing 
changes to nova v2 tests.


 -David



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering olso.messaging usage of config

2013-12-06 Thread Joshua Harlow
I really have to agree with this. It's especially important if oslo.messaging 
is also used in libraries like taskflow. If oslo.messaging requires that its users 
also use oslo.config, then by using it in taskflow, taskflow imposes the same 
oslo.config usage. That makes all libraries that use it inherently usable only 
in the OpenStack ecosystem, which I think is very bad open-source behavior (not 
exactly open). There are other reasons too: a configuration dict means you can 
have many different active instances being used simultaneously (each with its 
own config), whereas with oslo.config, since it is a static configuration 
object, you get one simultaneous instance. So this is yet another behavior that 
I, as a library provider, think is a very unhealthy restriction to impose on 
people who use taskflow.

Sent from my really tiny device...

 On Dec 6, 2013, at 6:46 AM, Julien Danjou jul...@danjou.info wrote:
 
 On Fri, Dec 06 2013, Mark McLoughlin wrote:
 
 Hi Mark,
 
 If the goal is allow applications to use oslo.messaging without using
 oslo.config, then what's driving this? I'm guessing some possible
 answers:
 
  5) But I want to avoid any dependency on oslo.config
 
 I think that's the more important one to me.
 
 This could be fundamentally what we're talking about here, but I 
 struggle to understand it - oslo.config is pretty tiny and it only 
 requires argparse, so if it's just an implementation detail that 
 you don't even notice if you're not using config files then what 
 exactly is the problem?
 
 Basically, my thinking is that something like this example:
 
  https://gist.github.com/markmc/7823420
 
 where you can use oslo.messaging with just a dict of config values
 (rather than having to parse config files) should handle any reasonable
 concern that I've understood so far ... without having to change much at
 all.
 
 I definitely agree with your arguments. There's a large number of
 technical solutions that can be used to bypass the usage of oslo.config
 and make it work with whatever you're using..
 
 I just can't stop thinking that a library shouldn't impose any use of a
 configuration library. I can pick any library on PyPI, and, fortunately,
 most of them don't come with a dependency on the favorite configuration
 library of their author or related project, and its usage spread all
 over the code base.
 
 While I do respect the fact that this is a library to be consumed mainly
 in OpenStack (and I don't want to break that), I think we're also trying
 to not be the new Zope and contribute in a sane way to the Python
 ecosystem. And I think oslo.messaging doesn't do that right.
 
 Now if the consensus is to leave it that way, I honestly won't fight it
 over and over. As Mark proved, there's a lot of way to circumvent the
 oslo.config usage anyway.
 
 -- 
 Julien Danjou
 ;; Free Software hacker ; independent consultant
 ;; http://julien.danjou.info
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Behavior change around the PyPI mirror and openstack/requirements

2013-12-06 Thread Monty Taylor
Hey all!

Things keep getting more complex around here, so we keep doing more stuffs.

Up until today, a project's forced participation on the OpenStack PyPI
Mirror (pypi.openstack.org) while in the gate was controlled by the
project being prefixed with openstack/. Well, that's clearly not rich
enough semantics. What if you want to make sure that your project is
ready for incubation, but you're still in stackforge?

Anywho - gate selection will now be tied to the projects.txt file in
openstack/requirements. Essentially, if you receive automatic
requirements sync commits, your commits will be tested with the mirror.
If you are tied to the mirror, you will receive the commits. (It's the
same thing, get it)

Most of you will probably not notice this. Unless something goes
horribly horribly wrong.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Mark Washenberger
On Thu, Dec 5, 2013 at 9:32 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 12/05/2013 04:25 PM, Clint Byrum wrote:

 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:

 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:

 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
   wrote:

  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:

 Why not just use glance?


 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users

 My responses:

 1: Irrelevant. Smaller things will fit in it just fine.


 Fitting is one thing, optimizations around particular assumptions
 about the size of data and the frequency of reads/writes might be an 
 issue,
 but I admit to ignorance about those details in Glance.


 Optimizations can be improved for various use cases. The design,
 however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.


 I think we are getting out into the weeds a little bit here. It is
 important to think about these apis in terms of what they actually do,
 before the decision of combining them or not can be made.

 I think of HeatR as a template storage service, it provides extra data
 and operations on templates. HeatR should not care about how those
 templates are stored.
 Glance is an image storage service, it provides extra data and
 operations on images (not blobs), and it happens to use swift as a backend.

 If HeatR and Glance were combined, it would result in taking two very
 different types of data (template metadata vs image metadata) and mashing
 them into one service. How would adding the complexity of HeatR benefit
 Glance, when they are dealing with conceptually two very different types of
 data? For instance, should a template ever care about the field minRam
 that is stored with an image? Combining them adds a huge development
 complexity with a very small operations payoff, and so Openstack is already
 so operationally complex that HeatR as a separate service would be
 knowledgeable. Only clients of Heat will ever care about data and
 operations on templates, so I move that HeatR becomes it's own service, or
 becomes part of Heat.


 I spoke at length via G+ with Randall and Tim about this earlier today.
 I think I understand the impetus for all of this a little better now.

 Basically what I'm suggesting is that Glance is only narrow in scope
 because that was the only object that OpenStack needed a catalog for
 before now.

 However, the overlap between a catalog of images and a catalog of
 templates is quite comprehensive. The individual fields that matter to
 images are different than the ones that matter to templates, but that
 is a really minor detail isn't it?

 I would suggest that Glance be slightly expanded in scope to be an
 object catalog. Each object type can have its own set of fields that
 matter to it.

 This doesn't have to be a minor change to glance to still have many
 advantages over writing something from scratch and asking people to
 deploy another service that is 99% the same as Glance.


 My suggestion for long-term architecture would be to use Murano for
 catalog/metadata information (for images/templates/whatever) and move the
 block-streaming drivers into Cinder, and get rid of the Glance project
 entirely. Murano would then become the catalog/registry of objects in the
 OpenStack world, Cinder would be the thing that manages and streams blocks
 of data or block devices, and Glance could go away. Imagine it... OpenStack
 actually *reducing* the number of projects instead of expanding! :)


I think it is good to mention the idea of shrinking the overall OpenStack
code base. The fact that the best code offers a lot of features without a
hugely expanded codebase often seems forgotten--perhaps because it is
somewhat incompatible with our low-barrier-to-entry model of development.

However, as a mild defense of Glance's place in the OpenStack ecosystem,
I'm not sure yet that a general catalog/metadata service would be a proper
replacement. There are two key distinctions between Glance and a
catalog/metadata service. One is that Glance *owns* the reference to the
underlying data--meaning Glance can control the consistency of its
references. I.e. you should not be able to delete the image data out from
underneath Glance while the Image entry exists, in order to avoid a
terrible user experience. Two is that Glance understands and coordinates
the meaning and relationships of Image metadata. Without these
distinctions, I'm not sure we need any OpenStack project at all--we should
probably just publish an LDAP schema for 

Re: [openstack-dev] [ceilometer] [marconi] Notifications brainstorming session tomorrow @ 1500 UTC

2013-12-06 Thread Kurt Griffiths
That’s a good question. IMO, this is an important use case, and should be 
considered within scope of the project.

Rackspace uses a precursor to Marconi for its Cloud Backup product, and it has 
worked out well for showing semi-realtime updates, e.g., progress on active 
backup jobs. We have a large number of backup agents posting events at any 
given time. The web-based control panel polls every few seconds for updates, 
but the message service was optimized for frequent, low-traffic requests like 
that, so it hasn’t been a real problem.

I’ve tried to promote a performance-oriented mindset from the beginning of the 
Marconi project, and I would like to give a shout-out to the team for the fine 
work they’ve done in this area to date; queues scale quite well, and benchmarks 
have shown promising throughput and latency numbers that will only improve as 
we continue to tune the existing code (and add transport and storage drivers 
designed for ultra-high-throughput use cases).

That being said, we definitely need to consider the load on the various 
OpenStack components, themselves, for generating events (i.e., pushing events 
to a queue). I would love to learn more about the requirements of individual 
project teams in this respect (those who are interested in surfacing events to 
end users).

From: Ian Wells ijw.ubu...@cack.org.ukmailto:ijw.ubu...@cack.org.uk
Reply-To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Wednesday, December 4, 2013 at 8:30 AM
To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ceilometer] [marconi] Notifications brainstorming 
session tomorrow @ 1500 UTC

How frequent do you imagine these notifications being?  There's a wide 
variation here between the 'blue moon' case where disk space is low and 
frequent notifications of things like OS performance, which you might want to 
display in Horizon or another monitoring tool on an every-few-seconds basis, or 
instance state change, which is usually driven by polling at present.

I'm not saying that we should necessarily design notifications for the latter 
cases, because it introduces potentially quite a lot of user-demanded load on 
the OpenStack components; I'm just asking for a statement of intent.
--
Ian.


On 4 December 2013 16:09, Kurt Griffiths 
kurt.griffi...@rackspace.commailto:kurt.griffi...@rackspace.com wrote:
Thanks! We touched on this briefly during the chat yesterday, and I will
make sure it gets further attention.

On 12/3/13, 3:54 AM, Julien Danjou 
jul...@danjou.infomailto:jul...@danjou.info wrote:

On Mon, Dec 02 2013, Kurt Griffiths wrote:

 Following up on some conversations we had at the summit, I'd like to get
 folks together on IRC tomorrow to crystallize the design for a notifications
 project under the Marconi program. The project's goal is to create a service
 for surfacing events to end users (where a user can be a cloud app
 developer, or a customer using one of those apps). For example, a developer
 may want to be notified when one of their servers is low on disk space.
 Alternatively, a user of MyHipsterApp may want to get a text when one of
 their friends invites them to listen to That Band You've Never Heard Of.

 Interested? Please join me and other members of the Marconi team tomorrow,
 Dec. 3rd, for a brainstorming session in #openstack-marconi at 1500 UTC
 (http://www.timeanddate.com/worldclock/fixedtime.html?hour=15min=0sec=0).
 Your contributions are crucial to making this project awesome.

 I've seeded an etherpad for the discussion:

 https://etherpad.openstack.org/p/marconi-notifications-brainstorm

This might (partially) overlap with what Ceilometer is doing with its
alarming feature, and one of the blueprints on our roadmap for Icehouse:

  https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification

While it doesn't solve the use case at the same level, the technical
mechanism is likely to be similar.

--
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] list of negative tests that need to be separated from other tests.

2013-12-06 Thread Adalberto Medeiros

Hi!

In the QA meeting yesterday, we decided to create a blueprint specifically 
for the negative tests in a separate file: 
https://blueprints.launchpad.net/tempest/+spec/negative-test-files and 
use it to track the patches.


I added the etherpad link Ken'ichi pointed out to this bp. Ken'ichi, should 
you be the owner of this bp?


Giulio, could you mark it In Progress?

Regards,

Adalberto Medeiros
Linux Technology Center
Openstack and Cloud Development
IBM Brazil
Email: adal...@linux.vnet.ibm.com

On Mon 02 Dec 2013 10:23:07 PM BRST, Christopher Yeoh wrote:


On Tue, Dec 3, 2013 at 9:43 AM, Kenichi Oomichi
oomi...@mxs.nes.nec.co.jp mailto:oomi...@mxs.nes.nec.co.jp wrote:


Hi Sean, David, Marc

I have one question about negative tests.
Now we are in moratorium on new negative tests in Tempest:
http://lists.openstack.org/pipermail/openstack-dev/2013-November/018748.html

Is it OK to consider this kind of patch(separating negative tests from
positive test file, without any additional negative tests) as an
exception?


I don't have a strong opinion on this, but I think it's ok given it
will make the eventual removal of
hand coded negative tests in the future easier even though it costs us
a bit of churn now.

Chris


Thanks
Ken'ichi Ohmichi

---

 -Original Message-
 From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com
mailto:adal...@linux.vnet.ibm.com]
 Sent: Monday, December 02, 2013 8:33 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [tempest] list of negative tests
that need to be separated from other tests.

 Thanks Ken'ichi. I added my name to a couple of them in that list.

 Adalberto Medeiros
 Linux Technology Center
 Openstack and Cloud Development
 IBM Brazil
 Email: adal...@linux.vnet.ibm.com
mailto:adal...@linux.vnet.ibm.com

 On Mon 02 Dec 2013 07:36:38 AM BRST, Kenichi Oomichi wrote:
 
  Hi Adalberto,
 
  -Original Message-
  From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com
mailto:adal...@linux.vnet.ibm.com]
  Sent: Saturday, November 30, 2013 11:29 PM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [tempest] list of negative tests
that need to be separated from other tests.
 
  Hi!
 
  I understand that one action toward negative tests, even before
  implementing the automatic schema generation, is to move them
to their
  own file (.py), thus separating them from the 'positive'
tests. (See
  patch https://review.openstack.org/#/c/56807/ as an example).
 
  In order to do so, I've got a list of testcases that still
have both
  negative and positive tests together, and listed them in the
following
  etherpad link:
https://etherpad.openstack.org/p/bp_negative_tests_list
 
  The idea here is to have patches for each file until we get
all the
  negative tests in their own files. I also linked the etherpad
to the
  specific blueprint created by Marc for negative tests in icehouse
  (https://blueprints.launchpad.net/tempest/+spec/negative-tests ).
 
  Please, send any comments and whether you think this is the right
  approach to keep track on that task.
 
  We have already the same etherpad, and we are working on it.
  Please check the following:
  https://etherpad.openstack.org/p/TempestTestDevelopment
 
 
  Thanks
  Ken'ichi Ohmichi
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-06 Thread Ben Nemec

On 2013-12-05 21:38, Jay Pipes wrote:

On 12/04/2013 12:10 PM, Russell Bryant wrote:

On 12/04/2013 11:16 AM, Nikola Đipanov wrote:
Resurrecting this thread because of an interesting review that came up
yesterday [1].

It seems that our lack of a firm decision on what to do with the mocking
framework has left people confused. In the hope of helping - I'll give my view
of where things are now and what we should do going forward, and
hopefully we'll reach some consensus on this.

Here's the breakdown:

We should abandon mox:
* It has not had a release in over 3 years [2] and a patch upstream for 2
* There are bugs that are impacting the project with it (see above)
* It will not be ported to python 3

Proposed path forward options:
1) Port nova to mock now:
   * Literally unmanageable - huge review overhead and regression risk
for not so much gain (maybe) [1]

2) Opportunistically port nova (write new tests using mock, when fixing
tests, move them to mock):
  * Will take a really long time to move to mock, and is not really a
solution since we are stuck with mox for an undetermined period of time
- it's what we are doing now (kind of).

3) Same as 2) but move current codebase to mox3
  * Buys us py3k compat, and fresher code
  * Mox3 and mox have diverged and we would need to backport mox fixes
onto the mox3 tree and become de-facto active maintainers (as per Peter
Feiner's last email - that may not be so easy).

I think we should follow path 3) if we can, but we need to:

1) Figure out what is the deal with mox3 and decide if owning it will
really be less trouble than porting nova. To be honest - I was unable to
even find the code repo for it, only [3]. If anyone has more info -
please weigh in. We'll also need volunteers.

2) Make better testing guidelines when using mock, and maybe add some
testing helpers (like we already have for mox) that will make porting
existing tests easier. mreidem already put this on this week's nova
meeting agenda - so that might be a good place to discuss all the issues
mentioned here as well.

We should really take a stronger stance on this soon IMHO, as this comes
up with literally every commit.


I think option 3 makes the most sense here (pending anyone saying we
should run away screaming from mox3 for some reason).  It's actually
what I had been assuming since this thread a while back.


What precisely is the benefit of moving the existing code to mox3
versus moving the existing code to mock? Is mox3 so similar to mox
that the transition would be minimal?


This means that we don't need to *require* that tests get converted if
you're changing one.  It just gets you bonus imaginary internet 
points.


Requiring mock for new tests seems fine.  We can grant exceptions in
specific cases if necessary.  In general, we should be using mock for
new tests.


My vote would be to use mock for everything new (no brainer), keep old
mox stuff around and slowly port it to mock. I see little value in
bringing in another mox3 library, especially if we'd end up having to
maintain it.


My understanding is that mox3 is a drop-in, Python 3 compatible version 
of mox.


I agree that spending any significant time maintaining mox3 is a bad 
thing at this point.  Mock is part of the stdlib in Python 3 and I don't 
think we should put a lot of time into reinventing the wheel.  That 
said, as long as mox3 works right now I don't think we should be 
rewriting mox test cases just to move them to mock either.  That's a 
whole lot of code churn for basically no benefit.


So my preference would be to:
1) Use mock for new test cases, with possible exceptions for adding to 
test classes that already use mox
2) Leave the existing mox test cases alone as long as they work fine 
with mox3.

3) If any test cases don't work in mox3, rewrite them in mock
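
For illustration, a new-style test written directly against mock might look
like the following sketch (the class under test and its behaviour here are
hypothetical examples, not nova code):

    import unittest

    try:
        from unittest import mock  # Python 3 stdlib
    except ImportError:
        import mock  # external mock library on Python 2


    class TestVolumeAttach(unittest.TestCase):
        def test_attach_calls_driver(self):
            # Build a fake driver and stub its return value with mock.
            driver = mock.Mock()
            driver.attach.return_value = '/dev/vdb'

            device = driver.attach('instance-1', 'volume-1')

            self.assertEqual('/dev/vdb', device)
            driver.attach.assert_called_once_with('instance-1', 'volume-1')


    if __name__ == '__main__':
        unittest.main()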

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Ladislav Smola

On 12/06/2013 05:36 PM, Ben Nemec wrote:


On 2013-12-06 03:22, Ladislav Smola wrote:


On 12/06/2013 09:56 AM, Jaromir Coufal wrote:


On 2013/04/12 08:12, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri  Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

Hey there,

thanks Rob for keeping eye on this. Speaking for myself, as current 
non-coder it was very hard to keep pace with others, especially when 
UI was on hold and I was designing future views. I'll continue 
working on designs much more, but I will also keep an eye on code 
which is going in. I believe that UX reviews will be needed before 
merging so that we assure keeping the vision. That's why I would 
like to express my will to stay within -core even when I don't 
deliver that big amount of reviews as other engineers. However if 
anybody feels that I should be just +1, I completely understand and 
I will give up my +2 power.




I wonder whether there can be a sort of honorary core title. jcoufal 
is contributing a lot, but not that much with code or reviews.


What purpose would this serve?  The only thing core gives you is the 
ability to +2 in Gerrit.  If you're not reviewing, core is 
meaningless.  It's great to contribute to the mailing list, but being 
core shouldn't have any influence on that one way or another.  This is 
a meritocracy where suggestions are judged based on their value, not 
whether the suggester has +2 ability (which honorary core wouldn't 
provide anyway, I assume).  At least that's the ideal.  I think 
everyone following the project is aware of Jaromir's contributions and 
a title isn't going to change that one way or another.




Well. It's true. The only thing that comes to my mind is a swell dinner 
at summit. :-D



-Ben



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Clint Byrum
Excerpts from Robert Collins's message of 2013-12-03 23:12:39 -0800:
 Hi,
 like most OpenStack projects we need to keep the core team up to
 date: folk who are not regularly reviewing will lose context over
 time, and new folk who have been reviewing regularly should be trusted
 with -core responsibilities.
 
 In this months review:
  - Ghe Rivero for -core

+1, We've been getting good reviews from Ghe for a while now. :)

  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

I suggest we delay this removal for 30 days. I know it is easy to add
them back in, but I hesitate to disrupt the flow if these people all
are willing to pick up the pace again. They may not have _immediate_
code knowledge but they should have enough historical knowledge that
has not gone completely stale in just the last 30-60 days.

What I'm suggesting is that review velocity will benefit from core being
a little more sticky, especially for sustained contributors who have
just had their attention directed elsewhere briefly.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [cinder] upgrade issues in lock_path in cinder after oslo utils sync (was: creating a default for oslo config variables within a project?)

2013-12-06 Thread Sean Dague
So it still seems that we are at an impasse here on getting new oslo
lockutils into cinder because it doesn't come with a working default.

As a recap - https://review.openstack.org/#/c/48935/ (that sync)

is blocked by failing upgrade testing, because lock_path has no default,
so it has to land config changes simultaneously with the commit, otherwise
cinder explodes on startup (not setting that variable is a fatal error).
I consider that an upgrade blocker, and am not comfortable
with the workaround - https://review.openstack.org/#/c/52070/3

I've proposed an oslo patch that would give us a default plus an ERROR
log message if you used it - https://review.openstack.org/#/c/60274/

The primary concern here is that it opens up a local DOS attack because
it's a well known directory. This is a valid concern. My feeling is you
are lost anyway if you have malicious users on your system, and if we've
narrowed them down to only DOSing (and there are other ways they could do
that), I think we've narrowed the surface enough to make this acceptable
at the ERROR log level. However there are objections, so at this point
it seems like we need to summarize the state of the world, get this
back onto the list with a more descriptive subject, and see who else
wants to weigh in.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering oslo.messaging usage of config

2013-12-06 Thread Joshua Harlow
Previous not precious, ha, durn autocorrect, lol.

Sent from my really tiny device...

 On Dec 6, 2013, at 9:50 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 
 Forgive me for not understanding your precious email (which I guess was 
 confusing for me to understand). This one clears that up. If only we all had 
 Vulcan mind meld capabilities, haha.
 
 Thanks for helping me understand, no need to get frustrated. Not everyone is 
 able to decipher your email in the same way u wrote it, part of this ML 
 should be about teaching others your viewpoints, not getting frustrated over 
 simple things like misunderstandings...
 
 Sent from my really tiny device...
 
 On Dec 6, 2013, at 9:16 AM, Mark McLoughlin mar...@redhat.com wrote:
 
 On Fri, 2013-12-06 at 16:55 +, Joshua Harlow wrote:
 I really have to agree with this. It's especially important if
 oslo.messaging is also used in libraries like taskflow. If
 oslo.messaging imposes that users of it must use oslo.config then by
 using it in taskflow, taskflow then imposes the same oslo.config
 usage.
 
 You know, I think you either didn't read my (carefully considered) email
 or didn't take the time to understand it. That's incredibly frustrating.
 
 My proposal would mean that oslo.messaging could be used like this:
 
 from oslo import messaging
 
 conf = messaging.get_config_from_dict(dict(rpc_conn_pool_size=100))
 
 transport = messaging.get_transport(conf, 'qpid:///test')
 
 server = Server(transport)
 server.start()
 server.wait()
 
 oslo.config is nothing but an implementation detail if you used
 oslo.messaging in this way.
 
 (Julien had a more subtle concern about this which I can actually relate
 more to)
 
 This makes all libraries that use it inherently only useable in the
 openstack ecosystem which I think is very bad opensource behavior (not
 exactly open).
 
 bad open-source behaviour? Seriously?
 
 Yeah, like gtk+ is only usable in the GNOME ecosystem because it uses
 glib and gtk+ authors are bad open-source people because they didn't
 allow an alternative to glib to be used. Bizarre statement, frankly.
 
 There are other reasons too: a configuration dict means u can have
 many different active instances being simultaneously used (each with
 its own config), with oslo.config since it is a static configuration
 object u get 1 simultaneous instance. So this is yet another behavior
 that I as a library provider think is a very unhealthy restriction to
 impose on people that use taskflow.
 
 1 simultaneous instance ... you mean the cfg.CONF object?
 
 There's no requirement to use that and I explained that in my email
 too ... even though I actually thought it should need no explaining at
 this point.
 
 Mark.
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering oslo.messaging usage of config

2013-12-06 Thread Joshua Harlow
Forgive me for not understanding your precious email (which I guess was 
confusing for me to understand). This one clears that up. If only we all had 
Vulcan mind meld capabilities, haha.

Thanks for helping me understand, no need to get frustrated. Not everyone is 
able to decipher your email in the same way u wrote it, part of this ML should 
be about teaching others your viewpoints, not getting frustrated over simple 
things like misunderstandings...

Sent from my really tiny device...

 On Dec 6, 2013, at 9:16 AM, Mark McLoughlin mar...@redhat.com wrote:
 
 On Fri, 2013-12-06 at 16:55 +, Joshua Harlow wrote:
 I really have to agree with this. It's especially important if
 oslo.messaging is also used in libraries like taskflow. If
 oslo.messaging imposes that users of it must use oslo.config then by
 using it in taskflow, taskflow then imposes the same oslo.config
 usage.
 
 You know, I think you either didn't read my (carefully considered) email
 or didn't take the time to understand it. That's incredibly frustrating.
 
 My proposal would mean that oslo.messaging could be used like this:
 
  from oslo import messaging
 
  conf = messaging.get_config_from_dict(dict(rpc_conn_pool_size=100))
 
  transport = messaging.get_transport(conf, 'qpid:///test')
 
  server = Server(transport)
  server.start()
  server.wait()
 
 oslo.config is nothing but an implementation detail if you used
 oslo.messaging in this way.
 
 (Julien had a more subtle concern about this which I can actually relate
 more to)
 
 This makes all libraries that use it inherently only useable in the
 openstack ecosystem which I think is very bad opensource behavior (not
 exactly open).
 
 bad open-source behaviour? Seriously?
 
 Yeah, like gtk+ is only usable in the GNOME ecosystem because it uses
 glib and gtk+ authors are bad open-source people because they didn't
 allow an alternative to glib to be used. Bizarre statement, frankly.
 
 There are other reasons too: a configuration dict means u can have
 many different active instances being simultaneously used (each with
 its own config), with oslo.config since it is a static configuration
 object u get 1 simultaneous instance. So this is yet another behavior
 that I as a library provider think is a very unhealthy restriction to
 impose on people that use taskflow.
 
 1 simultaneous instance ... you mean the cfg.CONF object?
 
 There's no requirement to use that and I explained that in my email
 too ... even though I actually thought it should need no explaining at
 this point.
 
 Mark.
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] only one subnet_id is allowed behind a router for vpnservice object

2013-12-06 Thread Nachi Ueno
Thanks!
Commented on bp whiteboard.

2013/12/5 Yongsheng Gong gong...@unitedstack.com:
 ok, My pleasure to help,
 I created a bp for it:
 https://blueprints.launchpad.net/neutron/+spec/vpn-multiple-subnet


 On Fri, Dec 6, 2013 at 2:11 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Yong

 Yes, to support multiple subnet is on the roadmap.
 I'll definitely welcome your help :P

 2013/12/5 Yongsheng Gong gong...@unitedstack.com:
  I think we should allow more than one subnet_id in one vpnservice object.
  but the model below limits only one subnet_id is used.
 
  https://github.com/openstack/neutron/blob/master/neutron/extensions/vpnaas.py
  RESOURCE_ATTRIBUTE_MAP = {
 
  'vpnservices': {
  'id': {'allow_post': False, 'allow_put': False,
 'validate': {'type:uuid': None},
 'is_visible': True,
 'primary_key': True},
  'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True},
  'name': {'allow_post': True, 'allow_put': True,
   'validate': {'type:string': None},
   'is_visible': True, 'default': ''},
  'description': {'allow_post': True, 'allow_put': True,
  'validate': {'type:string': None},
  'is_visible': True, 'default': ''},
  'subnet_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True},
  'router_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True},
  'admin_state_up': {'allow_post': True, 'allow_put': True,
 'default': True,
 'convert_to': attr.convert_to_boolean,
 'is_visible': True},
  'status': {'allow_post': False, 'allow_put': False,
 'is_visible': True}
  },
 
  with such a limit, I don't think there is a way to allow other subnets behind
  the router to be VPN exposed!
 
  thoughts?
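
As a rough sketch of the multi-subnet direction the blueprint points at (the
attribute name and the uuid-list validator are assumptions, not merged code),
the resource could accept a list instead of a single subnet_id:

    # Hypothetical sketch; the real blueprint/implementation may differ.
    VPN_MULTI_SUBNET_ATTRS = {
        'vpnservices': {
            'subnet_ids': {'allow_post': True, 'allow_put': True,
                           'validate': {'type:uuid_list': None},
                           'is_visible': True, 'default': []},
        },
    }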
 
  Thanks
  Yong Sheng Gong
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Compute meter names prefaced by instance:

2013-12-06 Thread Pendergrass, Eric
Hi, I've been out for nearly 3 weeks and noticed Compute meter names are now
prefaced by instance:  

 

http://docs.openstack.org/developer/ceilometer/measurements.html

 

Not sure when this happened but I was wondering if the change applies across
all OpenStack.  Will Nova use the change for its events?

 

Also, is the purpose of the change to identify that instance types are
undefined and may vary by installation?

 

Many thanks,

Eric Pendergrass



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Layering oslo.messaging usage of config

2013-12-06 Thread Joshua Harlow
Could jsonschema[1] be used here to do the options schema part? It works on 
dictionaries (and really isn't tied to json). But maybe I am missing some 
greater context/understanding (see other emails).

[1] https://pypi.python.org/pypi/jsonschema
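
As a rough illustration of that idea, a plain dict of messaging-style options
could be validated against a jsonschema definition along these lines (the
option names are examples only, not an oslo.messaging schema):

    import jsonschema

    # Example schema for a couple of messaging-style options; names and
    # constraints here are illustrative only.
    CONFIG_SCHEMA = {
        'type': 'object',
        'properties': {
            'rpc_conn_pool_size': {'type': 'integer', 'minimum': 1},
            'rpc_response_timeout': {'type': 'integer', 'minimum': 1},
            'amqp_durable_queues': {'type': 'boolean'},
        },
        'additionalProperties': True,
    }

    conf = {'rpc_conn_pool_size': 100, 'amqp_durable_queues': False}
    jsonschema.validate(conf, CONFIG_SCHEMA)  # raises ValidationError if invalid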

Sent from my really tiny device...

 On Dec 6, 2013, at 7:12 AM, Mark McLoughlin mar...@redhat.com wrote:
 
 On Fri, 2013-12-06 at 15:41 +0100, Julien Danjou wrote:
 On Fri, Dec 06 2013, Mark McLoughlin wrote:
 
 Hi Mark,
 
 If the goal is allow applications to use oslo.messaging without using
 oslo.config, then what's driving this? I'm guessing some possible
 answers:
 
  5) But I want to avoid any dependency on oslo.config
 
 I think that's the more important one to me.
 
 This could be fundamentally what we're talking about here, but I 
 struggle to understand it - oslo.config is pretty tiny and it only 
 requires argparse, so if it's just an implementation detail that 
 you don't even notice if you're not using config files then what 
 exactly is the problem?
 
 Basically, my thinking is that something like this example:
 
  https://gist.github.com/markmc/7823420
 
 where you can use oslo.messaging with just a dict of config values
 (rather than having to parse config files) should handle any reasonable
 concern that I've understood so far ... without having to change much at
 all.
 
 I definitely agree with your arguments. There's a large number of
 technical solutions that can be used to bypass the usage of oslo.config
 and make it work with whatever you're using..
 
 I just can't stop thinking that a library shouldn't impose any use of a
 configuration library. I can pick any library on PyPI, and, fortunately,
 most of them don't come with a dependency on the favorite configuration
 library of their author or related project, and its usage spread all
 over the code base.
 
 While I do respect the fact that this is a library to be consumed mainly
 in OpenStack (and I don't want to break that), I think we're also trying
 to not be the new Zope and contribute in a sane way to the Python
 ecosystem. And I think oslo.messaging doesn't do that right.
 
 Now if the consensus is to leave it that way, I honestly won't fight it
 over and over. As Mark proved, there's a lot of way to circumvent the
 oslo.config usage anyway.
 
 Ok, let's say oslo.messaging didn't use oslo.config at all and just took
 a free-form dict of configuration values. Then you'd have this
 separation whereby you can write code to retrieve those values from any
 number of possible configuration sources and pass them down to
 oslo.messaging. I think that's what you're getting at?
 
 However, what you lose with that is a consistent way of defining a
 schema for those configuration options in oslo.messaging. Should a given
 option be an int, bool or a list? What should it's default be? etc. etc.
 That stuff would live in the integration layer that maps from
 oslo.config to a dict, even though it's totally useful when you just
 supply a dict.
 
 I guess there's two sides to oslo.config - the option schemas and the
 code to retrieve values from various sources (command line, config files
 or overrides/defaults). I think the option schemas is a useful
 implementation detail in oslo.messaging, even if the values don't come
 from the usual oslo.config sources.
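
To make the schema point concrete, the kind of typed option definitions being
referred to look roughly like this in oslo.config (a simplified sketch; the
options shown are illustrative):

    from oslo.config import cfg

    # The schema (name, type, default, help) lives with the library,
    # regardless of where the values ultimately come from.
    opts = [
        cfg.IntOpt('rpc_conn_pool_size', default=30,
                   help='Size of RPC connection pool.'),
        cfg.BoolOpt('amqp_durable_queues', default=False,
                    help='Use durable queues in amqp.'),
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts)
    # Values can come from config files, CLI args, or programmatic overrides:
    conf.set_override('rpc_conn_pool_size', 100)
    print(conf.rpc_conn_pool_size)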
 
 Mark.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Behavior change around the PyPI mirror and openstack/requirements

2013-12-06 Thread Jeremy Stanley
On 2013-12-06 18:58:36 +0200 (+0200), Monty Taylor wrote:
[...]
 Anywho - gate selection will now be tied to the projects.txt file in
 openstack/requirements. Essentially, if you receive automatic
 requirements sync commits, your commits will be tested with the mirror.
 If you are tied to the mirror, you will receive the commits. (It's the
 same thing, get it)
[...]

Also, Clark fixed[1] these (thanks!!!) so they will start getting
updates again. Correct ones this time, unless we're really, really
wrong about something there.

[1] https://review.openstack.org/59855

-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Georgy Okrokvertskhov
Hi,

I am really inspired by this thread. Frankly saying, Glance for Murano was
a kind of sacred entity, as it is a service with a long history in
OpenStack.  We even did not think in the direction of changing Glance.
Spending a night with these ideas, I am kind of having a dream about a
unified catalog where the full range of different entities is presented.
Just imagine that we have everything as first-class citizens of the catalog,
treated equally: single VM (image), Heat template (fixed number of VMs /
autoscaling groups), Murano Application (generated Heat templates), Solum
assemblies

Projects like Solum will highly benefit from this catalog as it can use all
varieties of VM configurations talking with one service.

This catalog will be able not just to list all possible deployable entities
but also to be a registry for already deployed configurations. This is
perfectly aligned with the goal for catalog to be a kind of market place
which provides billing information too.

OpenStack users also will benefit from this as they will have the unified
approach for manage deployments and deployable entities.

I doubt that it could be done by a single team. But if all teams join this
effort we can do this. From my perspective, this could be a part of Glance
program and it is not necessary to add a new program for that. As it was
mentioned earlier in this thread an idea of market place for images in
Glance was here for some time. I think we can extend it to the idea of
creating a marketplace for a deployable entity regardless of the way of
deployment. As Glance is a core project, which means it always exists in an
OpenStack deployment, it makes sense to use it as a central catalog for everything.

Thanks
Georgy


On Fri, Dec 6, 2013 at 8:57 AM, Mark Washenberger 
mark.washenber...@markwash.net wrote:




 On Thu, Dec 5, 2013 at 9:32 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 12/05/2013 04:25 PM, Clint Byrum wrote:

 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:

 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:

 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
   wrote:

  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:

 Why not just use glance?


 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users

 My responses:

 1: Irrelevant. Smaller things will fit in it just fine.


 Fitting is one thing, optimizations around particular assumptions
 about the size of data and the frequency of reads/writes might be an 
 issue,
 but I admit to ignorance about those details in Glance.


 Optimizations can be improved for various use cases. The design,
 however,
 has no assumptions that I know about that would invalidate storing
 blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.


 I think we are getting out into the weeds a little bit here. It is
 important to think about these apis in terms of what they actually do,
 before the decision of combining them or not can be made.

 I think of HeatR as a template storage service, it provides extra data
 and operations on templates. HeatR should not care about how those
 templates are stored.
 Glance is an image storage service, it provides extra data and
 operations on images (not blobs), and it happens to use swift as a backend.

 If HeatR and Glance were combined, it would result in taking two very
 different types of data (template metadata vs image metadata) and mashing
 them into one service. How would adding the complexity of HeatR benefit
 Glance, when they are dealing with conceptually two very different types of
 data? For instance, should a template ever care about the field minRam
 that is stored with an image? Combining them adds a huge development
 complexity with a very small operations payoff, and so Openstack is already
 so operationally complex that HeatR as a separate service would be
 knowledgeable. Only clients of Heat will ever care about data and
 operations on templates, so I move that HeatR becomes it's own service, or
 becomes part of Heat.


 I spoke at length via G+ with Randall and Tim about this earlier today.
 I think I understand the impetus for all of this a little better now.

 Basically what I'm suggesting is that Glance is only narrow in scope
 because that was the only object that OpenStack needed a catalog for
 before now.

 However, the overlap between a catalog of images and a catalog of
 templates is quite comprehensive. The individual fields that matter to
 images are different than the ones that matter to templates, but that
 is a really minor detail isn't it?

 I would suggest that Glance be slightly expanded in scope to be an
 object catalog. Each object 

Re: [openstack-dev] [Cinder] Cloning vs copying images

2013-12-06 Thread Dmitry Borodaenko
Dear All,

The consensus in comments to both patches seems to be that the
decision to clone an image based on disk format should be made in each
driver, instead of being imposed on all drivers by the flow. Edward
has updated his patch to follow the same logic as my patch, and I have
updated my patch to include additional unit test improvements and
better log messages lifted from Edward's version. The only difference
between the patches now is that my patch passes the whole image_meta
dictionary into clone_image while Edward's patch only passes the
image_format string.

Please review the patches once again and provide feedback on which
should be merged. I naturally favor my version, which came up first,
is consistent with other driver methods which also pass image_meta
dictionary around, and prevents further refactoring down the road if
any driver comes up with a reason to consider other fields of
image_meta (e.g. size) when deciding whether an image can be cloned.
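
A condensed sketch of the driver-side logic both patches converge on might
look like this (illustrative only; the exact signatures are in the reviews
above):

    # Illustrative only: a driver declines to clone non-RAW images so the
    # generic copy/convert path is used instead; names are examples, not the
    # exact Cinder driver API.
    def clone_image(volume, image_location, image_meta):
        if image_meta.get('disk_format') != 'raw':
            # Cloning a non-RAW image would produce an unbootable volume on
            # backends like RBD, so signal "not cloned" to the caller.
            return None, False
        # ... backend-specific clone of the RAW image into the volume ...
        return {'provider_location': image_location}, True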

Thanks,
Dmitry Borodaenko

On Mon, Dec 2, 2013 at 11:29 AM, Dmitry Borodaenko
dborodae...@mirantis.com wrote:
 Hi OpenStack, particularly Cinder backend developers,

 Please consider the following two competing fixes for the same problem:

 https://review.openstack.org/#/c/58870/
 https://review.openstack.org/#/c/58893/

 The problem being fixed is that some backends, specifically Ceph RBD,
 can only boot from volumes created from images in a certain format, in
 RBD's case, RAW. When an image in a different format gets cloned into
 a volume, it cannot be booted from. Obvious solution is to refuse
 clone operation and copy/convert the image instead.

 And now the principal question: is it safe to assume that this
 restriction applies to all backends? Should the fix enforce copy of
 non-RAW images for all backends? Or should the decision whether to
 clone or copy the image be made in each backend?

 The first fix puts this logic into the RBD backend, and makes changes
 necessary for all other backends to have enough information to make a
 similar decision if necessary. The problem with this approach is that
 it's relatively intrusive, because driver clone_image() method
 signature has to be changed.

 The second fix has significantly less code changes, but it does
 prevent cloning non-RAW images for all backends. I am not sure if this
 is a real problem or not.

 Can anyone point at a backend that can boot from a volume cloned from
 a non-RAW image? I can think of one candidate: GPFS is a file-based
 backend, while GPFS has a file clone operation. Is GPFS backend able
 to boot from, say, a QCOW2 volume?

 Thanks,

 --
 Dmitry Borodaenko



-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Questions around Development Process

2013-12-06 Thread Jay Dobies

a) Because we're essentially doing a tear-down and re-build of the
whole architecture (a lot of the concepts in tuskar
will simply disappear), it's difficult to do small incremental patches
that support existing functionality.  Is it okay
to have patches that break functionality?  Are there good alternatives?


This is an incubating project, so there are no api stability promises.
If a patch breaks some functionality that we've decided to not support
going forward I don't see a problem with it.  That said, if a patch
breaks some functionality that we _do_ plan to keep, I'd prefer to see
it done as a series of dependent commits that end with the feature in a
working state again, even if some of the intermediate commits are not
fully functional.  Hopefully that will both keep the commit sizes down
and provide a definite path back to functionality.


Is there any sort of policy or convention of sending out a warning 
before that sort of thing is merged in so that people don't accidentally 
blindly pull master and break something they were using?



b) In the past, we allowed parallel development of the UI and API by
having well-documented expectations of what the API


Are these expectations documented yet? I'm new to the project and still 
finding my way around. I've seen the wireframes and am going through 
Chen's icehouse requirements, but I haven't stumbled on too much talk 
about the APIs specifically (not suggesting they don't exist, more 
likely that I haven't found them yet).



would provide.  We would then mock those calls in the UI, replacing
them with real API calls as they became available.  Is
this acceptable?


This sounds reasonable to me.



-Ben


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TransportURL and virtualhost/exchange (was Re: [Oslo] Layering oslo.messaging usage of config)

2013-12-06 Thread Gordon Sim

On 11/18/2013 04:44 PM, Mark McLoughlin wrote:

On Mon, 2013-11-18 at 11:29 -0500, Doug Hellmann wrote:

IIRC, one of the concerns when oslo.messaging was split out was
maintaining support for existing deployments with configuration files that
worked with oslo.rpc. We had said that we would use URL query parameters
for optional configuration values (with the required values going into
other places in the URL)

[...]

I hadn't ever considered exposing all configuration options via the URL.
We have a lot of fairly random options, that I don't think you need to
configure per-connection if you have multiple connections in the one
application.


I certainly agree that not all configuration options may make sense in a 
URL. However if you will forgive me for hijacking this thread 
momentarily on a related though tangential question/suggestion...


Would it make sense to (and/or even be possible to) take the 'exchange' 
option out of the API, and let transports deduce their implied 
scope/namespace purely from the transport URL in perhaps transport 
specific ways?


E.g. you could have rabbit://my-host/my-virt-host/my-exchange or 
rabbit://my-host/my-virt-host or rabbit://my-host//my-exchange, and the 
rabbit driver would ensure that the given virtualhost and or exchange 
was used.


Alternatively you could have zmq://my-host:9876 or zmq://my-host:6789 
to 'scope' 0MQ communication channels, and hypothetically 
something-new://my-host/xyz, where xyz would be interpreted by the 
driver in question in a relevant way to scope the interactions over that 
transport.


Applications using RPC would then assume they were using a namespace 
free from the danger of collisions with other applications, but this 
would all be driven through transport specific configuration.
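
A rough sketch of how a driver might derive that scope from the URL (purely
illustrative, not the oslo.messaging implementation):

    try:
        from urllib.parse import urlparse  # Python 3
    except ImportError:
        from urlparse import urlparse  # Python 2


    def scope_from_url(url, default_exchange='openstack'):
        # e.g. rabbit://my-host/my-virt-host/my-exchange
        parts = urlparse(url).path[1:].split('/')
        virtual_host = parts[0] if parts and parts[0] else '/'
        exchange = parts[1] if len(parts) > 1 and parts[1] else default_exchange
        return virtual_host, exchange


    print(scope_from_url('rabbit://my-host/my-virt-host/my-exchange'))
    print(scope_from_url('rabbit://my-host//my-exchange'))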


Just a suggestion based on my initial confusion through ignorance on the 
different scoping mechanisms described in the API docs. It may not be 
feasible or may have negative consequences I have not in my naivety 
foreseen.


--Gordon.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2013-12-05 21:32:54 -0800:
 On 12/05/2013 04:25 PM, Clint Byrum wrote:
  Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
  Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
  On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
wrote:
 
  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
  Why not just use glance?
 
 
  I've asked that question a few times, and I think I can collate the
  responses I've received below. I think enhancing glance to do these
  things is on the table:
 
  1. Glance is for big blobs of data not tiny templates.
  2. Versioning of a single resource is desired.
  3. Tagging/classifying/listing/sorting
  4. Glance is designed to expose the uploaded blobs to nova, not users
 
  My responses:
 
  1: Irrelevant. Smaller things will fit in it just fine.
 
  Fitting is one thing, optimizations around particular assumptions about 
  the size of data and the frequency of reads/writes might be an issue, 
  but I admit to ignorance about those details in Glance.
 
 
  Optimizations can be improved for various use cases. The design, however,
  has no assumptions that I know about that would invalidate storing blobs
  of yaml/json vs. blobs of kernel/qcow2/raw image.
 
  I think we are getting out into the weeds a little bit here. It is 
  important to think about these apis in terms of what they actually do, 
  before the decision of combining them or not can be made.
 
  I think of HeatR as a template storage service, it provides extra data and 
  operations on templates. HeatR should not care about how those templates 
  are stored.
  Glance is an image storage service, it provides extra data and operations 
  on images (not blobs), and it happens to use swift as a backend.
 
  If HeatR and Glance were combined, it would result in taking two very 
  different types of data (template metadata vs image metadata) and mashing 
  them into one service. How would adding the complexity of HeatR benefit 
  Glance, when they are dealing with conceptually two very different types 
  of data? For instance, should a template ever care about the field 
  minRam that is stored with an image? Combining them adds a huge 
  development complexity with a very small operations payoff, and so 
  Openstack is already so operationally complex that HeatR as a separate 
  service would be knowledgeable. Only clients of Heat will ever care about 
  data and operations on templates, so I move that HeatR becomes it's own 
  service, or becomes part of Heat.
 
 
  I spoke at length via G+ with Randall and Tim about this earlier today.
  I think I understand the impetus for all of this a little better now.
 
  Basically what I'm suggesting is that Glance is only narrow in scope
  because that was the only object that OpenStack needed a catalog for
  before now.
 
  However, the overlap between a catalog of images and a catalog of
  templates is quite comprehensive. The individual fields that matter to
  images are different than the ones that matter to templates, but that
  is a really minor detail isn't it?
 
  I would suggest that Glance be slightly expanded in scope to be an
  object catalog. Each object type can have its own set of fields that
  matter to it.
 
  This doesn't have to be a minor change to glance to still have many
  advantages over writing something from scratch and asking people to
  deploy another service that is 99% the same as Glance.
 
 My suggestion for long-term architecture would be to use Murano for 
 catalog/metadata information (for images/templates/whatever) and move 
 the block-streaming drivers into Cinder, and get rid of the Glance 
 project entirely. Murano would then become the catalog/registry of 
 objects in the OpenStack world, Cinder would be the thing that manages 
 and streams blocks of data or block devices, and Glance could go away. 
 Imagine it... OpenStack actually *reducing* the number of projects 
 instead of expanding! :)
 

Have we not learned our lesson with Nova-Net/Neutron yet? Rewrites of
existing functionality are painful.

The Murano-concerned people have already stated they are starting over
on that catalog.

I suggest they start over by expanding Glance's catalog. If the block
streaming bits of Glance need to move somewhere else, that sounds like a
completely separate concern that distracts from this point.

And to be clear, (I think I will just stop talking as I think I've
made this point), my point is, we have a catalog, let's make it better.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Vishvananda Ishaya

On Dec 6, 2013, at 10:07 AM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi,
 
 I am really inspired by this thread. Frankly saying, Glance for Murano was a 
 kind of sacred entity, as it is a service with a long history in OpenStack.  
 We even did not think in the direction of changing Glance. Spending a night 
 with these ideas, I am kind of having a dream about unified catalog where the 
 full range of different entities are presented. Just imagine that we have 
 everything as  first class citizens of catalog treated equally: single VM 
 (image), Heat template (fixed number of VMs\ autoscaling groups), Murano 
 Application (generated Heat templates), Solum assemblies
 
 Projects like Solum will highly benefit from this catalog as it can use all 
 varieties of VM configurations talking with one service.
 This catalog will be able not just list all possible deployable entities but 
 can be also a registry for already deployed configurations. This is perfectly 
 aligned with the goal for catalog to be a kind of market place which provides 
 billing information too.
 
 OpenStack users also will benefit from this as they will have the unified 
 approach for manage deployments and deployable entities.
 
 I doubt that it could be done by a single team. But if all teams join this 
 effort we can do this. From my perspective, this could be a part of Glance 
 program and it is not necessary to add a new program for that. As it was 
 mentioned earlier in this thread an idea of market place for images in Glance 
 was here for some time. I think we can extend it to the idea of creating a 
 marketplace for a deployable entity regardless of the way of deployment. As 
 Glance is a core project, which means it always exists in an OpenStack deployment,
 it makes sense to use it as a central catalog for everything.

+1 

Vish

 
 Thanks
 Georgy
 
 
 On Fri, Dec 6, 2013 at 8:57 AM, Mark Washenberger 
 mark.washenber...@markwash.net wrote:
 
 
 
 On Thu, Dec 5, 2013 at 9:32 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 12/05/2013 04:25 PM, Clint Byrum wrote:
 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
   wrote:
 
 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 
 
 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:
 
 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users
 
 My responses:
 
 1: Irrelevant. Smaller things will fit in it just fine.
 
 Fitting is one thing, optimizations around particular assumptions about the 
 size of data and the frequency of reads/writes might be an issue, but I admit 
 to ignorance about those details in Glance.
 
 
 Optimizations can be improved for various use cases. The design, however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.
 
 I think we are getting out into the weeds a little bit here. It is important 
 to think about these apis in terms of what they actually do, before the 
 decision of combining them or not can be made.
 
 I think of HeatR as a template storage service, it provides extra data and 
 operations on templates. HeatR should not care about how those templates are 
 stored.
 Glance is an image storage service, it provides extra data and operations on 
 images (not blobs), and it happens to use swift as a backend.
 
 If HeatR and Glance were combined, it would result in taking two very 
 different types of data (template metadata vs image metadata) and mashing 
 them into one service. How would adding the complexity of HeatR benefit 
 Glance, when they are dealing with conceptually two very different types of 
 data? For instance, should a template ever care about the field minRam that 
 is stored with an image? Combining them adds a huge development complexity 
 with a very small operations payoff, and so Openstack is already so 
 operationally complex that HeatR as a separate service would be 
 knowledgeable. Only clients of Heat will ever care about data and operations 
  on templates, so I move that HeatR becomes its own service, or becomes part 
 of Heat.
 
 
 I spoke at length via G+ with Randall and Tim about this earlier today.
 I think I understand the impetus for all of this a little better now.
 
 Basically what I'm suggesting is that Glance is only narrow in scope
 because that was the only object that OpenStack needed a catalog for
 before now.
 
 However, the overlap between a catalog of images and a catalog of
 templates is quite comprehensive. The individual fields that matter to
 images are different than 

Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-12-06 Thread Russell Bryant
On 12/02/2013 02:06 PM, Eric Windisch wrote:
 What more is needed from the blueprint or the patch authors to proceed?

I finally got back to looking at this.  Here is how I would like to
proceed with GCE.

1) Stackforge

It seems like this code is pretty self contained.  I'd like to see it
imported into a stackforge repository.  Then, I'd like to see jenkins
jobs showing that both the unit tests and the functional tests written
for this are passing.  It will also help keep the code up to date with
Nova while it's not in the main tree.

2) Support from nova-core

Taking on a new API in Nova has a significant ongoing maintenance
impact.  I'm assuming that the submitters are willing to help maintain
the code.  We also need commitment from some subset of nova-core to
review this code.

So, what do folks from nova-core think?  Are you on board with
maintaining this API?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Marconi][Oslo] Discoverable home document for APIs (Was: Re: [Nova][Glance] Support of v1 and v2 glance APIs in Nova)

2013-12-06 Thread Dolph Mathews
On Mon, Nov 25, 2013 at 4:25 PM, Jamie Lennox jamielen...@redhat.comwrote:

 To most of your questions I don't know the answer, as the format was in
 place before I started with the project. I know that it is similar (though
 not exactly the same) to nova's, but not where they are documented (as they
 are version independent).

 I can tell you it looks like:

  {
    "versions": {
      "values": [
        {
          "status": "stable",
          "updated": "2013-03-06T00:00:00Z",
          "media-types": [
            {
              "base": "application/json",
              "type": "application/vnd.openstack.identity-v3+json"
            },
            {
              "base": "application/xml",
              "type": "application/vnd.openstack.identity-v3+xml"
            }
          ],
          "id": "v3.0",
          "links": [
            {
              "href": "http://localhost:5000/v3/",
              "rel": "self"
            }
          ]
        },
        {
          "status": "stable",
          "updated": "2013-03-06T00:00:00Z",
          "media-types": [
            {
              "base": "application/json",
              "type": "application/vnd.openstack.identity-v2.0+json"
            },
            {
              "base": "application/xml",
              "type": "application/vnd.openstack.identity-v2.0+xml"
            }
          ],
          "id": "v2.0",
          "links": [
            {
              "href": "http://localhost:5000/v2.0/",
              "rel": "self"
            },
            {
              "href": "http://docs.openstack.org/api/openstack-identity-service/2.0/content/",
              "type": "text/html",
              "rel": "describedby"
            },
            {
              "href": "http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf",
              "type": "application/pdf",
              "rel": "describedby"
            }
          ]
        }
      ]
    }
  }


The above is keystone's unversioned multiple choice response. I just wrote
docs for v3's existing version description response, which is closely based
on the above:

  https://review.openstack.org/#/c/60576/
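
To make the 'follow your nose' idea concrete, here is a minimal sketch (not
the actual keystoneclient code; the function and parameter names are made up
for the example) of how a client might pick a versioned endpoint out of the
unversioned response above:

    import requests

    def discover_endpoint(base_url, want_id='v3.0'):
        # GET on the unversioned root (e.g. http://localhost:5000/) returns
        # the multiple-choice document shown above.
        versions = requests.get(base_url).json()['versions']['values']
        for version in versions:
            if version['id'] == want_id and version['status'] == 'stable':
                # The 'self' link is the root of the versioned API.
                return next(link['href'] for link in version['links']
                            if link['rel'] == 'self')
        raise LookupError('no stable %s endpoint advertised' % want_id)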



 - Original Message -
  From: Flavio Percoco fla...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Monday, 25 November, 2013 6:41:42 PM
  Subject: [openstack-dev] [Keystone][Marconi][Oslo] Discoverable home
 document for APIs (Was: Re: [Nova][Glance]
  Support of v1 and v2 glance APIs in Nova)
 
  On 25/11/13 09:28 +1000, Jamie Lennox wrote:
  So the way we have this in keystone at least is that querying GET / will
  return all available API versions and querying /v2.0 for example is a
  similar result with just the v2 endpoint. So you can hard pin a version
  by using the versioned URL.
  
  I spoke to somebody the other day about the discovery process in
  services. The long term goal should be that the service catalog contains
  unversioned endpoints and that all clients should do discovery. For
  keystone the review has been underway for a while now:
  https://review.openstack.org/#/c/38414/ the basics of this should be
  able to be moved into OSLO for other projects if required.
 
  Did you guys create your own 'home document' language? or did you base
  it on some existing format? Is it documented somewhere? IIRC, there's
  a thread where part of this was discussed, it was related to horizon.
 
  I'm curious to know what you guys did and if you knew about
  JSON-Home[0] when you started working on this.
 
  We used json-home for Marconi v1 and we'd want the client to work in a
  'follow your nose' way. Since I'd prefer OpenStack modules to use the
  same language for this, I'm curious to know why, if you did, you
  created your own spec, what the benefits are, and whether it's documented
  somewhere.
 
  Cheers,
  FF
 
  [0] http://tools.ietf.org/html/draft-nottingham-json-home-02
 
  --
  @flaper87
  Flavio Percoco
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Vishvananda Ishaya

On Dec 6, 2013, at 10:38 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Jay Pipes's message of 2013-12-05 21:32:54 -0800:
 On 12/05/2013 04:25 PM, Clint Byrum wrote:
 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
  wrote:
 
 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 
 
 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:
 
 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users
 
 My responses:
 
 1: Irrelevant. Smaller things will fit in it just fine.
 
 Fitting is one thing, optimizations around particular assumptions about 
 the size of data and the frequency of reads/writes might be an issue, 
 but I admit to ignorance about those details in Glance.
 
 
 Optimizations can be improved for various use cases. The design, however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.
 
 I think we are getting out into the weeds a little bit here. It is 
 important to think about these apis in terms of what they actually do, 
 before the decision of combining them or not can be made.
 
 I think of HeatR as a template storage service, it provides extra data and 
 operations on templates. HeatR should not care about how those templates 
 are stored.
 Glance is an image storage service, it provides extra data and operations 
 on images (not blobs), and it happens to use swift as a backend.
 
 If HeatR and Glance were combined, it would result in taking two very 
 different types of data (template metadata vs image metadata) and mashing 
 them into one service. How would adding the complexity of HeatR benefit 
 Glance, when they are dealing with conceptually two very different types 
 of data? For instance, should a template ever care about the field 
 minRam that is stored with an image? Combining them adds a huge 
 development complexity with a very small operations payoff, and since 
 OpenStack is already so operationally complex, keeping HeatR as a separate 
 service would be manageable. Only clients of Heat will ever care about 
 data and operations on templates, so I move that HeatR becomes its own 
 service, or becomes part of Heat.
 
 
 I spoke at length via G+ with Randall and Tim about this earlier today.
 I think I understand the impetus for all of this a little better now.
 
 Basically what I'm suggesting is that Glance is only narrow in scope
 because that was the only object that OpenStack needed a catalog for
 before now.
 
 However, the overlap between a catalog of images and a catalog of
 templates is quite comprehensive. The individual fields that matter to
 images are different than the ones that matter to templates, but that
 is a really minor detail isn't it?
 
 I would suggest that Glance be slightly expanded in scope to be an
 object catalog. Each object type can have its own set of fields that
 matter to it.
 
 This doesn't have to be a minor change to glance to still have many
 advantages over writing something from scratch and asking people to
 deploy another service that is 99% the same as Glance.
 
 My suggestion for long-term architecture would be to use Murano for 
 catalog/metadata information (for images/templates/whatever) and move 
 the block-streaming drivers into Cinder, and get rid of the Glance 
 project entirely. Murano would then become the catalog/registry of 
 objects in the OpenStack world, Cinder would be the thing that manages 
 and streams blocks of data or block devices, and Glance could go away. 
 Imagine it... OpenStack actually *reducing* the number of projects 
 instead of expanding! :)
 
 
 Have we not learned our lesson with Nova-Net/Neutron yet? Rewrites of
 existing functionality are painful.
 
 The Murano-concerned people have already stated they are starting over
 on that catalog.
 
 I suggest they start over by expanding Glance's catalog. If the block
 streaming bits of Glance need to move somewhere else, that sounds like a
 completely separate concern that distracts from this point.
 
 And to be clear, (I think I will just stop talking as I think I've
 made this point), my point is, we have a catalog, let's make it better.

+1

Vish

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Jay Dobies

On 12/06/2013 12:26 PM, Clint Byrum wrote:

Excerpts from Robert Collins's message of 2013-12-03 23:12:39 -0800:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core


+1, We've been getting good reviews from Ghe for a while now. :)


  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core


I suggest we delay this removal for 30 days.


For what it's worth, keep in mind the holidays coming up at the end of 
December. I suspect that trying to reevaluate 30 days from now will be 
even trickier when you have to take into account vacation times.




I know it is easy to add
them back in, but I hesitate to disrupt the flow if these people all
are willing to pick up the pace again. They may not have _immediate_
code knowledge but they should have enough historical knowledge that
has not gone completely stale in just the last 30-60 days.

What I'm suggesting is that review velocity will benefit from core being
a little more sticky, especially for sustained contributors who have
just had their attention directed elsewhere briefly.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Georgy Okrokvertskhov
As a Murano team we will be happy to contribute to Glance. Our Murano
metadata repository is a standalone component (with its own git
repository) which is not tightly coupled with Murano itself. We can easily
add our functionality to Glance as a new component/subproject.

Thanks
Georgy


On Fri, Dec 6, 2013 at 11:11 AM, Vishvananda Ishaya
vishvana...@gmail.comwrote:


 On Dec 6, 2013, at 10:38 AM, Clint Byrum cl...@fewbar.com wrote:

  Excerpts from Jay Pipes's message of 2013-12-05 21:32:54 -0800:
  On 12/05/2013 04:25 PM, Clint Byrum wrote:
  Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
  Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
  On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
   wrote:
 
  Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
  Why not just use glance?
 
 
  I've asked that question a few times, and I think I can collate the
  responses I've received below. I think enhancing glance to do these
  things is on the table:
 
  1. Glance is for big blobs of data not tiny templates.
  2. Versioning of a single resource is desired.
  3. Tagging/classifying/listing/sorting
  4. Glance is designed to expose the uploaded blobs to nova, not
 users
 
  My responses:
 
  1: Irrelevant. Smaller things will fit in it just fine.
 
  Fitting is one thing, optimizations around particular assumptions
 about the size of data and the frequency of reads/writes might be an issue,
 but I admit to ignorance about those details in Glance.
 
 
  Optimizations can be improved for various use cases. The design,
 however,
  has no assumptions that I know about that would invalidate storing
 blobs
  of yaml/json vs. blobs of kernel/qcow2/raw image.
 
  I think we are getting out into the weeds a little bit here. It is
 important to think about these apis in terms of what they actually do,
 before the decision of combining them or not can be made.
 
  I think of HeatR as a template storage service, it provides extra
 data and operations on templates. HeatR should not care about how those
 templates are stored.
  Glance is an image storage service, it provides extra data and
 operations on images (not blobs), and it happens to use swift as a backend.
 
  If HeatR and Glance were combined, it would result in taking two very
 different types of data (template metadata vs image metadata) and mashing
 them into one service. How would adding the complexity of HeatR benefit
 Glance, when they are dealing with conceptually two very different types of
 data? For instance, should a template ever care about the field minRam
 that is stored with an image? Combining them adds a huge development
 complexity with a very small operations payoff, and since OpenStack is already
 so operationally complex, keeping HeatR as a separate service would be
 manageable. Only clients of Heat will ever care about data and
 operations on templates, so I move that HeatR becomes its own service, or
 becomes part of Heat.
 
 
  I spoke at length via G+ with Randall and Tim about this earlier today.
  I think I understand the impetus for all of this a little better now.
 
  Basically what I'm suggesting is that Glance is only narrow in scope
  because that was the only object that OpenStack needed a catalog for
  before now.
 
  However, the overlap between a catalog of images and a catalog of
  templates is quite comprehensive. The individual fields that matter to
  images are different than the ones that matter to templates, but that
  is a really minor detail isn't it?
 
  I would suggest that Glance be slightly expanded in scope to be an
  object catalog. Each object type can have its own set of fields that
  matter to it.
 
  This doesn't have to be a minor change to glance to still have many
  advantages over writing something from scratch and asking people to
  deploy another service that is 99% the same as Glance.
 
  My suggestion for long-term architecture would be to use Murano for
  catalog/metadata information (for images/templates/whatever) and move
  the block-streaming drivers into Cinder, and get rid of the Glance
  project entirely. Murano would then become the catalog/registry of
  objects in the OpenStack world, Cinder would be the thing that manages
  and streams blocks of data or block devices, and Glance could go away.
  Imagine it... OpenStack actually *reducing* the number of projects
  instead of expanding! :)
 
 
  Have we not learned our lesson with Nova-Net/Neutron yet? Rewrites of
  existing functionality are painful.
 
  The Murano-concerned people have already stated they are starting over
  on that catalog.
 
  I suggest they start over by expanding Glance's catalog. If the block
  streaming bits of Glance need to move somewhere else, that sounds like a
  completely separate concern that distracts from this point.
 
  And to be clear, (I think I will just stop talking as I think I've
  made this point), my point is, we have a catalog, let's make it 

Re: [openstack-dev] [TripleO] capturing build details in images

2013-12-06 Thread Clint Byrum
Excerpts from Robert Collins's message of 2013-12-04 14:19:44 -0800:
 So - what about us capturing this information outside the image: we
 can create a uuid for the build, and write a file in the image with
 that uuid, and outside the image we can write:
  - all variables (no security ramifications now as this file can be
 kept by whomever built the image)
  - command line args
  - version information for the toolchain etc.

I forgot to weigh in on this. It has all been said already. I like the
idea of this being a json file as well. +1.
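
To sketch what capturing that might look like (every name and path below is
made up for illustration, not any real tool's interface), something this
small would do:

    import json
    import subprocess
    import sys
    import uuid

    def write_build_manifest(image_root, manifest_path, build_vars):
        # Only the uuid goes inside the image itself...
        build_uuid = str(uuid.uuid4())
        with open(image_root + '/etc/build-uuid', 'w') as f:
            f.write(build_uuid + '\n')
        # ...while variables, command line args and toolchain versions are
        # written to a json file kept outside the image, next to it.
        manifest = {
            'build_uuid': build_uuid,
            'variables': build_vars,
            'command_line': sys.argv,
            'toolchain': {
                'qemu-img': subprocess.check_output(
                    ['qemu-img', '--version']).decode().strip(),
            },
        }
        with open(manifest_path, 'w') as f:
            json.dump(manifest, f, indent=2)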

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Questions around Development Process

2013-12-06 Thread Tzu-Mainn Chen
  b) In the past, we allowed parallel development of the UI and API by
  having well-documented expectations of what the API
 
 Are these expectations documented yet? I'm new to the project and still
 finding my way around. I've seen the wireframes and am going through
 Chen's icehouse requirements, but I haven't stumbled on too much talk
 about the APIs specifically (not suggesting they don't exist, more
 likely that I haven't found them yet).

Not quite yet; we'd like to finalize the requirements somewhat first.  Hopefully
something will be available sometime next week.  In the meantime, targeted UI work
is mostly structural (navigation) and making sure that the right widgets exist
for the wireframes.

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Dmitry Mescheryakov
Hello all,

We would like to push further the discussion on unified guest agent. You
may find the details of our proposal at [1].

Also let me clarify why we started this conversation. Savanna currently
utilizes SSH to install/configure Hadoop on VMs. We were happy with that
approach until recently, when we realized that in many OpenStack deployments
VMs are not accessible from the controller. That brought us to the idea of
using a guest agent for VM configuration instead. That approach is already
used by Trove, Murano and Heat, and we can do the same.

Uniting the efforts on a single guest agent brings a couple of advantages:
1. Code reuse across several projects.
2. Simplified deployment of OpenStack. A guest agent requires additional
facilities for transport, like a message queue or something similar. Sharing
an agent means projects can share transport/config and hence ease the life of
deployers.

We see it as a library and we think that Oslo is a good place for it.
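
To make the library idea a bit more concrete, here is a purely hypothetical
sketch (none of these names exist anywhere yet) of the kind of thing we mean:
a small dispatch loop that takes commands from whatever transport a
deployment provides and hands back results, so each project only registers
its own handlers.

    import json

    class UnifiedAgent(object):
        # Hypothetical sketch only: 'transport' is anything that yields raw
        # request messages and can send replies (e.g. a message queue client).

        def __init__(self, transport):
            self.transport = transport
            self.handlers = {}

        def register(self, name, func):
            self.handlers[name] = func

        def serve_forever(self):
            for raw in self.transport:
                request = json.loads(raw)
                handler = self.handlers.get(request['command'])
                if handler is None:
                    reply = {'error': 'unknown command: %s' % request['command']}
                else:
                    reply = {'result': handler(**request.get('args', {}))}
                self.transport.reply(request['id'], json.dumps(reply))

    # Savanna, for example, would then only do something like:
    #   agent.register('install_hadoop', install_hadoop)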

Naturally, since this is going to be a _unified_ agent we seek input from
all interested parties.

[1] https://wiki.openstack.org/wiki/UnifiedGuestAgent

Thanks,

Dmitry
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Questions around Development Process

2013-12-06 Thread Clint Byrum
Excerpts from Tzu-Mainn Chen's message of 2013-12-06 07:37:20 -0800:
 Hey all,
 
 We're starting to work on the UI for tuskar based on Jarda's wireframes, and 
 as we're doing so, we're realizing that
 we're not quite sure what development methodology is appropriate.  Some 
 questions:
 
 a) Because we're essentially doing a tear-down and re-build of the whole 
 architecture (a lot of the concepts in tuskar
 will simply disappear), it's difficult to do small incremental patches that 
 support existing functionality.  Is it okay
 to have patches that break functionality?  Are there good alternatives?
 

I think Tuskar is early enough in its life cycle that it has reached
that magical "plan to throw one away" point where you can actually do
this without disrupting anybody except yourselves, which actually sounds
valuable in this case since you have chosen a different course.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Jay Dobies
Disclaimer: I'm very new to the project, so apologies if some of my 
questions have been already answered or flat out don't make sense.


As I proofread, some of my comments may drift a bit past basic 
requirements, so feel free to tell me to take certain questions out of 
this thread into specific discussion threads if I'm getting too detailed.





*** Requirements are assumed to be targeted for Icehouse, unless marked 
otherwise:
(M) - Maybe Icehouse, dependency on other in-development features
(F) - Future requirement, after Icehouse

* NODES
* Creation
   * Manual registration
  * hardware specs from Ironic based on mac address (M)
  * IP auto populated from Neutron (F)
   * Auto-discovery during undercloud install process (M)
* Monitoring
* assignment, availability, status
* capacity, historical statistics (M)
* Management node (where triple-o is installed)
* created as part of undercloud install process
* can create additional management nodes (F)
 * Resource nodes
 * searchable by status, name, cpu, memory, and all attributes from 
ironic
 * can be allocated as one of four node types


It's pretty clear by the current verbiage but I'm going to ask anyway: 
one and only one?



 * compute
 * controller
 * object storage
 * block storage
 * Resource class - allows for further categorization of a node type
 * each node type specifies a single default resource class
 * allow multiple resource classes per node type (M)


My gut reaction is that we want to bite this off sooner rather than 
later. This will have data model and API implications that, even if we 
don't commit to it for Icehouse, should still be in our minds during it, 
so it might make sense to make it a first class thing to just nail down now.



 * optional node profile for a resource class (M)
 * acts as filter for nodes that can be allocated to that class 
(M)


To my understanding, once this is in Icehouse, we'll have to support 
upgrades. If this filtering is pushed off, could we get into a situation 
where an allocation created in Icehouse would no longer be valid in 
Icehouse+1 once these filters are in place? If so, we might want to make 
it more of a priority to get them in place earlier and not eat the 
headache of addressing these sorts of integrity issues later.



 * nodes can be viewed by node types
 * additional group by status, hardware specification
 * controller node type
* each controller node will run all openstack services
   * allow each node to run specified service (F)
* breakdown by workload (percentage of cpu used per node) (M)
 * Unallocated nodes


Is there more still being fleshed out here? Things like:
 * Listing unallocated nodes
 * Unallocating a previously allocated node (does this make it a 
vanilla resource or does it retain the resource type? is this the only 
way to change a node's resource type?)
 * Unregistering nodes from Tuskar's inventory (I put this under 
unallocated under the assumption that the workflow will be an explicit 
unallocate before unregister; I'm not sure if this is the same as 
archive below).



 * Archived nodes (F)


Can you elaborate a bit more on what this is?


 * Will be separate openstack service (F)

* DEPLOYMENT
* multiple deployments allowed (F)
  * initially just one
* deployment specifies a node distribution across node types
   * node distribution can be updated after creation
* deployment configuration, used for initial creation only
   * defaulted, with no option to change
  * allow modification (F)
* review distribution map (F)
* notification when a deployment is ready to go or whenever something 
changes

* DEPLOYMENT ACTION
* Heat template generated on the fly
   * hardcoded images
  * allow image selection (F)
   * pre-created template fragments for each node type
   * node type distribution affects generated template
* nova scheduler allocates nodes
   * filters based on resource class and node profile information (M)
* Deployment action can create or update
* status indicator to determine overall state of deployment
   * status indicator for nodes as well
   * status includes 'time left' (F)

* NETWORKS (F)
* IMAGES (F)
* LOGS (F)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Sandy Walsh


On 12/06/2013 03:45 PM, Dmitry Mescheryakov wrote:
 Hello all,
 
 We would like to push further the discussion on unified guest agent. You
 may find the details of our proposal at [1].
 
 Also let me clarify why we started this conversation. Savanna currently
 utilizes SSH to install/configure Hadoop on VMs. We were happy with that
 approach until recently we realized that in many OpenStack deployments
 VMs are not accessible from controller. That brought us to idea to use
 guest agent for VM configuration instead. That approach is already used
 by Trove, Murano and Heat and we can do the same.
 
 Uniting the efforts on a single guest agent brings a couple advantages:
 1. Code reuse across several projects.
 2. Simplified deployment of OpenStack. Guest agent requires additional
 facilities for transport like message queue or something similar.
 Sharing agent means projects can share transport/config and hence ease
 life of deployers.
 
 We see it is a library and we think that Oslo is a good place for it.
 
 Naturally, since this is going to be a _unified_ agent we seek input
 from all interested parties.

It might be worth while to consider building from the Rackspace guest
agents for linux [2] and windows [3]. Perhaps get them moved over to
stackforge and scrubbed?

These are geared towards Xen, but that would be a good first step in
making the HV-Guest pipe configurable.

[2] https://github.com/rackerlabs/openstack-guest-agents-unix
[3] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver

-S


 [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
 
 Thanks,
 
 Dmitry
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Sergey Lukjanov
In addition, here are several related links:

etherpad with some collected requirements:
https://etherpad.openstack.org/p/UnifiedAgents
initial thread about unified agents:
http://lists.openstack.org/pipermail/openstack-dev/2013-November/thread.html#18276

Thanks.


On Fri, Dec 6, 2013 at 11:45 PM, Dmitry Mescheryakov 
dmescherya...@mirantis.com wrote:

 Hello all,

 We would like to push further the discussion on unified guest agent. You
 may find the details of our proposal at [1].

 Also let me clarify why we started this conversation. Savanna currently
 utilizes SSH to install/configure Hadoop on VMs. We were happy with that
 approach until recently we realized that in many OpenStack deployments VMs
 are not accessible from controller. That brought us to idea to use guest
 agent for VM configuration instead. That approach is already used by Trove,
 Murano and Heat and we can do the same.

 Uniting the efforts on a single guest agent brings a couple advantages:
 1. Code reuse across several projects.
 2. Simplified deployment of OpenStack. Guest agent requires additional
 facilities for transport like message queue or something similar. Sharing
 agent means projects can share transport/config and hence ease life of
 deployers.

 We see it is a library and we think that Oslo is a good place for it.

 Naturally, since this is going to be a _unified_ agent we seek input from
 all interested parties.

 [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent

 Thanks,

 Dmitry

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Tzu-Mainn Chen
Thanks for the comments!  Responses inline:

 Disclaimer: I'm very new to the project, so apologies if some of my
 questions have been already answered or flat out don't make sense.
 
 As I proofread, some of my comments may drift a bit past basic
 requirements, so feel free to tell me to take certain questions out of
 this thread into specific discussion threads if I'm getting too detailed.
 
  
 
  *** Requirements are assumed to be targeted for Icehouse, unless marked
  otherwise:
  (M) - Maybe Icehouse, dependency on other in-development features
  (F) - Future requirement, after Icehouse
 
  * NODES
  * Creation
 * Manual registration
* hardware specs from Ironic based on mac address (M)
* IP auto populated from Neutron (F)
 * Auto-discovery during undercloud install process (M)
  * Monitoring
  * assignment, availability, status
  * capacity, historical statistics (M)
  * Management node (where triple-o is installed)
  * created as part of undercloud install process
  * can create additional management nodes (F)
   * Resource nodes
   * searchable by status, name, cpu, memory, and all attributes from
   ironic
   * can be allocated as one of four node types
 
 It's pretty clear by the current verbiage but I'm going to ask anyway:
 one and only one?

Yep, that's right!

   * compute
   * controller
   * object storage
   * block storage
   * Resource class - allows for further categorization of a node
   type
   * each node type specifies a single default resource class
   * allow multiple resource classes per node type (M)
 
 My gut reaction is that we want to bite this off sooner rather than
 later. This will have data model and API implications that, even if we
 don't commit to it for Icehouse, should still be in our minds during it,
 so it might make sense to make it a first class thing to just nail down now.

That is entirely correct, which is one reason it's on the list of requirements.  The
forthcoming API design will have to account for it.  Not recreating the entire data
model between releases is a key goal :)


   * optional node profile for a resource class (M)
   * acts as filter for nodes that can be allocated to that
   class (M)
 
 To my understanding, once this is in Icehouse, we'll have to support
 upgrades. If this filtering is pushed off, could we get into a situation
 where an allocation created in Icehouse would no longer be valid in
 Icehouse+1 once these filters are in place? If so, we might want to make
 it more of a priority to get them in place earlier and not eat the
 headache of addressing these sorts of integrity issues later.

That's true.  The problem is that to my understanding, the filters we'd
need in nova-scheduler are not yet fully in place.

I also think that this is an issue that we'll need to address no matter what.
Even once filters exist, if a user applies a filter *after* nodes are allocated,
we'll need to do something clever if the already-allocated nodes don't meet the
filter criteria.

   * nodes can be viewed by node types
   * additional group by status, hardware specification
   * controller node type
  * each controller node will run all openstack services
 * allow each node to run specified service (F)
  * breakdown by workload (percentage of cpu used per node) (M)
   * Unallocated nodes
 
 Is there more still being fleshed out here? Things like:
   * Listing unallocated nodes
   * Unallocating a previously allocated node (does this make it a
 vanilla resource or does it retain the resource type? is this the only
 way to change a node's resource type?)
   * Unregistering nodes from Tuskar's inventory (I put this under
 unallocated under the assumption that the workflow will be an explicit
 unallocate before unregister; I'm not sure if this is the same as
 archive below).

Ah, you're entirely right.  I'll add these to the list.

   * Archived nodes (F)
 
 Can you elaborate a bit more on what this is?

To be honest, I'm a bit fuzzy about this myself; Jarda mentioned that there was
an OpenStack service in the process of being planned that would handle this
requirement.  Jarda, can you detail a bit?

Thanks again for the comments!


Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Joshua Harlow
Another idea that I'll put up for consideration (since I work with the
cloud-init codebase also).

Cloud-init[1], which currently does lots of little useful initialization
types of activities (similar to the racker agents' activities), has been
going through some of the same questions[2] as to whether it should be an
agent (or respond to some type of system signal on certain activities, like
new network metadata becoming available). So this could be another way to go.

Including (ccing) Scott, who probably has more ideas around this too :-)

[1] https://launchpad.net/cloud-init
[2] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1153626

On 12/6/13 12:12 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:



On 12/06/2013 03:45 PM, Dmitry Mescheryakov wrote:
 Hello all,
 
 We would like to push further the discussion on unified guest agent. You
 may find the details of our proposal at [1].
 
 Also let me clarify why we started this conversation. Savanna currently
 utilizes SSH to install/configure Hadoop on VMs. We were happy with that
 approach until recently we realized that in many OpenStack deployments
 VMs are not accessible from controller. That brought us to idea to use
 guest agent for VM configuration instead. That approach is already used
 by Trove, Murano and Heat and we can do the same.
 
 Uniting the efforts on a single guest agent brings a couple advantages:
 1. Code reuse across several projects.
 2. Simplified deployment of OpenStack. Guest agent requires additional
 facilities for transport like message queue or something similar.
 Sharing agent means projects can share transport/config and hence ease
 life of deployers.
 
 We see it is a library and we think that Oslo is a good place for it.
 
 Naturally, since this is going to be a _unified_ agent we seek input
 from all interested parties.

It might be worth while to consider building from the Rackspace guest
agents for linux [2] and windows [3]. Perhaps get them moved over to
stackforge and scrubbed?

These are geared towards Xen, but that would be a good first step in
making the HV-Guest pipe configurable.

[2] https://github.com/rackerlabs/openstack-guest-agents-unix
[3] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver

-S


 [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
 
 Thanks,
 
 Dmitry
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Sergey Lukjanov
Using cloud-init is an interesting idea, but it looks like such an agent
will be unable to provide feedback, like the results of running commands.


On Sat, Dec 7, 2013 at 12:27 AM, Joshua Harlow harlo...@yahoo-inc.comwrote:

 Another idea that I'll put up for consideration (since I work with the
 cloud-init codebase also).

 Cloud-init[1] which currently does lots of little useful initialization
 types of activities (similar to the racker agents activities) has been
 going through some of the same questions[2] as to should it be an agent
 (or respond to some type of system signal on certain activities, like new
 network metadata available). So this could be another way to go.

 Including (ccing) scott who probably has more ideas around this to :-)

 [1] https://launchpad.net/cloud-init
 [2] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1153626

 On 12/6/13 12:12 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 
 
 On 12/06/2013 03:45 PM, Dmitry Mescheryakov wrote:
  Hello all,
 
  We would like to push further the discussion on unified guest agent. You
  may find the details of our proposal at [1].
 
  Also let me clarify why we started this conversation. Savanna currently
  utilizes SSH to install/configure Hadoop on VMs. We were happy with that
  approach until recently we realized that in many OpenStack deployments
  VMs are not accessible from controller. That brought us to idea to use
  guest agent for VM configuration instead. That approach is already used
  by Trove, Murano and Heat and we can do the same.
 
  Uniting the efforts on a single guest agent brings a couple advantages:
  1. Code reuse across several projects.
  2. Simplified deployment of OpenStack. Guest agent requires additional
  facilities for transport like message queue or something similar.
  Sharing agent means projects can share transport/config and hence ease
  life of deployers.
 
  We see it is a library and we think that Oslo is a good place for it.
 
  Naturally, since this is going to be a _unified_ agent we seek input
  from all interested parties.
 
 It might be worth while to consider building from the Rackspace guest
 agents for linux [2] and windows [3]. Perhaps get them moved over to
 stackforge and scrubbed?
 
 These are geared towards Xen, but that would be a good first step in
 making the HV-Guest pipe configurable.
 
 [2] https://github.com/rackerlabs/openstack-guest-agents-unix
 [3]
 https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver
 
 -S
 
 
  [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
 
  Thanks,
 
  Dmitry
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Two new blueprints

2013-12-06 Thread Shixiong Shang
Hi, stackers:

Randy Tuttle created two blueprints to augment Sean’s proposal to improve 
IPv6 readiness. You can find the details here:

https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port

Please let us know whether you have any questions. Thanks!

Shixiong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Liz Blanchard

On Dec 5, 2013, at 9:31 PM, Tzu-Mainn Chen tzuma...@redhat.com wrote:

 Hey all,
 
 I've attempted to spin out the requirements behind Jarda's excellent 
 wireframes 
 (http://lists.openstack.org/pipermail/openstack-dev/2013-December/020944.html).
 Hopefully this can add some perspective on both the wireframes and the needed 
 changes to the tuskar-api.

This list is great, thanks very much for taking the time to write this up! I 
think a big part of the User Experience design is to take a step back and 
understand the requirements from an end user's point of view…what would they 
want to accomplish by using this UI? This might influence the design in certain 
ways, so I've taken a cut at a set of user stories for the Icehouse timeframe 
based on these requirements that I hope will be useful during discussions.

Based on the OpenStack Personas[1], I think that Anna would be the main 
consumer of the TripleO UI, but please let me know if you think otherwise.

- As an infrastructure administrator, Anna needs to deploy or update a set of 
resources that will run OpenStack (This isn't a very specific use case, but 
more of the larger end goal of Anna coming into the UI.)
- As an infrastructure administrator, Anna expects that the management node for 
the deployment services is already up and running and the status of this node 
is shown in the UI.
- As an infrastructure administrator, Anna wants to be able to quickly see the 
set of unallocated nodes that she could use for her deployment of OpenStack. 
Ideally, she would not have to manually tell the system about these nodes. If 
she needs to manually register nodes for whatever reason, Anna would only want 
to have to define the essential data needed to register these nodes.
- As an infrastructure administrator, Anna needs to assign a role to each of 
the necessary nodes in her OpenStack deployment. The nodes could be either 
controller, compute, networking, or storage resources depending on the needs of 
this deployment.
- As an infrastructure administrator, Anna wants to review the distribution of 
the nodes that she has assigned before kicking off the Deploy task.
- As an infrastructure administrator, Anna wants to monitor the deployment 
process of all of the nodes that she has assigned.
- As an infrastructure administrator, Anna needs to be able to troubleshoot any 
errors that may occur during the deployment of nodes process.
- As an infrastructure administrator, Anna wants to monitor the availability 
and status of each node in her deployment.
- As an infrastructure administrator, Anna wants to be able to unallocate a 
node from a deployment.
- As an infrastructure administrator, Anna wants to be able to view the history 
of nodes that have been in a deployment.
- As an infrastructure administrator, Anna needs to be notified of any 
important changes to nodes that are in the OpenStack deployment. She does not 
want to be spammed with non-important notifications.

Please feel free to comment, change, or add to this list.

[1]https://docs.google.com/document/d/16rkiXWxxgzGT47_Wc6hzIPzO2-s2JWAPEKD0gP2mt7E/edit?pli=1#

Thanks,
Liz

 
 All comments are welcome!
 
 Thanks,
 Tzu-Mainn Chen
 
 
 
 *** Requirements are assumed to be targeted for Icehouse, unless marked 
 otherwise:
   (M) - Maybe Icehouse, dependency on other in-development features
   (F) - Future requirement, after Icehouse
 
 * NODES
   * Creation
  * Manual registration
 * hardware specs from Ironic based on mac address (M)
 * IP auto populated from Neutron (F)
  * Auto-discovery during undercloud install process (M)
   * Monitoring
   * assignment, availability, status
   * capacity, historical statistics (M)
   * Management node (where triple-o is installed)
   * created as part of undercloud install process
   * can create additional management nodes (F)
* Resource nodes
* searchable by status, name, cpu, memory, and all attributes from 
 ironic
* can be allocated as one of four node types
* compute
* controller
* object storage
* block storage
* Resource class - allows for further categorization of a node type
* each node type specifies a single default resource class
* allow multiple resource classes per node type (M)
* optional node profile for a resource class (M)
* acts as filter for nodes that can be allocated to that class 
 (M)
* nodes can be viewed by node types
* additional group by status, hardware specification
* controller node type
   * each controller node will run all openstack services
  * allow each node to run specified service (F)
   * breakdown by workload (percentage of cpu used per node) (M)
* Unallocated nodes
* Archived nodes (F)
* Will be separate openstack service (F)
 
 * DEPLOYMENT
 

Re: [openstack-dev] [qa][nova] The document for the changes from Nova v2 api to v3

2013-12-06 Thread Ken'ichi Ohmichi
Hi,

We are implementing Nova v3 API validation with jsonschema.
The schemas of API parameters are defined under
nova/api/openstack/compute/schemas/v3/.
I guess the shemas would be used for checking the difference between
doc and Nova v3 API
parameters as another approach.

example: 
https://review.openstack.org/#/c/59616/1/nova/api/openstack/compute/schemas/v3/agents_schema.py
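
For those who have not looked at those files: the idea is plain jsonschema
validation of the request body. The schema below is illustrative only (the
real definitions are in the review linked above), but it shows the shape:

    import jsonschema

    # Illustrative schema, not the actual nova definition.
    create_agent = {
        'type': 'object',
        'properties': {
            'agent': {
                'type': 'object',
                'properties': {
                    'hypervisor': {'type': 'string'},
                    'os': {'type': 'string'},
                    'architecture': {'type': 'string'},
                    'version': {'type': 'string'},
                    'url': {'type': 'string'},
                    'md5hash': {'type': 'string'},
                },
                'required': ['hypervisor', 'os', 'architecture',
                             'version', 'url', 'md5hash'],
            },
        },
        'required': ['agent'],
    }

    # A request body missing required parameters fails validation:
    jsonschema.validate({'agent': {'hypervisor': 'xen'}}, create_agent)
    # -> raises jsonschema.exceptions.ValidationError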

Thanks
Ken'ichi Ohmichi

---
2013/12/7 David Kranz dkr...@redhat.com:
 On 11/13/2013 06:09 PM, Christopher Yeoh wrote:

 On Thu, Nov 14, 2013 at 7:52 AM, David Kranz dkr...@redhat.com wrote:

 On 11/13/2013 08:30 AM, Alex Xu wrote:

 Hi, guys

 This is the document for the changes from Nova v2 api to v3:
 https://wiki.openstack.org/wiki/NovaAPIv2tov3
 I will appreciate if anyone can help for review it.

 Another problem comes up - how to keep the doc updated. So can we ask
 people who change something in the v3 API to update the doc accordingly? I
 think that's one way to resolve it.

 Thanks
 Alex



 ___
 openstack-qa mailing list
 openstack...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-qa

 Thanks, this is great. I fixed a bug in the os-services section. BTW,
 openstack...@lists.openstack.org list is obsolete. openstack-dev with
 subject starting with [qa] is the current qa list. About updating, I think
 this will have to be heavily socialized in the nova team. The initial review
 should happen by those reviewing the tempest v3 api changes. That is how I
 found the os-services bug.


 While reviewing https://review.openstack.org/#/c/59939/ I found that a lot
 of the flavors changes are missing from this doc. Hopefully some one closer
 to the code changes can update it.

  -David

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] list of negative tests that need to be separated from other tests.

2013-12-06 Thread Ken'ichi Ohmichi
Hi Adalberto,

2013/12/7 Adalberto Medeiros adal...@linux.vnet.ibm.com:
 Hi!

 In the QA meeting yesterday, we decided to create a blueprint specifically for
 the negative tests in a separate file:
 https://blueprints.launchpad.net/tempest/+spec/negative-test-files and use
 it to track the patches.

Thank you for pointing this bp out.


 I added the etherpad link Ken'ichi pointed to this bp. Ken'ichi, should you
 be the owner of this bp?

Unfortunately, I could not change the items of this bp from my
launchpad account.
Could you be the owner?

Thanks
Ken'ichi Ohmichi

---
 On Tue, Dec 3, 2013 at 9:43 AM, Kenichi Oomichi
 oomi...@mxs.nes.nec.co.jp mailto:oomi...@mxs.nes.nec.co.jp wrote:


 Hi Sean, David, Marc

 I have one question about negative tests.
 Now we are in moratorium on new negative tests in Tempest:

 http://lists.openstack.org/pipermail/openstack-dev/2013-November/018748.html

 Is it OK to consider this kind of patch(separating negative tests from
 positive test file, without any additional negative tests) as an
 exception?


 I don't have a strong opinion on this, but I think it's ok given it
 will make the eventual removal of
 hand coded negative tests in the future easier even though it costs us
 a bit of churn now.

 Chris


 Thanks
 Ken'ichi Ohmichi

 ---

  -Original Message-
  From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com
 mailto:adal...@linux.vnet.ibm.com]
  Sent: Monday, December 02, 2013 8:33 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [tempest] list of negative tests
 that need to be separated from other tests.
 
  Thanks Ken'ichi. I added my name to a couple of them in that list.
 
  Adalberto Medeiros
  Linux Technology Center
  Openstack and Cloud Development
  IBM Brazil
  Email: adal...@linux.vnet.ibm.com
 mailto:adal...@linux.vnet.ibm.com

 
  On Mon 02 Dec 2013 07:36:38 AM BRST, Kenichi Oomichi wrote:
  
   Hi Adalberto,
  
   -Original Message-
   From: Adalberto Medeiros [mailto:adal...@linux.vnet.ibm.com
 mailto:adal...@linux.vnet.ibm.com]
   Sent: Saturday, November 30, 2013 11:29 PM
   To: OpenStack Development Mailing List
   Subject: [openstack-dev] [tempest] list of negative tests
 that need to be separated from other tests.
  
   Hi!
  
   I understand that one action toward negative tests, even before
   implementing the automatic schema generation, is to move them
 to their
   own file (.py), thus separating them from the 'positive'
 tests. (See
   patch https://review.openstack.org/#/c/56807/ as an example).
  
   In order to do so, I've got a list of testcases that still
 have both
   negative and positive tests together, and listed them in the
 following
   etherpad link:
 https://etherpad.openstack.org/p/bp_negative_tests_list
  
   The idea here is to have patches for each file until we get
 all the
   negative tests in their own files. I also linked the etherpad
 to the
   specific blueprint created by Marc for negative tests in icehouse
   (https://blueprints.launchpad.net/tempest/+spec/negative-tests ).
  
   Please, send any comments and whether you think this is the right
   approach to keep track on that task.
  
   We have already the same etherpad, and we are working on it.
   Please check the following:
   https://etherpad.openstack.org/p/TempestTestDevelopment
  
  
   Thanks
   Ken'ichi Ohmichi
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org

   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org

  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova] The document for the changes from Nova v2 api to v3

2013-12-06 Thread David Kranz

On 12/06/2013 03:57 PM, Ken'ichi Ohmichi wrote:

Hi,

We are implementing Nova v3 API validation with jsonschema.
The schemas of API parameters are defined under
nova/api/openstack/compute/schemas/v3/.
I guess the shemas would be used for checking the difference between
doc and Nova v3 API
parameters as another approach.

example: 
https://review.openstack.org/#/c/59616/1/nova/api/openstack/compute/schemas/v3/agents_schema.py

Thanks
Ken'ichi Ohmichi


Thanks, Ken'ichi. These could be useful. But I don't think they are a 
substitute for the document, for two reasons. First, these schemas only 
have info for the json dict part, not any changes to the url suffix or 
return values. Second, there really needs to be a release note to help 
users upgrade their apps from v2 to v3.


 -David


---
2013/12/7 David Kranz dkr...@redhat.com:

On 11/13/2013 06:09 PM, Christopher Yeoh wrote:

On Thu, Nov 14, 2013 at 7:52 AM, David Kranz dkr...@redhat.com wrote:

On 11/13/2013 08:30 AM, Alex Xu wrote:

Hi, guys

This is the document for the changes from Nova v2 api to v3:
https://wiki.openstack.org/wiki/NovaAPIv2tov3
I will appreciate if anyone can help for review it.

Another problem comes up - how to keep the doc updated. So can we ask
people who change something in the v3 API to update the doc accordingly? I
think that's one way to resolve it.

Thanks
Alex



___
openstack-qa mailing list
openstack...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-qa

Thanks, this is great. I fixed a bug in the os-services section. BTW,
openstack...@lists.openstack.org list is obsolete. openstack-dev with
subject starting with [qa] is the current qa list. About updating, I think
this will have to be heavily socialized in the nova team. The initial review
should happen by those reviewing the tempest v3 api changes. That is how I
found the os-services bug.


While reviewing https://review.openstack.org/#/c/59939/ I found that a lot
of the flavors changes are missing from this doc. Hopefully some one closer
to the code changes can update it.

  -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-06 Thread Fox, Kevin M
Another option is this:
https://github.com/cloudbase/cloudbase-init

It is python based on windows rather then .NET.

Thanks,
Kevin

From: Sandy Walsh [sandy.wa...@rackspace.com]
Sent: Friday, December 06, 2013 12:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 12/06/2013 03:45 PM, Dmitry Mescheryakov wrote:
 Hello all,

 We would like to push further the discussion on unified guest agent. You
 may find the details of our proposal at [1].

 Also let me clarify why we started this conversation. Savanna currently
 utilizes SSH to install/configure Hadoop on VMs. We were happy with that
 approach until recently we realized that in many OpenStack deployments
 VMs are not accessible from controller. That brought us to idea to use
 guest agent for VM configuration instead. That approach is already used
 by Trove, Murano and Heat and we can do the same.

 Uniting the efforts on a single guest agent brings a couple advantages:
 1. Code reuse across several projects.
 2. Simplified deployment of OpenStack. Guest agent requires additional
 facilities for transport like message queue or something similar.
 Sharing agent means projects can share transport/config and hence ease
 life of deployers.

 We see it is a library and we think that Oslo is a good place for it.

 Naturally, since this is going to be a _unified_ agent we seek input
 from all interested parties.

It might be worth while to consider building from the Rackspace guest
agents for linux [2] and windows [3]. Perhaps get them moved over to
stackforge and scrubbed?

These are geared towards Xen, but that would be a good first step in
making the HV-Guest pipe configurable.

[2] https://github.com/rackerlabs/openstack-guest-agents-unix
[3] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver

-S


 [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent

 Thanks,

 Dmitry


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync (was: creating a default for oslo config variables within a project?)

2013-12-06 Thread Yuriy Taraday
Hello, Sean.

I get the issue with the upgrade path. Users don't want to update their
config unless they are forced to do so.
But introducing code that weakens security and letting it stay is an
unconditionally bad idea.
It looks like we have to weigh two evils: having trouble upgrading and
lessening security. That's obvious.

Here are my thoughts on what we can do with it:
1. I think we should definitely force the user to do the appropriate
configuration so that we can use secure ways to do locking.
2. We can wait one release to do so, e.g. issue a deprecation warning now
and force the user to do it the right way later.
3. If we are going to do 2, we should do it in the service that is affected,
not in the library, because a library shouldn't track releases of an
application that uses it. It should do its thing and do it right (securely).

So I would suggest dealing with it in Cinder by importing the 'lock_path'
option after parsing configs, issuing a deprecation warning, and setting
it to tempfile.gettempdir() if it is still None.
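
For illustration, here is a minimal sketch of that suggestion (assumed
names, not actual Cinder or oslo code; it presumes the 'lock_path' option
has already been registered by the incubated lockutils module):

    import tempfile
    import warnings

    from oslo.config import cfg

    CONF = cfg.CONF

    def apply_lock_path_fallback():
        # Call after the config files have been parsed, early in service
        # startup, so the library itself stays untouched.
        if CONF.lock_path is None:
            warnings.warn("lock_path is not set; falling back to the system "
                          "temp dir. This is insecure and the fallback will "
                          "be removed in a future release.",
                          DeprecationWarning)
            CONF.set_default('lock_path', tempfile.gettempdir())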

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Top Gate Bugs

2013-12-06 Thread Davanum Srinivas
Joe,

Looks like we may be a bit more stable now?

Short URL: http://bit.ly/18qq4q2

Long URL : 
http://graphite.openstack.org/graphlot/?from=-120houruntil=-0hourtarget=color(alias(movingAverage(asPercent(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-full.SUCCESS,sum(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-full.{SUCCESS,FAILURE})),'6hours'),%20'gate-tempest-dsvm-postgres-full'),'ED9121')target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-postgres-full.SUCCESS,sum(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-postgres-full.{SUCCESS,FAILURE})),'6hours'),%20'gate-tempest-dsvm-neutron-large-ops'),'00F0F0')target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-neutron.SUCCESS,sum(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-neutron.{SUCCESS,FAILURE})),'6hours'),%20'gate-tempest-dsvm-neutron'),'00FF00')target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-neutron-large-ops.SUCCESS,sum(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-neutron-large-ops.{S
 
UCCESS,FAILURE})),'6hours'),%20'gate-tempest-dsvm-neutron-large-ops'),'00c868')target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.check-grenade-dsvm.SUCCESS,sum(stats.zuul.pipeline.check.job.check-grenade-dsvm.{SUCCESS,FAILURE})),'6hours'),%20'check-grenade-dsvm'),'800080')target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-large-ops.SUCCESS,sum(stats.zuul.pipeline.gate.job.gate-tempest-dsvm-large-ops.{SUCCESS,FAILURE})),'6hours'),%20'gate-tempest-dsvm-neutron-large-ops'),'E080FF')
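
For reference, a rough sketch of how targets like the ones in that long URL
are put together: each one is the success rate of a gate job, SUCCESS over
SUCCESS plus FAILURE, smoothed with a 6-hour moving average and wrapped in
alias() and color():

    def success_rate_target(job, label, color):
        # Build one graphite target expression for a gate job.
        base = 'stats.zuul.pipeline.gate.job.%s' % job
        pct = 'asPercent(%s.SUCCESS,sum(%s.{SUCCESS,FAILURE}))' % (base, base)
        return "color(alias(movingAverage(%s,'6hours'),'%s'),'%s')" % (
            pct, label, color)

    # e.g. success_rate_target('gate-tempest-dsvm-full',
    #                          'gate-tempest-dsvm-full', 'ED9121')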

-- dims


On Fri, Dec 6, 2013 at 11:28 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On Wednesday, December 04, 2013 7:22:23 AM, Joe Gordon wrote:

 TL;DR: Gate is failing 23% of the time due to bugs in nova, neutron
 and tempest. We need help fixing these bugs.


 Hi All,

 Before going any further we have a bug that is affecting gate and
 stable, so it's getting top priority here. elastic-recheck currently
 doesn't track unit tests because we don't expect them to fail very
 often. Turns out that assessment was wrong; we now have a nova py27
 unit test bug in the gate and the stable gate.

 https://bugs.launchpad.net/nova/+bug/1216851
 Title: nova unit tests occasionally fail migration tests for mysql and
 postgres
 Hits
   FAILURE: 74
 The failures appear multiple times for a single job, and some of those
 are due to bad patches in the check queue.  But this is being seen in
 stable and trunk gate so something is definitely wrong.

 ===


 It's time for another edition of 'Top Gate Bugs.'  I am sending this
 out now because, in addition to our usual gate bugs, a few new ones have
 cropped up recently, and as we saw a few weeks ago it doesn't take
 very many new bugs to wedge the gate.

 Currently the gate has a failure rate of at least 23%! [0]

 Note: this email was generated with
 http://status.openstack.org/elastic-recheck/ and
 'elastic-recheck-success' [1]

 1) https://bugs.launchpad.net/bugs/1253896
 Title: test_minimum_basic_scenario fails with SSHException: Error
 reading SSH protocol banner
 Projects:  neutron, nova, tempest
 Hits
   FAILURE: 324
 This one has been around for several weeks now and although we have
 made some attempts at fixing this, we aren't any closer to resolving
 it than we were a few weeks ago.

 2) https://bugs.launchpad.net/bugs/1251448
 Title: BadRequest: Multiple possible networks found, use a Network ID
 to be more specific.
 Project: neutron
 Hits
   FAILURE: 141

 3) https://bugs.launchpad.net/bugs/1249065
 Title: Tempest failure: tempest/scenario/test_snapshot_pattern.py
 Project: nova
 Hits
   FAILURE: 112
 This is a bug in nova's neutron code.

 4) https://bugs.launchpad.net/bugs/1250168
 Title: gate-tempest-devstack-vm-neutron-large-ops is failing
 Projects: neutron, nova
 Hits
   FAILURE: 94
 This is an old bug that was fixed, but came back on December 3rd. So
 this is a recent regression. This may be an infra issue.

 5) https://bugs.launchpad.net/bugs/1210483
 Title: ServerAddressesTestXML.test_list_server_addresses FAIL
 Projects: neutron, nova
 Hits
   FAILURE: 73
 This has had some attempts made at fixing it but it's still around.


 In addition to the existing bugs, we have some new bugs on the rise:

 1) https://bugs.launchpad.net/bugs/1257626
 Title: Timeout while waiting on RPC response - topic: network, RPC
 method: allocate_for_instance info: unknown
 Project: nova
 Hits
   FAILURE: 52
 This is a large-ops-only bug. It has been around for at least two weeks,
 but we have seen it in higher numbers starting around December 3rd. This
 may be an infrastructure issue, as neutron-large-ops started
 failing more around the same time.

 2) https://bugs.launchpad.net/bugs/1257641
 Title: Quota exceeded for instances: Requested 1, but already used 10
 of 10 instances
 Projects: nova, tempest
 Hits
   FAILURE: 41
 Like the previous bug, this has been around 

Re: [openstack-dev] [TripleO][Tuskar] Questions around Development Process

2013-12-06 Thread Ben Nemec

On 2013-12-06 12:19, Jay Dobies wrote:

a) Because we're essentially doing a tear-down and re-build of the
whole architecture (a lot of the concepts in tuskar will simply
disappear), it's difficult to do small incremental patches that support
existing functionality.  Is it okay to have patches that break
functionality?  Are there good alternatives?


This is an incubating project, so there are no api stability promises.
If a patch breaks some functionality that we've decided to not support
going forward I don't see a problem with it.  That said, if a patch
breaks some functionality that we _do_ plan to keep, I'd prefer to see
it done as a series of dependent commits that end with the feature in a
working state again, even if some of the intermediate commits are not
fully functional.  Hopefully that will both keep the commit sizes down
and provide a definite path back to functionality.


Is there any sort of policy or convention of sending out a warning
before that sort of thing is merged in so that people don't
accidentally blindly pull master and break something they were using?


Not that I know of.  Part of using an incubating project is that 
incompatible changes can be made at any time.  I'm well aware how 
painful that can be if you're trying to consume such a project 
downstream (I've been there), but that's the price for using a project 
that hasn't released yet.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Provisioning Support

2013-12-06 Thread Veiga, Anthony
As part of the discussion around managing IPv6-addressed hosts both within
neutron itself and other systems that require address information, Sean
Collins and I had had a discussion about the types of addresses that could
be supported.  Since IPv6 has many modes of provisioning, we will need to
provide support for each of them.  However, there is a caveat when dealing
with SLAAC provisioning.  The only method of provisioning that is
predictable from Neutron's point of view is EUI-64.  Dazhao Yu has worked
on a patch set to do this [1].  Privacy Extensions are in use and well
documented by the IETF (RFC 4941); however, it is not feasible for Neutron
to predict these addresses.  Thus it is my opinion that OpenStack should
officially support using EUI-64 only for provisioning addresses via SLAAC.
 This does not preclude PE methods from functioning, but it will be
impossible to provide the guest's IPv6 address to other systems such as
FWaaS or LBaaS.  Also, this is only for SLAAC provisioning mode.  Stateful
(DHCPv6) and static injection should not be impacted here.
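
To illustrate why EUI-64 is the predictable case, here is a small sketch (not
Neutron code) of how the address falls out of the port's MAC plus the subnet
prefix:

    def eui64_address(prefix, mac):
        # prefix like '2001:db8::', mac like 'fa:16:3e:12:34:56'
        octets = [int(x, 16) for x in mac.split(':')]
        octets[0] ^= 0x02  # flip the universal/local bit
        eui = octets[:3] + [0xff, 0xfe] + octets[3:]
        groups = ['%02x%02x' % (eui[i], eui[i + 1]) for i in range(0, 8, 2)]
        return prefix + ':'.join(groups)

    # eui64_address('2001:db8::', 'fa:16:3e:12:34:56')
    # -> '2001:db8::f816:3eff:fe12:3456'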

To this end, I'd like to propose that OpenStack officially support guests
using EUI-64 when using SLAAC for provisioning.  Note that I am NOT
proposing anything regarding DHCPv6 or static provisioning, as I believe
that should allow any of the existing allocation methods.


[1] https://review.openstack.org/#/c/56184/

-Anthony


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync (was: creating a default for oslo config variables within a project?)

2013-12-06 Thread Ben Nemec
 

On 2013-12-06 15:14, Yuriy Taraday wrote: 

 Hello, Sean. 
 
 I get the issue with upgrade path. User doesn't want to update config unless 
 one is forced to do so. 
 But introducing code that weakens security and let it stay is an 
 unconditionally bad idea. 
 It looks like we have to weigh two evils: having troubles upgrading and 
 lessening security. That's obvious. 
 
 Here are my thoughts on what we can do with it: 
 1. I think we should definitely force user to do appropriate configuration to 
 let us use secure ways to do locking. 
 2. We can wait one release to do so, e.g. issue a deprecation warning now and 
 force user to do it the right way later. 
 3. If we are going to do 2. we should do it in the service that is affected 
 not in the library because library shouldn't track releases of an application 
 that uses it. It should do its thing and do it right (secure). 
 
 So I would suggest to deal with it in Cinder by importing 'lock_path' option 
 after parsing configs and issuing a deprecation warning and setting it to 
 tempfile.gettempdir() if it is still None.

This is what Sean's change is doing, but setting lock_path to
tempfile.gettempdir() is the security concern. 

Since there seems to be plenty of resistance to using /tmp by default,
here is my proposal: 

1) We make Sean's change to open files in append mode; a minimal
illustration follows this list. I think we can all agree this is a good
thing regardless of any config changes.

2) Leave lockutils broken in Icehouse if lock_path is not set, as I
believe Mark suggested earlier. Log an error if we find that
configuration. Users will be no worse off than they are today, and if
they're paying attention they can get the fixed lockutils behavior
immediately. 

3) Make an unset lock_path a fatal error in J. 
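
The append-mode point in 1), illustrated with a minimal sketch (assumed code,
not the actual patch): opening the lock file with 'a' creates it if missing
but never truncates it, so an existing lock file cannot be clobbered by a
second opener, while fcntl still has something to lock.

    import fcntl

    def open_and_lock(path):
        # 'a' (append) creates the file if needed but does not truncate it.
        lock_file = open(path, 'a')
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        return lock_file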

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Tzu-Mainn Chen
That looks really good, thanks for putting that together!

I'm going to put together a wiki page that consolidates the various Tuskar
planning documents - requirements, user stories, wireframes, etc - so it's
easier to see the whole planning picture.

Mainn

- Original Message -
 
 On Dec 5, 2013, at 9:31 PM, Tzu-Mainn Chen tzuma...@redhat.com wrote:
 
  Hey all,
  
  I've attempted to spin out the requirements behind Jarda's excellent
  wireframes
  (http://lists.openstack.org/pipermail/openstack-dev/2013-December/020944.html).
  Hopefully this can add some perspective on both the wireframes and the
  needed changes to the tuskar-api.
 
 This list is great, thanks very much for taking the time to write this up! I
 think a big part of the User Experience design is to take a step back and
 understand the requirements from an end user's point of view…what would they
 want to accomplish by using this UI? This might influence the design in
 certain ways, so I've taken a cut at a set of user stories for the Icehouse
 timeframe based on these requirements that I hope will be useful during
 discussions.
 
 Based on the OpenStack Personas[1], I think that Anna would be the main
 consumer of the TripleO UI, but please let me know if you think otherwise.
 
 - As an infrastructure administrator, Anna needs to deploy or update a set of
 resources that will run OpenStack (This isn't a very specific use case, but
 more of the larger end goal of Anna coming into the UI.)
 - As an infrastructure administrator, Anna expects that the management node
 for the deployment services is already up and running and the status of this
 node is shown in the UI.
 - As an infrastructure administrator, Anna wants to be able to quickly see
 the set of unallocated nodes that she could use for her deployment of
 OpenStack. Ideally, she would not have to manually tell the system about
 these nodes. If she needs to manually register nodes for whatever reason,
 Anna would only want to have to define the essential data needed to register
 these nodes.
 - As an infrastructure administrator, Anna needs to assign a role to each of
 the necessary nodes in her OpenStack deployment. The nodes could be either
 controller, compute, networking, or storage resources depending on the needs
 of this deployment.
 - As an infrastructure administrator, Anna wants to review the distribution
 of the nodes that she has assigned before kicking off the Deploy task.
 - As an infrastructure administrator, Anna wants to monitor the deployment
 process of all of the nodes that she has assigned.
 - As an infrastructure administrator, Anna needs to be able to troubleshoot
 any errors that may occur during the deployment of nodes process.
 - As an infrastructure administrator, Anna wants to monitor the availability
 and status of each node in her deployment.
 - As an infrastructure administrator, Anna wants to be able to unallocate a
 node from a deployment.
 - As an infrastructure administrator, Anna wants to be able to view the
 history of nodes that have been in a deployment.
 - As an infrastructure administrator, Anna needs to be notified of any
 important changes to nodes that are in the OpenStack deployment. She does
 not want to be spammed with non-important notifications.
 
 Please feel free to comment, change, or add to this list.
 
 [1]https://docs.google.com/document/d/16rkiXWxxgzGT47_Wc6hzIPzO2-s2JWAPEKD0gP2mt7E/edit?pli=1#
 
 Thanks,
 Liz
 
  
  All comments are welcome!
  
  Thanks,
  Tzu-Mainn Chen
  
  
  
  *** Requirements are assumed to be targeted for Icehouse, unless marked
  otherwise:
(M) - Maybe Icehouse, dependency on other in-development features
(F) - Future requirement, after Icehouse
  
  * NODES
* Creation
   * Manual registration
  * hardware specs from Ironic based on mac address (M)
  * IP auto populated from Neutron (F)
   * Auto-discovery during undercloud install process (M)
* Monitoring
* assignment, availability, status
* capacity, historical statistics (M)
* Management node (where triple-o is installed)
* created as part of undercloud install process
* can create additional management nodes (F)
 * Resource nodes
 * searchable by status, name, cpu, memory, and all attributes from
 ironic
 * can be allocated as one of four node types
 * compute
 * controller
 * object storage
 * block storage
 * Resource class - allows for further categorization of a node type
 * each node type specifies a single default resource class
 * allow multiple resource classes per node type (M)
 * optional node profile for a resource class (M)
 * acts as filter for nodes that can be allocated to that
 class (M)
 * nodes can be viewed by node types
 * additional 

[openstack-dev] [ironic][qa] How will ironic tests run in tempest?

2013-12-06 Thread David Kranz
It's great that tempest tests for ironic have been submitted! I was 
reviewing https://review.openstack.org/#/c/48109/ and noticed that the 
tests do not actually run. They are skipped because baremetal is not 
enabled. This is not terribly surprising but we have had a policy in 
tempest to only merge code that has demonstrated that it works. For 
services that cannot run in the single-vm environment of the upstream 
gate we said there could be a system running somewhere that would run 
them and report a result to gerrit. Is there a plan for this, or to make 
an exception for ironic?


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-06 Thread Tzu-Mainn Chen
The relevant wiki page is here:

https://wiki.openstack.org/wiki/TripleO/Tuskar#Icehouse_Planning


- Original Message -
 That looks really good, thanks for putting that together!
 
 I'm going to put together a wiki page that consolidates the various Tuskar
 planning documents - requirements, user stories, wireframes, etc - so it's
 easier to see the whole planning picture.
 
 Mainn
 
 - Original Message -
  
  On Dec 5, 2013, at 9:31 PM, Tzu-Mainn Chen tzuma...@redhat.com wrote:
  
   Hey all,
   
   I've attempted to spin out the requirements behind Jarda's excellent
   wireframes
   (http://lists.openstack.org/pipermail/openstack-dev/2013-December/020944.html).
   Hopefully this can add some perspective on both the wireframes and the
   needed changes to the tuskar-api.
  
  This list is great, thanks very much for taking the time to write this up!
  I
  think a big part of the User Experience design is to take a step back and
  understand the requirements from an end user's point of view…what would
  they
  want to accomplish by using this UI? This might influence the design in
  certain ways, so I've taken a cut at a set of user stories for the Icehouse
  timeframe based on these requirements that I hope will be useful during
  discussions.
  
  Based on the OpenStack Personas[1], I think that Anna would be the main
  consumer of the TripleO UI, but please let me know if you think otherwise.
  
  - As an infrastructure administrator, Anna needs to deploy or update a set
  of
  resources that will run OpenStack (This isn't a very specific use case, but
  more of the larger end goal of Anna coming into the UI.)
  - As an infrastructure administrator, Anna expects that the management node
  for the deployment services is already up and running and the status of
  this
  node is shown in the UI.
  - As an infrastructure administrator, Anna wants to be able to quickly see
  the set of unallocated nodes that she could use for her deployment of
  OpenStack. Ideally, she would not have to manually tell the system about
  these nodes. If she needs to manually register nodes for whatever reason,
  Anna would only want to have to define the essential data needed to
  register
  these nodes.
  - As an infrastructure administrator, Anna needs to assign a role to each
  of
  the necessary nodes in her OpenStack deployment. The nodes could be either
  controller, compute, networking, or storage resources depending on the
  needs
  of this deployment.
  - As an infrastructure administrator, Anna wants to review the distribution
  of the nodes that she has assigned before kicking off the Deploy task.
  - As an infrastructure administrator, Anna wants to monitor the deployment
  process of all of the nodes that she has assigned.
  - As an infrastructure administrator, Anna needs to be able to troubleshoot
  any errors that may occur during the deployment of nodes process.
  - As an infrastructure administrator, Anna wants to monitor the
  availability
  and status of each node in her deployment.
  - As an infrastructure administrator, Anna wants to be able to unallocate a
  node from a deployment.
  - As an infrastructure administrator, Anna wants to be able to view the
  history of nodes that have been in a deployment.
  - As an infrastructure administrator, Anna needs to be notified of any
  important changes to nodes that are in the OpenStack deployment. She does
  not want to be spammed with non-important notifications.
  
  Please feel free to comment, change, or add to this list.
  
  [1]https://docs.google.com/document/d/16rkiXWxxgzGT47_Wc6hzIPzO2-s2JWAPEKD0gP2mt7E/edit?pli=1#
  
  Thanks,
  Liz
  
   
   All comments are welcome!
   
   Thanks,
   Tzu-Mainn Chen
   
   
   
   *** Requirements are assumed to be targeted for Icehouse, unless marked
   otherwise:
 (M) - Maybe Icehouse, dependency on other in-development features
 (F) - Future requirement, after Icehouse
   
   * NODES
 * Creation
* Manual registration
   * hardware specs from Ironic based on mac address (M)
   * IP auto populated from Neutron (F)
* Auto-discovery during undercloud install process (M)
 * Monitoring
 * assignment, availability, status
 * capacity, historical statistics (M)
 * Management node (where triple-o is installed)
 * created as part of undercloud install process
 * can create additional management nodes (F)
  * Resource nodes
  * searchable by status, name, cpu, memory, and all attributes from
  ironic
  * can be allocated as one of four node types
  * compute
  * controller
  * object storage
  * block storage
  * Resource class - allows for further categorization of a node
  type
  * each node type specifies a single default resource class
  * allow multiple resource classes 

[openstack-dev] [neutron] Questions on logging setup for development

2013-12-06 Thread Paul Michali
Hi,

For Neutron, I'm creating a module (one of several eventually) as part of a new 
blueprint I'm working on, and the associated unit test module. I'm in really 
early development, and just running this UT module as a standalone script 
(rather than through tox). It allows me to do TDD pretty quickly on the code 
I'm developing (that's the approach I'm taking right now - fingers crossed :).

In the module, I did an import of the logging package, and when I run UTs I can 
see the log messages that would occur, if desired.

I have the following hack to turn off/on the logging for debug level:

if False:  # Debugging
    logging.basicConfig(
        format='%(asctime)-15s [%(levelname)s] %(message)s',
        level=logging.DEBUG)

I made the log calls the same as what would be in other Neutron code, so that I 
don't have to change the code later, as I start to fold it into the Neutron 
code. However, I'd like to import the neutron.openstack.common.log package in 
my code, so that the code will be identical to what is needed once I start 
running this code as part of a process, but I had some questions…

When using neutron.openstack.common.log, how do I toggle the debug level 
logging on, if I run this standalone, as I'm doing now?
Is there a way to do it, without adding in the above conditional logic to the 
production code? Maybe put something in the UT module?
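
One thing that might work from the UT module, as a hedged sketch (assuming
the oslo-incubator log module of this era, which registers a 'debug' option
and exposes setup()):

    from oslo.config import cfg
    from neutron.openstack.common import log as logging

    cfg.CONF.set_override('debug', True)  # only in the standalone UT module
    logging.setup('neutron')

    LOG = logging.getLogger(__name__)
    LOG.debug('debug logging enabled for standalone runs')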

I can always continue as is, and then switch things over later (changing the 
import line and pulling the if clause), once I have things mostly done, and 
want to run as part of Neutron, but it would be nice if I can find a way to do 
that up front to avoid changes later.

Thoughts? Suggestions?

Thanks!


PCM (Paul Michali)

MAIL  p...@cisco.com
IRC    pcm_  (irc.freenode.net)
TW     @pmichali
GPG key    4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][qa] How will ironic tests run in tempest?

2013-12-06 Thread Clark Boylan
On Fri, Dec 6, 2013 at 1:53 PM, David Kranz dkr...@redhat.com wrote:
 It's great that tempest tests for ironic have been submitted! I was
 reviewing https://review.openstack.org/#/c/48109/ and noticed that the tests
 do not actually run. They are skipped because baremetal is not enabled. This
 is not terribly surprising but we have had a policy in tempest to only merge
 code that has demonstrated that it works. For services that cannot run in
 the single-vm environment of the upstream gate we said there could be a
 system running somewhere that would run them and report a result to gerrit.
 Is there a plan for this, or to make an exception for ironic?

  -David

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

There is a change [0] to openstack-infra/config to add experimental
tempest jobs to test ironic. I think that change is close to being
ready, but I need to give it time for a proper review. Once it's in, that
will allow you to test 48109 (in theory; not sure if all the bits will
just work). I don't think these tests fall under the cannot run in a
single vm environment umbrella; we should be able to test the
baremetal code via PXE booting of VMs within the single-VM
environment.

[0] https://review.openstack.org/#/c/53917/


Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Edmund Troche

I agree with what seems to also be the general consensus, that Glance can
become Heater+Glance (the service that manages images in OS today).
Clearly, if someone looks at the Glance DB schema, APIs and service type
(as returned by keystone service-list), all of the terminology is about
images, so we would need to more formally define what are the
characteristics or image, template, maybe assembly, components etc
and find what is a good generalization. When looking at the attributes for
image (image table), I can see where there are a few that would be
generic enough to apply to image, template etc, so those could be taken
to be the base set of attributes, and then based on the type (image,
template, etc) we could then have attributes that are type-specific (maybe
by leveraging what is today image_properties).
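
Purely as a hypothetical sketch (these are not Glance's actual models), the
generalization could look something like a base asset table plus
type-specific properties, mirroring today's image/image_properties split:

    from sqlalchemy import Column, ForeignKey, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Asset(Base):
        __tablename__ = 'assets'
        id = Column(String(36), primary_key=True)
        name = Column(String(255))
        type = Column(String(30))    # 'image', 'template', ...
        status = Column(String(30))
        owner = Column(String(255))

    class AssetProperty(Base):
        __tablename__ = 'asset_properties'
        id = Column(Integer, primary_key=True)
        asset_id = Column(String(36), ForeignKey('assets.id'))
        name = Column(String(255))   # e.g. 'min_ram' for an image asset
        value = Column(Text)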

As I read through the discussion, the one thing that came to mind is asset
management. I can see where if someone bothers to create an image, or a
template, then it is for a good reason, and that perhaps you'd like to
maintain it as an IT asset. Along those lines, it occurred to me that maybe
what we need is to make Glance some sort of asset management service that
can be leveraged by Service Catalogs, Nova, etc. Instead of storing
images and templates  we store assets of one kind or another, with
artifacts (like files, image content, etc), and associated metadata. There
is some work we could borrow from, conceptually at least, from OSLC's Asset
Management specification:
http://open-services.net/wiki/asset-management/OSLC-Asset-Management-2.0-Specification/.
 Looking at this spec, it probably has more than we need, but there's
plenty we could borrow from it.


Edmund Troche




From:   Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   12/06/2013 01:34 PM
Subject:Re: [openstack-dev] [heat] [glance] Heater Proposal



As a Murano team we will be happy to contribute to Glance. Our Murano
metadata repository is a standalone component (with its own git
repository) which is not tightly coupled with Murano itself. We can easily
add our functionality to Glance as a new component\subproject.

Thanks
Georgy


On Fri, Dec 6, 2013 at 11:11 AM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

  On Dec 6, 2013, at 10:38 AM, Clint Byrum cl...@fewbar.com wrote:

   Excerpts from Jay Pipes's message of 2013-12-05 21:32:54 -0800:
   On 12/05/2013 04:25 PM, Clint Byrum wrote:
   Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
   Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
   On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
    wrote:
  
   Excerpts from Monty Taylor's message of 2013-12-04 17:54:45
  -0800:
   Why not just use glance?
  
  
   I've asked that question a few times, and I think I can collate
  the
   responses I've received below. I think enhancing glance to do
  these
   things is on the table:
  
   1. Glance is for big blobs of data not tiny templates.
   2. Versioning of a single resource is desired.
   3. Tagging/classifying/listing/sorting
   4. Glance is designed to expose the uploaded blobs to nova, not
  users
  
   My responses:
  
   1: Irrelevant. Smaller things will fit in it just fine.
  
   Fitting is one thing, optimizations around particular assumptions
  about the size of data and the frequency of reads/writes might be an
  issue, but I admit to ignorance about those details in Glance.
  
  
   Optimizations can be improved for various use cases. The design,
  however,
   has no assumptions that I know about that would invalidate storing
  blobs
   of yaml/json vs. blobs of kernel/qcow2/raw image.
  
   I think we are getting out into the weeds a little bit here. It is
  important to think about these apis in terms of what they actually do,
  before the decision of combining them or not can be made.
  
   I think of HeatR as a template storage service, it provides extra
  data and operations on templates. HeatR should not care about how those
  templates are stored.
   Glance is an image storage service, it provides extra data and
  operations on images (not blobs), and it happens to use swift as a
  backend.
  
   If HeatR and Glance were combined, it would result in taking two
  very different types of data (template metadata vs image metadata) and
  mashing them into one service. How would adding the complexity of HeatR
  benefit Glance, when they are dealing with conceptually two very
  different types of data? For instance, should a template ever care about
  the field minRam that is stored with an image? Combining them adds a
  huge development complexity with a very small operations payoff, and
  OpenStack is already so operationally complex that HeatR as a separate
  service would barely be noticed. Only clients of Heat will ever care about
  data and operations on templates, so I move that 

Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync (was: creating a default for oslo config variables within a project?)

2013-12-06 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2013-12-06 13:38:16 -0800:
  
 
 On 2013-12-06 15:14, Yuriy Taraday wrote: 
 
  Hello, Sean. 
  
  I get the issue with upgrade path. User doesn't want to update config 
  unless one is forced to do so. 
  But introducing code that weakens security and let it stay is an 
  unconditionally bad idea. 
  It looks like we have to weigh two evils: having troubles upgrading and 
  lessening security. That's obvious. 
  
  Here are my thoughts on what we can do with it: 
  1. I think we should definitely force user to do appropriate configuration 
  to let us use secure ways to do locking. 
  2. We can wait one release to do so, e.g. issue a deprecation warning now 
  and force user to do it the right way later. 
  3. If we are going to do 2. we should do it in the service that is affected 
  not in the library because library shouldn't track releases of an 
  application that uses it. It should do its thing and do it right (secure). 
  
  So I would suggest to deal with it in Cinder by importing 'lock_path' 
  option after parsing configs and issuing a deprecation warning and setting 
  it to tempfile.gettempdir() if it is still None.
 
 This is what Sean's change is doing, but setting lock_path to
 tempfile.gettempdir() is the security concern. 

Yuriy's suggestion is that we should let Cinder override the config
variable's default with something insecure. Basically only deprecate
it in Cinder's world, not oslo's. That makes more sense from a library
standpoint as it keeps the library's expected interface stable.

 
 Since there seems to be plenty of resistance to using /tmp by default,
 here is my proposal: 
 
 1) We make Sean's change to open files in append mode. I think we can
 all agree this is a good thing regardless of any config changes. 
 
 2) Leave lockutils broken in Icehouse if lock_path is not set, as I
 believe Mark suggested earlier. Log an error if we find that
 configuration. Users will be no worse off than they are today, and if
 they're paying attention they can get the fixed lockutils behavior
 immediately. 

Broken how? Broken in that it raises an exception, or broken in that it
carries a security risk?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-12-06 Thread Randall Burt
I hope I'm not re-opening worm cans here, and that's not my intent, but I just 
wanted to get a little clarification in-line below:

On Dec 6, 2013, at 3:24 PM, Tim Schnell tim.schn...@rackspace.com
 wrote:

 To resolve this thread, I have created 5 blueprints based on this mailing
 list discussion. I have attempted to distill the proposed specification
 down to what seemed generally agreed upon but if you feel strongly that I
 have incorrectly captured something let's talk about it!
 
 Here are the blueprints:
 
 1) Stack Keywords
 blueprint: https://blueprints.launchpad.net/heat/+spec/stack-keywords
 spec: https://wiki.openstack.org/wiki/Heat/UI#Stack_Keywords

As proposed, these look like template keywords and not stack keywords.

I may be mis-remembering the conversation around this, but it would seem to me 
this mixes tagging templates and tagging stacks. In my mind, these are separate 
things. For the stack part, it seems like I could just pass something like 
--keyword blah multiple times to python-heatclient and not have to edit the 
template I'm passing. This lets me organize my stacks the way I want rather 
than relying on the template author (who may not be me) to organize things for 
me. Alternatively, I'd at least like the ability to accept, replace, and/or 
augment the keywords the template author proposes.

 
 2) Parameter Grouping and Ordering
 blueprint: 
 https://blueprints.launchpad.net/heat/+spec/parameter-grouping-ordering
 spec: 
 https://wiki.openstack.org/wiki/Heat/UI#Parameter_Grouping_and_Ordering
 
 3) Parameter Help Text
 blueprint: 
 https://blueprints.launchpad.net/heat/+spec/add-help-text-to-template
 spec: https://wiki.openstack.org/wiki/Heat/UI#Help_Text
 
 4) Parameter Label
 blueprint: 
 https://blueprints.launchpad.net/heat/+spec/add-parameter-label-to-template
 spec: https://wiki.openstack.org/wiki/Heat/UI#Parameter_Label
 
 
 This last blueprint did not get as much discussion so I have added it with
 the discussion flag set. I think this will get more important in the
 future but I don't need to implement right now. I'd love to hear more
 thoughts about it.
 
 5) Get Parameters API Endpoint
 blueprint: 
 https://blueprints.launchpad.net/heat/+spec/get-parameters-from-api
 spec: 
 https://wiki.openstack.org/wiki/Heat/UI#New_API_Endpoint_for_Returning_Temp
 late_Parameters

History around validate_template aside, I wonder if this doesn't re-open the 
discussion around having an endpoint that will translate an entire template 
into the native format (HOT). I understand that the idea is that we normalize 
parameter values to relieve user interfaces from having to understand several 
formats supported by Heat, but it just seems to me that there's a more general 
use case here.

 
 Thanks,
 Tim

I know it's probably nit-picky, but I would prefer these specs be individual 
wiki pages instead of all lumped together. At any rate, thanks for organizing 
all this!

 
 On 11/28/13 4:55 AM, Zane Bitter zbit...@redhat.com wrote:
 
 On 27/11/13 23:37, Fox, Kevin M wrote:
 Hmm... Yeah. when you tell heat client the url to a template file, you
 could set a flag telling the heat client it is in a git repo. It could
 then automatically look for repo information and set a stack metadata
 item pointing back to it.
 
 Or just store the URL.
 
 If you didn't care about taking a performance hit, heat client could
 always try and check to see if it was a git repo url. That may add
 several extra http requests though...
 
 Thanks,
 Kevin
 
 From: Clint Byrum [cl...@fewbar.com]
 Sent: Wednesday, November 27, 2013 1:04 PM
 To: openstack-dev
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements   roadmap
 
 Excerpts from Fox, Kevin M's message of 2013-11-27 08:58:16 -0800:
 This use case is sort of a provenance case: where did the stack come
 from, so I can find out more about it?
 
 
 This exhibits similar problems to our Copyright header problems. Relying
 on authors to maintain their authorship information in two places is
 cumbersome and thus the one that is not automated will likely fall out
 of sync fairly quickly.
 
 You could put a git commit field in the template itself but then it
 would be hard to keep updated.
 
 
 Or you could have Heat able to pull from any remote source rather than
 just allowing submission of the template directly. It would just be
 another column in the stack record. This would allow said support person
 to see where it came from by viewing the stack, which solves the use
 case.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Randall Burt
I too have warmed to this idea but wonder about the actual implementation 
around it. While I like where Edmund is going with this, I wonder if it 
wouldn't be valuable in the short-to-mid-term (I/J) to just add /templates to 
Glance (/assemblies, /applications, etc) alongside /images.  Initially, we 
could have separate endpoints and data structures for these different asset 
types, refactoring the easy bits along the way and leveraging the existing data 
storage and caching bits, but leaving more disruptive changes alone. That can 
get the functionality going, prove some concepts, and allow all of the 
interested parties to better plan a more general v3 api.
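
Purely illustrative (not Glance's actual router code), the short-term shape
of this could be as simple as sibling resources in a v2-style router:

    from routes import Mapper

    mapper = Mapper()
    mapper.resource('image', 'images', path_prefix='/v2')
    # hypothetical additions living alongside /v2/images
    mapper.resource('template', 'templates', path_prefix='/v2')
    mapper.resource('assembly', 'assemblies', path_prefix='/v2')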

On Dec 6, 2013, at 4:23 PM, Edmund Troche 
edmund.tro...@us.ibm.com
 wrote:


I agree with what seems to also be the general consensus, that Glance can 
become Heater+Glance (the service that manages images in OS today). Clearly, 
if someone looks at the Glance DB schema, APIs and service type (as returned by 
keystone service-list), all of the terminology is about images, so we would 
need to more formally define what are the characteristics or image, 
template, maybe assembly, components etc and find what is a good 
generalization. When looking at the attributes for image (image table), I can 
see where there are a few that would be generic enough to apply to image, 
template etc, so those could be taken to be the base set of attributes, and 
then based on the type (image, template, etc) we could then have attributes 
that are type-specific (maybe by leveraging what is today image_properties).

As I read through the discussion, the one thing that came to mind is asset 
management. I can see where if someone bothers to create an image, or a 
template, then it is for a good reason, and that perhaps you'd like to maintain 
it as an IT asset. Along those lines, it occurred to me that maybe what we need 
is to make Glance some sort of asset management service that can be leveraged 
by Service Catalogs, Nova, etc. Instead of storing images and templates  we 
store assets of one kind or another, with artifacts (like files, image content, 
etc), and associated metadata. There is some work we could borrow from, 
conceptually at least, from OSLC's Asset Management specification: 
http://open-services.net/wiki/asset-management/OSLC-Asset-Management-2.0-Specification/.
 Looking at this spec, it probably has more than we need, but there's plenty we 
could borrow from it.


Edmund Troche



From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,
Date: 12/06/2013 01:34 PM
Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal





As a Murano team we will be happy to contribute to Glance. Our Murano metadata 
repository is a standalone component (with its own git repository)which is not 
tightly coupled with Murano itself. We can easily add our functionality to 
Glance as a new component\subproject.

Thanks
Georgy


On Fri, Dec 6, 2013 at 11:11 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 On Dec 6, 2013, at 10:38 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Jay Pipes's message of 2013-12-05 21:32:54 -0800:
 On 12/05/2013 04:25 PM, Clint Byrum wrote:
 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
  On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
  wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?


 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users

 My responses:

 1: Irrelevant. Smaller things will fit in it just fine.

 Fitting is one thing, optimizations around particular assumptions about 
 the size of data and the frequency of reads/writes might be an issue, 
 but I admit to ignorance about those details in Glance.


 Optimizations can be improved for various use cases. The design, however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.

 I think we are getting out into the weeds a little bit here. It is 
 important to think about these apis in terms of what they actually do, 
 before the decision of combining 

Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync (was: creating a default for oslo config variables within a project?)

2013-12-06 Thread Ben Nemec

On 2013-12-06 16:30, Clint Byrum wrote:

Excerpts from Ben Nemec's message of 2013-12-06 13:38:16 -0800:



On 2013-12-06 15:14, Yuriy Taraday wrote:

 Hello, Sean.

 I get the issue with upgrade path. User doesn't want to update config unless 
one is forced to do so.
 But introducing code that weakens security and let it stay is an 
unconditionally bad idea.
 It looks like we have to weigh two evils: having troubles upgrading and 
lessening security. That's obvious.

 Here are my thoughts on what we can do with it:
 1. I think we should definitely force user to do appropriate configuration to 
let us use secure ways to do locking.
 2. We can wait one release to do so, e.g. issue a deprecation warning now and 
force user to do it the right way later.
 3. If we are going to do 2. we should do it in the service that is affected 
not in the library because library shouldn't track releases of an application that 
uses it. It should do its thing and do it right (secure).

 So I would suggest to deal with it in Cinder by importing 'lock_path' option 
after parsing configs and issuing a deprecation warning and setting it to 
tempfile.gettempdir() if it is still None.

This is what Sean's change is doing, but setting lock_path to
tempfile.gettempdir() is the security concern.


Yuriy's suggestion is that we should let Cinder override the config
variable's default with something insecure. Basically only deprecate
it in Cinder's world, not oslo's. That makes more sense from a library
standpoint as it keeps the library's expected interface stable.


Ah, I see the distinction now.  If we get this split off into 
oslo.lockutils (which I believe is the plan), that's probably what we'd 
have to do.




Since there seems to be plenty of resistance to using /tmp by default,
here is my proposal:

1) We make Sean's change to open files in append mode. I think we can
all agree this is a good thing regardless of any config changes.

2) Leave lockutils broken in Icehouse if lock_path is not set, as I
believe Mark suggested earlier. Log an error if we find that
configuration. Users will be no worse off than they are today, and if
they're paying attention they can get the fixed lockutils behavior
immediately.


Broken how? Broken in that it raises an exception, or broken in that it
carries a security risk?


Broken as in external locks don't actually lock.  If we fall back to 
using a local semaphore it might actually be a little better because 
then at least the locks work within a single process, whereas before 
there was no locking whatsoever.
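
A rough sketch of that fallback idea (assumed names, not oslo code): when no
lock_path is configured, hand out a per-name lock that at least serializes
callers within one process.

    import threading

    _local_locks = {}
    _local_locks_guard = threading.Lock()

    def internal_lock(name):
        # Process-local only: useless across processes, but better than
        # returning a lock that does not lock at all.
        with _local_locks_guard:
            return _local_locks.setdefault(name, threading.Semaphore())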


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Edmund Troche

I thought about that, i.e. making the first step of the implementation just
adding templates, but like you said, you might end up duplicating 5 of the 7
tables in the Glance database for every new asset type (image, template,
etc). Then you would do a similar thing for the endpoints. So, I'm not sure
what's a better way to approach this. For all I know, doing a
s/image/asset/g across *.py, adding an attribute images.type, and a little
more refactoring might get us 80% of the asset management functionality
that we would need initially ;-) Not knowing the Glance code base I'm only
going by the surface footprint, so I'll leave it to the experts to comment
on what would be a good approach to take Glance to the next level.


Edmund Troche



From:   Randall Burt randall.b...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   12/06/2013 04:47 PM
Subject:Re: [openstack-dev] [heat] [glance] Heater Proposal



I too have warmed to this idea but wonder about the actual implementation
around it. While I like where Edmund is going with this, I wonder if it
wouldn't be valuable in the short-to-mid-term (I/J) to just add /templates
to Glance (/assemblies, /applications, etc) along side /images.  Initially,
we could have separate endpoints and data structures for these different
asset types, refactoring the easy bits along the way and leveraging the
existing data storage and caching bits, but leaving more disruptive changes
alone. That can get the functionality going, prove some concepts, and allow
all of the interested parties to better plan a more general v3 api.

On Dec 6, 2013, at 4:23 PM, Edmund Troche edmund.tro...@us.ibm.com
 wrote:



  I agree with what seems to also be the general consensus, that Glance
  can become Heater+Glance (the service that manages images in OS
  today). Clearly, if someone looks at the Glance DB schema, APIs and
  service type (as returned by keystone service-list), all of the
  terminology is about images, so we would need to more formally define
  what are the characteristics or image, template, maybe
  assembly, components etc and find what is a good generalization.
  When looking at the attributes for image (image table), I can see
  where there are a few that would be generic enough to apply to
  image, template etc, so those could be taken to be the base set
  of attributes, and then based on the type (image, template, etc) we
  could then have attributes that are type-specific (maybe by
  leveraging what is today image_properties).

  As I read through the discussion, the one thing that came to mind is
  asset management. I can see where if someone bothers to create an
  image, or a template, then it is for a good reason, and that perhaps
  you'd like to maintain it as an IT asset. Along those lines, it
  occurred to me that maybe what we need is to make Glance some sort of
  asset management service that can be leveraged by Service Catalogs,
  Nova, etc. Instead of storing images and templates  we store
  assets of one kind or another, with artifacts (like files, image
  content, etc), and associated metadata. There is some work we could
  borrow from, conceptually at least, from OSLC's Asset Management
  specification:
  
http://open-services.net/wiki/asset-management/OSLC-Asset-Management-2.0-Specification/
  . Looking at this spec, it probably has more than we need, but
  there's plenty we could borrow from it.


  Edmund Troche



  From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org,
  Date: 12/06/2013 01:34 PM
  Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal





  As a Murano team we will be happy to contribute to Glance. Our Murano
  metadata repository is a standalone component (with its own git
  repository)which is not tightly coupled with Murano itself. We can
  easily add our functionality to Glance as a new component\subproject.

  Thanks
  Georgy


  On Fri, Dec 6, 2013 at 11:11 AM, Vishvananda Ishaya 
  vishvana...@gmail.com wrote:

On Dec 6, 2013, at 10:38 AM, Clint Byrum cl...@fewbar.com
wrote:

 Excerpts from Jay Pipes's message of 2013-12-05 21:32:54
-0800:
 On 12/05/2013 04:25 PM, Clint Byrum wrote:
 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49
-0800:
 Excerpts from Randall Burt's message of 2013-12-05
09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at
fewbar.com

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Randall Burt
On Dec 6, 2013, at 5:04 PM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from Randall Burt's message of 2013-12-06 14:43:05 -0800:
 I too have warmed to this idea but wonder about the actual implementation 
 around it. While I like where Edmund is going with this, I wonder if it 
 wouldn't be valuable in the short-to-mid-term (I/J) to just add /templates 
 to Glance (/assemblies, /applications, etc) along side /images.  Initially, 
 we could have separate endpoints and data structures for these different 
 asset types, refactoring the easy bits along the way and leveraging the 
 existing data storage and caching bits, but leaving more disruptive changes 
 alone. That can get the functionality going, prove some concepts, and allow 
 all of the interested parties to better plan a more general v3 api.
 
 
 +1 on bolting the different views for things on as new v2 pieces instead
 of trying to solve the API genericism immediately.
 
 I would strive to make this a facade, and start immediately on making
 Glance more generic under the hood.  Otherwise these will just end up
 as silos inside Glance instead of silos inside OpenStack.

Totally agreed. Where it makes sense to refactor we should do that rather than 
implementing essentially different services underneath.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Mark Washenberger
On Fri, Dec 6, 2013 at 2:43 PM, Randall Burt randall.b...@rackspace.com wrote:

  I too have warmed to this idea but wonder about the actual implementation
 around it. While I like where Edmund is going with this, I wonder if it
 wouldn't be valuable in the short-to-mid-term (I/J) to just add /templates
 to Glance (/assemblies, /applications, etc) along side /images.  Initially,
 we could have separate endpoints and data structures for these different
 asset types, refactoring the easy bits along the way and leveraging the
 existing data storage and caching bits, but leaving more disruptive changes
 alone. That can get the functionality going, prove some concepts, and allow
 all of the interested parties to better plan a more general v3 api.


I think this trajectory makes a lot of sense as an initial plan. We should
definitely see how much overlap there is through a detailed proposal. If
there are some extremely low-hanging fruit on the side of generalization,
maybe we can revise such a proposal before we get going too far.

It also occurs to me that this is a very big shift in focus for the Glance
team, however, so perhaps it would make sense to try to discuss this at the
midcycle meetup [1]? I know some of the discussion there is going to
revolve around finding a better solution to the image sharing / image
marketplace problem.


[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-November/019230.html



  On Dec 6, 2013, at 4:23 PM, Edmund Troche edmund.tro...@us.ibm.com
  wrote:

  I agree with what seems to also be the general consensus, that Glance
 can become Heater+Glance (the service that manages images in OS today).
 Clearly, if someone looks at the Glance DB schema, APIs and service type
 (as returned by keystone service-list), all of the terminology is about
 images, so we would need to more formally define what are the
 characteristics or image, template, maybe assembly, components etc
 and find what is a good generalization. When looking at the attributes for
 image (image table), I can see where there are a few that would be
 generic enough to apply to image, template etc, so those could be taken
 to be the base set of attributes, and then based on the type (image,
 template, etc) we could then have attributes that are type-specific (maybe
 by leveraging what is today image_properties).

 As I read through the discussion, the one thing that came to mind is
 asset management. I can see where if someone bothers to create an image,
 or a template, then it is for a good reason, and that perhaps you'd like to
 maintain it as an IT asset. Along those lines, it occurred to me that maybe
 what we need is to make Glance some sort of asset management service that
 can be leveraged by Service Catalogs, Nova, etc. Instead of storing
 images and templates  we store assets of one kind or another, with
 artifacts (like files, image content, etc), and associated metadata. There
 is some work we could borrow from, conceptually at least, from OSLC's Asset
 Management specification:
 http://open-services.net/wiki/asset-management/OSLC-Asset-Management-2.0-Specification/.
 Looking at this spec, it probably has more than we need, but there's plenty
 we could borrow from it.


 Edmund Troche




 From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 Date: 12/06/2013 01:34 PM
 Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal

 --



 As the Murano team, we will be happy to contribute to Glance. Our Murano
 metadata repository is a standalone component (with its own git
 repository) which is not tightly coupled with Murano itself. We can easily
 add our functionality to Glance as a new component/subproject.

 Thanks
 Georgy


 On Fri, Dec 6, 2013 at 11:11 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:


On Dec 6, 2013, at 10:38 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Jay Pipes's message of 2013-12-05 21:32:54 -0800:
 On 12/05/2013 04:25 PM, Clint Byrum wrote:
 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
 Excerpts from Randall Burt's message of 2013-12-05 09:05:44
-0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45
-0800:
 Why not just use glance?


 I've asked that question a few times, and I think I can
collate the
 responses I've received below. I think enhancing glance to do
these
 things is on the table:

 1. Glance is for big blobs of data, not tiny templates.
 2. Versioning of a single resource is desired.
  

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Georgy Okrokvertskhov
That is great. How will this work be coordinated? I just want to be sure
that all assets are covered.

Thanks
Georgy


On Fri, Dec 6, 2013 at 3:15 PM, Randall Burt randall.b...@rackspace.com wrote:

 On Dec 6, 2013, at 5:04 PM, Clint Byrum cl...@fewbar.com
  wrote:

  Excerpts from Randall Burt's message of 2013-12-06 14:43:05 -0800:
  I too have warmed to this idea but wonder about the actual
 implementation around it. While I like where Edmund is going with this, I
 wonder if it wouldn't be valuable in the short-to-mid-term (I/J) to just
 add /templates to Glance (/assemblies, /applications, etc.) alongside
 /images.  Initially, we could have separate endpoints and data structures
 for these different asset types, refactoring the easy bits along the way
 and leveraging the existing data storage and caching bits, but leaving more
 disruptive changes alone. That can get the functionality going, prove some
 concepts, and allow all of the interested parties to better plan a more
 general v3 api.
 
 
  +1 on bolting the different views for things on as new v2 pieces instead
  of trying to solve the API genericism immediately.
 
  I would strive to make this a facade, and start immediately on making
  Glance more generic under the hood.  Otherwise these will just end up
  as silos inside Glance instead of silos inside OpenStack.

 Totally agreed. Where it makes sense to refactor we should do that rather
 than implementing essentially different services underneath.
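
To illustrate the facade idea, here is a rough sketch (all names hypothetical,
not Glance code): type-specific entry points, such as the existing /v2/images
API and a bolted-on /v2/templates API, stay separate at the surface while
delegating to one generic store underneath.

# Sketch only: thin type-specific facades over one generic asset store.
import uuid


class GenericAssetStore:
    """Shared storage layer, unaware of asset semantics."""

    def __init__(self):
        self._assets = {}

    def add(self, asset_type, record):
        asset_id = str(uuid.uuid4())
        self._assets[asset_id] = dict(record, type=asset_type)
        return asset_id

    def list(self, asset_type):
        return {i: a for i, a in self._assets.items() if a["type"] == asset_type}


class ImagesFacade:
    """What the existing /v2/images endpoints would delegate to."""

    def __init__(self, store):
        self._store = store

    def create(self, name, disk_format):
        return self._store.add("image", {"name": name, "disk_format": disk_format})


class TemplatesFacade:
    """What a bolted-on /v2/templates endpoint could delegate to."""

    def __init__(self, store):
        self._store = store

    def create(self, name, body):
        return self._store.add("template", {"name": name, "body": body})


store = GenericAssetStore()
ImagesFacade(store).create("fedora-20", "qcow2")
TemplatesFacade(store).create("wordpress", "heat_template_version: 2013-05-23")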





-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] UI Wireframes - close to implementation start (Jaromir Coufal)

2013-12-06 Thread Steve Doll
Let me know if I can be of assistance in the visual design of this.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

