[openstack-dev] [qa] Update: Nova API List for Missing Tempest Tests

2013-10-15 Thread Masayuki Igawa
Hi, 

First, thank you to an anonymous contributor for updating this list!
- GET /{project_id}/servers/:server_id/diagnostics

I have also updated the Nova API List for Missing Tempest Tests:
  
https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc

The summary of this list:

Tested or not          # of APIs   ratio   diff from last time
--------------------------------------------------------------
Tested API                   124   49.6%    +2
Not Tested API                66   26.4%    -2
Not Need to Test(*1)          60   24.0%     0
--------------------------------------------------------------
Total(*2):                   250  100.0%     0
(*1) Because they are deprecated APIs such as nova-network and volume.
(*2) v3 APIs are not included

The tempest version is:
 commit f55f4e54ceab7c6a4d330f92c8059e46233e3560
 Merge: 86ab238 062e30a
 Author: Jenkins jenk...@review.openstack.org
 Date:   Mon Oct 14 15:55:59 2013 +

By the way, I saw a design summit proposal related to this topic (*3). I think
this information should be generated automatically, so I'd like to talk about
this topic at the summit session.
(*3) Coverage analysis tooling: http://summit.openstack.org/cfp/details/171

This information would be useful for creating Tempest tests.
Any comments/questions/suggestions are welcome.

Best Regards,
-- Masayuki Igawa


 Hi,
 
 # I'm sorry for resending this; my last mail contained unnecessary messages.
 
 
 I have updated the Nova API List for Missing Tempest Tests:
  
 https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc
 
 The summary of this list:

 Tested or not          # of APIs   ratio   diff from last time
 --------------------------------------------------------------
 Tested API                   122   48.8%    +5
 Not Tested API                68   27.2%    -5
 Not Need to Test(*1)          60   24.0%     0
 --------------------------------------------------------------
 Total(*2):                   250  100.0%     0
 
 (*1) Because they are deprecated APIs such as nova-network and volume.
 (*2) v3 APIs are not included
 
 I hope this information is helpful for creating Tempest tests.
 Any comments and questions are welcome.
 
 Best Regards,
 -- Masayuki Igawa
 
 
  Hi, Tempest developers
  
   I have made the Nova API List for Missing Tempest Tests:
   
  https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc
  
  This list shows what we should test. That is:
   * Nova has 250 APIs (not including v3 APIs).
   * 117 APIs are executed (and presumably tested).
   * 73 APIs are not executed.
   * 60 APIs are not executed, but they probably do not need tests.
   - Because they are deprecated APIs such as nova-network and volume.
  
  So I think we need more tempest test cases.
  If this idea is acceptable, please put your name in the 'assignee' column
  for the APIs you'd like to cover, and implement the tempest tests.
  
  Any comments are welcome.
  
  Additional information:
   I made this API list with a modification of nova's code, based on
   https://review.openstack.org/#/c/25882/ (Abandoned).
  
  Best Regards,
  -- Masayuki Igawa
  
  
 
 






[openstack-dev] [Keystone] [Horizon] Havana RC2 available

2013-10-15 Thread Thierry Carrez
Hi everyone,

Our last two RC2 windows just closed. Due to major issues detected in
key features during RC1 testing, we just published new Havana release
candidates for OpenStack Identity (Keystone) and OpenStack Dashboard
(Horizon).

You can find RC2 tarballs and lists of fixed bugs at:

https://launchpad.net/keystone/havana/havana-rc2
https://launchpad.net/horizon/havana/havana-rc2

This is hopefully the last Havana release candidate for Keystone and
Horizon. Unless a last-minute release-critical regression is found that
warrants another release candidate respin, those RC2s will be formally
included in the common OpenStack 2013.2 final release Thursday. You are
therefore strongly encouraged to test and validate these tarballs.

Alternatively, you can grab the code at:
https://github.com/openstack/keystone/tree/milestone-proposed
https://github.com/openstack/horizon/tree/milestone-proposed

If you find a regression that could be considered release-critical,
please file it at https://bugs.launchpad.net/keystone/+filebug (or
https://bugs.launchpad.net/horizon/+filebug if the bug is in Horizon)
and tag it *havana-rc-potential* to bring it to the release crew's
attention.

Happy regression hunting,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-15 Thread Yathiraj Udupi (yudupi)
I have made some edits to the document: 
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?pli=1#
by updating the Instance Group Model and APIs based on the recent mailing list 
discussion below and also about the Policy Model in another email thread.  An 
action item is to provide some examples of request and response to the REST 
apis.  But they will be based on enhancements and reuse of some examples 
already provided here: https://review.openstack.org/#/c/30028/22

This API proposal is still under discussion and work-in-progress,  and will 
definitely be a good session topic to finalize this proposal.

Regards,
Yathi.


From: Yathiraj Udupi yud...@cisco.commailto:yud...@cisco.com
Date: Monday, October 14, 2013 10:17 AM
To: Mike Spreitzer mspre...@us.ibm.commailto:mspre...@us.ibm.com, Debojyoti 
Dutta ddu...@gmail.commailto:ddu...@gmail.com, garyk 
gkot...@vmware.commailto:gkot...@vmware.com
Cc: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - 
Updated Instance Group Model and API extension model - WIP Draft

Hi Mike,

I read your email where you expressed concerns regarding create-time 
dependencies, and I agree they are valid concerns to be addressed.  But like we 
all agree, as a starting point, we are just focusing on the APIs for now, and 
will leave that aside as implementation details to be addressed later.
Thanks for sharing your suggestions on how we can simplify the APIs.  I think 
we are getting closer to finalizing this one.

Let us start at the model proposed here -
[1] 
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing
(Ignore the white diamonds - they will be black when I edit the doc.)

The InstanceGroup represents all the information necessary to capture the
group: nodes, edges, policies, and metadata.

InstanceGroupMember is a reference to an Instance, which is saved separately
using the existing Instance model in Nova.

InstanceGroupMemberConnection represents an edge.

InstanceGroupPolicy is a reference to a Policy, which will also be saved
separately (it does not currently exist in the model and has to be created).
Here in the Policy model, I don't mind adding any number of additional fields
and key-value pairs to be able to fully define a policy. I guess a
Policy-metadata dictionary is sufficient to capture all the required
arguments. The InstanceGroupPolicy will be associated with the group as a
whole or with an edge.

InstanceGroupMetadata represents a key-value dictionary for any additional
metadata for the instance group.

I think this should fully support what we care about - nodes, edges, policies 
and metadata.
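
To make the shape concrete, here is a minimal Python sketch of the model as
described above. The class and field names follow this draft, not any merged
Nova code, so treat them as illustrative only:

    # Illustrative only: names follow the draft model above.
    class InstanceGroup(object):
        """Everything needed to capture the group: nodes, edges,
        policies, and metadata."""
        def __init__(self, uuid, name):
            self.uuid = uuid
            self.name = name
            self.members = []      # InstanceGroupMember nodes
            self.connections = []  # InstanceGroupMemberConnection edges
            self.policies = []     # InstanceGroupPolicy references
            self.metadata = {}     # free-form key-value pairs

    class InstanceGroupMember(object):
        """A node: a reference to an Instance saved separately in Nova."""
        def __init__(self, group_id, instance_uuid):
            self.group_id = group_id
            self.instance_uuid = instance_uuid

    class InstanceGroupMemberConnection(object):
        """An edge between two members of the group."""
        def __init__(self, group_id, source_uuid, target_uuid):
            self.group_id = group_id
            self.source_uuid = source_uuid
            self.target_uuid = target_uuid

    class InstanceGroupPolicy(object):
        """A reference to a Policy, applied to the whole group or to one
        edge; a metadata dict carries any extra policy arguments."""
        def __init__(self, group_id, name, target=None, metadata=None):
            self.group_id = group_id
            self.name = name               # e.g. 'anti-affinity'
            self.target = target           # None = whole group, or an edge
            self.metadata = metadata or {}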

Do we all agree?


Now going to the APIs,

Register GROUP API (from my doc [1]):

POST /v3.0/{tenant_id}/groups        Register a group


I think the confusion is only about when the members (all nested members) and
policies are saved in the DB (registered, but not actually CREATED), such
that we can associate a UUID with them. This led to my original thinking
that it is a 3-phase operation where we have to register (save in the DB) the
nested members first, then register the group as a whole. But this is not
client friendly.

Like I had suggested earlier, as an implementation detail of the group
registration API (CREATE part 1 in your terminology), we can support this: as
part of the group registration transaction, complete the registration of the
nested members, get their UUIDs, create the InstanceGroupMemberConnections,
and then complete saving the group - resulting in a UUID for the group - all
in a single transaction scope. When you start the transaction, you can start
with a UUID for the group, so that you can add the group_id pointers to the
individual members, and then finally complete the transaction.
This means you provide the complete nested tree as input to the REST API,
including all the details about the nested members and policies, and the
register API handles saving all the individual objects required.
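
As a sketch of what that single-transaction call could look like from a
client - the URL, payload fields, and credentials below are illustrative,
taken from this draft rather than from any implemented API:

    import json
    import requests

    tenant_id = "demo"        # placeholders, not real credentials
    token = "auth-token"

    # Hypothetical payload: the whole nested tree in one registration call.
    group = {
        "instance_group": {
            "name": "web-tier",
            "members": [{"name": "web-1"}, {"name": "web-2"}],
            "connections": [{"source": "web-1", "target": "web-2"}],
            "policies": [{"name": "anti-affinity", "target": "group"}],
            "metadata": {"owner": "demo-user"},
        }
    }

    resp = requests.post(
        "http://nova-api:8774/v3.0/%s/groups" % tenant_id,
        data=json.dumps(group),
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": token})
    print(resp.json())  # the group UUID allocated inside the transaction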

But I think it does help to also add additional APIs to register an
InstanceGroupMember and an InstanceGroupPolicy separately. This might help
the client while creating a group, rather than supplying the entire nested
tree. (This makes it a 3-phase operation.) These APIs will support adding
members and policies to an instance group that has already been created (you
can start with an empty group).


POST /v3.0/{tenant_id}/groups/instance        Register an instance belonging to an instance group



POST /v3.0/{tenant_id}/groups/policy        Register a policy belonging to an instance group



Are we okay with this?


The next API is the actual creation of the resources (CREATE part 2 in
your terminology). This was my create API in the doc:


POST /v3.0/{tenant_id}/groups/{id}/create        Create and schedule 

[openstack-dev] [Nova] What validation feature is necessary for Nova v3 API

2013-10-15 Thread Kenichi Oomichi

Hi,

# I am resending this because gmail classified my previous mail as spam.

I'd like to know what validation features are really needed for the Nova v3
API, and I hope this mail will kick off a brain-storming session about it.

 Introduction 
I have submitted a blueprint: nova-api-validation-fw.
The purpose is comprehensive validation of API input parameters.
32% of Nova v3 API parameters are not validated in any way [1], and that can
cause an internal error if a client simply sends an invalid request. If an
internal error happens, the error message is output to a log file and
OpenStack operators have to investigate its cause. That is hard work for the
operators.

In the Havana development cycle I proposed implementation code for the BP,
but it was abandoned. Nova's web framework will move to Pecan/WSME, and my
code depended on WSGI, so it would have had merit in the short term but not
in the long term.
Now some Pecan/WSME sessions are proposed for the Hong Kong summit, so I feel
this is a good chance to raise the topic.

As a first step, I'd like to find out through discussion what validation
features are necessary for the Nova v3 API. After listing the necessary
features, we can propose them as WSME improvements if needed.


To seed the discussion, I have investigated the validation of all current
Nova v3 API parameters. There are 79 API methods, and 49 of them take API
parameters in a request body. In total, they have 148 API parameters.
(details: [1])

The necessary features, as I see them now, are the following:

 Basic Validation Feature 
Through this investigation, it seems that we need some basic validation
features such as:
* Type validation
  str(name, ..), int(vcpus, ..), float(rxtx_factor), dict(metadata, ..),
  list(networks, ..), bool(conbine, ..), None(availability_zone)
* String length validation
  1 - 255
* Value range validation
  value >= 0 (rotation, ..), value > 0 (vcpus, ..),
  value >= 1 (os-multiple-create:min_count, os-multiple-create:max_count)
* Data format validation
  * Pattern:
uuid(volume_id, ..), boolean(on_shared_storage, ..), 
base64encoded(contents),
ipv4(access_ip_v4, fixed_ip), ipv6(access_ip_v6)
  * Allowed list:
'active' or 'error'(state), 'parent' or 'child'(cells.type),
'MANUAL' or 'AUTO'(os-disk-config:disk_config), ...
  * Allowed string:
    must not contain '!' or '.' (cells.name),
    must contain only [a-zA-Z0-9_.- ] (flavor.name, flavor.id)
* Mandatory validation
  * Required: server.name, flavor.name, ..
  * Optional: flavor.ephemeral, flavor.swap, ..
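
All of these categories map naturally onto JSON Schema. Purely as a sketch -
the schema below is hypothetical, using the python jsonschema library, and
the flavor-like body is invented for illustration:

    import jsonschema

    flavor_schema = {
        "type": "object",
        "properties": {
            "name": {
                "type": "string",
                "minLength": 1, "maxLength": 255,     # string length
                "pattern": "^[a-zA-Z0-9_. -]+$",      # allowed characters
            },
            "vcpus": {"type": "integer", "minimum": 1},  # value range
            "swap": {"type": "integer"},                 # optional parameter
        },
        "required": ["name", "vcpus"],                   # mandatory check
        "additionalProperties": False,
    }

    jsonschema.validate({"name": "m1.tiny", "vcpus": 1}, flavor_schema)  # ok
    try:
        # rejected: missing mandatory 'name'
        jsonschema.validate({"vcpus": 0}, flavor_schema)
    except jsonschema.ValidationError as e:
        print("rejected: %s" % e.message)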


 Auxiliary Validation Feature 
Some parameters have dependencies on other parameters.
For example, name and/or availability_zone should be specified when updating
an aggregate. There are only a few such dependency cases, so this validation
feature would not be mandatory.

The cases are the following:
* Required if not specifying other:
  (update aggregate: name or availability_zone), (host: status or 
maintenance_mode),
  (server: os-block-device-mapping:block_device_mapping or image_ref)
* Should not specify both:
  (interface_attachment: net_id and port_id),
  (server: fixed_ip and port)
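
Both dependency patterns can also be expressed with JSON Schema combinators;
again only a sketch, with invented request bodies:

    import jsonschema

    # Required if not specifying the other: name and/or availability_zone.
    update_aggregate = {
        "type": "object",
        "properties": {"name": {"type": "string"},
                       "availability_zone": {"type": "string"}},
        "anyOf": [{"required": ["name"]},
                  {"required": ["availability_zone"]}],
    }

    # Should not specify both: net_id and port_id.
    attach_interface = {
        "type": "object",
        "properties": {"net_id": {"type": "string"},
                       "port_id": {"type": "string"}},
        "not": {"required": ["net_id", "port_id"]},
    }

    jsonschema.validate({"availability_zone": "az1"}, update_aggregate)  # ok
    jsonschema.validate({"net_id": "n1"}, attach_interface)              # ok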


 API Documentation Feature 
WSME has a unique feature which generates API documentation from source code.
The documentation
(http://docs.openstack.org/developer/ceilometer/webapi/v2.html)
contains:
* Method, URL (GET /v2/resources/, etc)
* Parameters
* Return type
* Parameter samples of both JSON and XML

This feature will help us with the Nova v3 API, because it consists of many
APIs, and producing the API samples was a hard process in Havana development.
The documentation currently generated with this feature contains only Type
as each parameter's attribute.
If the Nova v3 API documentation also covered string length, value range,
data format, and mandatoriness, it would help many developers and users.
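
For reference, a typed WSME definition already carries part of this
information. A minimal sketch - the Flavor type and its attributes are
invented for illustration:

    from wsme import types as wtypes

    class Flavor(wtypes.Base):
        # The attribute type and the 'mandatory' flag are visible to the
        # doc generator today; length/range/format constraints are the
        # missing pieces discussed above.
        name = wtypes.wsattr(wtypes.text, mandatory=True)
        vcpus = wtypes.wsattr(int, mandatory=True)
        swap = wtypes.wsattr(int, mandatory=False)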


Please let me know which validation features you think are necessary.
Of course, any comments are welcome :-)


Thanks
Ken'ichi Ohmichi

---
[1]: https://wiki.openstack.org/wiki/NovaApiValidationFramework 




Re: [openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-15 Thread Lingxian Kong
So, what's the conclusion we can start from?


2013/10/15 Christopher Yeoh cbky...@gmail.com

 On Tue, Oct 15, 2013 at 10:25 AM, Caitlin Bestler 
 caitlin.best...@nexenta.com wrote:

 On 10/14/2013 8:37 AM, Ben Nemec wrote:

 I agree that this needs to be fixed.  It's very counterintuitive, if
 nothing else (which is also my argument against requiring all-tenants
 for admin users in the first place).  The only question for me is
 whether to fix it in novaclient or in Nova itself.


 If it is fixed in novaclient, then any unscrupulous tenant would be able
 to unfix it in novaclient themselves and gain the same information about
 other tenants that the bug is allowing.

 So if the intent is to protect leakage of information across tenant lines
 then the correct solution is a real lock (i.e. in Nova) rather
 than just a screen door lock.


  The novaclient fix for V2 would simply be to automatically pass
  all-tenants where needed. It would not give a non-admin user any extra
  privileges even if they modified novaclient.

 Chris
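
For reference, the explicit form this fix would automate looks roughly like
the following sketch against the v2 python-novaclient; the credentials are
placeholders:

    from novaclient.v1_1 import client

    nova = client.Client("admin", "password", "admin-tenant",
                         "http://keystone:5000/v2.0/")
    # What an admin has to pass explicitly today; the proposed fix would
    # effectively supply this automatically where needed.
    servers = nova.servers.list(search_opts={"all_tenants": 1})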





-- 
Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com


[openstack-dev] Reminder: Project release status meeting - 21:00 UTC

2013-10-15 Thread Thierry Carrez
Today we'll have the last release meeting before Havana final release.
At the time of this writing no RC window is currently open. For each
project we'll look at havana-rc-potential bugs and see if any of them is
worth the risk associated with a late respin. We'll also review the
state of the Release Notes and see how they can be improved.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All Technical Leads for integrated programs should be present (if you
can't make it, please name a substitute on [1]). Other program leads and
everyone else is very welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20131015T21

See you there,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-15 Thread Renat Akhmerov
Hey Clint, thanks for your question. I think it's been fully answered by now.
You and other people are very welcome to collaborate :)

Joshua, as for renaming the original document, I would suggest we keep it as
it is for now, just to preserve the explicit history of how it's been going
so far. We now have a link from the launchpad page to the Convection
proposal, so one shouldn't be confused about what is what.

Thanks!

Renat Akhmerov
Mirantis Inc.

On 15.10.2013, at 0:32, Joshua Harlow harlo...@yahoo-inc.com wrote:

 +2 More collaboration the better :)
 
 From: Stan Lagun sla...@mirantis.com
 Reply-To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Date: Monday, October 14, 2013 1:20 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Mistral] Announcing a new task scheduling and 
 orchestration service for OpenStack
 
 
  Why exactly aren't you just calling this Convection and/or collaborating
  with the developers who came up with it?
 
 We do actively collaborate with the TaskFlow/StateManagement team, who are
 also the authors of the Convection proposal. This is a joint project and we
 invite you and other developers to join and contribute.
 Convection is a Microsoft trademark; that's why it's Mistral.
 
 
 On Tue, Oct 15, 2013 at 12:04 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Renat Akhmerov's message of 2013-10-14 12:40:28 -0700:
  Hi OpenStackers,
 
  I am proud to announce the official launch of the Mistral project. At 
  Mirantis we have a team to start contributing to the project right away. 
  We invite anybody interested in task service  state management to join 
  the initiative.
 
  Mistral is a new OpenStack service designed for task flow control, 
  scheduling, and execution. The project will implement Convection proposal 
  (https://wiki.openstack.org/wiki/Convection) and provide an API and 
  domain-specific language that enables users to manage tasks and their 
  dependencies, and to define workflows, triggers, and events. The service 
  will provide the ability to schedule tasks, as well as to define and 
  manage external sources of events to act as task execution triggers.
 
 Why exactly aren't you just calling this Convection and/or collaborating
 with the developers who came up with it?
 
 
 
 
 -- 
 Sincerely yours
 Stanislav (Stan) Lagun
 Senior Developer
 Mirantis
 35b/3, Vorontsovskaya St.
 Moscow, Russia
 Skype: stanlagun
 www.mirantis.com
 sla...@mirantis.com



Re: [openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-15 Thread Robert Collins
Please don't invert the bug though: if --all-tenants becomes the
default nova server behaviour in v3, please ensure there is a
--no-all-tenants to unbreak it for non-trivial clouds.

Thanks!
-Rob

On 15 October 2013 20:54, Lingxian Kong anlin.k...@gmail.com wrote:
 So, what's the conclusion we can start from?






-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Heat] Plugin packaging

2013-10-15 Thread Zane Bitter

On 15/10/13 02:18, Sam Alba wrote:

Hello,

I am working on a Heat plugin that makes a new resource available in a
template. It's working great and I will opensource it this week if I
can get the packaging right...


Awesome! :)


Right now, I am linking my module.py file in /usr/lib/heat to get it
loaded when heat-engine starts. But according to the doc, I am
supposed to be able to make the plugin discoverable by heat-engine if
the module appears in the package heat.engine.plugins[1]


I think the documentation may be leading you astray here; it's the other 
way around: once the plugin is discovered by the engine, it will appear 
in the package heat.engine.plugins. So you should be able to do:


 import heat.engine.resources
 heat.engine.resources.initialise()
 from heat.engine.plugins import module
 print module.resource_mapping()

(FWIW this is working for me on latest master of Heat.)

As far as making the plugin discoverable is concerned, all you should 
have to do is install the module in /usr/lib/heat/.
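
A minimal plugin module of that kind looks roughly like this sketch; the
file name and resource type name are invented for illustration:

    # /usr/lib/heat/my_plugin.py
    from heat.engine import resource

    class CustomThing(resource.Resource):
        properties_schema = {}

        def handle_create(self):
            # talk to the external service here
            pass

    def resource_mapping():
        # heat-engine calls this at startup to learn what resource types
        # this module provides
        return {'My::Custom::Thing': CustomThing}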



I looked into the plugin_loader module in the Heat source code and it
looks like it should work. However I was unable to get a proper Python
package.

Has anyone been able to make this packaging right for an external Heat plugin?


I've never tried to do this with a Python package; the mechanism is
really designed for either dropping the module in there manually
or installing it from a Debian or RPM package.


It sounds like what you're doing is trying to install the package in 
/usr/lib/python2.7/site-packages/ (or in a venv) in the package 
heat.engine.plugins and get the engine to recognise it that way? I don't 
think there's a safe way to make that work, because the plugin loader 
creates its own heat.engine.plugins package that will replace anything 
that exists on that path (see line 41 of plugin_loader.py).


Heat (in fact, all of OpenStack) is designed as a system service, so the 
normal rules of Python _application_ packaging don't quite fit. e.g. If 
you want to use a plugin locally (for a single user) rather than install 
it globally, the way to do it is to specify a local plugin directory 
when running heat-engine, rather than have the plugin installed in a venv.


Hope that helps.

cheers,
Zane.



[openstack-dev] How to get VXLAN Endpoint IP without agent

2013-10-15 Thread B Veera-B37207
Hi,

The VXLAN endpoint IP is configured in
'/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini' as 'local_ip'.
When the openvswitch agent starts, this local IP is populated into the
neutron database.

Is there any way to get local_ip of compute node without any agent running?
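
(One agent-less sketch is to read it straight off the compute node's config
file; this assumes the usual [OVS] section name, and Python 2 as used in
this era:)

    import ConfigParser

    cfg = ConfigParser.SafeConfigParser()
    cfg.read('/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini')
    print(cfg.get('OVS', 'local_ip'))   # assumes local_ip lives in [OVS]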

Thanks in advance.

Regards,
Veera.


Re: [openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-15 Thread Lingxian Kong
Have there been any changes in V3 now?


2013/10/15 Robert Collins robe...@robertcollins.net

 Please don't invert the bug though: if --all-tenants becomes the
 default nova server behaviour in v3, please ensure there is a
 --no-all-tenants to unbreak it for non-trivial clouds.

 Thanks!
 -Rob





-- 
Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com


Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-15 Thread Renat Akhmerov

On 15.10.2013, at 14:45, Zane Bitter zbit...@redhat.com wrote:

 That said, can we please, please, please not invent a *third* meaning of 
 orchestration? The proposal, as I understand it, is Workflow as a Service, 
 so let's call it that. The fact that orchestration uses a workflow does not 
 make them the same thing. OpenStack already has an Orchestration program, 
 calling Mistral a task scheduling and orchestration service just adds 
 confusion.

Yes, that's right. It's all up for discussion and I'm glad you mentioned it.
Orchestration now seems to be a very cool, popular word in the community, and
lots of discussions are going on about it, mostly meaning Heat's mission and
design. However, when we started this initiative we found the word
"orchestration" very suitable for what we were planning with respect to task
processing. I believe we'll come to non-confusing terminology while
discussing all the details with the community.

Thanks, very valuable note.

Renat


Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-15 Thread Day, Phil
Hi Debo,

 I was wondering if we are shooting for too much to discuss by clubbing:
 i) Group Scheduling / Smart Placement / Heat and Scheduling  (1), (2), (3),
 (7)
- How do you schedule something more complex than a single VM?

I agree it's a lot to get through, but we're working to a budget of only 3
slots for scheduler sessions. If we split this across two slots as originally
suggested by Gary, then we don't get to discuss one of generalized
metrics/ceilometer or performance at all - which would seem an even worse
compromise to me.

We're never going to be able to avoid sessions which have more content than
can comfortably fit into a single slot - that's just a way of life given the
time constraints of the summit. What we're trying to do is make sure that we
plan those sessions ahead of time rather than the morning before at the
hotel ;-)

I did add a rider that if Russell can give a 4th session to scheduling then 
this is the one that would most benefit from being split.


Phil
-Original Message-
From: Debojyoti Dutta [mailto:ddu...@gmail.com] 
Sent: 15 October 2013 04:31
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Scheduler meeting and Icehouse Summit

Hi Phil

Good summary ...

I was wondering if we are shooting for too much to discuss by clubbing:
 i) Group Scheduling / Smart Placement / Heat and Scheduling  (1), (2), (3),
(7)
- How do you schedule something more complex than a single VM?

I think specifying something more complex than a single VM is a great theme,
but I don't know if we can do it justice in one session. I think maybe a
simple nova scheduling API with groups/bundles of resources would itself be a
lot for one session. In fact, in order to specify what you want in your
resource bundle, you would need to think about policies.
So maybe just the simple Nova API and policies might be useful.

Also we might have a session correlating the different models of how more
than one VM can be requested - you could start from nova and then generalize
to cross-service requests, or you could start from heat workload models and
drill down. There are passionate people on both sides, and maybe that debate
needs a session.

I think the smart resource placement topic is very interesting and might need
at least half a slot, since one can show how it can be done today in nova and
how it can handle cross-service scenarios.

See you tomorrow on IRC

debo


On Mon, Oct 14, 2013 at 10:56 AM, Alex Glikson glik...@il.ibm.com wrote:
 IMO, the three themes make sense, but I would suggest waiting until 
 the submission deadline and discuss at the following IRC meeting on the 22nd.
 Maybe there will be more relevant proposals to consider.

 Regards,
 Alex

 P.S. I plan to submit a proposal regarding scheduling policies, and 
 maybe one more related to theme #1 below



 From: Day, Phil philip@hp.com
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org,
 Date: 14/10/2013 06:50 PM
 Subject: Re: [openstack-dev] Scheduler meeting and Icehouse Summit
 



 Hi Folks,

 In the weekly scheduler meeting we've been trying to pull together a 
 consolidated list of Summit sessions so that we can find logical 
 groupings and make a more structured set of sessions for the limited 
 time available at the summit.

 https://etherpad.openstack.org/p/IceHouse-Nova-Scheduler-Sessions

 With the deadline for sessions being this Thursday 17th, tomorrow's IRC
 meeting is the last chance to decide which sessions we want to combine /
 prioritize. Russell has indicated that a starting assumption of three
 scheduler sessions is reasonable, with any extras depending on what
 else is submitted.

 I've matched the list on the Etherpad to submitted sessions below,
 and added links to any other proposed sessions that look like they are 
 related.


 1) Instance Group Model and API
Session Proposal:
 http://summit.openstack.org/cfp/details/190

 2) Smart Resource Placement:
   Session Proposal:
 http://summit.openstack.org/cfp/details/33
Possibly related sessions:  Resource
 optimization service for nova  
 (http://summit.openstack.org/cfp/details/201)

 3) Heat and Scheduling and Software, Oh My!:
 Session Proposal:
 http://summit.openstack.org/cfp/details/113

 4) Generic Scheduler Metrics and Ceilometer:
 Session Proposal:
 http://summit.openstack.org/cfp/details/218
 Possibly related sessions:  Making Ceilometer and Nova 
 play nice  http://summit.openstack.org/cfp/details/73

 5) Image Properties and Host Capabilities
 Session Proposal:  NONE

 6) Scheduler Performance:
 Session Proposal:  NONE
 Possibly related Sessions: Rethinking Scheduler Design
 http://summit.openstack.org/cfp/details/34

 7) Scheduling Across Services:
  

Re: [openstack-dev] Scheduler meeting and Icehouse Summit

2013-10-15 Thread Day, Phil
Hi Alex,

My understanding is that the 17th is the deadline and that Russell needs to
be planning the sessions from that point onwards. If we delay giving him our
suggestions until the 22nd, I think it will be too late. We've had weeks if
not months of discussing possible scheduler sessions; I really don't see why
we can't deliver a recommendation on how best to fit into the 3 committed
slots on or before the 17th.

Phil

On Mon, Oct 14, 2013 at 10:56 AM, Alex Glikson glik...@il.ibm.com wrote:
 IMO, the three themes make sense, but I would suggest waiting until 
 the submission deadline and discussing at the following IRC meeting on the 22nd.
 Maybe there will be more relevant proposals to consider.

 Regards,
 Alex

 P.S. I plan to submit a proposal regarding scheduling policies, and 
 maybe one more related to theme #1 below



 From: Day, Phil philip@hp.com
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org,
 Date: 14/10/2013 06:50 PM
 Subject: Re: [openstack-dev] Scheduler meeting and Icehouse Summit
 



 Hi Folks,

 In the weekly scheduler meeting we've been trying to pull together a 
 consolidated list of Summit sessions so that we can find logical 
 groupings and make a more structured set of sessions for the limited 
 time available at the summit.

 https://etherpad.openstack.org/p/IceHouse-Nova-Scheduler-Sessions

 With the deadline for sessions being this Thursday 17th, tomorrow's IRC
 meeting is the last chance to decide which sessions we want to combine /
 prioritize. Russell has indicated that a starting assumption of three
 scheduler sessions is reasonable, with any extras depending on what
 else is submitted.

 I've matched the list on the Etherpad to submitted sessions below,
 and added links to any other proposed sessions that look like they are 
 related.


 1) Instance Group Model and API
Session Proposal:
 http://summit.openstack.org/cfp/details/190

 2) Smart Resource Placement:
   Session Proposal:
 http://summit.openstack.org/cfp/details/33
Possibly related sessions:  Resource
 optimization service for nova  
 (http://summit.openstack.org/cfp/details/201)

 3) Heat and Scheduling and Software, Oh My!:
 Session Proposal:
 http://summit.openstack.org/cfp/details/113

 4) Generic Scheduler Metrics and Ceilometer:
 Session Proposal:
 http://summit.openstack.org/cfp/details/218
 Possibly related sessions:  Making Ceilometer and Nova 
 play nice  http://summit.openstack.org/cfp/details/73

 5) Image Properties and Host Capabilities
 Session Proposal:  NONE

 6) Scheduler Performance:
 Session Proposal:  NONE
 Possibly related Sessions: Rethinking Scheduler Design
 http://summit.openstack.org/cfp/details/34

 7) Scheduling Across Services:
 Session Proposal: NONE

 8) Private Clouds:
 Session Proposal:
 http://summit.openstack.org/cfp/details/228

 9) Multiple Scheduler Policies:
 Session Proposal: NONE


 The proposal from last week's meeting was to use the three slots for:
 - Instance Group Model and API   (1)
 - Smart Resource Placement (2)
 - Performance (6)

 However, at the moment there doesn't seem to be a session proposed to
 cover the performance work?

 It also seems to me that the Group Model and Smart Placement are
 pretty closely linked along with (3) (which says it wants to combine 1
 & 2 into the same topic), so if we only have three slots available then
 these look like logical candidates for consolidating into a single session.
 That would free up a session to cover the generic metrics (4) and
 Ceilometer - where a lot of work in Havana stalled because we couldn't get
 a consensus on the way forward. The third slot would be kept for
 performance - which, based on the lively debate in the scheduler meetings,
 I'm assuming will still be submitted as a session. Private Clouds isn't
 really a scheduler topic, so I suggest it takes its chances as a general
 session. Hence my revised proposal for the three slots is:

  i) Group Scheduling / Smart Placement / Heat and Scheduling  (1),
 (2), (3), (7)
 - How do you schedule something more complex than a
 single VM?

 ii) Generalized scheduling metrics / Ceilometer integration (4)
 - How do we extend the set of resources a scheduler
 can use to make its decisions?
 - How do we make this work with / be compatible with
 Ceilometer?

 iii) Scheduler Performance (6)

 In that way we will at least give airtime to all of the topics. If a 4th
 scheduler slot becomes available then we could break up the first 
 session into two parts.

 Thoughts welcome here or in tomorrows IRC meeting.

 

[openstack-dev] Doc Team meeting - in an hour

2013-10-15 Thread Tom Fifield

Hi all,

The OpenStack Doc team meeting will take place in #openstack-meeting on 
IRC at 13:00 UTC - about an hour from now. The Agenda is here:


https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting


Please add items of interest to you and join us.


Regards,


Tom



Re: [openstack-dev] [nova] [tempest] [ceilometer] Defining Diagnostics API Schema (was: looking for clarification...)

2013-10-15 Thread Sean Dague
An open-ended API isn't really an API. An API has a contract; otherwise it
isn't an API. The diagnostics REST call currently seems to have no contract
at all, and is just implemented by the underlying driver to do whatever seems
like a good idea today (also, nothing is versioned, so what libvirt returns
in grizzly vs. havana is up in the air).

At minimum we need some kind of per driver (probably with a version) json
schema so you could at least identify over the wire what you should be
expecting. Even better would be a generic definition, but I get that takes
time / effort.
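
As a sketch of what such a per-driver, versioned schema might look like -
the key names are illustrative, loosely modeled on what the libvirt driver
returns today, not an agreed contract:

    import jsonschema

    LIBVIRT_DIAGNOSTICS_V1 = {
        "type": "object",
        "properties": {
            "driver": {"enum": ["libvirt"]},   # identifies the producer
            "version": {"enum": [1]},          # contract version
            "cpu0_time": {"type": "integer"},
            "memory": {"type": "integer"},
        },
        "required": ["driver", "version"],
        "additionalProperties": True,          # drivers may add extra keys
    }

    sample = {"driver": "libvirt", "version": 1, "cpu0_time": 17300000000}
    jsonschema.validate(sample, LIBVIRT_DIAGNOSTICS_V1)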

I think the approach at this point is: a bug has been filed on nova -
https://bugs.launchpad.net/nova/+bug/1240043 ... and we're going to skip
the tempest tests entirely based on that bug. Testing an API that isn't
actually an API is really beyond tempest's scope.

On Mon, Oct 14, 2013 at 4:54 AM, Bob Ball bob.b...@citrix.com wrote:

  I'm happy with that approach - again I've not seen any discussions about
 how this should be done.

 I've added [tempest] and [ceilometer] tags so we can hopefully get input
 from the guys involved.

 Bob

 From: Gary Kotton [mailto:gkot...@vmware.com]
 Sent: 13 October 2013 05:21
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [nova] Looking for clarification on the
 diagnostics API

 Hi,

 I agree with Matt here. This is not broad enough. One option is to have a
 tempest class that overrides for various backend plugins. Then the test can
 be hardened for each driver. I am not sure if that is something that has
 been talked about.

 Thanks

 Gary

 From: Matt Riedemann mrie...@us.ibm.com
 Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: Sunday, October 13, 2013 6:13 AM
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Looking for clarification on the
 diagnostics API

 There is also a tempest patch now to ease some of the libvirt-specific
 keys checked in the new diagnostics tests there:

 https://review.openstack.org/#/c/51412/

 To relay some of my concerns that I put in that patch:

 I'm not sure how I feel about this. It should probably be more generic,
 but I think we need more than just a change in tempest to enforce it, i.e.
 we should have a nova patch that changes the doc strings for the abstract
 compute driver method to specify what the minimum keys are for the info
 returned, maybe a doc api sample change, etc.

 For reference, here is the mailing list post I started on this last week:

 http://lists.openstack.org/pipermail/openstack-dev/2013-October/016385.html

 There are also docs here (these examples use xen and libvirt):

 http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-openstack-compute-basics.html

 And under procedure 4.4 here:

 http://docs.openstack.org/admin-guide-cloud/content/ch_introduction-to-openstack-compute.html#section_manage-the-cloud

 =

 I also found this wiki page related to metering and the nova diagnostics
 API:


 https://wiki.openstack.org/wiki/EfficientMetering/FutureNovaInteractionModel

 So it seems like if at some point this will be used with ceilometer it
 should be standardized a bit which is what the Tempest part starts but I
 don't want it to get lost there.


 Thanks,

 MATT RIEDEMANN
 Advisory Software Engineer
 Cloud Solutions and OpenStack Development

 Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
 E-mail: mrie...@us.ibm.com

 3605 Hwy 52 N
 Rochester, MN 55901-1407
 United States






 From: Gary Kotton gkot...@vmware.com
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org,
 Date: 10/12/2013 01:42 PM
 Subject: Re: [openstack-dev] [nova] Looking for clarification on
 the diagnostics API




 Yup, it seems to be hypervisor specific. I have added the VMware
 support following your correction in the VMware driver.
 Thanks
 Gary

 From: Matt Riedemann mrie...@us.ibm.com
 Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: Thursday, October 10, 2013 10:17 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] Looking for clarification on the
 diagnostics API

 Looks like this has been brought up a couple of times:

 https://lists.launchpad.net/openstack/msg09138.html

Re: [openstack-dev] how can i get volume name in create snapshot call

2013-10-15 Thread Avishay Traeger
Dinakar,
The driver's create_snapshot function gets a dictionary that describes the
snapshot.  In that dictionary, you have the volume_name field that has
the source volume's name: snapshot['volume_name'].  You can get other
details via snapshot['volume'], which is a dictionary containing the
volume's metadata.
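
A minimal driver sketch using those fields - the backend URL and payload
shape are of course specific to your server, so treat them as placeholders:

    import json
    import urllib2  # Python 2, as used in this era

    class MyDriver(object):
        def create_snapshot(self, snapshot):
            vol_name = snapshot['volume_name']  # source volume's name
            volume = snapshot['volume']         # source volume's metadata
            payload = {'volume': vol_name,
                       'size': volume['size'],
                       'snapshot': snapshot['name']}
            req = urllib2.Request('http://backend.example.com/api/snapshots',
                                  json.dumps(payload),
                                  {'Content-Type': 'application/json'})
            urllib2.urlopen(req)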

Hope that helps.

Thanks,
Avishay



From:   Dinakar Gorti Maruti dinakar...@cloudbyte.co
To: openstack-dev@lists.openstack.org,
Date:   10/15/2013 08:44 AM
Subject:[openstack-dev] how can i get volume name in create snapshot
call



Dear all,
    We are implementing a new driver for cinder services, and we have a
scenario where we need the volume name when creating a snapshot. In detail,
we designed the driver so that it communicates with our server through HTTP
calls, and now we need the volume name and other details in the
create_snapshot function. How can we get those details?

Thanks
Dinakar





Re: [openstack-dev] [nova] [tempest] [ceilometer] Defining Diagnostics API Schema (was: looking for clarification...)

2013-10-15 Thread Doug Hellmann
On Tue, Oct 15, 2013 at 8:13 AM, Sean Dague s...@dague.net wrote:

 Open Ended API isn't really an API. An API has a contract, otherwise it
 isn't an API.


I need that saying on a t-shirt.



 The diagnostics REST call currently seems to have no contract at all, and
 is just implemented by the underlying driver to whatever seems like a good
 idea today (also, no versioning on things, so what libvirt returns in
 grizzly vs. havana is up in the air).

 At minimum we need some kind of per driver (probably with a version) json
 schema so you could at least identify over the wire what you should be
 expecting. Even better would be a generic definition, but I get that takes
 time / effort.


We have found it very useful in ceilometer to define classes (even simple
namedtuples) for passing data into and out of our drivers. It is much
easier to understand the expectations on both sides of the plugin API than
when using dictionaries.
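
As a sketch of that pattern - the field names below are invented for the
diagnostics case, not taken from ceilometer:

    import collections

    Diagnostics = collections.namedtuple(
        'Diagnostics', ['driver', 'version', 'cpu_time', 'memory'])

    d = Diagnostics(driver='libvirt', version=1,
                    cpu_time=17300000000, memory=524288)
    print(d.cpu_time)   # attribute access makes the contract explicit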



 I think the approach at this point is: a bug has been filed on nova -
 https://bugs.launchpad.net/nova/+bug/1240043 ... and we're going to skip
 the tempest tests entirely based on that bug. Testing an API that isn't
 actually an API is really beyond tempest's scope.


Re: [openstack-dev] [qa] Update: Nova API List for Missing Tempest Tests

2013-10-15 Thread Matthew Treinish
On Tue, Oct 15, 2013 at 06:25:28AM +, Masayuki Igawa wrote:
 Hi, 
 
 First, thank you to an anonymous for updating this list!
 - GET /{project_id}/servers/:server_id/diagnostics
 
 And, I have updated: Nova API List for Missing Tempest Tests.
   
 https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc
 
 The summary of this list:

 Tested or not          # of APIs   ratio   diff from last time
 --------------------------------------------------------------
 Tested API                   124   49.6%    +2
 Not Tested API                66   26.4%    -2
 Not Need to Test(*1)          60   24.0%     0
 --------------------------------------------------------------
 Total(*2):                   250  100.0%     0
 (*1) Because they are deprecated APIs such as nova-network and volume.
 (*2) v3 APIs are not included
 
 The tempest version is:
  commit f55f4e54ceab7c6a4d330f92c8059e46233e3560
  Merge: 86ab238 062e30a
  Author: Jenkins jenk...@review.openstack.org
  Date:   Mon Oct 14 15:55:59 2013 +
 
 By the way, I saw a design summit proposal related to this topic(*3). I think
 this information should be generated automatically. So I'd like to talk about
 this topic at the summit session.
 (*3) Coverage analysis tooling: http://summit.openstack.org/cfp/details/171

I'm glad that others have an interest in this topic. I've started an etherpad
for that discussion here:

https://etherpad.openstack.org/p/icehouse-summit-qa-coverage-tooling

Right now it's a very rough outline, without much on it. I'm planning to add
more later. But, feel free to add any discussion points or information that you
think needs to be a part of the session.

-Matt Treinish

 



Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-15 Thread Nick Chase
Maybe add a link to the Mistral page from the Convection page?
On Oct 15, 2013 4:22 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Hey Clint, thanks for your question. I think it's been fully answered by
 this time. You and other people are very welcome to collaborate :)

 Joshua, as far as renaming the original document I would suggest we keep
 it as it is for now just to preserve the explicit history of how it's been
 going so far. We now have a link from launchpad name to Convection proposal
 so one shouldn't be confused about what is what.

 Thanks!

 Renat Akhmerov
 Mirantis Inc.





Re: [openstack-dev] [Mistral] Announcing a new task scheduling and orchestration service for OpenStack

2013-10-15 Thread Renat Akhmerov
Yes, will do. Thanks!

On 15.10.2013, at 18:37, Nick Chase nch...@mirantis.com wrote:

 Maybe add a link to the Mistral page from the Convection page?
 
 On Oct 15, 2013 4:22 AM, Renat Akhmerov rakhme...@mirantis.com wrote:
 Hey Clint, thanks for your question. I think it's been fully answered by this 
 time. You and other people are very welcome to collaborate :)
 
 Joshua, as far as renaming the original document goes, I would suggest we keep it 
 as it is for now, just to preserve the explicit history of how it's been going 
 so far. We now have a link from the Launchpad name to the Convection proposal so one 
 shouldn't be confused about what is what.
 
 Thanks!
 
 Renat Akhmerov
 Mirantis Inc.
 
 On 15.10.2013, at 0:32, Joshua Harlow harlo...@yahoo-inc.com wrote:
 
 +2 More collaboration the better :)
 
 From: Stan Lagun sla...@mirantis.com
 Reply-To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Date: Monday, October 14, 2013 1:20 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Mistral] Announcing a new task scheduling and 
 orchestration service for OpenStack
 
 
  Why exactly aren't you just calling this Convection and/or collaborating
  with the developers who came up with it?
 
 We do actively collaborate with the TaskFlow/StateManagement team, who are also 
 the authors of the Convection proposal. This is a joint project and we invite 
 you and other developers to join and contribute.
 Convection is a Microsoft trademark. That's why it's called Mistral.
 
 
 On Tue, Oct 15, 2013 at 12:04 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Renat Akhmerov's message of 2013-10-14 12:40:28 -0700:
  Hi OpenStackers,
 
  I am proud to announce the official launch of the Mistral project. At 
  Mirantis we have a team to start contributing to the project right away. 
  We invite anybody interested in task service & state management to join 
  the initiative.
 
  Mistral is a new OpenStack service designed for task flow control, 
  scheduling, and execution. The project will implement Convection proposal 
  (https://wiki.openstack.org/wiki/Convection) and provide an API and 
  domain-specific language that enables users to manage tasks and their 
  dependencies, and to define workflows, triggers, and events. The service 
  will provide the ability to schedule tasks, as well as to define and 
  manage external sources of events to act as task execution triggers.
 
 Why exactly aren't you just calling this Convection and/or collaborating
 with the developers who came up with it?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 Sincerely yours
 Stanislav (Stan) Lagun
 Senior Developer
 Mirantis
 35b/3, Vorontsovskaya St.
 Moscow, Russia
 Skype: stanlagun
 www.mirantis.com
 sla...@mirantis.com
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] configdrive not presenting IPAddress information

2013-10-15 Thread John Garbutt
Hi,

There is some information in the EC2 metadata, or at least I have a
patch up for that:
https://review.openstack.org/#/c/46286/

I am raising some ideas to improve things here:
http://summit.openstack.org/cfp/details/191

John

On 15 October 2013 08:24, Vijay Venkatachalam
vijay.venkatacha...@citrix.com wrote:
 Hi,

 The configdrive does not have IPAddress information in
 meta_data.json  or in any of the other files. Is this planned for the
 future?

 The only other option is for the user to manually pass
 this information through other parameters like “--meta”, “--file”,
 “userdata”, etc., which is not desired.

 Thanks,

 Vijay V.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Duncan Thomas
On 11 October 2013 15:41, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:
 Current reviews require:

 +1 de facto driver X maintainer(s)
 +2  core reviewer
 +2A  core reviewer

 While with the proposed scenario we'd get to a way faster route:

 +2  driver X maintainer
 +2A another driver X maintainer or a core reviewer

 This would make a big difference in terms of review time.

Unfortunately I suspect it would also lead to a big difference in
review quality, and not in a positive way. The things that are
important / obvious to somebody who focuses on one driver are totally
different, and often far more limited, than the concerns of somebody
who reviews many drivers and core code changes.

-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Notifications from non-local exchanges

2013-10-15 Thread Neal, Phil


 -Original Message-
 From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
 Sent: Thursday, October 10, 2013 6:20 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ceilometer] Notifications from non-local
 exchanges
 
 
 On 10/10/2013 06:16 PM, Neal, Phil wrote:
 Greetings all, I'm looking at how to expand the ability of our CM
 instance to consume notifications and have a quick question about
 the configuration and flow...
 
 For the notifications central agent ,  we rely on the services (i.e.
 glance, cinder)  to drop messages on the same messaging host as used
 by Ceilometer. From there the listener picks it up and cycles through
 the plugin logic to convert it to a sample. It's apparent that we
 can't pass an alternate hostname via the control_exchange values, so
 is there another method for harvesting messages off of other
 instances (e.g. another compute node)?
 
 Hey Phil,
 
 You don't really need to specify the exchange name to consume
 notifications. It will default to the control-exchange if not specified
 anyway.
 
 How it works isn't so obvious.
 
 Depending on the priority of the notification the oslo notifier will
 publish on topic.priority using the service's control-exchange. If
 that queue doesn't exist it'll create it and bind the control-exchange
 to it. This is so we can publish even if there are no consumers yet.

I think the common default is notifications.info, yes?

 
 Oslo.rpc creates a 1:1 mapping of routing_key and queue to topic (no
 wildcards). So we get
 
 exchange:service -> binding: routing_key topic.priority ->
 queue topic.priority
 
 (essentially, 1 queue per priority)
 
 Which is why, if you want to enable services to generate notifications,
 you just have to set the driver and the topic(s) to publish on. Exchange
 is implied and routing key/queue are inferred from topic.

Yep, following up to this point: Oslo takes care of the setup of exchanges on 
behalf of the 
services. When, say, Glance wants to push notifications onto the message bus, 
they can set 
the control_exchange value and the driver (rabbit, for example) and voila! An 
exchange is
set up with a default queue bound to the key. 
 
 Likewise we only have to specify the queue name to consume, since we
 only need an exchange to publish.

Here's where my gap is: the notification plugins seem to assume that Ceilometer 
is sitting on the same messaging node/endpoint as the service. The config file 
allows
us to specify the exchange names for the services , but not endpoints, so if 
Glance 
is publishing to notifications.info on rabbit.glance.hpcloud.net, and 
ceilometer
 is  publishing/consuming from the rabbit.ceil.hpcloud.net node then the 
Glance
 notifications won't be collected.

 I took another look at the Ceilometer config options...rabbit_hosts
takes multiple hosts (i.e. rabbit.glance.hpcloud.net:, 
rabbit.ceil.hpcloud.net:) 
but it's not clear whether that's for publishing, collection, or both?  The 
impl_kombu
module does cycle through that list to create the connection pool, but it's not
clear to me how it all comes together in the plugin instantiation...

 
 I have a bare-bones oslo notifier consumer and client here if you want
 to mess around with it (and a bare-bones kombu version in the parent).

Will take a look! 

 
 https://github.com/SandyWalsh/amqp_sandbox/tree/master/oslo
 
 Not sure if that answered your question or made it worse? :)
 
 Cheers
 -S
 
 
 
 
 - Phil
 
 ___ OpenStack- dev mailing
 list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Alessandro Pilotti

On Oct 15, 2013, at 18:14 , Duncan Thomas duncan.tho...@gmail.com wrote:

On 11 October 2013 15:41, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:
Current reviews require:

+1 de facto driver X maintainer(s)
+2  core reviewer
+2A  core reviewer

While with the proposed scenario we'd get to a way faster route:

+2  driver X maintainer
+2A another driver X maintainer or a core reviewer

This would make a big difference in terms of review time.

Unfortunately I suspect it would also lead to a big difference in
review quality, and not in a positive way. The things that are
important / obvious to somebody who focuses on one driver are totally
different, and often far more limited, than the concerns of somebody
who reviews many drivers and core code changes.

Although the eyes of somebody who comes from a different domain usually bring 
additional points of view and benefits, this was not particularly the case 
where our driver is concerned. As I already wrote, almost all the reviews so 
far have been related to unit tests or minor formal corrections.

I disagree on the far more limited point: driver devs (at least in our case) 
have to work on a wider range of projects besides Nova (e.g. Neutron, Cinder, 
Ceilometer and, outside OpenStack proper, Open vSwitch and Crowbar, to name 
the most relevant cases).





--
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Plugin packaging

2013-10-15 Thread Sam Alba
On Tue, Oct 15, 2013 at 2:31 AM, Zane Bitter zbit...@redhat.com wrote:
 On 15/10/13 02:18, Sam Alba wrote:

 Hello,

 I am working on a Heat plugin that makes a new resource available in a
 template. It's working great and I will opensource it this week if I
 can get the packaging right...


 Awesome! :)


 Right now, I am linking my module.py file in /usr/lib/heat to get it
 loaded when heat-engine starts. But according to the doc, I am
 supposed to be able to make the plugin discoverable by heat-engine if
 the module appears in the package heat.engine.plugins[1]


 I think the documentation may be leading you astray here; it's the other way
 around: once the plugin is discovered by the engine, it will appear in the
 package heat.engine.plugins. So you should be able to do:

 import heat.engine.resources
 heat.engine.resources.initialise()
 from heat.engine.plugins import module
 print module.resource_mapping()

 (FWIW this is working for me on latest master of Heat.)

 As far as making the plugin discoverable is concerned, all you should have
 to do is install the module in /usr/lib/heat/.
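
 For anyone else trying this, a complete plugin module can be as small as the
 following (a minimal sketch; the file name and resource type name are made
 up, not part of Heat):

 # /usr/lib/heat/sample_resource.py
 from heat.engine import resource

 class SampleResource(resource.Resource):
     # A do-nothing resource, just to demonstrate plugin discovery.
     properties_schema = {}

     def handle_create(self):
         return None

 def resource_mapping():
     # The engine calls this to learn which template resource type
     # names map to which classes.
     return {'My::Sample::Resource': SampleResource}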


 I looked into the plugin_loader module in the Heat source code and it
 looks like it should work. However I was unable to get a proper Python
 package.

 Has anyone been able to make this packaging right for an external Heat
 plugin?


 I've never tried to do this with a Python package, the mechanism is really
 designed more for either dropping the module in there manually, or
 installing it from a Debian or RPM package.

 It sounds like what you're doing is trying to install the package in
 /usr/lib/python2.7/site-packages/ (or in a venv) in the package
 heat.engine.plugins and get the engine to recognise it that way? I don't
 think there's a safe way to make that work, because the plugin loader
 creates its own heat.engine.plugins package that will replace anything that
 exists on that path (see line 41 of plugin_loader.py).

 Heat (in fact, all of OpenStack) is designed as a system service, so the
 normal rules of Python _application_ packaging don't quite fit. e.g. If you
 want to use a plugin locally (for a single user) rather than install it
 globally, the way to do it is to specify a local plugin directory when
 running heat-engine, rather than have the plugin installed in a venv.

 Hope that helps.

Thanks Zane, it helps.


-- 
@sam_alba

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Duncan Thomas
On 11 October 2013 20:51, Rochelle.Grober rochelle.gro...@huawei.com wrote:
 Proposed solution:

 There have been a couple of solutions proposed.  I’m presenting a
 merged/hybrid solution that may work

 · Create a new repository for the extra drivers:

 o   Keep kvm and Xenapi in the Nova project as “reference” drivers

 o   openstack/nova-extra-drivers (proposed by rbryant)

 o   Have all drivers other than reference drivers in the extra-drivers
 project until they meet the maturity of the ones in Nova

 o   The core reviewers for nova-extra-drivers will come from its developer
 pool.  As Alessandro pointed out, all the driver developers have more in
 common with each other than core Nova, so they should be able to do a better
 job of reviewing these patches than Nova core.  Plus, this might create some
 synergy between different drivers that will result in more commonalities
 across drivers and better stability.  This also reduces the workloads on
 both Nova Core reviewers and the driver developers/core reviewers.

 o   If you don’t feel comfortable with the last bullet, have the Nova core
 reviewers do the final approval, but only for the obvious “does this code
 meet our standards?”



 The proposed solution focuses the strengths of the different developers in
 their strong areas.  Everyone will still have to stretch to do reviews and
 now there is a possibility that the developers that best understand the
 drivers might be able to advance the state of the drivers by sharing their
 expertise amongst each other.

The problem here is that you need to keep nova-core and the drivers
tree in sync... if a core change causes CI to break because it
requires a change to the driver code (this has happened a couple of
times in cinder... when they're in the same tree you can just fix em
all up, easy), there's a nasty dance to get the patches in since the
drivers need updating to work with both the old and the new core code,
then the core code updating, then the support for the old core code
removing... yuck

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting agenda.

2013-10-15 Thread Peter Pouliot
Hi All,
The following will be discussed in today's hyper-v meeting.


* Splitting the hyper-v driver options and discussion

* Bug Status

* Puppet modules


Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive, Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1 (857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Apache2 Installation Question

2013-10-15 Thread Fox, Kevin M
Since negotiate isn't supported, at least basic auth support allows 
authenticating against our domain rather than users needing to come up with 
another password. The basic auth support is not specific to Kerberos either, so 
it's more generally useful.

Thanks,
Kevin

From: Simo Sorce [s...@redhat.com]
Sent: Monday, October 14, 2013 6:58 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Keystone Apache2 Installation Question

On Mon, 2013-10-14 at 14:31 -0700, Fox, Kevin M wrote:
 Hi Adam,

 I was trying to get both Kerberos negotiate and Kerberos basic auth working. 
 Negotiate does not seem to be supported by any of the clients, so I think it 
 will be a fair amount of work to get working.

 /keystone/main/v2.0/tokens can't support having an apache auth module on it, 
 it seems, because it is overloaded to do too many things. After playing around 
 with it, it looks like some services (like Horizon) assume they can give it a 
 token and get back a restricted token without doing basic auth/negotiate all 
 the time. You can't put auth around it in apache with Require valid-user and 
 still have it perform its other functions. The tokens endpoint needs to be 
 split out so that you can do something like /auth/type/tokens, putting a 
 different handler on each URL while /tokens keeps all the rest of the 
 functionality. I guess this will have to wait for Icehouse.

 I also played around with basic auth as an alternative to negotiate in the 
 meantime and ran into that same issue. It also requires changes to not just 
 python-keystoneclient but a lot of the other python-*clients as well, and 
 even then, Horizon breaks as described above.

 I found a workaround for basic auth, though, that is working quite nicely. I'm 
 trying to get the patch through our legal department, but they are tripping 
 over the contributor agreement. :/

 The trick is that if you are using basic auth, you only support a 
 username/password anyway, and Havana keystone is pluggable in its handling of 
 username/passwords.

 So, I'll just tell you the idea of the patch so you can work on 
 reimplementing it if you'd like (a sketch follows below):
  * I made a new file 
 /usr/lib/python2.6/site-packages/keystone/identity/backends/basic_auth_sql.py
  * I made a class Identity that inherits from the sql Identity class.
  * I overrode the _check_password function.
  * I took the username/password and base64 encoded it, then made an HTTP 
 request with it to whatever HTTP basic auth service URL you want to validate 
 with. Apache on localhost works great.
  * I checked the result for status 200. You can even fall back to the 
 superclass's _check_password to support both basic auth and SQL passwords if 
 you'd like.

 The interesting bit about this configuration is that keystone does not need 
 to be embedded in apache to support apache basic auth, while still providing 
 you most of the flexibility of apache basic auth plugins. The only thing that 
 doesn't work is REMOTE_USER rewriting, though you could probably add that 
 feature in somehow using an HTTP response header or something.
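
 Roughly, in code, the idea above looks like this (an untested sketch; the
 validation URL and exact signatures are assumptions, not the actual patch):

 # keystone/identity/backends/basic_auth_sql.py (hypothetical)
 import base64

 import requests

 from keystone.identity.backends import sql

 # Assumption: an HTTP basic auth validation service on localhost.
 AUTH_URL = 'http://localhost/basic-auth-check'

 class Identity(sql.Identity):
     def _check_password(self, password, user_ref):
         creds = '%s:%s' % (user_ref.get('name'), password)
         token = base64.b64encode(creds)
         resp = requests.get(
             AUTH_URL, headers={'Authorization': 'Basic %s' % token})
         if resp.status_code == 200:
             return True
         # Fall back to regular SQL password checking.
         return super(Identity, self)._check_password(password, user_ref)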

If all you end up using is basic auth, what is the point of using
Kerberos at all?

Basic Auth should never be used with kerberos except in exceptional
cases.

Simo.

--
Simo Sorce * Red Hat, Inc * New York


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Scheduler meeting minutes

2013-10-15 Thread Andrew Laski

Thanks everyone for a great meeting.

Minutes: 
http://eavesdrop.openstack.org/meetings/scheduler/2013/scheduler.2013-10-15-15.01.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/scheduler/2013/scheduler.2013-10-15-15.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/scheduler/2013/scheduler.2013-10-15-15.01.log.html


And a reminder that there's an agenda for the next meeting (sometimes up 
to date) at https://wiki.openstack.org/wiki/Meetings/Scheduler.  Please 
add any topics you would like to discuss.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Duncan Thomas
On 13 October 2013 00:19, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:

 If you don't like any of the options that this already long thread is
 providing, I'm absolutely open to discuss any constructive idea. But please,
 let's get out of this awful mess.

 OpenStack is still a young project. Let's make sure that we can hand it on
 to the next devs generations by getting rid of these management bottlenecks
 now!

Get a hyper-v person trained up to the point they are a nova core
reviewer, where they can not only prioritise hyper-v related reviews
but also reduce the general review backlog in nova (which affects
everybody... there are a few cinder features that required nova merges
that didn't get in before feature freeze either and had to be disabled
in cinder).

There's a 'tax' to contributing to openstack, which is dedicating some
time to reviewing other people's work. The more people do that, the
faster things go for everybody. The higher rate tax is being a core
reviewer, but that comes with certain advantages too.

-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting Minutes

2013-10-15 Thread Peter Pouliot
Hi Everyone,

Here are the minutes from today's hyper-v meeting.

Minutes:
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-10-15-16.03.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-10-15-16.03.txt
Log:
http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-10-15-16.03.log.html


Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive, Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1 (857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone RC1 Bug Question 1209440

2013-10-15 Thread Dolph Mathews
On Tue, Oct 15, 2013 at 12:05 PM, Miller, Mark M (EB SW Cloud - RD -
Corvallis) mark.m.mil...@hp.com wrote:
 Hello,

 I have a generic question about the logic now available for LDAP users in 
 association with bug 1209440. How do you associate a read-only LDAP user with 
 a domain?

I suppose it depends on your definition of 'association'? Users have
two significant relationships with domains:

A) they can be owned by (namespaced to) a domain
B) they can be assigned roles on domains, granting authorization

 LDAP users are not entered into the keystone user table so the only way I can 
 see to associate a user with a domain is to give them a role for the domain 
 so an entry is built for them in the user_domain_metadata table. Am I correct 
 or is there something I am missing?

This is [B], above. This pattern is identical to that used for projects.
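
For example, [B] can be granted with python-keystoneclient's v3 API, which
maps to PUT /v3/domains/{domain_id}/users/{user_id}/roles/{role_id} (the IDs
below are illustrative):

from keystoneclient.v3 import client

keystone = client.Client(token='ADMIN_TOKEN',
                         endpoint='http://localhost:35357/v3')
keystone.roles.grant(role='ROLE_ID', user='USER_ID', domain='DOMAIN_ID')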


 Regards,

 Mark

 =

 https://bugs.launchpad.net/keystone/+bug/1209440

 =

 At keystone/identity/backends/ldap.py:230 we allow mapping domain_id of a 
 user based on the attribute specified in conf.ldap.user_domain_id_attribute 
 which defaults to 'businessCategory'.
 My understanding is that this is no longer required and should no longer be 
 allowed and indeed in practice it completely overrides any domain information 
 that is provided in the authentication body.

 =

 commit 668ee718127a9983d4838b868efd44ddf661b533
 Author: Morgan Fainberg m...@metacloud.com
 Date: Thu Sep 19 19:53:02 2013 -0700
 Remove ldap identity domain attribute options
 LDAP Identity backend is not domain aware, and therefore does not
 need mappings for the domain attributes for user and group.
 closes-bug: 1209440

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

-Dolph

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneclient] self-signed keystone not accessible from other services

2013-10-15 Thread Bhuvan Arumugam
On Mon, Oct 14, 2013 at 7:20 PM, Jamie Lennox jamielen...@redhat.com wrote:

 On Mon, 2013-10-14 at 18:36 -0700, Bhuvan Arumugam wrote:
  Just making sure i'm not the only one facing this problem.
  https://bugs.launchpad.net/nova/+bug/1239894

 Yep, we thought this may raise some issues but insecure by default was
 just not acceptable.


I think we should document it.



  keystoneclient v0.4.0 was released last week and used by all openstack
  services now. The insecure=False, as defined in
  keystoneclient.middleware.auth_token. The keystone client is happy as
  long as --insecure flag is used. There is no way to configure it in
  other openstack services like nova, neutron or glance while it is
  integrated with self-signed keystone instance.

 I'm not following the problem. As you mentioned before the equivalent
 setting for --insecure in auth_token is setting insecure=True in the
 service's config file along with all the other keystone auth_token
 settings. The equivalent when using the client library is passing
 insecure=True to the client initialization.


Yep, the problem is solved after setting this flag in [filter:authtoken]
section in /etc/nova/api-paste.ini.
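
For anyone hitting the same thing, the section ends up looking something like
this (illustrative values; only the insecure line is the actual fix):

# /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = keystone.example.com
auth_port = 35357
auth_protocol = https
insecure = true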


  We should introduce new config parameter keystone_api_insecure and
  configure keystoneclient behavior based on this parameter. The config
  parameter should be defined in all other openstack services, as all of
  them integrate with keystone.

 A new config parameter where? I guess we could make insecure in
 auth_token also respond to an OS_SSL_INSECURE but that pattern is not
 followed for any other service or parameter.


I think we are inconsistent in using this flag for different services. For
instance, we use:
  neutron_api_insecure
  glance_api_insecure

for keystone, we use:
  insecure=True

I think it's reasonable, as one way or the other it's configurable. We'd
be in good shape if we documented it somewhere, e.g. here:
  http://docs.openstack.org/developer/python-keystoneclient/using-api.html

 Until it's resolved, I think the known workaround is to use
  keystoneclient==0.3.2.
 
 
  Is there any other workaround for this issue?

 Signed certificates.


Oh yeah! We use signed certs in our prod environment. This one is our test
bed.

Thank you,

-- 
Regards,
Bhuvan Arumugam
www.livecipher.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Alessandro Pilotti


On Oct 15, 2013, at 19:18 , Duncan Thomas duncan.tho...@gmail.com
 wrote:

 On 13 October 2013 00:19, Alessandro Pilotti
 apilo...@cloudbasesolutions.com wrote:
 
 If you don't like any of the options that this already long thread is
 providing, I'm absolutely open to discuss any constructive idea. But please,
 let's get out of this awful mess.
 
 OpenStack is still a young project. Let's make sure that we can hand it on
 to the next devs generations by getting rid of these management bottlenecks
 now!
 
 Get a hyper-v person trained up to the point they are a nova core
 reviewer, where they can not only prioritise hyper-v related reviews
 but also reduce the general review backlog in nova (which affects
 everybody... there are a few cinder features that required nova merges
 that didn't get in before feature freeze either and had to be disabled
 in cinder).
 

About getting a Nova core, from a previous email that I wrote on this thread:

 …
 Our domain is the area in which my sub-team and I can add the biggest value. 
 Being also an independent startup, we have now reached the stage in which we 
 can sponsor some devs to do reviews all the time outside of our core domain, 
 but this will take a few months spanning one or two releases, as acquiring 
 the necessary understanding of a project like e.g. Nova cannot be done 
 overnight.
 …

Anyway, although this will help, it won't be a complete solution. Central 
control without delegation is going to fail anyway as the project size increases.


 There's a 'tax' to contributing to openstack, which is dedicating some
 time to reviewing other people's work. The more people do that, the
 faster things go for everybody. The higher rate tax is being a core
 reviewer, but that comes with certain advantages too.
 

Here's the point. A driver dev pays this tax across multiple projects, e.g. 
by reviewing Hyper-V code in Nova, Neutron, Cinder, Ceilometer, Cloudbase-Init, 
Crowbar, Open vSwitch and so on. 
As a consequence, the amount of review work in a single project will never be 
enough to get core status.


 -- 
 Duncan Thomas
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Alessandro Pilotti

On Oct 15, 2013, at 19:03 , Duncan Thomas duncan.tho...@gmail.com
 wrote:

 On 11 October 2013 20:51, Rochelle.Grober rochelle.gro...@huawei.com wrote:
 Proposed solution:
 
 There have been a couple of solutions proposed.  I’m presenting a
 merged/hybrid solution that may work
 
 · Create a new repository for the extra drivers:
 
 o   Keep kvm and Xenapi in the Nova project as “reference” drivers
 
 o   openstack/nova-extra-drivers (proposed by rbryant)
 
 o   Have all drivers other than reference drivers in the extra-drivers
 project until they meet the maturity of the ones in Nova
 
 o   The core reviewers for nova-extra-drivers will come from its developer
 pool.  As Alessandro pointed out, all the driver developers have more in
 common with each other than core Nova, so they should be able to do a better
 job of reviewing these patches than Nova core.  Plus, this might create some
 synergy between different drivers that will result in more commonalities
 across drivers and better stability.  This also reduces the workloads on
 both Nova Core reviewers and the driver developers/core reviewers.
 
 o   If you don’t feel comfortable with the last bullet, have the Nova core
 reviewers do the final approval, but only for the obvious “does this code
 meet our standards?”
 
 
 
 The proposed solution focuses the strengths of the different developers in
 their strong areas.  Everyone will still have to stretch to do reviews and
 now there is a possibility that the developers that best understand the
 drivers might be able to advance the state of the drivers by sharing their
 expertise amongst each other.
 
 The problem here is that you need to keep nova-core and the drivers
 tree in sync... if a core change causes CI to break because it
 requires a change to the driver code (this has happened a couple of
 times in cinder... when they're in the same tree you can just fix em
 all up, easy), there's a nasty dance to get the patches in since the
 drivers need updating to work with both the old and the new core code,
 then the core code updating, then the support for the old core code
 removing… yuck
 

We have been discussing this for a while and it's IMO an almost nonexistent 
issue in Nova, considering how seldom those changes happen in the driver 
interface.
It would obviously be helpful if the Nova team were willing to make the driver 
interface stable (e.g. versioned).

That said, if having to cope with occasional breakage during a dev cycle is 
the price to pay to get out of the current management mess, well, that'd be by 
far the lesser of the two evils. :-)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Alessandro Pilotti


On Oct 15, 2013, at 18:59 , Matt Riedemann mrie...@us.ibm.com
 wrote:

Sorry to pile on, but:

this was not particularly the case where our driver is concerned. As I 
already wrote, almost all the reviews so far have been related to unit tests or 
minor formal corrections.

As was pointed out by me in patch set 1 here: 
https://review.openstack.org/#/c/43592/

There was no unit test coverage for an entire module 
(nova.virt.hyperv.volumeops) before that patch.

So while I agree that driver maintainers know their code best and how it 
all works in the dirty details, they are also going to be the ones to cut 
corners to get things fixed, which usually shows up as a lack of test coverage - 
and that's a good reason to have external reviewers on everything, to keep us 
all honest.


Let me add as an example this patch, with a large number of additional 
unit tests that we decided to provide to improve our (already good) test 
coverage without external input:
https://review.openstack.org/#/c/48940/

I agree that peer review is a fundamental part of the development 
process, as you were saying, also to keep each other on track. But this is 
something that we can do within the driver team with or without the help of the 
Nova team, especially now that the Hyper-V community is growing at a fast 
pace.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

3605 Hwy 52 N
Rochester, MN 55901-1407
United States






From: Alessandro Pilotti apilo...@cloudbasesolutions.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: 10/15/2013 10:39 AM
Subject: Re: [openstack-dev] [Hyper-V] Havana status





On Oct 15, 2013, at 18:14 , Duncan Thomas duncan.tho...@gmail.com wrote:

On 11 October 2013 15:41, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:
Current reviews require:

+1 de facto driver X maintainer(s)
+2  core reviewer
+2A  core reviewer

While with the proposed scenario we'd get to a way faster route:

+2  driver X maintainer
+2A another driver X maintainer or a core reviewer

This would make a big difference in terms of review time.

Unfortunately I suspect it would also lead to a big difference in
review quality, and not in a positive way. The things that are
important / obvious to somebody who focuses on one driver are totally
different, and often far more limited, than the concerns of somebody
who reviews many drivers and core code changes.

Although the eyes of somebody who comes from a different domain usually bring 
additional points of view and benefits, this was not particularly the case 
where our driver is concerned. As I already wrote, almost all the reviews so 
far have been related to unit tests or minor formal corrections.

I disagree on the far more limited point: driver devs (at least in our case) 
have to work on a wider range of projects besides Nova (e.g. Neutron, Cinder, 
Ceilometer and, outside OpenStack proper, Open vSwitch and Crowbar, to name 
the most relevant cases).





--
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] One question to the fakelibvirt in nova test

2013-10-15 Thread Jiang, Yunhong
Hi, stackers,
    I have a question about the following code in 
nova/tests/virt/libvirt/test_libvirt.py. My question is: when will the 'import 
libvirt' succeed and the fake libvirt not be used?

try:
import libvirt
except ImportError:
import nova.tests.virt.libvirt.fakelibvirt as libvirt
libvirt_driver.libvirt = libvirt

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Metrics] Working on affiliations details: Activity Board

2013-10-15 Thread Daniel Izquierdo

Hi!,

Over the last few days we've been improving the Activity Board [1], where 
you can find updated development activity and affiliation information.


For this purpose, we have finally created (sorry for the delay u_u ) an 
affiliation file [2]. This is a csv file with the following information: 
name, affiliation, init date, end date, and it is focused on the source 
code activity.


Feedback is more than welcome. If inconsistencies, repeated identities 
or any other issues are found, please feel free to directly change them 
using the usual Gerrit approach [3]. Other options are: open a bug 
report in the OpenStack Community project in Launchpad [4], reply to this 
email here, or reply personally (probably a better option than adding too 
much noise to the dev list).


This has been based on several data sources and manual work, so 
hopefully there are only a few minor issues!


Regards,
Daniel.

ps: not sure if it's a good idea to resend this to the general list 
without the topic [metrics]


[1] http://activity.openstack.org/dash/newbrowser/browser/
[2] 
http://activity.openstack.org/dash/browser/data/affs/openstack-community-affs.csv

[3] http://git.openstack.org/cgit/openstack-infra/activity-board/
[4] https://launchpad.net/openstack-community

--
Daniel Izquierdo Cortazar, PhD
Chief Data Officer
-
Software Analytics for your peace of mind
www.bitergia.com
@bitergia


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] One question to the fakelibvirt in nova test

2013-10-15 Thread Jiang, Yunhong
I thought it would be used when using a local environment for ./run_tests.sh, 
but ./run_tests.sh -N has several failures and seems not to be supported anymore.

Thanks for any input/suggestions.

--jyh

 -Original Message-
 From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
 Sent: Tuesday, October 15, 2013 11:04 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] One question to the fakelibvirt in nova test
 
 Hi, stackers,
    I have a question about the following code in
  nova/tests/virt/libvirt/test_libvirt.py. My question is: when will the 'import
  libvirt' succeed and the fake libvirt not be used?
 
 try:
 import libvirt
 except ImportError:
 import nova.tests.virt.libvirt.fakelibvirt as libvirt
 libvirt_driver.libvirt = libvirt
 
 Thanks
 --jyh
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-15 Thread Russell Bryant
On 10/15/2013 12:52 PM, Peter Pouliot wrote:
 Hi Everyone,
 
 
 Here are the minutes from today’s hyper-v meeting.
 
  
 
 Minutes:   
 http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-10-15-16.03.html
 
 Minutes (text):
 http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-10-15-16.03.txt
 
 Log:   
 http://eavesdrop.openstack.org/meetings/hyper_v/2013/hyper_v.2013-10-15-16.03.log.html

I read over the meeting notes and felt it was worth continuing the
discussion about the home of this driver.  I feel like we're not that
far from a conclusion, so we don't necessarily have to wait a few weeks
to talk about it.

In the meeting, the following options were mentioned:

16:16:51 alexpilotti 1) we move the code somewhere else in a separate
repo (e.g.: cloudbase/nova)
16:17:26 alexpilotti 2) we move the code somewhere else in a separate
repo in OpenStack (e.g.: openstack/nova-driver-hyperv)
16:17:50 alexpilotti errata: on 1) it was meant to be:
cloudbase/nova-driver-hyperv
16:18:48 alexpilotti 3) we find a solution in which we get +2 rights
in our subtree: nova/virt/hyperv and nova/tests/virt/hyperv

I've thought about this quite a bit, and I no longer feel that #2 is an
option on the table.

#3 is possible, but it's not automatic.  It would happen the same way
anyone else gets on the core team (through participation and gaining
trust).  Staying in the tree, and eventually having someone with hyper-v
expertise on nova-core is the ideal outcome here, IMO.

#1 is certainly an option, and if that's what you want to do, I would
support that.  Honestly, after reading the full meeting log, it really
sounds like this is what you want.  You really do want the full control
that you get with having it be your own project, and that's fine.  I
feel that there are downsides too, but it's your call.  If you'd like to
go this route, just let me know so we can coordinate, and we can remove
the driver from the nova tree.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Notifications from non-local exchanges

2013-10-15 Thread Sandy Walsh


On 10/15/2013 12:28 PM, Neal, Phil wrote:
 
 
 -Original Message-
 From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
 Sent: Thursday, October 10, 2013 6:20 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Ceilometer] Notifications from non-local
 exchanges


 On 10/10/2013 06:16 PM, Neal, Phil wrote:
 Greetings all, I'm looking at how to expand the ability of our CM
 instance to consume notifications and have a quick question about
 the configuration and flow...

 For the notifications central agent ,  we rely on the services (i.e.
 glance, cinder)  to drop messages on the same messaging host as used
 by Ceilometer. From there the listener picks it up and cycles through
 the plugin logic to convert it to a sample. It's apparent that we
 can't pass an alternate hostname via the control_exchange values, so
 is there another method for harvesting messages off of other
 instances (e.g. another compute node)?

 Hey Phil,

 You don't really need to specify the exchange name to consume
 notifications. It will default to the control-exchange if not specified
 anyway.

 How it works isn't so obvious.

 Depending on the priority of the notification the oslo notifier will
 publish on topic.priority using the service's control-exchange. If
 that queue doesn't exist it'll create it and bind the control-exchange
 to it. This is so we can publish even if there are no consumers yet.
 
 I think the common default is notifications.info, yes?
 

 Oslo.rpc creates a 1:1 mapping of routing_key and queue to topic (no
 wildcards). So we get

 exchange:service -> binding: routing_key topic.priority ->
 queue topic.priority

 (essentially, 1 queue per priority)

 Which is why, if you want to enable services to generate notifications,
 you just have to set the driver and the topic(s) to publish on. Exchange
 is implied and routing key/queue are inferred from topic.
 
 Yep, following up to this point: Oslo takes care of the setup of exchanges on 
 behalf of the 
 services. When, say, Glance wants to push notifications onto the message bus, 
 they can set 
 the control_exchange value and the driver (rabbit, for example) and voila! 
 An exchange is
 set up with a default queue bound to the key. 

Correct.


 Likewise we only have to specify the queue name to consume, since we
 only need an exchange to publish.
 
 Here's where my gap is: the notification plugins seem to assume that 
 Ceilometer 
 is sitting on the same messaging node/endpoint as the service. The config 
 file allows
 us to specify the exchange names for the services , but not endpoints, so if 
 Glance 
 is publishing to notifications.info on rabbit.glance.hpcloud.net, and 
 ceilometer
  is  publishing/consuming from the rabbit.ceil.hpcloud.net node then the 
 Glance
  notifications won't be collected.

Hmm, I think I see your point. All the rabbit endpoints are determined
by these switches:
https://github.com/openstack/nova/blob/master/etc/nova/nova.conf.sample#L1532-L1592

We will need a way in CM to pull from multiple rabbits.

  I took another look at the Ceilometer config options...rabbit_hosts
 takes multiple hosts (i.e. rabbit.glance.hpcloud.net:, 
 rabbit.ceil.hpcloud.net:) 
 but it's not clear whether that's for publishing, collection, or both?  The 
 impl_kombu
 module does cycle through that list to create the connection pool, but it's 
 not
 clear to me how it all comes together in the plugin instantiation...

Nice catch. I'll have a look at that as well.

Regardless, I think CM should have separate switches for each collector
we run and break out the consume rabbit from the service rabbit.

I may be in a position to work on this shortly if that's needed.
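
To make that concrete, a bare-bones kombu loop pulling notifications from
more than one broker could look something like this (an untested sketch;
the hostnames, credentials and exchange names are made up):

import socket

from kombu import Connection, Exchange, Queue

# Hypothetical per-service endpoints; today rabbit_hosts feeds a single
# connection pool, so a split config would be needed to express this.
ENDPOINTS = [
    ('amqp://guest:guest@rabbit.glance.hpcloud.net//', 'glance'),
    ('amqp://guest:guest@rabbit.ceil.hpcloud.net//', 'nova'),
]

def on_message(body, message):
    print(body.get('event_type'))
    message.ack()

for url, exchange_name in ENDPOINTS:
    with Connection(url) as conn:
        # oslo publishes notifications on topic.priority, bound to the
        # service's control_exchange (assumed non-durable here).
        exchange = Exchange(exchange_name, type='topic', durable=False)
        queue = Queue('notifications.info', exchange,
                      routing_key='notifications.info', durable=False)
        with conn.Consumer(queue, callbacks=[on_message]):
            try:
                conn.drain_events(timeout=5)
            except socket.timeout:
                pass  # no messages waiting on this broker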

 

 I have a bare-bones oslo notifier consumer and client here if you want
 to mess around with it (and a bare-bones kombu version in the parent).
 
 Will take a look! 
 

 https://github.com/SandyWalsh/amqp_sandbox/tree/master/oslo

 Not sure if that answered your question or made it worse? :)

 Cheers
 -S




 - Phil

 ___ OpenStack- dev mailing
 list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Meeting Tuesday October 15th at 19:00 UTC

2013-10-15 Thread Elizabeth Krumbach Joseph
On Mon, Oct 14, 2013 at 10:43 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday October 15th, at 19:00 UTC in
 #openstack-meeting

Meeting minutes and log:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-10-15-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-10-15-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-10-15-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Metrics] Working on affiliations details: Activity Board

2013-10-15 Thread Daniel Izquierdo

On 10/15/2013 08:07 PM, Daniel Izquierdo wrote:

Hi!,

Over the last few days we've been improving the Activity Board [1], where 
you can find updated development activity and affiliation information.


For this purpose, we have finally created (sorry for the delay u_u ) 
an affiliation file [2]. This is a csv file with the following 
information: name, affiliation, init date, end date, and it is focused 
on the source code activity.


oops, the csv file columns are indeed: name, affiliation, init date, end 
date, number of commits. I missed the number of commits column in the 
previous email. Sorry about the noise.


Regards,
Daniel.




Feedback is more than welcome. If inconsistencies, repeated identities 
or any other issue are found, please, feel free to directly change 
that using the usual Gerrit approach [3]. Other options are:  open a 
bug report in the OpenStack Community project in Launchpad [4], reply 
this email here or personal reply (probably better option than adding 
too much noise to the dev list).


This has been based on several data sources and manual work, so 
hopefully, there are a few minor issues!


Regards,
Daniel.

ps: not sure if it's a good idea to resend this to the general list 
without the topic [metrics]


[1] http://activity.openstack.org/dash/newbrowser/browser/
[2] 
http://activity.openstack.org/dash/browser/data/affs/openstack-community-affs.csv

[3] http://git.openstack.org/cgit/openstack-infra/activity-board/
[4] https://launchpad.net/openstack-community




--
Daniel Izquierdo Cortazar, PhD
Chief Data Officer
-
Software Analytics for your peace of mind
www.bitergia.com
@bitergia


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] icehouse summit session planning

2013-10-15 Thread Doug Hellmann
I will be reviewing the summit session proposals starting some time Friday
18th of October. If you have any topics you would like covered, please
submit the details by 10:00:00 AM UTC.

Thanks,
Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-15 Thread Robert Collins
On 16 October 2013 09:54, Vishvananda Ishaya vishvana...@gmail.com wrote:
 Hi Everyone,

 I've been following this conversation and weighing the different sides. This 
 is a tricky issue but I think it is important to decouple further and extend 
 our circle of trust.

 When nova started it was very easy to do feature development. As it has 
 matured the pace has slowed. This is expected and necessary, but we 
 periodically must make decoupling decisions or we will become mired in 
 overhead. We did this already with cinder and neutron, and we have discussed 
 doing this with virt drivers in the past.

 We have a large number of people attempting to contribute to small sections 
 of nova and getting frustrated with the process.  The perception of 
 developers is much more important than the actual numbers here. If people are 
 frustrated they are disincentivized to help and it hurts everyone. Suggesting 
 that these contributors need to learn all of nova and help with the review 
 queue is silly and makes us seem elitist. We should make it as easy as 
 possible for new contributors to help.


So I agree that perception is a big and significant thing here, but...

I think the fundamental issue is review latency:
http://russellbryant.net/openstack-stats/nova-openreviews.html

Stats since the last revision without -1 or -2 (ignoring jenkins):

Average wait time: 6 days, 17 hours, 24 minutes
1st quartile wait time: 0 days, 19 hours, 25 minutes
Median wait time: 3 days, 20 hours, 3 minutes
3rd quartile wait time: 7 days, 7 hours, 16 minutes

At the moment 25% of the time Nova reviews sit for over a week [btw we
probably want to not ignore Jenkins in that statistic, as I suspect
reviewers tend to ignore patches that Jenkins hated on].

Now, you may say 'hey, 7 days isn't terrible', but actually it is: if
a developer is producing (say) 2 patches a day, then after 7 calendar
days they may have as many as 10 patches in a vertical queue awaiting
review. Until the patch is reviewed for design-and-architectural
issues (ignoring style stuff which isn't disruptive), all the work
they are doing may need significant rework. Having to redo a week of
work is disruptive to the contributor.

The median wait time of 3 days would nearly halve the amount of rework
that can occur, which would be a significant benefit.

Nova has (over the last 30 days) -
http://russellbryant.net/openstack-stats/nova-reviewers-30.txt:
Total reviews: 3386
Total reviewers: 173

By my count 138 of the reviewers have done less than 20 reviews in
that 30 day period - that's less than one review a day. -
https://docs.google.com/spreadsheet/ccc?key=0AlLkXwa7a4bpdDNjd2gtTE1odjJRYjRVWjhhR2VKQVE&usp=sharing

So, 520 reviews from that 138 folk, but if they did 1 per weekday,
that would be 2760 reviews, bringing the total to 3386+2760-520=5626
reviews - or nearly *double* the review bandwidth.

Now, that won't get you more cores overnight, but that sustained
effort learning about the codebase will bring significant knowledge to
a wide set of folk - and that's what's needed to become core, and
increase the approval bandwidth in the team. I don't know exactly what
Russell looks for in managing the set of -core, but surely having
enough knowledge is part of it.

Now, I've nothing against having reviewers specialise in a part of the
code base, and making that official - but I think it must be paired
with still requiring substantial knowledge of the project's code and
architecture: just because something is changing in e.g.
nova/virt/disk/api.py doesn't make it irrelevant to specific virt
drivers, and the right way to use something in the rest of the code
base is also relevant to virt drivers, and knowing *whats there to
reuse* is also important.

So, I guess my proposal is, make a restricted +2 category of reviewer:
 - social agreement to +2 only in enumerated areas
 - still need widespread knowledge of nova's code
 - best demonstrated by sustained regular reviews of other changes
 - but granted after a shorter incubation period
 - migrates to full in time
 - privilege and responsibility lost by the same criteria as all other reviewers

Of course, if this has been tried, fine - but AFAICT 'contribute
equally' hasn't been tried yet.

-Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] HOT Software configuration proposal

2013-10-15 Thread Steve Baker
I've just written some proposals to address Heat's HOT software
configuration needs, and I'd like to use this thread to get some feedback:
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config

Please read the proposals and reply to the list with any comments or
suggestions.

We can spend some time discussing software configuration at tomorrow's
Heat meeting, but I fully expect we'll still be in the discussion phase
at Hong Kong.

cheers
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] stabilizing internal APIs

2013-10-15 Thread Morgan Fainberg
Hi Fellow Developers (and those who interface with keystone at the code
level),

I wanted to reach out to the ML and see what the opinions were about
starting to stabilize the internal APIs (wherever possible) for the
keystone project (I would also like to see similar work done in the other
fully integrated projects, but I am starting with a smaller audience).

I believe that (especially in the case of things like identity_api ->
assignment_api) we should start supporting the concept of Release -1
(current release, and previous release) of a given internal API.  While
this isn't feasible everywhere, maintaining at least an exception
being raised that indicates what should be called instead would be ideal
for the release the change occurred in.

This will significantly help any developers who have custom code that
relies on these APIs to find the locations of our new internal APIs.
 Perhaps a stub function/method replacement that simply raises a "go
use this new method/function" type exception would be sufficient and
would make porting code easier.
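
To make the idea concrete, here is a minimal sketch of the kind of stub I
mean (the names are illustrative, not actual keystone code):

class IdentityManager(object):
    # Hypothetical sketch -- not actual keystone code.
    def get_user_project_roles(self, user_id, tenant_id):
        # Stub kept for one release after the implementation moved;
        # it exists only to point callers at the new internal API.
        raise NotImplementedError(
            'identity_api.get_user_project_roles() has moved; call '
            'assignment_api.get_roles_for_user_and_project() instead')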

This would require at the start of each release a cleanup patchset that
removed the stub or old methods/functions that are now fully deprecated.

So with that, let's talk about this more in depth and see where it lands.  I
want to weed out any potential pitfalls before a concept like this makes it
anywhere beyond some neural misfires that came up in a passing discussion.
 It may just not be feasible/worth the effort in the grand scheme of things.

Cheers,
Morgan Fainberg

IRC: morganfainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-15 Thread Ben Nemec

On 2013-10-15 17:02, Robert Collins wrote:
On 16 October 2013 09:54, Vishvananda Ishaya vishvana...@gmail.com 
wrote:

Hi Everyone,

I've been following this conversation and weighing the different 
sides. This is a tricky issue but I think it is important to decouple 
further and extend our circle of trust.


When nova started it was very easy to do feature development. As it 
has matured the pace has slowed. This is expected and necessary, but 
we periodically must make decoupling decisions or we will become mired 
in overhead. We did this already with cinder and neutron, and we have 
discussed doing this with virt drivers in the past.


We have a large number of people attempting to contribute to small 
sections of nova and getting frustrated with the process.  The 
perception of developers is much more important than the actual 
numbers here. If people are frustrated they are disincentivized to 
help and it hurts everyone. Suggesting that these contributors need to 
learn all of nova and help with the review queue is silly and makes us 
seem elitist. We should make it as easy as possible for new 
contributors to help.



So I agree that perception is a big and significant thing here, but...

I think the fundamental issue is review latency:
http://russellbryant.net/openstack-stats/nova-openreviews.html

Stats since the last revision without -1 or -2 (ignoring jenkins):

Average wait time: 6 days, 17 hours, 24 minutes
1st quartile wait time: 0 days, 19 hours, 25 minutes
Median wait time: 3 days, 20 hours, 3 minutes
3rd quartile wait time: 7 days, 7 hours, 16 minutes

At the moment 25% of the time Nova reviews sit for over a week [btw we
probably want to not ignore Jenkins in that statistic, as I suspect
reviewers tend to ignore patches that Jenkins hated on].

Now, you may say 'hey, 7 days isn't terrible', but actually it is: if
a developer is producing (say) 2 patches a day, then after 7 calendar
days they may have as many as 10 patches in a vertical queue awaiting
review. Until the patch is reviewed for design-and-architectural
issues (ignoring style stuff which isn't disruptive), all the work
they are doing may need significant rework. Having to redo a week of
work is disruptive to the contributor.

The median wait time of 3 days would nearly halve the amount of rework
that can occur, which would be a significant benefit.

Nova has (over the last 30 days) -
http://russellbryant.net/openstack-stats/nova-reviewers-30.txt:
Total reviews: 3386
Total reviewers: 173

By my count 138 of the reviewers have done less than 20 reviews in
that 30 day period - that's less than one review a day. -
https://docs.google.com/spreadsheet/ccc?key=0AlLkXwa7a4bpdDNjd2gtTE1odjJRYjRVWjhhR2VKQVEusp=sharing

So, 520 reviews from that 138 folk, but if they did 1 per weekday,
that would be 2760 reviews, bringing the total to 3386+2760-520=5626
reviews - or nearly *double* the review bandwidth.

Now, that won't get you more cores overnight, but that sustained
effort learning about the codebase will bring significant knowledge to
a wide set of folk - and that's what's needed to become core, and
increase the approval bandwidth in the team. I don't know exactly what
Russell looks for in managing the set of -core, but surely having
enough knowledge is part of it.

Now, I've nothing against having reviewers specialise in a part of the
code base, and making that official - but I think it must be paired
with still requiring substantial knowledge of the project's code and
architecture: just because something is changing in e.g.
nova/virt/disk/api.py doesn't make it irrelevant to specific virt
drivers, and the right way to use something in the rest of the code
base is also relevant to virt drivers, and knowing *what's there to
reuse* is also important.

So, I guess my proposal is, make a restricted +2 category of reviewer:
 - social agreement to +2 only in enumerated areas
 - still need widespread knowledge of nova's code
 - best demonstrated by sustained regular reviews of other changes
 - but granted after a shorter incubation period
 - migrates to full in time
 - privilege and responsibility lost by the same criteria as all other 
reviewers


Of course, if this has been tried, fine - but AFAICT 'contribute
equally' hasn't been tried yet.


FWIW, this is similar to how things work in Oslo, except that the 
restricted +2 is done through the MAINTAINERS file rather than actual 
granting of +2 authority.  So Oslo still requires a regular core 
reviewer to approve, but it only requires one because a +1 from a 
maintainer qualifies as a +2 (e.g. maintainer +1 and core +2 -> 
approved).  For Nova that might not be quite as simple as just giving +2 
to driver maintainers, but it has the advantage of keeping the people 
with an overall view of the project in the loop.


Details on the Oslo system can be found here: 
https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS#L17


-Ben

___

Re: [openstack-dev] [Heat] Plugin packaging

2013-10-15 Thread Tim Smith
Hi Sam,

We wrote a Heat resource plugin as well [1]

The trick to getting heat to auto-discover a module is to install it to the
/usr/lib/heat (or possibly, but unlikely, /usr/lib64/heat) directory.

I have found that the setuptools incantation:

sudo python setup.py install --install-purelib=/usr/lib/heat

works well for source-installed heat.
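
For reference, the setup.py behind that incantation can be tiny. A minimal
sketch (the distribution name is a placeholder; 'module' stands in for your
plugin module):

from setuptools import setup

setup(
    name='my-heat-plugin',   # placeholder distribution name
    version='0.1',
    py_modules=['module'],   # the resource plugin module to install
)

Running the install command above then drops module.py into /usr/lib/heat,
where heat-engine finds it at startup.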

Hope that helps,

Cheers,
Tim

[1] https://github.com/gridcentric/cobalt-heat


On Mon, Oct 14, 2013 at 5:18 PM, Sam Alba sam.a...@gmail.com wrote:

 Hello,

 I am working on a Heat plugin that makes a new resource available in a
 template. It's working great and I will opensource it this week if I
 can get the packaging right...

 Right now, I am linking my module.py file in /usr/lib/heat to get it
 loaded when heat-engine starts. But according to the doc, I am
 supposed to be able to make the plugin discoverable by heat-engine if
 the module appears in the package heat.engine.plugins [1]

 I looked into the plugin_loader module in the Heat source code and it
 looks like it should work. However I was unable to get a proper Python
 package.

 Has anyone been able to make this packaging right for an external Heat
 plugin?

 Thanks in advance,


 [1]
 https://wiki.openstack.org/wiki/Heat/Plugins#Installation_and_Configuration

 --
 @sam_alba

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] stabilizing internal APIs

2013-10-15 Thread Dolph Mathews
On Tue, Oct 15, 2013 at 6:05 PM, Morgan Fainberg m...@metacloud.com wrote:
 Hi Fellow Developers (and those who interface with keystone at the code
 level),

 I wanted to reach out to the ML and see what the opinions were about
 starting to stabilize the internal APIs (wherever possible) for the keystone
 project (I would also like to see similar work done in the other fully
 integrated projects, but I am starting with a smaller audience).

 I believe that (especially in the case of things like identity_api ->
 assignment_api) we should start supporting the concept of Release -1
 (current release, and previous release) of a given internal API.  While this
 isn't feasible everywhere, where we can't maintain the old API, at least
 raising an exception that indicates what should be called instead would be
 ideal for the release the change occurred in.

 This will significantly help any developers who have custom code that relies
 on these APIs to find the locations of our new internal APIs.  Perhaps a
 stub function/method replacement that simply raises a "go use this new
 method/function" type exception would be sufficient and would make porting
 code easier.

With a more stringent end goal in mind (release - 1), I think
functional documentation would be a great step in the right direction,
and an easy ask for developers and reviewers.

The two most important capabilities are to provide warnings for when
something may be completely removed (release +1 or +2) and to point to
the new method/function (where possible).
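
As a sketch of the mechanics (illustrative only, not the code in that
review), a small decorator can carry both pieces of information:

import functools
import warnings


def deprecated(in_favor_of, remove_in='a future release'):
    """Mark a callable as deprecated and name its replacement."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            warnings.warn(
                '%s is deprecated and may be removed in %s; use %s '
                'instead' % (f.__name__, remove_in, in_favor_of),
                DeprecationWarning, stacklevel=2)
            return f(*args, **kwargs)
        return wrapper
    return decorator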

The immediate use case for this patch is to deprecate v2.0
controllers, but there's no reason why it couldn't be adapted to
driver interfaces, etc:

  https://review.openstack.org/#/c/50486/


 This would require at the start of each release a cleanup patchset that
 removed the stub or old methods/functions that are now fully deprecated.

 So with that, let's talk about this more in depth and see where it lands.  I want
 to weed out any potential pitfalls before a concept like this makes it
 anywhere beyond some neural misfires that came up in a passing discussion.
 It may just not be feasible/worth the effort in the grand scheme of things.

 Cheers,
 Morgan Fainberg

 IRC: morganfainberg

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-15 Thread Sean Dague

On 10/15/2013 04:54 PM, Vishvananda Ishaya wrote:

Hi Everyone,

I've been following this conversation and weighing the different sides. This is 
a tricky issue but I think it is important to decouple further and extend our 
circle of trust.

When nova started it was very easy to do feature development. As it has matured 
the pace has slowed. This is expected and necessary, but we periodically must 
make decoupling decisions or we will become mired in overhead. We did this 
already with cinder and neutron, and we have discussed doing this with virt 
drivers in the past.

We have a large number of people attempting to contribute to small sections of 
nova and getting frustrated with the process.  The perception of developers is 
much more important than the actual numbers here. If people are frustrated they 
are disincentivized to help and it hurts everyone. Suggesting that these 
contributors need to learn all of nova and help with the review queue is silly 
and makes us seem elitist. We should make it as easy as possible for new 
contributors to help.

I think our current model is breaking down at our current size and we need to 
adopt something more similar to the linux model when dealing with subsystems. 
The hyper-v team is the only one suggesting changes, but there have been 
similar concerns from the vmware team. I have no doubt that there are similar 
issues with the PowerVM, Xen, Docker, lxc and even kvm driver contributors.


The Linux kernel process works for a couple of reasons...

1) the subsystem maintainers have known each other for a solid decade 
(i.e. 3x the lifespan of the OpenStack project); over a history of 10 
years of people doing the right things, you build trust in their judgment.


*no one* in the Linux tree was given trust first, under the hope that it 
would work out. They had to earn it, hard, by doing community work, and 
not just playing in their corner of the world.


2) This 
http://www.wired.com/wiredenterprise/2012/06/torvalds-nvidia-linux/ is 
completely acceptable behavior. So when someone has bad code, they are 
flamed to within an inch of their life, repeatedly, until they never 
ever do that again. This is actually a time saving measure in code 
review. It's a lot faster to just call people idiots than to help them 
with line by line improvements in their code, 10, 20, 30, or 40 
iterations in gerrit.


We, as a community have decided, I think rightly, that #2 really isn't 
in our culture. But you can't start cherry picking parts of the Linux 
kernel community without considering how all the parts work together. 
The good and the bad are part of why the whole system works.



In my opinion, nova-core needs to be willing to trust the subsystem developers 
and let go of a little bit of control. I frankly don't see the drawbacks.


I actually see huge drawbacks. Culture matters. Having people active 
and willing to work on real core issues matters. The long term health of 
Nova matters.


As the QA PTL I can tell you that when you look at Nova vs. Cinder vs. 
Neutron, you'll see some very clear lines about how long it takes to get 
to the bottom of a race condition, and how many deep races are in each 
of them. I find this directly related to the stance each project has 
taken on whether it's socially acceptable to only work on your own 
vendor code. Nova's insistence up until this point that if you only play 
in your corner, you don't get the same attention is important incentive 
for people to integrate and work beyond just their boundaries. I think 
diluting this part of the culture would be hugely detrimental to Nova.


Let's take an example that came up today, the compute_diagnostics API. 
This is an area where we've left it completely to the virt drivers to 
vomit up a random dictionary of the day for debugging reasons, and 
stamped it as an API. With a model where we let virt driver authors go 
hide in a corner, that's never going to become an API with any kind of 
contract, and given how much effort we've spent on ensuring RPC 
versioning and message formats, the idea that we are exposing a public 
rest endpoint that's randomly fluctuating data based on date and 
underlying implementation, is a bit saddening.



I'm leaning towards giving control of the subtree to the team as the best 
option because it is simple and works with our current QA system. 
Alternatively, we could split out the driver into a nova subproject (2 below) 
or we could allow them to have a separate branch and do a trusted merge of all 
changes at the end of the cycle (similar to the linux model).

I hope we can come to a solution to the summit that makes all of our 
contributors want to participate more. I believe that giving people more 
responsibility inspires them to participate more fully.


I would like nothing more than all our contributors to participate more. 
But more has to mean caring about not only your stuff.


I was called out today in the hyper-v meeting because I had the 

[openstack-dev] [Heat] Meeting agenda for Wed Oct 16th at 2000 UTC

2013-10-15 Thread Steve Baker
The Heat team holds a weekly meeting in #openstack-meeting, see

https://wiki.openstack.org/wiki/Meetings/HeatAgenda for more details

The next meeting is on Wed Oct 16th at 2000 UTC

Current topics for discussion:
* Review last week's actions
* Havana release status
* https://wiki.openstack.org/wiki/ReleaseNotes/Havana
* Summit session proposals
* HOT software configuration
* Open discussion

If anyone has any other topic to discuss, please add to the wiki.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Bug triage for TripleO - all ATC's please read.

2013-10-15 Thread Robert Collins
Hey everyone, during the meeting today we observed that folk who are
active contributors were filing bugs without triaging them.

Triage is a team responsibility: we need to triage bugs that folk who
aren't familiar with TripleO file, but for bugs *we* file, it just
makes work for someone else on the team if they are filed without
doing triage. It's ok if you don't want to triage other people's bugs -
that's fine, the team as a whole can handle it, but please don't /add/
work for other people when only a few seconds' thought as you file the
bug will avoid that.

Triage specifically is:
 - assigning an importance
 - putting any obvious tags (e.g. 'baremetal') on it
 - setting status to 'triaged'.

As contributors, you can skip 'confirm' and go straight to triaged -
unless you believe the bug isn't real, in which case why are you
filing it? :)

The bug triage team for tripleo is https://launchpad.net/~tripleo -
and as all our code review etc is managed in Gerrit, there is no
reason why we can't accept pretty much anyone that is contributing to
TripleO into the team. I think the bar to being a bug supervisor
should be pretty low - e.g. show some familiarity with TripleO,
current program goals etc.

For reference: https://wiki.openstack.org/wiki/BugTriage - this isn't
100% what we're doing but it's not far off, with two key differences:
 - we don't use wishlist: things we'd like to do and things that we do
wrong are both defects; except for regressions the priority of the
work is not affected by whether we've delivered the thing or not,
and using wishlist just serves to flatten the priority of all
unimplemented things into one bucket: not helpful.
 - we use triaged, not just 'confirmed'.

Thanks!

-Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-15 Thread Monty Taylor


On 10/15/2013 08:36 PM, Sean Dague wrote:
 On 10/15/2013 04:54 PM, Vishvananda Ishaya wrote:
 Hi Everyone,

 I've been following this conversation and weighing the different
 sides. This is a tricky issue but I think it is important to decouple
 further and extend our circle of trust.

 When nova started it was very easy to do feature development. As it
 has matured the pace has slowed. This is expected and necessary, but
 we periodically must make decoupling decisions or we will become mired
 in overhead. We did this already with cinder and neutron, and we have
 discussed doing this with virt drivers in the past.

 We have a large number of people attempting to contribute to small
 sections of nova and getting frustrated with the process.  The
 perception of developers is much more important than the actual
 numbers here. If people are frustrated they are disincentivized to
 help and it hurts everyone. Suggesting that these contributors need to
 learn all of nova and help with the review queue is silly and makes us
 seem elitist. We should make it as easy as possible for new
 contributors to help.

 I think our current model is breaking down at our current size and we
 need to adopt something more similar to the linux model when dealing
 with subsystems. The hyper-v team is the only one suggesting changes,
 but there have been similar concerns from the vmware team. I have no
 doubt that there are similar issues with the PowerVM, Xen, Docker, lxc
 and even kvm driver contributors.
 
 The Linux kernel process works for a couple of reasons...
 
 1) the subsystem maintainers have known each other for a solid decade
  (i.e. 3x the lifespan of the OpenStack project); over a history of 10
  years of people doing the right things, you build trust in their judgment.
 
 *no one* in the Linux tree was given trust first, under the hope that it
 would work out. They had to earn it, hard, by doing community work, and
 not just playing in their corner of the world.
 
 2) This
 http://www.wired.com/wiredenterprise/2012/06/torvalds-nvidia-linux/ is
 completely acceptable behavior. So when someone has bad code, they are
 flamed to within an inch of their life, repeatedly, until they never
 ever do that again. This is actually a time saving measure in code
  review. It's a lot faster to just call people idiots than to help them
 with line by line improvements in their code, 10, 20, 30, or 40
 iterations in gerrit.
 
 We, as a community have decided, I think rightly, that #2 really isn't
 in our culture. But you can't start cherry picking parts of the Linux
 kernel community without considering how all the parts work together.
 The good and the bad are part of why the whole system works.
 
 In my opinion, nova-core needs to be willing to trust the subsystem
 developers and let go of a little bit of control. I frankly don't see
 the drawbacks.
 
  I actually see huge drawbacks. Culture matters. Having people active
  and willing to work on real core issues matters. The long term health of
  Nova matters.
 
 As the QA PTL I can tell you that when you look at Nova vs. Cinder vs.
 Neutron, you'll see some very clear lines about how long it takes to get
 to the bottom of a race condition, and how many deep races are in each
 of them. I find this directly related to the stance each project has
 taken on whether it's socially acceptable to only work on your own
 vendor code. Nova's insistence up until this point that if you only play
 in your corner, you don't get the same attention is important incentive
 for people to integrate and work beyond just their boundaries. I think
 diluting this part of the culture would be hugely detrimental to Nova.
 
 Let's take an example that came up today, the compute_diagnostics API.
 This is an area where we've left it completely to the virt drivers to
 vomit up a random dictionary of the day for debugging reasons, and
 stamped it as an API. With a model where we let virt driver authors go
 hide in a corner, that's never going to become an API with any kind of
 contract, and given how much effort we've spent on ensuring RPC
 versioning and message formats, the idea that we are exposing a public
 rest endpoint that's randomly fluctuating data based on date and
 underlying implementation, is a bit saddening.
 
 I'm leaning towards giving control of the subtree to the team as the
 best option because it is simple and works with our current QA system.
 Alternatively, we could split out the driver into a nova subproject (2
 below) or we could allow them to have a separate branch and do a
 trusted merge of all changes at the end of the cycle (similar to the
 linux model).

 I hope we can come to a solution to the summit that makes all of our
 contributors want to participate more. I believe that giving people
 more responsibility inspires them to participate more fully.
 
 I would like nothing more than all our contributors to participate more.
 But more has to mean caring about not only your stuff.
 
 

Re: [openstack-dev] [Metrics] Working on affiliations details: Activity Board

2013-10-15 Thread Stefano Maffulli
Thank you Daniel.

On 10/15/2013 11:07 AM, Daniel Izquierdo wrote:
 ps: not sure if it's a good idea to resend this to the general list
 without the topic [metrics]

nah, I don't think it's needed.

 http://activity.openstack.org/dash/browser/data/affs/openstack-community-affs.csv

This is useful also for other projects, like git-dm and stackalytics.

Cheers,
stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] stabilizing internal APIs

2013-10-15 Thread Doug Hellmann
On Tue, Oct 15, 2013 at 7:05 PM, Morgan Fainberg m...@metacloud.com wrote:

 Hi Fellow Developers (and those who interface with keystone at the code
 level),

 I wanted to reach out to the ML and see what the opinions were about
 starting to stabilize the internal APIs (wherever possible) for the
 keystone project (I would also like to see similar work done in the other
 fully integrated projects, but I am starting with a smaller audience).

  I believe that (especially in the case of things like identity_api ->
  assignment_api) we should start supporting the concept of Release -1
  (current release, and previous release) of a given internal API.  While
  this isn't feasible everywhere, where we can't maintain the old API, at
  least raising an exception that indicates what should be called instead
  would be ideal for the release the change occurred in.

 This will significantly help any developers who have custom code that
 relies on these APIs to find the locations of our new internal APIs.
   Perhaps a stub function/method replacement that simply raises a "go
  use this new method/function" type exception would be sufficient and
  would make porting code easier.

 This would require at the start of each release a cleanup patchset that
 removed the stub or old methods/functions that are now fully deprecated.

  So with that, let's talk about this more in depth and see where it lands.  I
 want to weed out any potential pitfalls before a concept like this makes it
 anywhere beyond some neural misfires that came up in a passing discussion.
  It may just not be feasible/worth the effort in the grand scheme of things.


Making updates easier would be nice, and the abstract base class work
should help with that. On the other hand, as a deployer who has had to
rewrite our custom integration a few times in the past 6 months or so, I
would also welcome some stability in the plugin APIs. I understand the need
to provide flexibility and updated features for new REST APIs, but I hope
we can find a way to migrate more smoothly or make newer features optional
in the plugins themselves.
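
For anyone who hasn't followed the ABC work, its shape is roughly the
following (a sketch with made-up method names, not the real interface):

import abc


class IdentityDriver(object):
    """Base class a deployer's custom driver would inherit from."""
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def get_user(self, user_id):
        """Required: every driver must implement this."""

    def list_users_in_group(self, group_id):
        # Optional: a default implementation lets older plugins keep
        # working by signalling 'unsupported' rather than breaking.
        raise NotImplementedError()

New required methods are added as abstract, while new features get optional
defaults, which should make the plugin APIs less painful to track across
releases.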

DreamHost will have several developers at the summit; is there a session to
talk about approaches for this that we should make sure to attend?

Doug



 Cheers,
 Morgan Fainberg

 IRC: morganfainberg

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Plugin packaging

2013-10-15 Thread Angus Salkeld

On 15/10/13 16:05 -0700, Tim Smith wrote:

Hi Sam,

We wrote a Heat resource plugin as well [1]

The trick to getting heat to auto-discover a module is to install it to the
/usr/lib/heat (or possibly, but unlikely, /usr/lib64/heat) directory.

I have found that the setuptools incantation:

sudo python setup.py install --install-purelib=/usr/lib/heat


Note: you can just add your own directories to the heat.conf
https://github.com/openstack/heat/blob/master/etc/heat/heat.conf.sample#L22
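
(For the archives: if I'm reading that sample right, the option is
plugin_dirs, so something like the following in heat.conf should work --
the /opt path is just an example:)

[DEFAULT]
plugin_dirs = /usr/lib64/heat,/usr/lib/heat,/opt/example/heat-plugins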

-Angus



works well for source-installed heat.

Hope that helps,

Cheers,
Tim

[1] https://github.com/gridcentric/cobalt-heat


On Mon, Oct 14, 2013 at 5:18 PM, Sam Alba sam.a...@gmail.com wrote:


Hello,

I am working on a Heat plugin that makes a new resource available in a
template. It's working great and I will opensource it this week if I
can get the packaging right...

Right now, I am linking my module.py file in /usr/lib/heat to get it
loaded when heat-engine starts. But according to the doc, I am
supposed to be able to make the plugin discoverable by heat-engine if
the module appears in the package heat.engine.plugins [1]

I looked into the plugin_loader module in the Heat source code and it
looks like it should work. However I was unable to get a proper Python
package.

Has anyone been able to make this packaging right for an external Heat
plugin?

Thanks in advance,


[1]
https://wiki.openstack.org/wiki/Heat/Plugins#Installation_and_Configuration

--
@sam_alba

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Reviews: tweaking priorities, and continual-deployment approvals

2013-10-15 Thread Robert Collins
Hi, during the TripleO meeting today we had two distinct discussions
about reviews.

Firstly, our stats have been slipping:
http://russellbryant.net/openstack-stats/tripleo-openreviews.html


Stats since the last revision without -1 or -2 (ignoring jenkins):

Average wait time: 2 days, 16 hours, 18 minutes
1st quartile wait time: 0 days, 11 hours, 1 minutes
Median wait time: 1 days, 9 hours, 37 minutes
3rd quartile wait time: 5 days, 1 hours, 50 minutes


Longest waiting reviews (based on oldest rev without nack, ignoring jenkins):

7 days, 16 hours, 40 minutes https://review.openstack.org/50010 (Fix a
couple of default config values)
7 days, 4 hours, 21 minutes https://review.openstack.org/50199
(Utilizie pypi-mirror from tripleo-cd)
6 days, 2 hours, 28 minutes https://review.openstack.org/50431 (Make
pypi-mirror more secure and robust)
6 days, 1 hours, 36 minutes https://review.openstack.org/50750 (Remove
obsolete redhat-eventlet.patch)
5 days, 1 hours, 50 minutes https://review.openstack.org/51032
(Updated from global requirements)

This is holding everyone up, so we want to fix it. When we discussed
it we found that there were two distinct issues:
 A - not enough cross-project reviews
 B - folk working on the kanban TripleO Continuous deployment stuff
had backed off on reviews - and they are among the most prolific
reviewers.

A: Cross project reviews are super important: even if you are only
really interested in (say) os-*-config, it's hard to think about
things in context unless you're also up to date with changing code
(and the design of code) in the rest of TripleO. *It doesn't matter*
if you aren't confident enough to do a +2 - the only way you get that
confidence is by reviewing and reading code so you can come up to
speed, and the only way we increase our team bandwidth is through folk
doing that in a consistent fashion.

So please, whether your focus is Python APIs, UI, or system plumbing
in the heart of diskimage-builder, please take the time to review
systematically across all the projects:
https://wiki.openstack.org/wiki/TripleO#Review_team

B: While the progress we're making on delivering a production cloud is
hugely cool, we need to keep our other responsibilities in check -
https://wiki.openstack.org/wiki/TripleO#Team_responsibilities - is a
new section I've added based on the meeting. Even folk working on the
pointy end of the continuous delivery story need to keep pulling on
the common responsibilities. We said in the meeting that we might
triage it as follows:
 - review reviews for firedrills first. (Critical bugs, things
breaking the CD cloud)
 - review reviews for the CD cloud
 - then all reviews for the program
with a goal of driving them all to 0: if we're on top of things, that
should never be a burden. If we run out of time, we'll have unblocked
critical things first, unblocked folk working on the pointy edge
second - bottlenecks are important to unblock. We'll review how this
looks next week.

# The second thing

The second issue was raised during the retrospective (which will be up
at https://wiki.openstack.org/wiki/TripleO/TripleOCloud/MVP1and2Retrospective
a little later today). With a production environment, we want to ensure
that only released code is running on it - running something from a
pending review is something to avoid. But, the only way we've been
able to effectively pull things together has been to run ahead of
reviews :(. A big chunk of that is due to a lack of active +2
reviewers collaborating with the CD cloud folk - we would get a -core
putting a patch up, and a +2, but no second +2. We decided in the
retrospective to try permitting -core to +2 their own patch if it's
straightforward and part of the current CD story [or a firedrill]. We
set an explicit 'but be sure you tested this first' criterion on that:
so folk might try it locally, or even monkey patch it onto the cloud
for one run to check it really works [until we have gating on the CD
story/ies].

Cheers,
Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Retrospective up

2013-10-15 Thread Robert Collins
https://wiki.openstack.org/wiki/TripleO/TripleOCloud/MVP1and2Retrospective

If you're a TripleO contributor, please take the time to eyeball it - thanks!

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] stabilizing internal APIs

2013-10-15 Thread Morgan Fainberg
On Tue, Oct 15, 2013 at 5:05 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:


 Making updates easier would be nice, and the abstract base class work
 should help with that. On the other hand, as a deployer who has had to
 rewrite our custom integration a few times in the past 6 months or so, I
 would also welcome some stability in the plugin APIs. I understand the need
 to provide flexibility and updated features for new REST APIs, but I hope
 we can find a way to migrate more smoothly or make newer features optional
 in the plugins themselves.

 Agreed, the ABC changes that are slowly making their way in will most
assuredly help some.


 DreamHost will have several developers at the summit; is there a session
 to talk about approaches for this that we should make sure to attend?

 I do not believe there is currently a session slated for anything like
this.  You could propose a session for this (or I could); obviously we
would need enough interest to make it worth committing a whole session to.
 Maybe piggy-back this one onto another session already proposed?  Maybe
this should be a broader-than-keystone-only topic?
--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] stabilizing internal APIs

2013-10-15 Thread Robert Collins
I think everyone will have opinions :). I suggest proving it in
keystone and then bringing in a larger audience; we've had discussions
about graceful evolution for some time now but little concrete action
- I think because we're too ambitious, and the costs that result are
perceived to be too high.

+1 for doing it!

-Rob

On 16 October 2013 14:00, Morgan Fainberg m...@metacloud.com wrote:


 On Tue, Oct 15, 2013 at 5:05 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:


 Making updates easier would be nice, and the abstract base class work
 should help with that. On the other hand, as a deployer who has had to
 rewrite our custom integration a few times in the past 6 months or so, I
 would also welcome some stability in the plugin APIs. I understand the need
 to provide flexibility and updated features for new REST APIs, but I hope we
 can find a way to migrate more smoothly or make newer features optional in
 the plugins themselves.

 Agreed, the ABC changes that are slowly making their way in will most
 assuredly help some.


 DreamHost will have several developers at the summit; is there a session
 to talk about approaches for this that we should make sure to attend?

 I do not believe there is currently a session slated for anything like this.
 You could propose a session for this (or I could); obviously we would need
 enough interest to make it worth committing a whole session to.  Maybe
 piggy-back this one onto another session already proposed?  Maybe this
 should be a broader-than-keystone-only topic?
 --Morgan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-15 Thread Mike Spreitzer
Steve Baker sba...@redhat.com wrote on 10/15/2013 06:48:53 PM:

 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org, 
 Date: 10/15/2013 06:51 PM
 Subject: [openstack-dev] [Heat] HOT Software configuration proposal
 
 I've just written some proposals to address Heat's HOT software 
 configuration needs, and I'd like to use this thread to get some 
feedback:
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config

In that proposal, each component can use a different configuration 
management tool.

 
https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config


In this proposal, I get the idea that it is intended that each Compute 
instance run only one configuration management tool.  At least, most of 
the text discusses the support (e.g., the idea that each CM tool supplies 
userdata to bootstrap itself) in terms appropriate for a single CM tool 
per instance; also, there is no discussion of combining userdata from 
several CM tools.

I agree with the separation of concerns issues that have been raised.  I 
think all this software config stuff can be handled by a pre-processor 
that takes an extended template in and outputs a plain template that can 
be consumed by today's heat engine (no extension to the heat engine 
necessary).

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-15 Thread Steve Baker
On 10/16/2013 02:21 PM, Mike Spreitzer wrote:
 Steve Baker sba...@redhat.com wrote on 10/15/2013 06:48:53 PM:

  From: Steve Baker sba...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 10/15/2013 06:51 PM
  Subject: [openstack-dev] [Heat] HOT Software configuration proposal
 
  I've just written some proposals to address Heat's HOT software
  configuration needs, and I'd like to use this thread to get some
 feedback:
  https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config

 In that proposal, each component can use a different configuration
 management tool.

 
 https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config

 In this proposal, I get the idea that it is intended that each Compute
 instance run only one configuration management tool.  At least, most
 of the text discusses the support (e.g., the idea that each CM tool
 supplies userdata to bootstrap itself) in terms appropriate for a
 single CM tool per instance; also, there is no discussion of combining
 userdata from several CM tools.

I think users should be told that it is possible but foolhardy to mix CM
tools in a single server. The exception to this *might* be allowing one
Heat::CloudInit (or Heat::SoftwareConfig) component to install the other
CM tool on a pristine image before running the other components that use
that tool.

Initially I thought that each component type should be able to ensure
that its CM tool is installed before invoking that tool. The realities
of doing this the right way all the time are difficult though [1], so I've
backed away from this for now. It can always be added as an enhancement
later. In the meantime users are free to build images that have all
required prerequisites, or install them with a Heat::CloudInit (or
Heat::SoftwareConfig) component on boot.

 I agree with the separation of concerns issues that have been raised.
  I think all this software config stuff can be handled by a
 pre-processor that takes an extended template in and outputs a plain
 template that can be consumed by today's heat engine (no extension to
 the heat engine necessary).

A stated goal of the heat template format is that it be fit-for-use by
humans. Pre-processing is a perfectly valid technique for other services
that use heat and we encourage that. However if the pre-processing is to
work around usability issues in the template format I would rather focus
on fixing those issues.

[1] Currently the most reliable way of installing heat-cfntools on any
arbitrary pristine image is in a venv
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-15 Thread Dan Smith
 The last thing that OpenStack needs ANY more help with is velocity. I
 mean, let's be serious - we land WAY more patches in a day than is
 even close to sane.

Thanks for saying this -- it doesn't get said enough. I find it totally
amazing that we're merging 34 changes in a day (yesterday) which is like
170 per work week (just on nova). More amazing is that we're talking
about how to make it faster all the time. It's definitely the fastest
moving extremely complex thing I can recall working on.

 We MUST continue to be vigilant in getting people to care about more
 than their specific part, or else this big complex mess is going to come
 crashing down around us.

I totally agree.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How does the libvirt domain XML get created?

2013-10-15 Thread Clark Laughlin
Hello everyone,

I'm trying to figure out how <video><model type='cirrus' vram='9216'
heads='1'/></video> makes it into the libvirt domain XML when creating a new
instance.  I see where much of the content of the XML is created by the libvirt 
driver under nova/nova/virt/libvirt.  However, I am unable to find where this 
makes it in there.  Could someone point me in the right direction?

Thank you,
Clark L
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How does the libvirt domain XML get created?

2013-10-15 Thread Chuanchang Jia
Hi Clark,
I'm not familiar with nova. As usual, you can use 'virsh dumpxml <domain>'
to show a domain's XML configuration (I assume
you have installed the libvirt-client rpm package), or if you successfully
defined a domain then you may also find its XML
configuration under /etc/libvirt/qemu/<domain>.xml. I hope this is helpful
for you.

Good Luck!
Alex



2013/10/16 Clark Laughlin clark.laugh...@linaro.org

 Hello everyone,

  I'm trying to figure out how <video><model type='cirrus' vram='9216'
  heads='1'/></video> makes it into the libvirt domain XML when creating a
 new instance.  I see where much of the content of the XML is created by the
 libvirt driver under nova/nova/virt/libvirt.  However, I am unable to find
 where this makes it in there.  Could someone point me in the right
 direction?

 Thank you,
 Clark L
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Update: Nova API List for Missing Tempest Tests

2013-10-15 Thread Masayuki Igawa
Hi, Matthew
Thank you very much for your reply.

 I'm glad that others have an interest in this topic. I've started an etherpad
 for that discussion here:
 
 https://etherpad.openstack.org/p/icehouse-summit-qa-coverage-tooling
 
 Right now it's a very rough outline, without much on it. I'm planning to add
 more later. But, feel free to add any discussion points or information that 
 you
 think needs to be a part of the session.

Thanks. I'll add discussion points and information.

By the way, I think there is another way to increase test coverage. That is 
 Test Driven Development by using Tempest.
  https://etherpad.openstack.org/p/icehouse-summit-qa-tdd-by-tempest

I'd like to propose this topic for the Icehouse design summit (not proposed yet).
Currently we implement the Tempest test code after implementing the production 
code in Nova, Cinder, and so on. I think this flow is one of the reasons for 
missing Tempest tests. We could increase Tempest's test coverage if we wrote 
the Tempest code first.

Best Regards,
-- Masayuki Igawa


On 2013/10/15 23:24:56 +0900, Matthew Treinish wrote:

 On Tue, Oct 15, 2013 at 06:25:28AM +, Masayuki Igawa wrote:
  Hi, 
  
  First, thank you to an anonymous for updating this list!
  - GET /{project_id}/servers/:server_id/diagnostics
  
  And, I have updated: Nova API List for Missing Tempest Tests.

  https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc
  
  The summary of this list:
  different count from
  Tested or not # of APIs ratio   the last time
  ---
  Tested API  124  49.6%  +2
  Not Tested API   66  26.4%  -2
  Not Need to Test(*1) 60  24.0%   0
  ---
  Total(*2):  250 100.0%   0
  (*1) Because they are deprecated APIs such as nova-network and volume.
  (*2) not included v3 APIs
  
  The tempest version is:
   commit f55f4e54ceab7c6a4d330f92c8059e46233e3560
   Merge: 86ab238 062e30a
   Author: Jenkins jenk...@review.openstack.org
   Date:   Mon Oct 14 15:55:59 2013 +
  
  By the way, I saw a design summit proposal related to this topic(*3). I 
  think
  this information should be generated automatically. So I'd like to talk 
  about
  this topic at the summit session.
  (*3) Coverage analysis tooling: http://summit.openstack.org/cfp/details/171
 
 I'm glad that others have an interest in this topic. I've started an etherpad
 for that discussion here:
 
 https://etherpad.openstack.org/p/icehouse-summit-qa-coverage-tooling
 
 Right now it's a very rough outline, without much on it. I'm planning to add
 more later. But, feel free to add any discussion points or information that 
 you
 think needs to be a part of the session.
 
 -Matt Treinish
 
  
  This information would be useful for creating Tempest tests.
  Any comments/questions/suggestions are welcome.
  
  Best Regards,
  -- Masayuki Igawa
  
  
   Hi,
   
   # I'm sorry for this resending because my last mail has unnecessary 
   messages.
   
   
   I have updated: Nova API List for Missing Tempest Tests.

   https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc
   
   The summary of this list:
 different count from
   Tested or not# of APIsratio   the last time
   ---
   Tested API122  48.8%  +5
   Not Tested API 68  27.2%  -5
   Not Need to Test(*1)   60  24.0%   0
   ---
   Total(*2):250 100.0%   0
   
   (*1) Because they are deprecated APIs such as nova-network and volume.
   (*2) not included v3 APIs
   
   I hope this information would be helpful for creating Tempest tests.
   Any comments and questions are welcome.
   
   Best Regards,
   -- Masayuki Igawa
   
   
Hi, Tempest developers

I have made:
 Nova API List for Missing Tempest Tests.
 
https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc

This list shows what we should test. That is:
 * Nova has 250 APIs(not include v3 APIs).
 * 117 APIs are executed(maybe tested).
 * 73 APIs are not executed.
 * 60 APIs are not executed. But they maybe not need to test.
 - Because they are deprecated APIs such as nova-network and 
volume.

So I think we need more tempest test cases.
If this idea is acceptable, can you put your name to 'assignee' at your 
favorites,
and implement tempest tests.

Any comments are welcome.

Additional information:
 I made this API list with modification of nova's code that based on 
 https://review.openstack.org/#/c/25882/ (Abandoned).

 

Re: [openstack-dev] How does the libvirt domain XML get created?

2013-10-15 Thread Michael Still
On Wed, Oct 16, 2013 at 2:08 PM, Clark Laughlin
clark.laugh...@linaro.org wrote:
 Hello everyone,

  I'm trying to figure out how <video><model type='cirrus' vram='9216'
  heads='1'/></video> makes it into the libvirt domain XML when creating a new
 instance.  I see where much of the content of the XML is created by the 
 libvirt driver under nova/nova/virt/libvirt.  However, I am unable to find 
 where this makes it in there.  Could someone point me in the right direction?

This XML is created by helper classes in nova/virt/libvirt/config.py.
I'd look for XML which matches what you're looking for there and then
look for the class which correlates to your XML in driver.py.
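
The classes there follow a common pattern -- roughly this sketch
(hypothetical; if no such class exists for the video element, libvirt
itself may be supplying the default device):

from lxml import etree


class LibvirtConfigGuestVideo(object):
    # A sketch of the config.py style only -- not actual nova code.
    def __init__(self):
        self.type = 'cirrus'
        self.vram = 9216
        self.heads = 1

    def format_dom(self):
        dev = etree.Element('video')
        model = etree.SubElement(dev, 'model')
        model.set('type', self.type)
        model.set('vram', str(self.vram))
        model.set('heads', str(self.heads))
        return dev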

Cheers,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How does the libvirt domain XML get created?

2013-10-15 Thread Clark Laughlin

I can see in config.py where VNC gets added (the graphics element), but I 
can't find any place where a video element gets added.  In fact, I've grepped 
the entire nova tree for cirrus or video and can only find it here:

./nova/tests/virt/libvirt/fakelibvirt.py

- Clark L

On Oct 15, 2013, at 10:58 PM, Michael Still mi...@stillhq.com wrote:

 On Wed, Oct 16, 2013 at 2:08 PM, Clark Laughlin
 clark.laugh...@linaro.org wrote:
 Hello everyone,
 
  I'm trying to figure out how <video><model type='cirrus' vram='9216'
  heads='1'/></video> makes it into the libvirt domain XML when creating a
 new instance.  I see where much of the content of the XML is created by the 
 libvirt driver under nova/nova/virt/libvirt.  However, I am unable to find 
 where this makes it in there.  Could someone point me in the right direction?
 
 This XML is created by helper classes in nova/virt/libvirt/config.py.
 I'd look for XML which matches what you're looking for there and then
 look for the class which correlates to your XML in driver.py.
 
 Cheers,
 Michael
 
 -- 
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [novaclient]should administrator can see all servers of all tenants by default?

2013-10-15 Thread Christopher Yeoh
On Tue, Oct 15, 2013 at 9:34 PM, Lingxian Kong anlin.k...@gmail.com wrote:

 Have there been any changes in V3 now?



I'm currently having a look at it. There's a bit of other general breakage
with all_tenants in there that should be fixed first.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-15 Thread Mike Spreitzer
The threading in the archive includes this discussion under the "HOT 
Software orchestration proposal for workflows" heading, and the overall 
ordering in the archive looks very mixed up to me.  I am going to reply 
here, hoping that the new subject line will be subject to less strange 
ordering in the archive; this is really a continuation of the overall 
discussion, not just Steve Baker's proposal.

What is the difference between what today's heat engine does and a 
workflow?  I am interested to hear what you experts think, I hope it will 
be clarifying.  I presume the answers will touch on things like error 
handling, state tracking, and updates.

I see the essence of Steve Baker's proposal to be that of doing the 
minimal mods necessary to enable the heat engine to orchestrate software 
components.  The observation is that not much has to change, since the 
heat engine is already in the business of calling out to things and 
passing values around.  I see a little bit of a difference, maybe because 
I am too new to already know why it is not an issue.  In today's heat 
engine, the calls are made to fixed services to do CRUD operations on 
virtual resources in the cloud, using credentials managed implicitly; the 
services have fixed endpoints, even as the virtual resources come and go. 
Software components have no fixed service endpoints; the service endpoints 
come and go as the host Compute instances come and go; I did not notice a 
story about authorization for the software component calls.

Interestingly, Steve Baker's proposal reminds me a lot of Chef.  If you 
just rename Steve's "component" to "recipe", the alignment gets real 
obvious; I am sure it is no accident.  I am not saying it is isomorphic 
--- clearly Steve Baker's proposal has more going on, with its cross-VM 
data dependencies and synchronization.  But let me emphasize that we can 
start to see a different way of thinking here.  Rather than focusing on a 
centrally-run workflow, think of each VM as independently running its own 
series of recipes --- with the recipes invocations now able to communicate 
and synchronize between VMs as well as within VMs.

Steve Baker's proposal uses two forms of communication and synchronization 
between VMs: (1) get_attr and (2) wait conditions and handles (sugar 
coated or not).  The implementation of (1) is part of the way the heat 
engine invokes components, the implementation of (2) is independent of the 
heat engine.

Using the heat engine for orchestration is limited to the kinds of logic 
that the heat engine can run.  This may be one reason people are 
suggesting using a general workflow engine.  However, the recipes 
(components) running in the VMs can do general computation; if we allow 
general cross-VM communication and synchronization as part of those 
general computations, we clearly have a more expressive system than the 
heat engine.

Of course, a general distributed computation can get itself into trouble 
(e.g., deadlock, livelock).  If we structure that computation as a set of 
components (recipe invocations) with a DAG of dependencies then we avoid 
those troubles.  And the kind of orchestration that the heat engine does 
is sufficient to invoke such components.
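
To make that concrete, the invocation logic required is no more than a walk 
of the DAG in dependency order -- a toy sketch:

def run_dag(components, deps):
    # components: name -> callable; deps: name -> set of prerequisite
    # names.  Each component runs once all its dependencies are done.
    done = set()
    while len(done) < len(components):
        ready = [n for n in components
                 if n not in done and deps.get(n, set()) <= done]
        if not ready:
            raise ValueError('dependency cycle detected')
        for n in ready:
            components[n]()
            done.add(n)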

Structuring software orchestration as a DAG of components also gives us a 
leg up on UPDATE.  Rather than asking the user to write a workflow for 
each different update, or a general meta-workflow that does introspection 
to decide what work needs to be done, we ask the thing that invokes the 
components to run through the components in the way that today's heat 
engine runs through resources for an UPDATE.

Lakshmi has been working on a software orchestration technique that is 
also centered on the idea of a DAG of components.  It was created before 
we got real interested in Heat.  It is implemented as a pre-processor that 
runs upstream of where today's heat engine goes, emitting fairly minimal 
userdata needed for bootstrapping.  The dependencies between recipe 
invocations are handled very smoothly in the recipes, which are written in 
Chef.  No hackery is needed in the recipe text at all (thanks to Ruby 
metaprogramming); what is needed is only an additional declaration of what 
are the cross-VM inputs and outputs of each recipe.  The propagation of 
data and synchronization between VMs is handled, under the covers, via 
simple usage of ZooKeeper (other implementations are reasonable too).  But 
the idea of heat-independent propagation of data and synchronization among 
a DAG of components is not limited to chef-based components, and can 
appear fairly smooth in any recipe language.

A value of making software orchestration independent of today's heat 
engine is that it enables the four-stage pipeline that I have sketched at 
https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U 
and whose ordering of functionality has been experimentally vetted with 
some non-trivial examples.  The first big one 

Re: [openstack-dev] [Openstack] problem fetching volume details in snapshot creation

2013-10-15 Thread Swapnil Kulkarni
Hi DInakar,

Please use volume_name, volume_size, etc. These are volume-related
parameters frequently used in snapshots. You can always do *dir* to get any
specific values you need.
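
For example (a sketch based on that suggestion -- unlike the 'volume'
relationship, these fields should not trigger a lazy load on a detached
instance):

def create_snapshot(self, snapshot_ref):
    # Read the volume details exposed directly on the snapshot record
    # instead of traversing the 'volume' relationship, which needs an
    # active database session.
    volume_name = snapshot_ref['volume_name']
    volume_size = snapshot_ref['volume_size']
    # ... create the backend snapshot using these values ...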

Best Regards,
Swapnil


On Wed, Oct 16, 2013 at 10:11 AM, Dinakar Gorti Maruti 
dinakar...@cloudbyte.co wrote:

 hi,
I am implementing a driver for cinder services and I am stuck with a
 problem in snapshot creation: I need to fetch the volume details
 while creating a snapshot.
 I am using OpenStack Grizzly

 I am trying to use this line of code

 def create_snapshot(self, snapshot_ref):
   ..

   volume = snapshot_ref['volume']

   ...


 Error:

 Traceback (most recent call last):
   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 430, in _process_data
     rval = self.proxy.dispatch(ctxt, version, method, **args)
   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 133, in dispatch
     return getattr(proxyobj, method)(ctxt, **kwargs)
   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 564, in create_snapshot
     {'status': 'error'})
   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
     self.gen.next()
   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 554, in create_snapshot
     model_update = self.driver.create_snapshot(snapshot_ref, volume_name)
   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/cloudbyte.py", line 195, in create_snapshot
     LOG.debug(_("phani volume object in snapshot : %s"), snapshot_ref['volume'])
   File "/usr/lib/python2.6/site-packages/cinder/db/sqlalchemy/models.py", line 74, in __getitem__
     return getattr(self, key)
   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/attributes.py", line 168, in __get__
     return self.impl.get(instance_state(instance), dict_)
   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/attributes.py", line 453, in get
     value = self.callable_(state, passive)
   File "/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/strategies.py", line 481, in _load_for_state
     (mapperutil.state_str(state), self.key)

 DetachedInstanceError: Parent instance <Snapshot at 0x30a0cd0> is not bound
 to a Session; lazy load operation of attribute 'volume' cannot proceed

 Hoping for a solution.

 Thanks,
 Dinakar

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Team Meeting Reminder

2013-10-15 Thread Mark Washenberger
Hi Glance folks,

There will be a team meeting this week on Thursday at 20:00 UTC in
#openstack-meeting-alt. That is, in your timezone:
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meeting&iso=20131017T20&ah=1

The agenda is posted here
https://etherpad.openstack.org/p/glance-team-meeting-agenda . Since it's an
etherpad, feel free to add items you'd like to discuss.

Cheers, and thanks for all your great work on getting out RC2!

-markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] problem fetching volume details in snapshot creation

2013-10-15 Thread Dinakar Gorti Maruti
Hi Swapnil,

I had tried it before; it returns the string "volume" followed by the
volume id.

Thanks
Dinakar


On Wed, Oct 16, 2013 at 10:31 AM, Swapnil Kulkarni 
swapnilkulkarni2...@gmail.com wrote:

 Hi Dinakar,

 Please use volume_name, volume_size, etc. These are volume-related
 parameters frequently used in snapshots. You can always do *dir()* to get
 any specific values you need.

 [rest of quoted thread snipped]



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-15 Thread Alessandro Pilotti



On Oct 16, 2013, at 02:36, Sean Dague s...@dague.net wrote:

On 10/15/2013 04:54 PM, Vishvananda Ishaya wrote:
Hi Everyone,

I've been following this conversation and weighing the different sides. This is 
a tricky issue but I think it is important to decouple further and extend our 
circle of trust.

When nova started it was very easy to do feature development. As it has matured 
the pace has slowed. This is expected and necessary, but we periodically must 
make decoupling decisions or we will become mired in overhead. We did this 
already with cinder and neutron, and we have discussed doing this with virt 
drivers in the past.

We have a large number of people attempting to contribute to small sections of 
nova and getting frustrated with the process.  The perception of developers is 
much more important than the actual numbers here. If people are frustrated they 
are disincentivized to help and it hurts everyone. Suggesting that these 
contributors need to learn all of nova and help with the review queue is silly 
and makes us seem elitist. We should make it as easy as possible for new 
contributors to help.

I think our current model is breaking down at our current size and we need to 
adopt something more similar to the linux model when dealing with subsystems. 
The hyper-v team is the only one suggesting changes, but there have been 
similar concerns from the vmware team. I have no doubt that there are similar 
issues with the PowerVM, Xen, Docker, lxc and even kvm driver contributors.

The Linux kernel process works for a couple of reasons...

1) the subsystem maintainers have known each other for a solid decade (i.e. 3x 
the lifespan of the OpenStack project); over a history of 10 years of people 
doing the right things, you build trust in their judgment.

*no one* in the Linux tree was given trust first, under the hope that it would 
work out. They had to earn it, hard, by doing community work, and not just 
playing in their corner of the world.

2) This http://www.wired.com/wiredenterprise/2012/06/torvalds-nvidia-linux/ is 
completely acceptable behavior. So when someone has bad code, they are flamed 
to within an inch of their life, repeatedly, until they never ever do that 
again. This is actually a time saving measure in code review. It's a lot faster 
to just call people idiots then to help them with line by line improvements in 
their code, 10, 20, 30, or 40 iterations in gerrit.

We, as a community have decided, I think rightly, that #2 really isn't in our 
culture. But you can't start cherry picking parts of the Linux kernel community 
without considering how all the parts work together. The good and the bad are 
part of why the whole system works.

In my opinion, nova-core needs to be willing to trust the subsystem developers 
and let go of a little bit of control. I frankly don't see the drawbacks.

I actually see huge drawbacks. Culture matters. Having people active and 
willing to work on real core issues matters. The long term health of Nova 
matters.

As the QA PTL I can tell you that when you look at Nova vs. Cinder vs. Neutron, 
you'll see some very clear lines about how long it takes to get to the bottom 
of a race condition, and how many deep races are in each of them. I find this 
directly related to the stance each project has taken on whether it's socially 
acceptable to only work on your own vendor code. Nova's insistence up until 
this point that if you only play in your corner, you don't get the same 
attention is important incentive for people to integrate and work beyond just 
their boundaries. I think diluting this part of the culture would be hugely 
detrimental to Nova.

Let's take an example that came up today, the compute_diagnostics API. This is 
an area where we've left it completely to the virt drivers to vomit up a random 
dictionary of the day for debugging reasons, and stamped it as an API. With a 
model where we let virt driver authors go hide in a corner, that's never going 
to become an API with any kind of contract, and given how much effort we've 
spent on ensuring RPC versioning and message formats, the idea that we are 
exposing a public REST endpoint that returns randomly fluctuating data based on 
the day and the underlying implementation is a bit saddening.

I'm leaning towards giving control of the subtree to the team as the best 
option because it is simple and works with our current QA system. 
Alternatively, we could split out the driver into a nova subproject (2 below) 
or we could allow them to have a separate branch and do a trusted merge of all 
changes at the end of the cycle (similar to the linux model).

I hope we can come to a solution at the summit that makes all of our 
contributors want to participate more. I believe that giving people more 
responsibility inspires them to participate more fully.

I would like nothing more than all our contributors to participate more. But 
more has to mean caring about not only your 

Re: [openstack-dev] [Openstack] problem fetching volume details in snapshot creation

2013-10-15 Thread Dinakar Gorti Maruti
The display_name parameter of the volume.


On Wed, Oct 16, 2013 at 10:55 AM, Swapnil Kulkarni 
swapnilkulkarni2...@gmail.com wrote:

 What exact parameters are you looking for?



 [rest of quoted thread snipped]





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] problem fetching volume details in snapshot creation

2013-10-15 Thread Swapnil Kulkarni
I think snapshot['volume_name'] is what you are looking for; please see the
link below for reference.

https://github.com/openstack/cinder/blob/stable/grizzly/cinder/volume/drivers/lvm.py#L233

Best Regards,
Swapnil

On Wed, Oct 16, 2013 at 10:57 AM, Dinakar Gorti Maruti 
dinakar...@cloudbyte.co wrote:

 The display_name parameter of the volume.

 [rest of quoted thread snipped]






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] problem fetching volume details in snapshot creation

2013-10-15 Thread Swapnil Kulkarni
What exact parameters are you looking for?


On Wed, Oct 16, 2013 at 10:48 AM, Dinakar Gorti Maruti 
dinakar...@cloudbyte.co wrote:

 Hi Swapnil,

 I had tried it before; it returns the string "volume" followed by the
 volume id.

 [rest of quoted thread snipped]




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] problem fetching volume details in snapshot creation

2013-10-15 Thread Swapnil Kulkarni
Correcting the typo: snapshot_ref['volume_name'] in your case.
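
A rough, untested sketch of what I mean on Grizzly (it assumes, as the
in-tree drivers do, that the volume manager has attached a db reference to
the driver as self.db):

  from cinder import context

  # inside the driver class:
  def create_snapshot(self, snapshot_ref):
      # 'volume_name' is derived from volume_id on the Snapshot model
      # (hence the "volume-<id>" string you saw), so it is safe to read
      # even on a detached instance.
      volume_name = snapshot_ref['volume_name']

      # For anything else (e.g. display_name), re-fetch the volume row by
      # id instead of lazy-loading snapshot_ref['volume'].
      ctxt = context.get_admin_context()
      volume = self.db.volume_get(ctxt, snapshot_ref['volume_id'])
      display_name = volume['display_name']

      # ... driver-specific snapshot creation using display_name ...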


On Wed, Oct 16, 2013 at 10:59 AM, Swapnil Kulkarni 
swapnilkulkarni2...@gmail.com wrote:

 I think snapshot['volume_name'] is what you are looking for; please see the
 link below for reference.

 https://github.com/openstack/cinder/blob/stable/grizzly/cinder/volume/drivers/lvm.py#L233

 [rest of quoted thread snipped]







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] problem fetching volume details in snapshot creation

2013-10-15 Thread Dinakar Gorti Maruti
Check the Snapshot class; that is the one passed as snapshot_ref:
https://github.com/openstack/cinder/blob/stable/grizzly/cinder/db/sqlalchemy/models.py
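
A self-contained illustration (generic SQLAlchemy, not cinder's actual model
code) of why reading the 'volume' relationship on a detached instance raises
exactly this error:

  from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.orm import relationship, sessionmaker

  Base = declarative_base()

  class Volume(Base):
      __tablename__ = 'volumes'
      id = Column(Integer, primary_key=True)
      display_name = Column(String(255))

  class Snapshot(Base):
      __tablename__ = 'snapshots'
      id = Column(Integer, primary_key=True)
      volume_id = Column(Integer, ForeignKey('volumes.id'))
      volume = relationship(Volume)  # lazy-loaded by default

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)
  session = sessionmaker(bind=engine)()
  session.add(Volume(id=1, display_name='myvol'))
  session.add(Snapshot(id=1, volume_id=1))
  session.commit()

  snap = session.query(Snapshot).get(1)
  session.expunge(snap)   # detach, like the object handed to the driver
  print(snap.volume_id)   # plain column, already loaded: fine
  print(snap.volume)      # needs a session to lazy-load: DetachedInstanceError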




On Wed, Oct 16, 2013 at 11:02 AM, Swapnil Kulkarni 
swapnilkulkarni2...@gmail.com wrote:

 Correcting the typo: snapshot_ref['volume_name'] in your case.

 [rest of quoted thread snipped]








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V meeting Minutes

2013-10-15 Thread Vishvananda Ishaya
Hi Sean,

I'm going to top-post because my response is general. I totally agree that we 
need people who understand the code base and we should encourage new people to 
be cross-functional. I guess my main issue is with how we get there. I believe 
in encouragement over punishment. In my mind, giving people autonomy and control 
encourages them to contribute more.

In my opinion giving the driver developers control over their own code will 
lead to higher quality drivers. Yes, we risk integration issues, lack of test 
coverage, and buggy implementations, but it is my opinion that the increased 
velocity that the developers will enjoy will mean faster bug fixes and more 
opportunity to improve the drivers.

I also think lowering the amount of code that nova-core has to keep an eye on 
will improve the review velocity of the rest of the code as well.

Vish

On Oct 15, 2013, at 4:36 PM, Sean Dague s...@dague.net wrote:

 [quoted message snipped]