[openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC

2014-12-09 Thread Thierry Carrez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting Tuesday at 21:00 UTC, with the
following agenda:

* Convergence on specs process (johnthetubaguy)
  * Approval process differences
  * Path structure differences
  * specs.o.o aspect differences (toc)
* osprofiler config options (kragniz)
  * Glance uses a different name from other projects
  * Consensus on what name to use
* Open discussion & announcements

See you there !

For more details, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Query on creating multiple resources

2014-12-09 Thread Renat Akhmerov
Hey,

I think it’s a question of what the final goal is. For just creating security
groups as a resource, I think Georgy and Zane are right: just use Heat. If the
goal is to try Mistral, or to have this simple workflow as part of a more
complex one, then it’s totally fine to use Mistral. Sorry, I’m probably biased
because Mistral is our baby :). Anyway, Nikolay has already answered the
question technically; the “for-each” feature will be available officially in
about 2 weeks.

 Create VM workflow was a demo example. Mistral potentially can be used by 
 Heat or other orchestration tools to do actual interaction with API, but for 
 user it might be easier to use Heat functionality.

I somewhat disagree with that statement. Mistral can be used by whoever finds
it useful for their needs. The standard “create_instance” workflow (which is in
“resources/workflows/create_instance.yaml”) is not just a demo example either.
It does a lot of useful things you may really need in your case (e.g. retry
policies), even though it’s true that it has some limitations we’re aware of.
For example, when it comes to configuring a network for a newly created
instance, it is currently missing the network-related parameters needed to
alter that behavior.

One more thing: not only will Heat be able to call Mistral somewhere underneath
the surface; Mistral already has an integration with Heat that lets it call
Heat if needed, and there is a plan to make it even more useful and usable.

Thanks

Renat Akhmerov
@ Mirantis Inc.




Re: [openstack-dev] [Mistral] Query on creating multiple resources

2014-12-09 Thread Renat Akhmerov
No problem, let us know if you have any other questions.

Renat Akhmerov
@ Mirantis Inc.



 On 09 Dec 2014, at 11:57, Sushma Korati sushma_kor...@persistent.com wrote:
 
 
 Hi,
 
 Thank you guys.
 
 Yes, I am able to do this with Heat, but I faced issues while trying the
 same with Mistral. As suggested, I will try with the latest Mistral branch.
 Thank you once again.
 
 Regards,
 Sushma
 
 
 
  
 From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
 Sent: Tuesday, December 09, 2014 6:07 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Mistral] Query on creating multiple resources
  
 Hi Sushma,
  
 Did you explore Heat templates? As Zane mentioned, you can do this via a
 Heat template without writing any workflows.
 Do you have any specific use cases which you can't solve with a Heat template?
  
 Create VM workflow was a demo example. Mistral potentially can be used by 
 Heat or other orchestration tools to do actual interaction with API, but for 
 user it might be easier to use Heat functionality.
  
 Thanks,
 Georgy
  
 On Mon, Dec 8, 2014 at 7:54 AM, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 Hi, Sushma! 
 
 Can we create multiple resources using a single task, like multiple keypairs 
 or security-groups or networks etc?
  
 Yes, we can. This feature is in development now and is considered
 experimental:
 https://blueprints.launchpad.net/mistral/+spec/mistral-dataflow-collections
 
 Just clone the latest master branch of Mistral.
 
 You can specify the for-each task property and provide an array of data to
 your workflow:
 
  
 version: '2.0'
 
 name: secgroup_actions
 
 workflows:
   create_security_group:
     type: direct
     input:
       - array_with_names_and_descriptions
 
     tasks:
       create_secgroups:
         for-each:
           data: $.array_with_names_and_descriptions
         action: nova.security_groups_create name={$.data.name} description={$.data.description}
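Conceptually, for-each runs the action once per element of the supplied array and collects the results in order. A plain-Python sketch of those semantics (the `create_secgroup` function below is a purely illustrative stand-in for the `nova.security_groups_create` action):

```python
def run_for_each(data, action):
    # Invoke `action` once per element, collecting results in order,
    # the way a for-each task fans out over its `data` array.
    return [action(item["name"], item["description"]) for item in data]

def create_secgroup(name, description):
    # Stand-in for the real nova.security_groups_create action.
    return {"name": name, "description": description}

inputs = [
    {"name": "web", "description": "allow 80/443"},
    {"name": "db", "description": "allow 5432 from web tier"},
]

results = run_for_each(inputs, create_secgroup)
print(len(results))  # one result per input element: 2
```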
 
  
 On Mon, Dec 8, 2014 at 6:36 PM, Zane Bitter zbit...@redhat.com wrote:
 On 08/12/14 09:41, Sushma Korati wrote:
 Can we create multiple resources using a single task, like multiple
 keypairs or security-groups or networks etc?
 
 Define them in a Heat template and create the Heat stack as a single task.
 
 - ZB
 
 
 
  
 -- 
 Best Regards,
 Nikolay
 
 
 
  
 -- 
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284


Re: [openstack-dev] [Mistral] Event Subscription

2014-12-09 Thread Renat Akhmerov
Ok, got it.

So my general suggestion here is: let's keep it as simple as possible for now,
create something that works, and then see how to improve it. And yes,
consumers may be, and mostly will be, 3rd parties.

Thanks

Renat Akhmerov
@ Mirantis Inc.



 On 09 Dec 2014, at 08:25, W Chan m4d.co...@gmail.com wrote:
 
 Renat,
 
 On sending events to an exchange, I mean an exchange on the transport (i.e. a
 RabbitMQ exchange: https://www.rabbitmq.com/tutorials/amqp-concepts.html).
 For the implementation we can probably explore the notification feature in
 oslo.messaging. But on second thought, that would limit the consumers to
 trusted subsystems or services. If we want the event consumers to be any 3rd
 party, including untrusted ones, then maybe we should keep it as HTTP calls.
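The HTTP-call option can be sketched as a signed webhook-style event. Everything here is an assumption for illustration (the payload shape, the header name, and the HMAC signing scheme are not Mistral's actual design); the point is that untrusted 3rd-party consumers could verify the event's origin:

```python
import hashlib
import hmac
import json

def build_signed_event(secret, event_type, payload):
    """Serialize an event and attach an HMAC-SHA256 signature header.

    The event shape and header name are invented for this sketch; a real
    consumer-facing design would be settled in a Mistral spec.
    """
    body = json.dumps({"event_type": event_type, "payload": payload},
                      sort_keys=True)
    signature = hmac.new(secret.encode(), body.encode(),
                         hashlib.sha256).hexdigest()
    return body, {"X-Mistral-Signature": signature}

body, headers = build_signed_event("s3cret", "task.finished",
                                   {"task_id": "42", "state": "SUCCESS"})
print(len(headers["X-Mistral-Signature"]))  # hex SHA-256 digest is 64 chars
```

The consumer recomputes the HMAC over the received body with the shared secret and compares digests before trusting the event.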
 
 Winson
 


Re: [openstack-dev] [Mistral] Action context passed to all action executions by default

2014-12-09 Thread Renat Akhmerov
Hi Winson,

I think it makes perfect sense. The reason is mostly historical, and this can
be reviewed now. Can you please file a BP and describe your suggested design
in it? I mean how we need to alter the Action interface, etc.

Thanks

Renat Akhmerov
@ Mirantis Inc.



 On 09 Dec 2014, at 13:39, W Chan m4d.co...@gmail.com wrote:
 
 Renat,
 
 Is there any reason why Mistral does not pass action context, such as the
 workflow ID, execution ID, task ID, etc., to all of the action executions? I
 think it makes a lot of sense for that information to be made available by
 default. The action can then decide what to do with it, and it wouldn't
 require a special signature in the __init__ method of the Action classes.
 What do you think?
 
 Thanks.
 Winson


Re: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC

2014-12-09 Thread joehuang
Hi,
 
If time is available, how about adding an agenda item to guide the direction
for cascading to move forward? Thanks in advance.

The topic is: we need a cross-program decision on whether to run cascading as
an incubated project or to register blueprints separately in each involved
project. CI for cascading is quite different from a traditional test
environment; at least 3 OpenStack instances are required for cross-OpenStack
networking test cases.


In the 40-minute cross-project summit session Approaches for scaling out[1],
almost 100 people attended, and the conclusion was that cells cannot cover the
use cases and requirements which the OpenStack cascading solution[2] aims to
address. The background, including use cases and requirements, is also
described in this mail.

After the summit, we ported the PoC[3] source code from an Icehouse base to a
Juno base.

Now, let's move forward:

The major task is to introduce new drivers/agents into the existing core
projects, for the core idea of cascading is to add Nova as the hypervisor
backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the
backend of Neutron, Glance as one image location of Glance, and Ceilometer as
the store of Ceilometer.
a). We need a cross-program decision on whether to run cascading as an
incubated project or to register blueprints separately in each involved
project. CI for cascading is quite different from a traditional test
environment; at least 3 OpenStack instances are required for cross-OpenStack
networking test cases.
b). We need a volunteer to act as the cross-project coordinator.
c). We need volunteers for implementation and CI. (6 engineers are already
working on cascading in the StackForge/tricircle project.)

Background of OpenStack cascading vs cells:

1. Use cases
a). Vodafone use case[4] (OpenStack summit speech video from 9'02 to 12'30):
establishing globally addressable tenants, which results in efficient service
deployment.
b). Telefonica use case[5]: create a virtual DC (data center) across multiple
physical DCs with a seamless experience.
c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6 and #8. For
an NFV cloud it is natural that the cloud will be distributed across, yet
interconnected between, many data centers.

2. Requirements
a). The operator has a multi-site cloud; each site can use one or multiple
vendors' OpenStack distributions.
b). Each site has its own requirements and upgrade schedule while maintaining
the standard OpenStack API.
c). The multi-site cloud must provide unified resource management with a
global open API exposed, for example to create a virtual DC across multiple
physical DCs with a seamless experience.
Although a proprietary orchestration layer could be developed for the
multi-site cloud, it would expose a proprietary API on the northbound
interface. Cloud operators want an ecosystem-friendly, global open API for the
multi-site cloud for global access.

3. What problems does cascading solve that cells don't cover?
The OpenStack cascading solution is OpenStack orchestrating OpenStacks. The
core architectural idea of OpenStack cascading is to add Nova as the
hypervisor backend of Nova, Cinder as the block storage backend of Cinder,
Neutron as the backend of Neutron, Glance as one image location of Glance, and
Ceilometer as the store of Ceilometer. Thus one OpenStack is able to
orchestrate other OpenStacks (from different vendors' distributions, or
different versions) which may be located in different sites (or data centers)
through the OpenStack API, while the cloud still exposes the OpenStack API as
the northbound API at the cloud level.

4. Why cells can't do that:
Cells provide scale-out capability to Nova, but from the point of view of
OpenStack as a whole, it still works like one OpenStack instance.
a). If cells are deployed with shared Cinder, Neutron, Glance and Ceilometer,
this approach provides the multi-site cloud with one unified API endpoint and
unified resource management, but consolidation of multi-vendor/multi-version
OpenStack instances across one or more data centers cannot be fulfilled.
b). If each site installs one child cell with standalone Cinder, Neutron (or
Nova-network), Glance and Ceilometer, this approach makes
multi-vendor/multi-version OpenStack distribution co-existence across multiple
sites seem feasible, but the requirement for a unified API endpoint and
unified resource management cannot be fulfilled. Cross-Neutron networking
automation is also missing, and would otherwise have to be done manually or
via a proprietary orchestration layer.

For more information about cascading and cells, please refer to the discussion 
thread before Paris Summit [7].

[1]Approaches for scaling out: 
https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack
[2]OpenStack cascading solution: 
https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[3]Cascading PoC: https://github.com/stackforge/tricircle

Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Roman Prykhodchenko
It is true that IPA and FuelAgent share a lot of functionality. However, there
is a major difference between them: they are intended to solve different
problems.

IPA is a solution for the provision-use-destroy-use_by_different_user use-case
and is really great for providing BM nodes to other OS services or in services
like Rackspace OnMetal. FuelAgent serves the provision-use-use-…-use use-case
that Fuel or TripleO have.

Those two use-cases require concentrating on different details in the first
place. For instance, for IPA proper decommissioning is more important than
advanced disk management, but for FuelAgent the priorities are the opposite,
for obvious reasons.

Putting all the functionality into a single driver and a single agent may
cause conflicts in priorities and make a lot of mess inside both the driver
and the agent. Indeed, changes to IPA were previously blocked precisely
because of this conflict of priorities. Therefore, replacing FuelAgent with
IPA where FuelAgent is currently used does not seem like a good option,
because some people (and I'm not talking about Mirantis) might lose required
features because of the differing priorities.

Having two separate drivers along with two separate agents for those different
use-cases will allow two independent teams to concentrate on what's really
important for a specific use-case. I don't see any problem in overlapping
functionality if it's used differently.


P. S.
I realise that people may also be confused by the fact that FuelAgent is
called what it is and is currently used only in Fuel. Our aim is to make it a
simple, powerful and, more importantly, generic tool for provisioning. It is
not bound to Fuel or Mirantis, and if the name causes confusion in the future
we will be happy to give it a different, less confusing one.

P. P. S.
Some of the points of this integration may not look generic or polished
enough. We take a pragmatic view and are trying to implement what is possible
as a first step. There will certainly be many more steps to make it better and
more generic.


 On 09 Dec 2014, at 01:46, Jim Rollenhagen j...@jimrollenhagen.com wrote:
 
 
 
 On December 8, 2014 2:23:58 PM PST, Devananda van der Veen devananda@gmail.com wrote:
 I'd like to raise this topic for a wider discussion outside of the
 hallway
 track and code reviews, where it has thus far mostly remained.
 
 In previous discussions, my understanding has been that the Fuel team
 sought to use Ironic to manage pets rather than cattle - and doing
 so
 required extending the API and the project's functionality in ways that
 no
 one else on the core team agreed with. Perhaps that understanding was
 wrong
 (or perhaps not), but in any case, there is now a proposal to add a
 FuelAgent driver to Ironic. The proposal claims this would meet that
 teams'
 needs without requiring changes to the core of Ironic.
 
 https://review.openstack.org/#/c/138115/
 
 I think it's clear from the review that I share the opinions expressed in 
 this email.
 
 That said (and hopefully without derailing the thread too much), I'm curious 
 how this driver could do software RAID or LVM without modifying Ironic's API 
 or data model. How would the agent know how these should be built? How would 
 an operator or user tell Ironic what the disk/partition/volume layout would 
 look like?
 
 And before it's said - no, I don't think vendor passthru API calls are an 
 appropriate answer here.
 
 // jim
 
 
 The Problem Description section calls out four things, which have all
 been
 discussed previously (some are here [0]). I would like to address each
 one,
 invite discussion on whether or not these are, in fact, problems facing
 Ironic (not whether they are problems for someone, somewhere), and then
 ask
 why these necessitate a new driver be added to the project.
 
 
 They are, for reference:
 
 1. limited partition support
 
 2. no software RAID support
 
 3. no LVM support
 
 4. no support for hardware that lacks a BMC
 
 #1.
 
 When deploying a partition image (eg, QCOW format), Ironic's PXE deploy
 driver performs only the minimal partitioning necessary to fulfill its
 mission as an OpenStack service: respect the user's request for root,
 swap,
 and ephemeral partition sizes. When deploying a whole-disk image,
 Ironic
 does not perform any partitioning -- such is left up to the operator
 who
 created the disk image.
 
 Support for arbitrarily complex partition layouts is not required by,
 nor
 does it facilitate, the goal of provisioning physical servers via a
 common
 cloud API. Additionally, as with #3 below, nothing prevents a user from
 creating more partitions in unallocated disk space once they have
 access to
 their instance. Therefore, I don't see how Ironic's minimal support for
 partitioning is a problem for the project.
 
 #2.
 
 There is no 

Re: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader?

2014-12-09 Thread Peeyush Gupta
So, basically, if I am using the pxe driver, I would have to provide
pxelinux.0?

On 12/09/2014 03:24 PM, Fox, Kevin M wrote:
 You probably want to use the agent driver, not the pxe one. It lets you use 
 bootloaders from the image.

 
 From: Peeyush Gupta
 Sent: Monday, December 08, 2014 10:55:39 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Ironic] How to get past pxelinux.0 bootloader?

 Hi all,

 So, I have set up a devstack ironic setup for baremetal deployment. I
 have been able to deploy a baremetal node successfully using
 pxe_ipmitool driver. Now, I am trying to boot a server where I already
 have a bootloader i.e. I don't need pxelinux to go and fetch kernel and
 initrd images for me. I want to transfer them directly.

 I checked the code and figured out that there are DHCP opts available, which
 are modified using pxe_utils.py, but changing them didn't help. Then I moved
 to ironic.conf, where I only see an option to set pxe_bootfile_name, which is
 exactly what I want to avoid. Can anyone help me with this? I don't want to
 go through the pxelinux.0 bootloader; I just want to transfer the kernel and
 initrd images directly.

 Thanks.

 --
 Peeyush Gupta
 gpeey...@linux.vnet.ibm.com






-- 
Peeyush Gupta
gpeey...@linux.vnet.ibm.com



Re: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC

2014-12-09 Thread Thierry Carrez
joehuang wrote:
 If time is available, how about adding one agenda to guide the direction for 
 cascading to move forward? Thanks in advance.
 
 The topic is :  Need cross-program decision to run cascading as an incubated 
 project mode or register BP separately in each involved project. CI for 
 cascading is quite different from traditional test environment, at least 3 
 OpenStack instance required for cross OpenStack networking test cases.   

Hi Joe, we close the agenda one day before the meeting so that people can
arrange their attendance based on the published agenda.

I added your topic to the backlog for next week's agenda:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

Regards,

-- 
Thierry Carrez (ttx)



[openstack-dev] [neutron] mid-cycle hot reviews

2014-12-09 Thread Miguel Ángel Ajo

Hi all!

  It would be great if you could use this thread to post hot reviews on stuff
that is being worked on during the mid-cycle, so that others from different
timezones can participate.

  I know posting reviews to the list is not usually done, but I think an
exception in this case would be beneficial.

  Best regards,
Miguel Ángel Ajo



Re: [openstack-dev] [horizon] REST and Django

2014-12-09 Thread Tihomir Trifonov
Sorry for the late reply, just few thoughts on the matter.

IMO the REST middleware should be as thin as possible, and I mean thin in
terms of processing: it should not do pre/post-processing of the requests,
just unpack/pack them. So here is an example:

instead of making AJAX calls that contain instructions:

 POST --json --data {"action": "delete",
                     "data": [{"name": "item1"}, {"name": "item2"}, {"name": "item3"}]}

I think a better approach is just to pack/unpack batch commands, and leave
execution to the frontend/backend, not the middleware:

 POST --json --data {"batch": [
     {"action": "delete", "payload": {"name": "item1"}},
     {"action": "delete", "payload": {"name": "item2"}},
     {"action": "delete", "payload": {"name": "item3"}}
 ]}


The idea is that the middleware should not know about the actual data; it
should ideally just unpack the batch and dispatch each command:

 responses = []
 for cmd in request.POST['batch']:
     responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
 return responses



and the frontend (JS) will just send batches of simple commands and receive a
list of responses, one for each command in the batch. Error handling will be
done in the frontend (JS) as well.
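A runnable sketch of that thin pack/unpack middleware loop (the `Controller` class and its method names are illustrative stand-ins for a real API backend, not Horizon code):

```python
class Controller:
    """Illustrative backend; method names match the batch 'action' fields."""
    def delete(self, name):
        # Stand-in for a real API call, e.g. deleting a Keystone resource.
        return {"deleted": name}

def dispatch_batch(controller, batch):
    # Thin middleware: unpack each command, dispatch by action name,
    # and pack the per-command responses into a list.
    return [getattr(controller, cmd["action"])(**cmd["payload"])
            for cmd in batch]

batch = [{"action": "delete", "payload": {"name": n}}
         for n in ("item1", "item2", "item3")]
print(dispatch_batch(Controller(), batch))
```

A real implementation would also whitelist the allowed action names before calling getattr, so the frontend cannot invoke arbitrary controller attributes.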


For the more complex example of 'put()' where we have dependent objects:

project = api.keystone.tenant_get(request, id)
 kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
 api.keystone.tenant_update(request, project, **kwargs)



In practice the project data should already be present in the frontend
(assuming we have already loaded it to render the project form/view), so:

 POST --json --data {"batch": [
     {"action": "tenant_update",
      "payload": {"project": js_project_object.id, "name": "some name",
                  "prop1": "some prop", "prop2": "other prop", etc.}}
 ]}

So in general we don't need to recreate the full state on each REST call if we
make the frontend a full-featured application. This way the frontend will
construct the object, hold the cached value, and send the needed requests
singly or in batches; it will receive the responses from the API backend and
render the results. The whole processing logic will live in the frontend (JS),
while the middleware will just act as a proxy (un/packer). This way we
maintain the logic only in the frontend and do not need to duplicate any of it
in the middleware.




On Tue, Dec 2, 2014 at 4:45 PM, Adam Young ayo...@redhat.com wrote:

  On 12/02/2014 12:39 AM, Richard Jones wrote:

 On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran tqt...@us.ibm.com wrote:

  I agree that keeping the API layer thin would be ideal. I should add
 that having discrete API calls would allow dynamic population of table.
 However, I will make a case where it *might* be necessary to add
 additional APIs. Consider that you want to delete 3 items in a given table.

 If you do this on the client side, you would need to perform: n * (1 API
 request + 1 AJAX request)
 If you have some logic on the server side that batch delete actions: n *
 (1 API request) + 1 AJAX request

 Consider the following:
 n = 1, client = 2 trips, server = 2 trips
 n = 3, client = 6 trips, server = 4 trips
 n = 10, client = 20 trips, server = 11 trips
 n = 100, client = 200 trips, server 101 trips

 As you can see, this does not scale very well; something to consider...
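The round-trip arithmetic above can be checked with a quick script (a sketch of the counting model only):

```python
def client_side_trips(n):
    # Deleting n items from the browser: n AJAX requests, each of which
    # triggers one OpenStack API request behind it.
    return n * (1 + 1)

def server_side_trips(n):
    # Batched delete: one AJAX request carrying all n names, then the
    # server makes n OpenStack API requests.
    return n + 1

for n in (1, 3, 10, 100):
    print(n, client_side_trips(n), server_side_trips(n))
```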

  This is not something Horizon can fix.  Horizon can make matters worse,
 but cannot make things better.

 If you want to delete 3 users,   Horizon still needs to make 3 distinct
 calls to Keystone.

 To fix this, we need either batch calls or a standard way to do multiples
 of the same operation.

  The unified API effort is the right place to drive this.







  Yep, though in the above cases the client is still going to be hanging,
 waiting for those server-backend calls, with no feedback until it's all
 done. I would hope that the client-server call overhead is minimal, but I
 guess that's probably wishful thinking when in the land of random Internet
 users hitting some provider's Horizon :)

  So yeah, having mulled it over myself I agree that it's useful to have
 batch operations implemented in the POST handler, the most common operation
 being DELETE.

  Maybe one day we could transition to a batch call with user feedback
 using a websocket connection.


   Richard


 From: Richard Jones r1chardj0...@gmail.com
 To: Tripp, Travis S travis.tr...@hp.com, OpenStack List 
 openstack-dev@lists.openstack.org
 Date: 11/27/2014 05:38 PM
 Subject: Re: [openstack-dev] [horizon] REST and Django
  --




 On Fri Nov 28 2014 at 5:58:00 AM Tripp, 

[openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-09 Thread Matthew Gilliard
Sometimes, I want to ask the author of a patch about it on IRC.
However, there doesn't seem to be a reliable way to find out someone's IRC
handle, so the potential for useful conversation is sometimes missed. Unless
there's a better alternative which I didn't find,
https://wiki.openstack.org/wiki/People seems to fulfill that purpose, but it
is neither complete nor accurate.

  What do people think about this? Should we put more effort into keeping the
People wiki up-to-date? That's a(nother) manual process, though; can we
autogenerate it somehow?

  Matthew



Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-09 Thread Nicolas Trangez
On Tue, 2014-12-09 at 10:46 +, Matthew Gilliard wrote:
 can we autogenerate it somehow?

Maybe an 'irc_nick' field could be added to Stackalytics' default_data.json
and used to populate such a page?

Nicolas
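A sketch of what that autogeneration could look like, assuming the hypothetical irc_nick field were added (both the field and the file snippet below are invented for illustration; Stackalytics' real default_data.json does not carry IRC nicks today):

```python
import json

# Invented snippet in the shape of Stackalytics' default_data.json,
# extended with a hypothetical "irc_nick" field per user.
raw = """{"users": [
  {"launchpad_id": "ttx", "user_name": "Thierry Carrez", "irc_nick": "ttx"},
  {"launchpad_id": "jdoe", "user_name": "Jane Doe", "irc_nick": "janed"}
]}"""

def wiki_rows(data):
    # Render one MediaWiki table row per user for the People page.
    return ["| {user_name} || {irc_nick}".format(**u)
            for u in data["users"]]

for row in wiki_rows(json.loads(raw)):
    print(row)
```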




Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Vladimir Kozhukalov
Just a short explanation of Fuel use case.

The Fuel use case is not a cloud. Fuel is a deployment tool: we install an OS
on bare metal servers and on VMs and then configure that OS using Puppet. We
have been using Cobbler as our OS provisioning tool since the beginning of
Fuel. However, Cobbler assumes the use of native OS installers (Anaconda and
Debian-installer). For several reasons we decided to switch to an image-based
approach for installing the OS.

One of Fuel's features is the ability to provide advanced partitioning schemes
(including software RAID and LVM). Native installers are quite difficult to
customize in the area of partitioning (which was one of the reasons to switch
to the image-based approach). Moreover, we'd like to implement an even more
flexible user experience. We'd like to allow the user to choose which hard
drives to use for the root FS and for allocating the DB. We'd like the user to
be able to put the root FS on an LV or MD device (including stripe, mirror and
multipath), to choose which hard drives are bootable (if any), and which
options to use for mounting file systems. Many different cases are possible.
If you ask why we'd like to support all those cases, the answer is simple:
because our users want us to support them.
Obviously, many of those cases cannot be implemented as image internals, and
some cannot be implemented at the configuration stage either (e.g. placing the
root FS on an LVM device).

As those use cases were rejected for implementation in terms of IPA, we
implemented the so-called Fuel Agent. Important Fuel Agent features are:

* It does not have a REST API
* It has executable entry point[s]
* It uses a local JSON file as its input
* It is planned to add the ability to download input data via HTTP (a kind of
metadata service)
* It is designed to be agnostic to the input data format, not only Fuel's
format (data drivers)
* It is designed to be agnostic to the image format (tar images, file system
images, disk images; currently fs images)
* It is designed to be agnostic to the image compression algorithm (currently
gzip)
* It is designed to be agnostic to the image download protocol (currently
local file and HTTP link)

So, although it was motivated by Fuel, Fuel Agent is quite independent and
generic, and we are open to new use cases.
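For illustration only, a hypothetical sketch of the local JSON input described above, showing how an advanced partitioning scheme might be expressed (all field names here are invented; the real Fuel Agent data format is defined by its data drivers):

```json
{
  "images": [
    {"name": "rootfs",
     "url": "http://10.20.0.2:8080/targetimages/rootfs.img.gz",
     "format": "ext4",
     "compression": "gzip",
     "target_device": "/dev/mapper/os-root"}
  ],
  "partitioning": {
    "mds": [
      {"name": "md0", "level": "mirror",
       "devices": ["/dev/sda1", "/dev/sdb1"]}
    ],
    "lvs": [
      {"name": "root", "vg": "os", "size_mb": 10240},
      {"name": "db", "vg": "os", "size_mb": 20480}
    ]
  }
}
```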

As for Fuel itself, our nearest plan is to get rid of Cobbler, because with
the image-based approach it is huge overhead. The question is which tool we
can use instead. We need power management, TFTP management and DHCP
management, which is exactly what Ironic is able to do. Frankly, we could
implement a power/TFTP/DHCP management tool independently, but as Devananda
said, we're all working on the same problems, so let's do it together.
Power/TFTP/DHCP management is where we are working on the same problems, but
IPA and Fuel Agent address different use cases. This is not just about Fuel;
any mature deployment case requires advanced partition/FS management.
However, for me it is OK if it is easily possible to use Ironic with external
drivers (not merged into Ironic and not tested in Ironic CI).

AFAIU, this spec https://review.openstack.org/#/c/138115/ does not assume
changing the Ironic API or core.
Jim asked how Fuel Agent will know about an advanced disk partitioning
scheme if the API is not
changed. The answer is simple: Ironic is supposed to send a link to a
metadata service (HTTP or local file)
from which Fuel Agent can download the input JSON data.

As Roman said, we try to be pragmatic and suggest something which does not
break anything. All changes
are supposed to be encapsulated in a driver. No API or core changes. We
have the resources to support, test
and improve this driver. This spec is just a zeroth step. Further steps are
supposed to improve the driver
so as to bring it closer to Ironic abstractions.

For Ironic that means widening its use cases and user community. But, as I
already said,
we are OK if Ironic does not need this feature.

Vladimir Kozhukalov

On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko 
rprikhodche...@mirantis.com wrote:

 It is true that IPA and FuelAgent share a lot of functionality in common.
 However there is a major difference between them which is that they are
 intended to be used to solve a different problem.

 IPA is a solution for the provision-use-destroy-use_by_different_user use-case
 and is really great for providing BM nodes for other OS
 services or in services like Rackspace OnMetal. FuelAgent itself serves the
 provision-use-use-…-use use-case that tools like Fuel or TripleO have.

 Those two use-cases require concentration on different details in the first
 place. For instance, for IPA proper decommissioning is more important than
 advanced disk management, but for FuelAgent the priorities are the opposite,
 for obvious reasons.

 Putting all functionality into a single driver and a single agent may cause
 conflicts in priorities and make a lot of mess inside both the driver and
 the agent. Actually, previous changes to IPA 

Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Yuriy Zveryanskyy

Good day Ironicers.

I do not want to discuss questions like "Is feature X good for release 
Y?" or "Is feature Z in Ironic scope or not?".
I want to get an answer to this: is Ironic a flexible, easily extendable 
and user-oriented solution for deployment?
Yes, it is, I think. IPA is great software, but Fuel Agent proposes a 
different and alternative way of deploying.
Devananda wrote about pets and cattle, and maybe some want to manage 
pets rather than cattle? Let users make a choice.
We do not plan to change any Ironic API for the driver, internal or 
external (as opposed to IPA, for which this was done).
If there is no one to support Fuel Agent's driver, I think the 
driver should be removed from the Ironic tree (I heard
this practice is used in the Linux kernel).

On 12/09/2014 12:23 AM, Devananda van der Veen wrote:


I'd like to raise this topic for a wider discussion outside of the 
hallway track and code reviews, where it has thus far mostly remained.



In previous discussions, my understanding has been that the Fuel team 
sought to use Ironic to manage "pets" rather than "cattle" - and doing 
so required extending the API and the project's functionality in ways 
that no one else on the core team agreed with. Perhaps that 
understanding was wrong (or perhaps not), but in any case, there is 
now a proposal to add a FuelAgent driver to Ironic. The proposal 
claims this would meet that team's needs without requiring changes to 
the core of Ironic.



https://review.openstack.org/#/c/138115/


The Problem Description section calls out four things, which have all 
been discussed previously (some are here [0]). I would like to address 
each one, invite discussion on whether or not these are, in fact, 
problems facing Ironic (not whether they are problems for someone, 
somewhere), and then ask why these necessitate a new driver be added 
to the project.



They are, for reference:


1. limited partition support

2. no software RAID support

3. no LVM support

4. no support for hardware that lacks a BMC


#1.

When deploying a partition image (eg, QCOW format), Ironic's PXE 
deploy driver performs only the minimal partitioning necessary to 
fulfill its mission as an OpenStack service: respect the user's 
request for root, swap, and ephemeral partition sizes. When deploying 
a whole-disk image, Ironic does not perform any partitioning -- such 
is left up to the operator who created the disk image.



Support for arbitrarily complex partition layouts is not required by, 
nor does it facilitate, the goal of provisioning physical servers via 
a common cloud API. Additionally, as with #3 below, nothing prevents a 
user from creating more partitions in unallocated disk space once they 
have access to their instance. Therefore, I don't see how Ironic's 
minimal support for partitioning is a problem for the project.



#2.

There is no support for defining a RAID in Ironic today, at all, 
whether software or hardware. Several proposals were floated last 
cycle; one is under review right now for DRAC support [1], and there 
are multiple call outs for RAID building in the state machine 
mega-spec [2]. Any such support for hardware RAID will necessarily be 
abstract enough to support multiple hardware vendors' driver 
implementations and both in-band creation (via IPA) and out-of-band 
creation (via vendor tools).



Given the above, it may become possible to add software RAID support 
to IPA in the future, under the same abstraction. This would closely 
tie the deploy agent to the images it deploys (the latter image's 
kernel would be dependent upon a software RAID built by the former), 
but this would necessarily be true for the proposed FuelAgent as well.



I don't see this as a compelling reason to add a new driver to the 
project. Instead, we should (plan to) add support for software RAID to 
the deploy agent which is already part of the project.



#3.

LVM volumes can easily be added by a user (after provisioning) within 
unallocated disk space for non-root partitions. I have not yet seen a 
compelling argument for doing this within the provisioning phase.



#4.

There are already in-tree drivers [3] [4] [5] which do not require a 
BMC. One of these uses SSH to connect and run pre-determined commands. 
Like the spec proposal, which states at line 122, "Control via SSH 
access feature intended only for experiments in non-production 
environment", the current SSHPowerDriver is only meant for testing 
environments. We could probably extend this driver to do what the 
FuelAgent spec proposes, as far as remote power control for cheap 
always-on hardware in testing environments with a pre-shared key.



(And if anyone wonders about a use case for Ironic without external 
power control ... I can only think of one situation where I would 
rationally ever want to have a control-plane agent running inside a 
user-instance: I am both the operator and the only user of the cloud.)






In summary, as far as I can 

Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sahid Orentino Ferdjaoui
On Tue, Dec 09, 2014 at 06:39:43AM -0500, Sean Dague wrote:
 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.
 
 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).

-1. We probably want to recommend a more strongly formatted git commit
message, mainly regarding the first line, which is the most important. It
should reflect which part of the code the commit is intended to update;
that gives contributors the ability to quickly see what the
submission relates to.

An example with Nova which is quite big: api, compute,
  doc, scheduler, virt, vmware, libvirt, objects...

We should use a prefix in the first line of the commit message. There
is a large number of commits waiting for review; a prefix can help
contributors with knowledge of a particular domain quickly identify
which ones to pick.
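Such a prefix convention could in principle be checked mechanically, though only approximately. A minimal sketch, assuming a fixed prefix list (the prefix set and the rule itself are illustrative, not an actual hacking rule):

```python
import re

# Illustrative subsystem prefixes (taken from the Nova example above);
# a real check would need a maintained, per-project list.
KNOWN_PREFIXES = {"api", "compute", "doc", "scheduler", "virt",
                  "vmware", "libvirt", "objects"}

def has_subsystem_prefix(commit_message):
    """Return True if the summary line looks like '<prefix>: <summary>'
    with a known prefix."""
    first_line = commit_message.splitlines()[0]
    match = re.match(r"([a-z0-9._-]+):\s+\S", first_line)
    return bool(match) and match.group(1) in KNOWN_PREFIXES
```

The hard part, as the rest of the thread suggests, is deciding whether the chosen prefix is actually correct for the change, which a regex cannot do.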

 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm
 
 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.

   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Dmitry Tantsur

Hi folks,

Thank you for additional explanation, it does clarify things a bit. I'd 
like to note, however, that you talk a lot about how _different_ Fuel 
Agent is from what Ironic does now. I'd like actually to know how well 
it's going to fit into what Ironic does (in addition to your specific 
use cases). Hence my comments inline:


On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote:

Just a short explanation of Fuel use case.

Fuel use case is not a cloud. Fuel is a deployment tool. We install OS
on bare metal servers and on VMs
and then configure this OS using Puppet. We have been using Cobbler as
our OS provisioning tool since the beginning of Fuel.
However, Cobbler assumes using native OS installers (Anaconda and
Debian-installer). For some reasons we decided to
switch to image based approach for installing OS.

One of Fuel features is the ability to provide advanced partitioning
schemes (including software RAIDs, LVM).
Native installers are quite difficult to customize in the field of
partitioning
(that was one of the reasons to switch to image based approach).
Moreover, we'd like to implement even more
flexible user experience. We'd like to allow user to choose which hard
drives to use for root FS, for
allocating DB. We'd like user to be able to put root FS over LV or MD
device (including stripe, mirror, multipath).
We'd like user to be able to choose which hard drives are bootable (if
any), which options to use for mounting file systems.
Many many various cases are possible. If you ask why we'd like to
support all those cases, the answer is simple:
because our users want us to support all those cases.
Obviously, many of those cases can not be implemented as image
internals, some cases can not be also implemented on
configuration stage (placing root fs on lvm device).

As far as those use cases were rejected to be implemented in term of
IPA, we implemented so called Fuel Agent.
Important Fuel Agent features are:

* It does not have REST API

I would not call it a feature :-P

Speaking seriously, if your agent is a long-running thing and it gets 
its configuration from e.g. a JSON file, how can Ironic notify it of any 
changes?



* it has executable entry point[s]
* It uses local json file as it's input
* It is planned to implement ability to download input data via HTTP
(kind of metadata service)
* It is designed to be agnostic to input data format, not only Fuel
format (data drivers)
* It is designed to be agnostic to image format (tar images, file system
images, disk images, currently fs images)
* It is designed to be agnostic to image compression algorithm
(currently gzip)
* It is designed to be agnostic to image downloading protocol (currently
local file and HTTP link)
Does it support Glance? I understand it's HTTP, but it requires 
authentication.




So, it is clear that being motivated by Fuel, Fuel Agent is quite
independent and generic. And we are open for
new use cases.
My favorite use case is hardware introspection (aka getting data 
required for scheduling from a node automatically). Any ideas on this? 
(It's not a priority for this discussion, just curious).




According Fuel itself, our nearest plan is to get rid of Cobbler because
in the case of image based approach it is huge overhead. The question is
which tool we can use instead of Cobbler. We need power management,
we need TFTP management, we need DHCP management. That is
exactly what Ironic is able to do. Frankly, we can implement power/TFTP/DHCP
management tool independently, but as Devananda said, we're all working
on the same problems,
so let's do it together.  Power/TFTP/DHCP management is where we are
working on the same problems,
but IPA and Fuel Agent are about different use cases. This case is not
just Fuel, any mature
deployment case require advanced partition/fs management.
Taking into consideration that you're building a generic OS installation 
tool... yeah, it starts to make some sense. For a cloud, advanced partitioning 
is definitely a pet case.


However, for

me it is OK, if it is easily possible
to use Ironic with external drivers (not merged to Ironic and not tested
on Ironic CI).

AFAIU, this spec https://review.openstack.org/#/c/138115/ does not
assume changing Ironic API and core.
Jim asked about how Fuel Agent will know about advanced disk
partitioning scheme if API is not
changed. The answer is simple: Ironic is supposed to send a link to
metadata service (http or local file)
where Fuel Agent can download input json data.
That's not about not changing Ironic. Changing Ironic is OK for 
reasonable use cases - we are doing a huge change right now to accommodate 
zapping, hardware introspection and RAID configuration.


I actually have problems with this particular statement. It does not 
sound like Fuel Agent will integrate enough with Ironic. This JSON file: 
who is going to generate it? In the most popular use case we're driven 
by Nova. Will Nova generate this file?


If the answer is generate it manually for every 

Re: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC

2014-12-09 Thread joehuang
Hello, Thierry,

That sounds great. 

Best Regards

Chaoyi Huang ( joehuang )


From: Thierry Carrez [thie...@openstack.org]
Sent: 09 December 2014 18:32
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Cross-Project meeting, Tue December 9th, 21:00 UTC

joehuang wrote:
 If time is available, how about adding one agenda item to guide the direction for 
 cascading to move forward? Thanks in advance.

 The topic is: need a cross-program decision on whether to run cascading as an incubated 
 project or to register BPs separately in each involved project. CI for 
 cascading is quite different from a traditional test environment; at least 3 
 OpenStack instances are required for cross-OpenStack networking test cases. 

Hi Joe, we close the agenda one day before the meeting to let people
arrange their attendance based on the published agenda.

I added your topic to the backlog for next week's agenda:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

Regards,

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sean Dague
On 12/09/2014 07:32 AM, Sahid Orentino Ferdjaoui wrote:
 On Tue, Dec 09, 2014 at 06:39:43AM -0500, Sean Dague wrote:
 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.

 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).
 
 -1, We probably want to recommend a git commit message more stronger
 formatted mainly about the first line which is the most important. It
 should reflect which part of the code the commit is attended to update
 that gives the ability for contributors to quickly see on what the
 submission is related;
 
 An example with Nova which is quite big: api, compute,
   doc, scheduler, virt, vmware, libvirt, objects...
 
 We should to use a prefix in the first line of commit message. There
 is a large number of commits waiting for reviews, that can help
 contributors with a knowledge in a particular domain to identify
 quickly which one to pick.

And how exactly do you expect a machine to decide if that's done correctly?

-Sean

 
 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm

 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.

  -Sean

 -- 
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Julien Danjou
On Tue, Dec 09 2014, Sean Dague wrote:

 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).

+1

 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm

 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.

-0

Not sure it's a good idea to drop it, but I don't have strong arguments
for it.

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-09 Thread Jeremy Stanley
On 2014-12-09 13:58:28 +0100 (+0100), Sahid Orentino Ferdjaoui wrote:
 We probably don't want to maintain another wiki page.

Yes, the wiki is about low-overhead collaborative documentation. It
is not suitable as a database.

 We can recommend, in the how-to-contribute docs, correctly filling in the IRC
 field on launchpad.net, since OpenStack is closely related.
[...]

OpenStack is not really closely related to Launchpad. At the moment
we use its OpenID provider and its bug tracker, both of which the
community is actively in the process of moving off of (to
openstackid.org and storyboard.openstack.org respectively).

We already have a solution for tracking the contributor-IRC
mapping: add it to your Foundation Member Profile. For example, mine
is in there already:

http://www.openstack.org/community/members/profile/5479

-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-09 Thread Daniel P. Berrange
On Tue, Dec 09, 2014 at 10:53:19AM +0100, Maxime Leroy wrote:
 I have also proposed a blueprint to have a new plugin mechanism in
 nova to load external vif driver. (nova-specs:
 https://review.openstack.org/#/c/136827/ and nova (rfc patch):
 https://review.openstack.org/#/c/136857/)
 
 From my point of view as a developer, having a plugin framework for
 internal/external vif drivers seems to be a good thing.
 It makes the code more modular and introduces a clear API for vif driver 
 classes.
 
 So far, it raises legitimate questions concerning API stability and
 the public API that require a wider discussion on the ML (as asked by
 John Garbutt).
 
 I think having a plugin mechanism and a clear api for vif driver is
 not going against this policy:
 http://docs.openstack.org/developer/nova/devref/policies.html#out-of-tree-support.
 
 There is no need to have a stable API. It is up to the owner of the
 external VIF driver to ensure that the driver is supported by the
 latest API, and not for the nova community to manage a stable API for this
 external VIF driver. Does that make sense?

Experience has shown that even if it is documented as unsupported, once
the extension point exists, vendors & users will ignore the small print
about support status. There will be complaints raised every time it gets
broken, until we end up being forced to maintain it as a stable API whether
we want to or not. That's not a route we want to go down.

 Considering the network V2 API, L2/ML2 mechanism drivers and VIF drivers
 need to exchange information such as binding:vif_type and
 binding:vif_details.
 
 From my understanding, 'binding:vif_type' and 'binding:vif_details' are
 fields of the public network API. There are no validation
 constraints for these fields (see
 http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html),
 meaning that any value is accepted by the API. So, the values set in
 'binding:vif_type' and 'binding:vif_details' are not part of the
 public API. Is my understanding correct?
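For reference, the attributes in question appear as ordinary JSON fields on a Neutron port. A rough sketch (the values shown, e.g. "ovs" and the vif_details keys, are common examples rather than an authoritative list):

```json
{
  "port": {
    "binding:vif_type": "ovs",
    "binding:vif_details": {
      "port_filter": true,
      "ovs_hybrid_plug": true
    }
  }
}
```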

The VIF parameters are mapped into the nova.network.model.VIF class,
which does some crude validation. I would anticipate that this
validation will increase over time, because this is functional data
flowing over the API and so needs to be carefully managed for upgrade
reasons.

Even if the Neutron impl is out of tree, I would still expect both
Nova and Neutron core to sign off on any new VIF type name and its
associated details (if any).

 What other reasons am I missing for not having VIF driver classes as a
 public extension point?

Having to find & install VIF driver classes from countless different
vendors, each hiding their code away on their own obscure website,
will lead to an awful end user experience when deploying Nova. Users are
better served by having it all provided when they deploy Nova IMHO.
If every vendor goes off & works in their own isolated world we also
lose the scope to align the implementations, so that common concepts
work the same way in all cases, and to minimize the number of
new VIF types required. The proposed vhostuser VIF type is a good
example of this - it allows a single Nova VIF driver to be capable of
potentially supporting multiple different impls on the Neutron side.
If every vendor worked in their own world, we would have ended up with
multiple VIF drivers doing the same thing in Nova, each with their own
set of bugs & quirks.

I expect the quality of the code the operator receives will be lower
if it is never reviewed by anyone except the vendor who writes it in
the first place.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][devstack] ZeroMQ driver maintenance next steps

2014-12-09 Thread Doug Hellmann

On Dec 8, 2014, at 11:25 PM, Li Ma skywalker.n...@gmail.com wrote:

 Hi all, I tried to deploy zeromq with devstack and it definitely failed with 
 lots of problems: dependencies, topics, matchmaker setup, etc. I've 
 already registered a blueprint for devstack-zeromq [1].

I added the [devstack] tag to the subject of this message so that team will see 
the thread.

 
 Besides, I suggest building a wiki page in order to track all the workitems 
 related to ZeroMQ. The general sections may be [Why ZeroMQ], [Current Bugs 
 & Reviews], [Future Plan & Blueprints], [Discussions], [Resources], etc.

Coordinating the work on this via a wiki page makes sense. Please post the link 
when you’re ready.

Doug

 
 Any comments?
 
 [1] https://blueprints.launchpad.net/devstack/+spec/zeromq
 
 cheers,
 Li Ma
 
 On 2014/11/18 21:46, James Page wrote:
 On 18/11/14 00:55, Denis Makogon wrote:
 
 So if zmq driver support in devstack is fixed, we can easily add a
 new job to run them in the same way.
 
 
 Btw this is a good question. I will take look at current state of
 zmq in devstack.
  I don't think it's that far off, and it's broken rather than missing -
 the rpc backend code needs updating to use oslo.messaging rather than
 project specific copies of the rpc common codebase (pre oslo).
 Devstack should be able to run with the local matchmaker in most
 scenarios but it looks like there was support for the redis matchmaker
 as well.
 
 If you could take some time to fixup that would be awesome!
 
  -- James Page
 Ubuntu and Debian Developer
 james.p...@ubuntu.com
 jamesp...@debian.org
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Spec reviews this week by the neutron-drivers team

2014-12-09 Thread Kyle Mestery
The neutron-drivers team has started the process of both accepting and
rejecting specs for Kilo now. If you've submitted a spec, you will soon see
it either approved or landed in the abandoned or -2 category.
We're doing our best to leave helpful messages when we abandon or -2
specs, but for more detail, see the neutron-drivers wiki page [1]. Also,
you can find me on IRC with questions as well.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Neutron-drivers
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Doug Hellmann

On Dec 9, 2014, at 6:39 AM, Sean Dague s...@dague.net wrote:

 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.
 
 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).
 
 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm
 
 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.

I don’t have the hacking rules memorized. Could you describe them briefly?

Doug


 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Vladimir Kozhukalov
Vladimir Kozhukalov

On Tue, Dec 9, 2014 at 3:51 PM, Dmitry Tantsur dtant...@redhat.com wrote:

 Hi folks,

 Thank you for additional explanation, it does clarify things a bit. I'd
 like to note, however, that you talk a lot about how _different_ Fuel Agent
 is from what Ironic does now. I'd like actually to know how well it's going
 to fit into what Ironic does (in additional to your specific use cases).
 Hence my comments inline:



 On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote:

 Just a short explanation of Fuel use case.

 Fuel use case is not a cloud. Fuel is a deployment tool. We install OS
 on bare metal servers and on VMs
 and then configure this OS using Puppet. We have been using Cobbler as
 our OS provisioning tool since the beginning of Fuel.
 However, Cobbler assumes using native OS installers (Anaconda and
 Debian-installer). For some reasons we decided to
 switch to image based approach for installing OS.

 One of Fuel features is the ability to provide advanced partitioning
 schemes (including software RAIDs, LVM).
 Native installers are quite difficult to customize in the field of
 partitioning
 (that was one of the reasons to switch to image based approach).
 Moreover, we'd like to implement even more
 flexible user experience. We'd like to allow user to choose which hard
 drives to use for root FS, for
 allocating DB. We'd like user to be able to put root FS over LV or MD
 device (including stripe, mirror, multipath).
 We'd like user to be able to choose which hard drives are bootable (if
 any), which options to use for mounting file systems.
 Many many various cases are possible. If you ask why we'd like to
 support all those cases, the answer is simple:
 because our users want us to support all those cases.
 Obviously, many of those cases can not be implemented as image
 internals, some cases can not be also implemented on
 configuration stage (placing root fs on lvm device).

 As far as those use cases were rejected to be implemented in term of
 IPA, we implemented so called Fuel Agent.
 Important Fuel Agent features are:

 * It does not have REST API

 I would not call it a feature :-P

 Speaking seriously, if you agent is a long-running thing and it gets it's
 configuration from e.g. JSON file, how can Ironic notify it of any changes?

Fuel Agent is not a long-running service. Currently there is no need to have
a REST API. If we come to deal with keep-alive style inventory/discovery,
then we will probably add an API. Frankly, the IPA REST API is not REST at all.
However, that is not a reason to refuse to call it a feature and throw it away.
It is a reason to work on it and improve it. That is how I try to look at things
(pragmatically).

Fuel Agent has executable entry point[s] like /usr/bin/provision. You can
run such an entry point with options (oslo.config) that point out where to find
the input JSON data. It is supposed that Ironic will use an SSH connection
(currently in Fuel we use mcollective) and run this entry point, waiting for the
exit code. If the exit code is equal to 0, provisioning is done. Extremely simple.
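A minimal sketch of that contract (the command, option name, and paths are hypothetical; a harmless local stand-in replaces the real remote call so the sketch is self-contained):

```python
import subprocess

# Run the provisioning entry point and judge success purely by the exit
# code, as described above. In real use cmd would be something like:
#   ["ssh", "node-1", "/usr/bin/provision --input_data_file /tmp/provision.json"]
# (option name hypothetical; in Fuel today the transport is mcollective).
def provision(cmd):
    result = subprocess.run(cmd)
    return result.returncode == 0  # exit code 0 means provisioning is done

if provision(["true"]):  # stand-in command that always exits 0
    print("provisioning done")
else:
    print("provisioning failed")
```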


  * it has executable entry point[s]
 * It uses local json file as it's input
 * It is planned to implement ability to download input data via HTTP
 (kind of metadata service)
 * It is designed to be agnostic to input data format, not only Fuel
 format (data drivers)
 * It is designed to be agnostic to image format (tar images, file system
 images, disk images, currently fs images)
 * It is designed to be agnostic to image compression algorithm
 (currently gzip)
 * It is designed to be agnostic to image downloading protocol (currently
 local file and HTTP link)
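The "agnostic via drivers" bullets above amount to a registry pattern: each concern (data format, image format, compression, download protocol) is resolved through a lookup, so new variants plug in without touching the core flow. A minimal sketch with illustrative names (these are not the real Fuel Agent classes):

```python
import gzip

# Registry of decompression drivers; image-format, data-format and
# protocol drivers would follow the same shape.
DECOMPRESSORS = {}

def decompressor(name):
    """Register a decompression driver under *name*."""
    def register(fn):
        DECOMPRESSORS[name] = fn
        return fn
    return register

@decompressor("gzip")
def _gunzip(data):
    return gzip.decompress(data)

@decompressor("raw")
def _raw(data):
    return data

def unpack_image(blob, compression):
    """Decompress an image blob using whichever driver was registered."""
    try:
        return DECOMPRESSORS[compression](blob)
    except KeyError:
        raise ValueError("unsupported compression: %s" % compression)
```

Adding, say, xz support then means registering one more function rather than touching the provisioning flow.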

 Does it support Glance? I understand it's HTTP, but it requires
 authentication.


 So, it is clear that although it was motivated by Fuel, Fuel Agent is quite
 independent and generic. And we are open to
 new use cases.

 My favorite use case is hardware introspection (aka getting data required
 for scheduling from a node automatically). Any ideas on this? (It's not a
 priority for this discussion, just curious).


That is exactly what we do in Fuel. Currently we use a so-called 'Default'
pxelinux config, and all nodes being powered on are supposed to boot into a
so-called 'Bootstrap' ramdisk, where an Ohai-based agent (not Fuel Agent)
runs periodically and sends a hardware report to the Fuel master node.
The user is then able to look at CPU, hard drive and network info and choose
which nodes to use for controllers, which for computes, etc. That is what
the nova scheduler is supposed to do (look at hardware info and choose a
suitable node).
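The report such a bootstrap agent sends might look like the following minimal sketch. Field names here are illustrative; the real Ohai-based agent collects far more detail (NICs, disks by ID, etc.) and POSTs it to the master node over HTTP:

```python
import os
import platform
import shutil

def hardware_report():
    """Collect a minimal hardware/OS report for a booted node.

    Only sketches the shape of the data the thread describes; the
    actual field set and transport belong to the Ohai-based agent.
    """
    disk = shutil.disk_usage("/")
    return {
        "cpus": os.cpu_count(),
        "arch": platform.machine(),
        "kernel": platform.release(),
        "root_disk_bytes": disk.total,
    }
```

A scheduler (or a human picking controller vs. compute roles, as in Fuel) would then filter nodes on exactly these kinds of fields.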

Talking about the future, we plan to re-implement the inventory/discovery
stuff in terms of Fuel Agent (currently it is implemented as an independent
Ohai-based script). The estimate for that is March 2015.




 Regarding Fuel itself, our nearest plan is to get rid of Cobbler because
 in the case of the image based approach it is a huge overhead. The question is
 which tool we can use instead of Cobbler. We need power management,
 we need TFTP 

Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Vladimir Kozhukalov
s/though/throw/g

Vladimir Kozhukalov

On Tue, Dec 9, 2014 at 5:40 PM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:



 Vladimir Kozhukalov

 On Tue, Dec 9, 2014 at 3:51 PM, Dmitry Tantsur dtant...@redhat.com
 wrote:

 Hi folks,

 Thank you for additional explanation, it does clarify things a bit. I'd
 like to note, however, that you talk a lot about how _different_ Fuel Agent
 is from what Ironic does now. I'd like actually to know how well it's going
 to fit into what Ironic does (in additional to your specific use cases).
 Hence my comments inline:



 On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote:

 Just a short explanation of Fuel use case.

 Fuel use case is not a cloud. Fuel is a deployment tool. We install OS
 on bare metal servers and on VMs
 and then configure this OS using Puppet. We have been using Cobbler as
 our OS provisioning tool since the beginning of Fuel.
 However, Cobbler assumes using native OS installers (Anaconda and
 Debian-installer). For some reasons we decided to
 switch to image based approach for installing OS.

 One of Fuel features is the ability to provide advanced partitioning
 schemes (including software RAIDs, LVM).
 Native installers are quite difficult to customize in the field of
 partitioning
 (that was one of the reasons to switch to image based approach).
 Moreover, we'd like to implement even more
 flexible user experience. We'd like to allow user to choose which hard
 drives to use for root FS, for
 allocating DB. We'd like user to be able to put root FS over LV or MD
 device (including stripe, mirror, multipath).
 We'd like user to be able to choose which hard drives are bootable (if
 any), which options to use for mounting file systems.
 Many different cases are possible. If you ask why we'd like to
 support all those cases, the answer is simple:
 because our users want us to support all those cases.
 Obviously, many of those cases cannot be implemented inside the image
 itself, and some cannot be implemented at the configuration stage
 either (e.g. placing the root FS on an LVM device).

 Since those use cases were rejected for implementation in IPA,
 we implemented the so-called Fuel Agent.
 Important Fuel Agent features are:

 * It does not have REST API

 I would not call it a feature :-P

 Speaking seriously, if your agent is a long-running thing and it gets its
 configuration from e.g. a JSON file, how can Ironic notify it of any changes?

 Fuel Agent is not a long-running service. Currently there is no need to
 have a REST API. If we ever deal with keep-alive style inventory/discovery
 work, then we will probably add an API. Frankly, the IPA REST API is not
 REST at all. However, that is not a reason not to call it a feature and
 throw it away; it is a reason to work on it and improve it. That is how I
 try to look at things (pragmatically).

 Fuel Agent has executable entry point[s] like /usr/bin/provision. You can
 run this entry point with options (oslo.config) and point out where to find
 the input JSON data. Ironic is expected to use an SSH connection (currently
 in Fuel we use mcollective), run the command, and wait for the exit code.
 If the exit code is 0, provisioning is done. Extremely simple.


 * It has executable entry point[s]
 * It uses a local JSON file as its input
 * It is planned to add the ability to download input data via HTTP
 (a kind of metadata service)
 * It is designed to be agnostic to input data format, not only Fuel
 format (data drivers)
 * It is designed to be agnostic to image format (tar images, file system
 images, disk images, currently fs images)
 * It is designed to be agnostic to image compression algorithm
 (currently gzip)
 * It is designed to be agnostic to image downloading protocol (currently
 local file and HTTP link)

 Does it support Glance? I understand it's HTTP, but it requires
 authentication.


 So, it is clear that being motivated by Fuel, Fuel Agent is quite
 independent and generic. And we are open for
 new use cases.

 My favorite use case is hardware introspection (aka getting data required
 for scheduling from a node automatically). Any ideas on this? (It's not a
 priority for this discussion, just curious).


 That is exactly what we do in Fuel. Currently we use a so-called 'Default'
 pxelinux config, and all nodes being powered on are supposed to boot into a
 so-called 'Bootstrap' ramdisk, where an Ohai-based agent (not Fuel Agent)
 runs periodically and sends a hardware report to the Fuel master node.
 The user is then able to look at CPU, hard drive and network info and choose
 which nodes to use for controllers, which for computes, etc. That is what
 the nova scheduler is supposed to do (look at hardware info and choose a
 suitable node).

 Talking about the future, we plan to re-implement the inventory/discovery
 stuff in terms of Fuel Agent (currently it is implemented as an independent
 Ohai-based script). The estimate for that is March 2015.




 Regarding Fuel itself, our nearest plan is to get rid of Cobbler because
 in 

Re: [openstack-dev] [oslo.messaging][devstack] ZeroMQ driver maintenance next steps

2014-12-09 Thread ozamiatin

+1 To wiki page.

I also tried to deploy devstack with zmq, and ran into the same problems:

https://bugs.launchpad.net/devstack/+bug/1397999
https://bugs.launchpad.net/oslo.messaging/+bug/1395721

We also have some unimplemented pieces in the zmq driver;
one of them: https://bugs.launchpad.net/oslo.messaging/+bug/1400323

On 09.12.14 16:07, Doug Hellmann wrote:

On Dec 8, 2014, at 11:25 PM, Li Ma skywalker.n...@gmail.com wrote:


Hi all, I tried to deploy ZeroMQ with devstack and it definitely failed with lots 
of problems: dependencies, topics, matchmaker setup, etc. I've already 
registered a blueprint for devstack-zeromq [1].

I added the [devstack] tag to the subject of this message so that the devstack 
team will see the thread.

Besides, I suggest building a wiki page to track all the work items related 
to ZeroMQ. The general sections might be [Why ZeroMQ], [Current Bugs & Reviews], 
[Future Plans & Blueprints], [Discussions], [Resources], etc.

Coordinating the work on this via a wiki page makes sense. Please post the link 
when you’re ready.

Doug


Any comments?

[1] https://blueprints.launchpad.net/devstack/+spec/zeromq

cheers,
Li Ma

On 2014/11/18 21:46, James Page wrote:


On 18/11/14 00:55, Denis Makogon wrote:

So if zmq driver support in devstack is fixed, we can easily add a
new job to run them in the same way.


Btw this is a good question. I will take look at current state of
zmq in devstack.

I don't think it's that far off, and it's broken rather than missing -
the rpc backend code needs updating to use oslo.messaging rather than
project-specific copies of the pre-Oslo rpc common codebase.
Devstack should be able to run with the local matchmaker in most
scenarios, but it looks like there was support for the redis matchmaker
as well.
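For reference, pointing a service at the zmq driver with the local matchmaker in a single-host devstack comes down to a configuration fragment along these lines. The exact option names varied across oslo.messaging releases, so treat this as an approximation rather than authoritative:

```ini
[DEFAULT]
# use the ZeroMQ driver instead of the default rabbit backend
rpc_backend = zmq
# the local matchmaker is enough on a single host; multi-host
# deployments would point this at the redis matchmaker instead
rpc_zmq_matchmaker = local
# hostname other peers use to reach this node's zmq receiver
rpc_zmq_host = localhost
```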

If you could take some time to fix it up, that would be awesome!

- -- James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Dmitry Tantsur

On 12/09/2014 03:40 PM, Vladimir Kozhukalov wrote:



Vladimir Kozhukalov

On Tue, Dec 9, 2014 at 3:51 PM, Dmitry Tantsur dtant...@redhat.com
mailto:dtant...@redhat.com wrote:

Hi folks,

Thank you for additional explanation, it does clarify things a bit.
I'd like to note, however, that you talk a lot about how _different_
Fuel Agent is from what Ironic does now. I'd like actually to know
how well it's going to fit into what Ironic does (in additional to
your specific use cases). Hence my comments inline:



On 12/09/2014 01:01 PM, Vladimir Kozhukalov wrote:

Just a short explanation of Fuel use case.

Fuel use case is not a cloud. Fuel is a deployment tool. We
install OS
on bare metal servers and on VMs
and then configure this OS using Puppet. We have been using
Cobbler as
our OS provisioning tool since the beginning of Fuel.
However, Cobbler assumes using native OS installers (Anaconda and
Debian-installer). For some reasons we decided to
switch to image based approach for installing OS.

One of Fuel features is the ability to provide advanced partitioning
schemes (including software RAIDs, LVM).
Native installers are quite difficult to customize in the field of
partitioning
(that was one of the reasons to switch to image based approach).
Moreover, we'd like to implement even more
flexible user experience. We'd like to allow user to choose
which hard
drives to use for root FS, for
allocating DB. We'd like user to be able to put root FS over LV
or MD
device (including stripe, mirror, multipath).
We'd like user to be able to choose which hard drives are
bootable (if
any), which options to use for mounting file systems.
Many different cases are possible. If you ask why we'd like to
support all those cases, the answer is simple:
because our users want us to support all those cases.
Obviously, many of those cases cannot be implemented inside the image
itself, and some cannot be implemented at the configuration stage
either (e.g. placing the root FS on an LVM device).

Since those use cases were rejected for implementation in IPA,
we implemented the so-called Fuel Agent.
Important Fuel Agent features are:

* It does not have REST API

I would not call it a feature :-P

Speaking seriously, if your agent is a long-running thing and it gets
its configuration from e.g. a JSON file, how can Ironic notify it of
any changes?

Fuel Agent is not a long-running service. Currently there is no need to
have a REST API. If we ever deal with keep-alive style
inventory/discovery work, then we will probably add an API. Frankly, the
IPA REST API is not REST at all. However, that is not a reason not to
call it a feature and throw it away; it is a reason to work on it and
improve it. That is how I try to look at things (pragmatically).

Fuel Agent has executable entry point[s] like /usr/bin/provision. You
can run this entry point with options (oslo.config) and point out where
to find input json data. It is supposed Ironic will  use ssh (currently
in Fuel we use mcollective) connection and run this waiting for exit
code. If exit code is equal to 0, provisioning is done. Extremely simple.

* it has executable entry point[s]
* It uses a local JSON file as its input
* It is planned to implement ability to download input data via HTTP
(kind of metadata service)
* It is designed to be agnostic to input data format, not only Fuel
format (data drivers)
* It is designed to be agnostic to image format (tar images,
file system
images, disk images, currently fs images)
* It is designed to be agnostic to image compression algorithm
(currently gzip)
* It is designed to be agnostic to image downloading protocol
(currently
local file and HTTP link)

Does it support Glance? I understand it's HTTP, but it requires
authentication.


So, it is clear that being motivated by Fuel, Fuel Agent is quite
independent and generic. And we are open for
new use cases.

My favorite use case is hardware introspection (aka getting data
required for scheduling from a node automatically). Any ideas on
this? (It's not a priority for this discussion, just curious).


That is exactly what we do in Fuel. Currently we use so called 'Default'
pxelinux config and all nodes being powered on are supposed to boot with
so called 'Bootstrap' ramdisk where Ohai based agent (not Fuel Agent)
runs periodically and sends hardware report to Fuel master node.
User then is able to look at CPU, hard drive and network info and choose
which nodes to use for controllers, which for computes, etc. That is
what nova scheduler is supposed to do 

Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Jim Rollenhagen
On Tue, Dec 09, 2014 at 04:01:07PM +0400, Vladimir Kozhukalov wrote:
 Just a short explanation of Fuel use case.
 
 Fuel use case is not a cloud. Fuel is a deployment tool. We install OS on
 bare metal servers and on VMs
 and then configure this OS using Puppet. We have been using Cobbler as our
 OS provisioning tool since the beginning of Fuel.
 However, Cobbler assumes using native OS installers (Anaconda and
 Debian-installer). For some reasons we decided to
 switch to image based approach for installing OS.
 
 One of Fuel features is the ability to provide advanced partitioning
 schemes (including software RAIDs, LVM).
 Native installers are quite difficult to customize in the field of
 partitioning
 (that was one of the reasons to switch to image based approach). Moreover,
 we'd like to implement even more
 flexible user experience. We'd like to allow user to choose which hard
 drives to use for root FS, for
 allocating DB. We'd like user to be able to put root FS over LV or MD
 device (including stripe, mirror, multipath).
 We'd like user to be able to choose which hard drives are bootable (if
 any), which options to use for mounting file systems.
 Many different cases are possible. If you ask why we'd like to support
 all those cases, the answer is simple:
 because our users want us to support all those cases.
 Obviously, many of those cases cannot be implemented inside the image
 itself, and some cannot be implemented at the configuration stage
 either (e.g. placing the root FS on an LVM device).
 
 Since those use cases were rejected for implementation in IPA,
 we implemented the so-called Fuel Agent.

This is *precisely* why I disagree with adding this driver.

Nearly every feature listed here has been talked about before within
the Ironic community: software RAID, LVM, the user choosing the
partition layout. These were rejected from IPA because they do not fit in
*Ironic*, not because they don't fit in IPA.

If the Fuel team can convince enough people that Ironic should be
managing pets, then I'm almost okay with adding this driver (though I
still think adding those features to IPA is the right thing to do).

// jim

 Important Fuel Agent features are:
 
 * It does not have REST API
 * it has executable entry point[s]
 * It uses a local JSON file as its input
 * It is planned to implement ability to download input data via HTTP (kind
 of metadata service)
 * It is designed to be agnostic to input data format, not only Fuel format
 (data drivers)
 * It is designed to be agnostic to image format (tar images, file system
 images, disk images, currently fs images)
 * It is designed to be agnostic to image compression algorithm (currently
 gzip)
 * It is designed to be agnostic to image downloading protocol (currently
 local file and HTTP link)
 
 So, it is clear that being motivated by Fuel, Fuel Agent is quite
 independent and generic. And we are open for
 new use cases.
 
 Regarding Fuel itself, our nearest plan is to get rid of Cobbler because
 in the case of the image based approach it is a huge overhead. The question is
 which tool we can use instead of Cobbler. We need power management,
 we need TFTP management, we need DHCP management. That is
 exactly what Ironic is able to do. Frankly, we can implement power/TFTP/DHCP
 management tool independently, but as Devananda said, we're all working on
 the same problems,
 so let's do it together.  Power/TFTP/DHCP management is where we are
 working on the same problems,
 but IPA and Fuel Agent are about different use cases. This case is not just
 Fuel, any mature
 deployment case require advanced partition/fs management. However, for me
 it is OK, if it is easily possible
 to use Ironic with external drivers (not merged to Ironic and not tested on
 Ironic CI).
 
 AFAIU, this spec https://review.openstack.org/#/c/138115/ does not assume
 changing the Ironic API and core.
 Jim asked how Fuel Agent will know about the advanced disk partitioning
 scheme if the API is not changed. The answer is simple: Ironic is supposed
 to send a link to a metadata service (HTTP or local file)
 where Fuel Agent can download the input JSON data.
 
 As Roman said, we try to be pragmatic and suggest something which does not
 break anything. All changes are supposed to be encapsulated in a driver.
 No API or core changes. We have the resources to support, test
 and improve this driver. This spec is just a zero step. Further steps are
 supposed to improve the driver
 so as to bring it closer to Ironic abstractions.
 
 For Ironic that means widening use cases and user community. But, as I
 already said,
 we are OK if Ironic does not need this feature.
 
 Vladimir Kozhukalov
 
 On Tue, Dec 9, 2014 at 1:09 PM, Roman Prykhodchenko 
 rprikhodche...@mirantis.com wrote:
 
  It is true that IPA and FuelAgent share a lot of functionality in common.
  However there is a major difference between them which is that they are
  intended to be used to solve a different problem.
 
  IPA is a solution for 

Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sean Dague
On 12/09/2014 09:11 AM, Doug Hellmann wrote:
 
 On Dec 9, 2014, at 6:39 AM, Sean Dague s...@dague.net wrote:
 
 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.

 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).

 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm

 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.
 
 I don’t have the hacking rules memorized. Could you describe them briefly?

Sure, the H8* group is git commit messages. It's checking for line
length in the commit message.

- [H802] First, provide a brief summary of 50 characters or less. Summaries
  greater than 72 characters will be rejected by the gate.

- [H801] The first line of the commit message should provide an accurate
  description of the change, not just a reference to a bug or
  blueprint.


H802 is mechanically enforced (though not the 50-character part, so the
code isn't the same as the rule).

H801 is enforced by a regex that checks whether the first line is just a
Launchpad bug reference and fails on it. You can't mechanically enforce
that English provides an accurate description.
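A toy version of those two checks, to make the mechanics concrete (the real implementations live in the hacking package and differ in detail):

```python
import re

def check_commit_message(message):
    """Sketch of the H801/H802 commit-message checks described above.

    Returns a list of rule codes the summary (first) line violates.
    """
    violations = []
    summary = message.splitlines()[0] if message else ""
    if len(summary) > 72:          # the gate rejects summaries over 72
        violations.append("H802")
    # H801: first line must be a description, not just a bug reference
    if re.match(r"^(closes-|fixes\s+)?bug[:\s#]*\d+\s*$", summary, re.I):
        violations.append("H801")
    return violations
```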


H3* are all the module import rules:

Imports
---
- [H302] Do not import objects, only modules (*)
- [H301] Do not import more than one module per line (*)
- [H303] Do not use wildcard ``*`` import (*)
- [H304] Do not make relative imports
- Order your imports by the full module path
- [H305 H306 H307] Organize your imports according to the `Import order
  template`_ and `Real-world Import Order Examples`_ below.

I think these remain reasonable guidelines, but H302 is exceptionally
tricky to get right, and we keep not getting it right.

H305-307 are actually impossible to get right. Things come in and out of
the stdlib in Python all the time.
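For contrast, H301 and H303 are the easy, purely textual end of the group; a toy checker fits in a few lines. H302 is deliberately skipped here because it needs runtime inspection of sys.modules to tell modules from objects, which is exactly where the complexity and the bugs come from:

```python
import re

def check_import_lines(source):
    """Toy checker for two of the H3xx rules (H301: one module per
    line, H303: no wildcard imports). Returns (lineno, code) pairs.
    """
    violations = []
    for lineno, line in enumerate(source.splitlines(), 1):
        stripped = line.strip()
        # H301: "import a, b" puts more than one module on a line
        if re.match(r"import\s+\w[\w.]*\s*,", stripped):
            violations.append((lineno, "H301"))
        # H303: "from x import *" is a wildcard import
        if re.match(r"from\s+\S+\s+import\s+\*", stripped):
            violations.append((lineno, "H303"))
    return violations
```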


I think it's time to just decide to be reasonable humans and treat these
as guidelines.

The H3* set of rules is also why you have to install *all* of
requirements.txt and test-requirements.txt in your pep8 tox target,
because H302 actually inspects the sys.modules to attempt to figure out
if things are correct.

-Sean

 
 Doug

 

  -Sean

 -- 
 Sean Dague
 http://dague.net

 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Chmouel Boudjnah
On Tue, Dec 9, 2014 at 12:39 PM, Sean Dague s...@dague.net wrote:

 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally.


I do run them locally using git-review's custom script feature, which
launches flake8 before sending the review, but I guess that's not common
usage.

Chmouel


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Monty Taylor
On 12/09/2014 03:39 AM, Sean Dague wrote:
 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.
 
 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).

+1

I DO like something warning about commit subject length ... but maybe
that should be a git-review function or something.

 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm
 
 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.

+1

flake8 does the important one now - no wildcard imports. The others are
ones where I find myself dancing to meet the style more often than not.




[openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time

2014-12-09 Thread Kurt Taylor
All of the feedback so far has supported moving the existing IRC
Third-party CI meeting to better fit a worldwide audience.

The consensus is that we will have only 1 meeting per week at alternating
times. You can see examples of other teams with alternating meeting times
at: https://wiki.openstack.org/wiki/Meetings

This way, one week the time works for one part of the world, and the next
week for the other. You will not need to attend both meetings, just the
meeting every other week at the time that fits your schedule.

Proposed times in UTC are being voted on here:
https://www.google.com/moderator/#16/e=21b93c

Please vote on the time that is best for you. I would like to finalize the
new times this week.

Thanks!
Kurt Taylor (krtaylor)


[openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-09 Thread Dmitry Guryanov
Hello!

There is a feature in the HypervisorSupportMatrix 
(https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called Get Guest 
Info. Does anybody know what it means? I haven't found anything like it in 
the Nova API, Horizon, or the nova command line.

-- 
Thanks,
Dmitry Guryanov



Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Yuriy Zveryanskyy

On 12/09/2014 05:00 PM, Jim Rollenhagen wrote:

On Tue, Dec 09, 2014 at 04:01:07PM +0400, Vladimir Kozhukalov wrote:

Just a short explanation of Fuel use case.

Fuel use case is not a cloud. Fuel is a deployment tool. We install OS on
bare metal servers and on VMs
and then configure this OS using Puppet. We have been using Cobbler as our
OS provisioning tool since the beginning of Fuel.
However, Cobbler assumes using native OS installers (Anaconda and
Debian-installer). For some reasons we decided to
switch to image based approach for installing OS.

One of Fuel features is the ability to provide advanced partitioning
schemes (including software RAIDs, LVM).
Native installers are quite difficult to customize in the field of
partitioning
(that was one of the reasons to switch to image based approach). Moreover,
we'd like to implement even more
flexible user experience. We'd like to allow user to choose which hard
drives to use for root FS, for
allocating DB. We'd like user to be able to put root FS over LV or MD
device (including stripe, mirror, multipath).
We'd like user to be able to choose which hard drives are bootable (if
any), which options to use for mounting file systems.
Many different cases are possible. If you ask why we'd like to support
all those cases, the answer is simple:
because our users want us to support all those cases.
Obviously, many of those cases cannot be implemented inside the image
itself, and some cannot be implemented at the configuration stage
either (e.g. placing the root FS on an LVM device).

Since those use cases were rejected for implementation in IPA,
we implemented the so-called Fuel Agent.

This is *precisely* why I disagree with adding this driver.

Nearly every feature that is listed here has been talked about before,
within the Ironic community. Software RAID, LVM, user choosing the
partition layout. These were rejected from IPA because they do not fit in
*Ironic*, not because they don't fit in IPA.


Yes, they do not fit in the Ironic *core*, but this is a *driver*.
There is an iLO driver, for example. Is iLO management technology good or
bad? I don't know, but it is an existing vendor's solution, and I would
have to buy or rent an HP server to test or experiment with the iLO driver.
Fuel is a widely used, open-source deployment solution. I think having a
Fuel Agent driver in Ironic would be better than, say, a driver for some
rare piece of hardware.


If the Fuel team can convince enough people that Ironic should be
managing pets, then I'm almost okay with adding this driver (though I
still think adding those features to IPA is the right thing to do).

// jim


Important Fuel Agent features are:

* It does not have REST API
* it has executable entry point[s]
* It uses a local JSON file as its input
* It is planned to implement ability to download input data via HTTP (kind
of metadata service)
* It is designed to be agnostic to input data format, not only Fuel format
(data drivers)
* It is designed to be agnostic to image format (tar images, file system
images, disk images, currently fs images)
* It is designed to be agnostic to image compression algorithm (currently
gzip)
* It is designed to be agnostic to image downloading protocol (currently
local file and HTTP link)

So, it is clear that being motivated by Fuel, Fuel Agent is quite
independent and generic. And we are open for
new use cases.

Regarding Fuel itself, our nearest plan is to get rid of Cobbler because
in the case of the image based approach it is a huge overhead. The question is
which tool we can use instead of Cobbler. We need power management,
we need TFTP management, we need DHCP management. That is
exactly what Ironic is able to do. Frankly, we can implement power/TFTP/DHCP
management tool independently, but as Devananda said, we're all working on
the same problems,
so let's do it together.  Power/TFTP/DHCP management is where we are
working on the same problems,
but IPA and Fuel Agent are about different use cases. This case is not just
Fuel, any mature
deployment case require advanced partition/fs management. However, for me
it is OK, if it is easily possible
to use Ironic with external drivers (not merged to Ironic and not tested on
Ironic CI).

AFAIU, this spec https://review.openstack.org/#/c/138115/ does not assume
changing Ironic API and core.
Jim asked about how Fuel Agent will know about advanced disk partitioning
scheme if API is not
changed. The answer is simple: Ironic is supposed to send a link to
metadata service (http or local file)
where Fuel Agent can download input json data.

As Roman said, we try to be pragmatic and suggest something which does not
break anything. All changes
are supposed to be encapsulated into a driver. No API and core changes. We
have resources to support, test
and improve this driver. This spec is just a zero step. Further steps are
supposed to improve driver
so as to make it closer to Ironic abstractions.

For Ironic that means widening use cases and user community. But, as I
already 

[openstack-dev] new library oslo.context released

2014-12-09 Thread Doug Hellmann
The Oslo team is pleased to announce the release of oslo.context 0.1.0. This is 
the first version of oslo.context, the library containing the base class for 
the request context object. This is a relatively small module, but we’ve placed 
it in a separate library so oslo.messaging and oslo.log can both influence its 
API without “owning” it or causing circular dependencies.

This initial release includes enough support for projects to start adopting the 
oslo_context.RequestContext as the base class for existing request context 
implementations. The documentation [1] covers the basic API that is present 
now. We expect to find a few holes as we start integrating the library into 
existing projects, so liaisons please work with us to identify and plug the 
holes by reporting bugs or raising issues in #openstack-oslo.

There are a few more API additions planned for the library, but those will come 
later in the release cycle. Please see the spec for details if you are 
interested [2].

Thanks!
Doug

[1] http://docs.openstack.org/developer/oslo.context/
[2] 
http://specs.openstack.org/openstack/oslo-specs/specs/kilo/graduate-oslo-context.html
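For projects starting to adopt the library, the basic pattern is subclassing the shared base context. The stand-in base class below only mimics the idea so the sketch stays self-contained; the real base class lives in the oslo_context package, and the field names shown are illustrative.

```python
# Stand-in mimicking the idea of oslo_context's RequestContext; the
# real base class lives in the oslo_context package, and these field
# names are illustrative assumptions.
class RequestContext(object):
    def __init__(self, auth_token=None, user=None, tenant=None,
                 is_admin=False):
        self.auth_token = auth_token
        self.user = user
        self.tenant = tenant
        self.is_admin = is_admin

    def to_dict(self):
        return {"auth_token": self.auth_token, "user": self.user,
                "tenant": self.tenant, "is_admin": self.is_admin}


class MyProjectContext(RequestContext):
    """A project-specific context adding one extra field, the way a
    consuming project would extend the shared base class."""
    def __init__(self, quota_class=None, **kwargs):
        super(MyProjectContext, self).__init__(**kwargs)
        self.quota_class = quota_class


ctx = MyProjectContext(user="alice", tenant="demo", quota_class="gold")
print(ctx.to_dict()["user"], ctx.quota_class)  # → alice gold
```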


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V Meeting Canceled for Today

2014-12-09 Thread Peter Pouliot
Hi All,

I'm canceling the Hyper-V meeting today due to conflicting schedules; we will 
resume next week.

Peter J. Pouliot CISSP
Microsoft Enterprise Cloud Solutions
C:\OpenStack
New England Research  Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] OSAAA-Policy

2014-12-09 Thread Brad Topol
+1!  Makes sense.

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Morgan Fainberg morgan.fainb...@gmail.com
To: Adam Young ayo...@redhat.com, OpenStack Development Mailing 
List (not for usage questions) openstack-dev@lists.openstack.org
Date:   12/08/2014 06:07 PM
Subject:Re: [openstack-dev] [Keystone] OSAAA-Policy



I agree that this library should not have “Keystone” in the name. This is 
more along the lines of pycadf, something that is housed under the 
OpenStack Identity Program but it is more interesting for general use-case 
than exclusively something that is tied to Keystone specifically.

Cheers,
Morgan

-- 
Morgan Fainberg

On December 8, 2014 at 4:55:20 PM, Adam Young (ayo...@redhat.com) wrote:
The Policy library has been nominated for promotion from Oslo 
incubator. The Keystone team was formerly known as the Identity 
Program, but now is Authentication, Authorization, and Audit, or AAA. 

Does the prefix OSAAA for the library make sense? It should not be 
Keystone-policy. 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] HA issues

2014-12-09 Thread Dulko, Michal
And what about no recovery in case of failure mid-task? I can see that there's 
some TaskFlow integration done. This lib seems to address these issues (if used 
with the taskflow.persistence submodule, which Cinder isn't using). Any plans for
further integration with TaskFlow?

-Original Message-
From: John Griffith [mailto:john.griffi...@gmail.com] 
Sent: Monday, December 8, 2014 11:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] HA issues

On Mon, Dec 8, 2014 at 8:18 AM, Dulko, Michal michal.du...@intel.com wrote:
 Hi all!



 At the summit during crossproject HA session there were multiple 
 Cinder issues mentioned. These can be found in this etherpad:
 https://etherpad.openstack.org/p/kilo-crossproject-ha-integration



 Is there any ongoing effort to fix these issues? Is there an idea how 
 to approach any of them?


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks for the nudge on this, personally I hadn't seen this.  So the items are 
pretty vague, there are def plans to try and address a number of race 
conditions etc.  I'm not aware of any specific plans to focus on HA from this 
perspective, or anybody stepping up to work on it, but it certainly would be great 
for somebody to dig in and start fleshing this out.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Brian Curtin
On Tue, Dec 9, 2014 at 9:05 AM, Sean Dague s...@dague.net wrote:
 - [H305 H306 H307] Organize your imports according to the `Import order
   template`_ and `Real-world Import Order Examples`_ below.

 I think these remain reasonable guidelines, but H302 is exceptionally
 tricky to get right, and we keep not getting it right.

 H305-307 are actually impossible to get right. Things come in and out of
 stdlib in python all the time.

Do you have concrete examples of where this has been an issue? Modules
are only added roughly every 18 months and only on the 3.x line as of
the middle of 2010 when 2.7.0 was released. Nothing should have left
the 2.x line within that time as well, and I don't recall anything
having completed a deprecation cycle on the 3.x side.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Vladimir Kozhukalov
We assume next step will be to put provision data (disk partition
 scheme, maybe other data) into driver_info and make Fuel Agent driver
 able to serialize those data (special format) and implement a
 corresponding data driver in Fuel Agent for this format. Again very
 simple. Maybe it is time to think of having Ironic metadata service
 (just maybe).


I'm ok with the format, my question is: what and how is going to collect
 all the data and put into say driver_info?


Fuel has a web service which stores node info in its database. When the user
clicks the Deploy button, this web service serializes the deployment task and
puts it into a task runner (another Fuel component). The task runner then
parses the task and adds a node into Ironic via the REST API (including
driver_info). Then it calls Ironic deploy method and Ironic uses Fuel Agent
driver to deploy a node. Corresponding Fuel spec is here
https://review.openstack.org/#/c/138301/. Again it is zero step
implementation.


Honestly, I think writing roadmap right now is not very rational as far as
 I am not even sure people are interested in widening Ironic use cases. Some
 of the comments were not even constructive like I don't understand what
 your use case is, please use IPA.



 Please don't be offended by this. We did put a lot of effort into IPA and
 it's reasonable to look for good use cases before having one more smart
 ramdisk. Nothing personal, just estimating cost vs value :)
 Also why not use IPA is a fair question for me and the answer is about
 use cases (as you stated it before), not about missing features of IPA,
 right?


You are right it is a fair question, and answer is exactly about *missing
features*.


Nova is not our case. Fuel is totally about deployment. There is some in
 common


 Here is where we hit a difficult point. The major use case for Ironic is to be
 driven by Nova (and assisted by Neutron). Without these two it's hard to
 understand how Fuel Agent is going to fit into the infrastructure. And
 hence my question above about where your json comes from. In the current
 Ironic world the same data is received partly from Nova flavor, partly
 managed by Neutron completely.
 I'm not saying it can't change - we do want to become more stand-alone.
 E.g. we can do without Neutron right now. I think specifying the source of
 input data for Fuel Agent in the Ironic infrastructure would help a lot
 understand, how well Ironic and Fuel Agent could play together.


According to the information I have (correct me if I'm wrong), Ironic is
currently at the stage of becoming a stand-alone service. That is the reason
this spec has been brought up. Again, we need something to manage
power/tftp/dhcp to substitute Cobbler. Ironic looks like a suitable tool,
but we need this driver. We are not going to break anything. We have
resources to test and support this driver. And I can not use IPA *right
now* because it does not have features I need. I can not wait for next half
a year for these features to be implemented. Why can't we add this (Fuel
Agent) driver and then if IPA implements what we need we can switch to IPA.
The only alternative for me right now is to implement my own
power/tftp/dhcp management solution, like I did with Fuel Agent when I did
not get approval for including advanced disk partitioning.

Questions are: Is Ironic interested in this use case or not? Is Ironic
interested to get more development resources? The only case when it's
rational for us to spend our resources to develop Ironic is when we get
something back. We are totally pragmatic, we just address our user's wishes
and issues. It is ok for us to use any tool which provides what we need
(IPA, Fuel Agent, any other).

We need advanced disk partitioning and power/tftp/dhcp management by March
2015. Is it possible to get this from Ironic + IPA? I doubt it. Is it
possible to get this form Ironic + Fuel Agent? Yes it is. Is it possible to
get this from Fuel power/tftp/dhcp management + Fuel Agent? Yes it is. So,
I have two options right now: Ironic + Fuel Agent or Fuel power/tftp/dhcp
management + Fuel Agent.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sean Dague
On 12/09/2014 11:15 AM, Brian Curtin wrote:
 On Tue, Dec 9, 2014 at 9:05 AM, Sean Dague s...@dague.net wrote:
 - [H305 H306 H307] Organize your imports according to the `Import order
   template`_ and `Real-world Import Order Examples`_ below.

 I think these remain reasonable guidelines, but H302 is exceptionally
 tricky to get right, and we keep not getting it right.

 H305-307 are actually impossible to get right. Things come in and out of
 stdlib in python all the time.
 
 Do you have concrete examples of where this has been an issue? Modules
 are only added roughly every 18 months and only on the 3.x line as of
 the middle of 2010 when 2.7.0 was released. Nothing should have left
 the 2.x line within that time as well, and I don't recall anything
 having completed a deprecation cycle on the 3.x side.

argparse - which is stdlib in 2.7, not in 2.6. So hacking on 2.6 would
give different results from 2.7. Less of an issue now that 2.6 support
in OpenStack has been dropped for most projects, but it's a very
concrete example.

This check should run on any version of python and give the same
results. It does not, because it queries python to know what's in stdlib
vs. not.

Having a deprecation cycle isn't the concern here, it's the checks
working the same on python 2.7, 3.3, 3.4, 3.5

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Jeremy Stanley
On 2014-12-09 07:29:31 -0800 (-0800), Monty Taylor wrote:
 I DO like something warning about commit subject length ... but maybe
 that should be a git-review function or something.
[...]

How about a hook in Gerrit to refuse commits based on some simple
(maybe even project-specific) rules?
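As an illustration, the kind of subject-length rule such a hook might enforce can be sketched in a few lines of shell; the 72-character limit and function name here are assumptions for the sketch, not an actual Gerrit hook.

```shell
# Sketch of a subject-length rule a commit-msg hook might enforce.
# The 72-character limit is an assumption, not a Gerrit policy.
check_subject() {
    subject=$(printf '%s\n' "$1" | head -n 1)
    [ "${#subject}" -le 72 ]
}

if check_subject "Fix import ordering in nova.compute"; then
    echo "subject ok"
fi
```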
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] OSAAA-Policy

2014-12-09 Thread Adam Young

On 12/09/2014 10:57 AM, Brad Topol wrote:

+1!  Makes sense.

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From: Morgan Fainberg morgan.fainb...@gmail.com
To: Adam Young ayo...@redhat.com, OpenStack Development Mailing 
List (not for usage questions) openstack-dev@lists.openstack.org

Date: 12/08/2014 06:07 PM
Subject: Re: [openstack-dev] [Keystone] OSAAA-Policy




I agree that this library should not have “Keystone” in the name. This 
is more along the lines of pycadf, something that is housed under the 
OpenStack Identity Program but it is more interesting for general 
use-case than exclusively something that is tied to Keystone specifically.


openstack-policy?  osid-policy?  It really should not position itself as 
a standard.  pycadf is more general purpose, but we are not looking to 
replace all of the rules languages out there.


Cheers,
Morgan

--
Morgan Fainberg

On December 8, 2014 at 4:55:20 PM, Adam Young (_ayoung@redhat.com_ 
mailto:ayo...@redhat.com) wrote:


The Policy library has been nominated for promotion from Oslo
incubator. The Keystone team was formerly known as the Identity
Program, but now is Authentication, Authorization, and Audit, or AAA.

Does the prefix OSAAA for the library make sense? It should not be
Keystone-policy.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] HA issues

2014-12-09 Thread Duncan Thomas
There are some significant limitations to the pure taskflow approach,
however some combination of atomic micro-state management and taskflow
persistence is being looked at
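For readers unfamiliar with what persistence buys here, below is a minimal self-contained illustration of the idea (journaling completed tasks so a re-run resumes mid-flow); this is deliberately *not* TaskFlow's actual API, just the concept it implements.

```python
# Minimal illustration of the idea behind flow persistence: journal each
# completed task so a re-run after a mid-flow crash resumes where it
# stopped.  Deliberately not TaskFlow's actual API.
def run_flow(tasks, journal):
    """Run named tasks in order, skipping any already journaled."""
    for name, func in tasks:
        if name in journal:
            continue            # finished before the simulated crash
        func()
        journal.add(name)       # a real backend would persist this

executed = []
tasks = [("create_volume", lambda: executed.append("create_volume")),
         ("attach_volume", lambda: executed.append("attach_volume"))]

journal = {"create_volume"}     # pretend we crashed after the first task
run_flow(tasks, journal)
print(executed)                 # → ['attach_volume']
```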

Duncan Thomas
On Dec 9, 2014 6:24 PM, Dulko, Michal michal.du...@intel.com wrote:

 And what about no recovery in case of failure mid-task? I can see that
 there's some TaskFlow integration done. This lib seems to address these
  issues (if used with the taskflow.persistence submodule, which Cinder isn't
 using). Any plans for further integration with TaskFlow?

 -Original Message-
 From: John Griffith [mailto:john.griffi...@gmail.com]
 Sent: Monday, December 8, 2014 11:28 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [cinder] HA issues

 On Mon, Dec 8, 2014 at 8:18 AM, Dulko, Michal michal.du...@intel.com
 wrote:
  Hi all!
 
 
 
  At the summit during crossproject HA session there were multiple
  Cinder issues mentioned. These can be found in this etherpad:
  https://etherpad.openstack.org/p/kilo-crossproject-ha-integration
 
 
 
  Is there any ongoing effort to fix these issues? Is there an idea how
  to approach any of them?
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Thanks for the nudge on this, personally I hadn't seen this.  So the items
 are pretty vague, there are def plans to try and address a number of race
 conditions etc.  I'm not aware of any specific plans to focus on HA from
 this perspective, or anybody stepping up to work on it but certainly would
  be great for somebody to dig in and start fleshing this out.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Johannes Erdfelt
On Tue, Dec 09, 2014, Sean Dague s...@dague.net wrote:
 This check should run on any version of python and give the same
 results. It does not, because it queries python to know what's in stdlib
 vs. not.

Just to underscore that it's difficult to get right, I found out recently
that hacking doesn't do a great job of figuring out what is a standard
library.

I've installed some libraries in 'develop' mode and recent hacking
thinks they are standard libraries and complains about the order.

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Matthew Treinish
On Tue, Dec 09, 2014 at 10:15:34AM -0600, Brian Curtin wrote:
 On Tue, Dec 9, 2014 at 9:05 AM, Sean Dague s...@dague.net wrote:
  - [H305 H306 H307] Organize your imports according to the `Import order
template`_ and `Real-world Import Order Examples`_ below.
 
  I think these remain reasonable guidelines, but H302 is exceptionally
  tricky to get right, and we keep not getting it right.
 
  H305-307 are actually impossible to get right. Things come in and out of
  stdlib in python all the time.
 
 Do you have concrete examples of where this has been an issue? Modules
 are only added roughly every 18 months and only on the 3.x line as of
 the middle of 2010 when 2.7.0 was released. Nothing should have left
 the 2.x line within that time as well, and I don't recall anything
 having completed a deprecation cycle on the 3.x side.
 

I don't have any examples of stdlib removals (and there may not be any) but that
isn't the only issue with the import grouping rules. The reverse will also
cause issues, adding a library to stdlib which was previously a third-party
module. The best example I've found is pathlib which was added to stdlib in 3.4:

https://docs.python.org/3/library/pathlib.html

but a third-party module on all the previous releases:

https://pypi.python.org/pypi/pathlib

So, the hacking rule will behave differently depending on which version of
python you're running with. There really isn't a way around that, if the rule
can't behave consistently and enforce the same behavior between releases we
shouldn't be using it. Especially as things are trying to migrate to use python
3 where possible.

I've seen proposals to hard code the list of stdlib in the rule to a specific
python version which would make the behavior consistent, but I'm very much opposed
to that because it means we're not actually enforcing the correct thing which I
think is as big an issue. We don't want the hacking checks to error out and say
that pathlib is a 3rd party module even if we're running it on python 3.4, that
would just be very confusing.

The middle ground I proposed was to not differentiate the third-party and stdlib
import groups and just check local project's import grouping against the others.
This would make the behavior consistent between python versions and still
provide some useful feedback. But if the consensus is to just remove the rules
I'm fine with that too.
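To illustrate why the check cannot be made consistent, here is a rough sketch of the kind of interpreter-querying logic involved; this is an approximation of the approach, not hacking's actual code, and it glosses over several edge cases.

```python
import importlib.util
import os

def in_stdlib(name):
    """Guess whether *name* is stdlib by asking the running interpreter
    where the module lives -- roughly the approach that makes such
    checks version-dependent.  Edge cases are only approximated."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return False
    if spec.origin in (None, "built-in", "frozen"):
        return True  # built-in/frozen modules ship with the interpreter
    stdlib_dir = os.path.dirname(os.__file__)
    return (spec.origin.startswith(stdlib_dir)
            and "site-packages" not in spec.origin)

# pathlib: third-party before Python 3.4, stdlib afterwards -- so the
# answer depends entirely on which interpreter runs the check.
print(in_stdlib("pathlib"))  # → True on Python 3.4+
```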


-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-09 Thread Salvatore Orlando
I would like to chime into this discussion wearing my plugin developer hat.

We (the VMware team) have looked very carefully at the current proposal for
splitting off drivers and plugins from the main source code tree. Therefore
the concerns you've heard from Gary are not just ramblings but are the
results of careful examination of this proposal.

While we agree with the final goal, the feeling is that for many plugin
maintainers this process change might be too much for what can be
accomplished in a single release cycle. As a member of the drivers team, I
am still very supportive of the split, I just want to make sure that it’s
made in a sustainable way; I also understand that “sustainability” has been
one of the requirements of the current proposal, and therefore we should
all be on the same page on this aspect.

However, we did a simple exercise trying to assess the amount of work
needed to achieve something which might be acceptable to satisfy the
process. Without going into too many details, this requires efforts for:

- refactor the code to achieve a plugin module simple and thin enough to
satisfy the requirements. Unfortunately a radical approach like the one in
[1] with a reference to an external library is not pursuable for us

- maintaining code repositories outside of the neutron scope and the
necessary infrastructure

- reinforcing our CI infrastructure, and improve our error detection and
log analysis capabilities to improve reaction times upon failures triggered
by upstream changes. As you know, even if the plugin interface is
solid-ish, the dependency on the db base class increases the chances of
upstream changes breaking 3rd party plugins.

The feedback from our engineering team is that satisfying the requirements
of this new process might not be feasible in the Kilo timeframe, both for
existing plugins and for new plugins and drivers that should be upstreamed
(there are a few proposed on neutron-specs at the moment, which are all in
-2 status considering the impending approval of the split out).

The questions I would like to bring to the wider community are therefore
the following:

1 - Is there a possibility of making a further concession on the current
proposal, where maintainers are encouraged to experiment with the plugin
split in Kilo, but will actually be required to do it in the next release?

2 - What could be considered acceptable as a new plugin? I understand
that they would be accepted only as “thin integration modules”, which
ideally should just be a pointer to code living somewhere else. I’m not
questioning the validity of this approach, but it has been brought to my
attention that this will actually be troubling for teams which have made an
investment in the previous release cycles to upstream plugins following the
“old” process

3 - Regarding the above discussion on ML2 or not ML2. The point on
co-gating is well taken. Eventually we'd like to remove this binding -
because I believe the ML2 subteam would also like to have more freedom on
their plugin. Do we already have an idea about how doing that without
completely moving away from the db_base class approach?

Thanks for your attention and for reading through this

Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22

On 8 December 2014 at 21:51, Maru Newby ma...@redhat.com wrote:


 On Dec 7, 2014, at 10:51 AM, Gary Kotton gkot...@vmware.com wrote:

  Hi Kyle,
  I am not missing the point. I understand the proposal. I just think that
 it has some shortcomings (unless I misunderstand, which will certainly not
 be the first time and most definitely not the last). The thinning out is to
 have a shim in place. I understand this and this will be the entry point
 for the plugin. I do not have a concern for this. My concern is that we are
 not doing this with the ML2 off the bat. That should lead by example as it
 is our reference architecture. Lets not kid anyone, but we are going  to
 hit some problems with the decomposition. I would prefer that it be done
 with the default implementation. Why?

 The proposal is to move vendor-specific logic out of the tree to increase
 vendor control over such code while decreasing load on reviewers.  ML2
 doesn’t contain vendor-specific logic - that’s the province of ML2 drivers
 - so it is not a good target for the proposed decomposition by itself.


• Because we will fix them quicker, as it is something that prevents
 Neutron from moving forwards
• We will just need to fix in one place first and not in N (where
 N is the vendor plugins)
• This is a community effort – so we will have a lot more eyes on
 it
• It will provide a reference architecture for all new plugins
 that want to be added to the tree
• It will provide a working example for plugin that are already in
 tree and are to be replaced by the shim
  If we really want to do this, we can say freeze all development (which
 is just approvals 

Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Johannes Erdfelt
On Tue, Dec 09, 2014, Sean Dague s...@dague.net wrote:
 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.
 
 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).

One of the problems with the H8* tests is that it can reject a commit
message generated by git itself.

I had a 'git revert' rejected because the first line was too long :(

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-09 Thread Daniel P. Berrange
On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote:
 Hello!
 
 There is a feature in HypervisorSupportMatrix 
 (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called Get Guest 
 Info. Does anybody know what it means? I haven't found anything like 
 this in the nova API, in horizon, or in the nova command line.

I've pretty much no idea what the intention was for that field. I've
been working on formally documenting all those things, but drew a blank
on that one.

FYI:

  https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sean Dague
On 12/09/2014 11:41 AM, Johannes Erdfelt wrote:
 On Tue, Dec 09, 2014, Sean Dague s...@dague.net wrote:
 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.

 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).
 
 One of the problems with the H8* tests is that it can reject a commit
 message generated by git itself.
 
 I had a 'git revert' rejected because the first line was too long :(

+1

I've had the gerrit revert button reject me for the same reason.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sean Dague
On 12/09/2014 11:28 AM, Jeremy Stanley wrote:
 On 2014-12-09 07:29:31 -0800 (-0800), Monty Taylor wrote:
 I DO like something warning about commit subject length ... but maybe
 that should be a git-review function or something.
 [...]
 
 How about a hook in Gerrit to refuse commits based on some simple
 (maybe even project-specific) rules?
 

Honestly, any hard rejection ends up problematic. For instance, it means
it's impossible to include actual urls in commit messages to reference
things without a url shortener much of the time.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Kevin L. Mitchell
On Tue, 2014-12-09 at 10:05 -0500, Sean Dague wrote:
 Sure, the H8* group is git commit messages. It's checking for line
 length in the commit message.

I agree the H8* group should be dropped.  It would be appropriate to
create a new gate check job that validated that, but it should not be
part of hacking.

 H3* are all the module import rules:
 
 Imports
 ---
 - [H302] Do not import objects, only modules (*)
 - [H301] Do not import more than one module per line (*)
 - [H303] Do not use wildcard ``*`` import (*)
 - [H304] Do not make relative imports
 - Order your imports by the full module path
 - [H305 H306 H307] Organize your imports according to the `Import order
   template`_ and `Real-world Import Order Examples`_ below.
 
 I think these remain reasonable guidelines, but H302 is exceptionally
 tricky to get right, and we keep not getting it right.
 
 H305-307 are actually impossible to get right. Things come in and out of
 stdlib in python all the time.
 
 
 I think it's time to just decide to be reasonable Humans and that these
 are guidelines.
 
 The H3* set of rules is also why you have to install *all* of
 requirements.txt and test-requirements.txt in your pep8 tox target,
 because H302 actually inspects the sys.modules to attempt to figure out
 if things are correct.

I agree that dropping H302 and the grouping checks makes sense.  I think
we should keep the H301, H303, H304, and the basic ordering checks,
however; it doesn't seem to me that these would be that difficult to
implement or maintain.
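To give a rough sense of scale, an H301-style check really can be sketched in a few lines as a flake8-style plugin function; this toy version is far simpler than the real hacking implementation and knowingly misses edge cases.

```python
import re

def check_h301(logical_line):
    """Toy H301-style check: flag more than one module imported on a
    single line.  Much simpler than the real hacking rule (it ignores
    'from ... import ...' lines and comments containing commas)."""
    match = re.match(r"\s*import\s+(.+)", logical_line)
    if match and "," in match.group(1):
        yield 0, "H301: one import per line"

assert list(check_h301("import os, sys"))      # flagged
assert not list(check_h301("import os"))       # clean
print("toy check behaves as expected")
```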
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] OSAAA-Policy

2014-12-09 Thread Morgan Fainberg
On December 9, 2014 at 10:43:51 AM, Adam Young (ayo...@redhat.com) wrote:
On 12/09/2014 10:57 AM, Brad Topol wrote:
+1!  Makes sense.

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:        Morgan Fainberg morgan.fainb...@gmail.com
To:        Adam Young ayo...@redhat.com, OpenStack Development Mailing List 
(not for usage questions) openstack-dev@lists.openstack.org
Date:        12/08/2014 06:07 PM
Subject:        Re: [openstack-dev] [Keystone] OSAAA-Policy



I agree that this library should not have “Keystone” in the name. This is more 
along the lines of pycadf, something that is housed under the OpenStack 
Identity Program but it is more interesting for general use-case than 
exclusively something that is tied to Keystone specifically.

openstack-policy?  osid-policy?  It really should not position itself as a 
standard.  pycadf is more general purpose, but we are not looking to replace 
all of the rules languages out there.



Just keep in mind we’re a policy rules enforcement library (with whatever name 
we end up with). This is obviously one of the hard computer science issues 
(naming things).

I wasn’t clear, I didn’t mean to imply usage would be like pycadf (a more 
global standard), but just that it was not exclusive (at least within the 
OpenStack world) to be used with Keystone.

Cheers,
Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecating exceptions

2014-12-09 Thread Jeremy Stanley
On 2014-12-09 16:50:48 +0100 (+0100), Jakub Ruzicka wrote:
 On 22.9.2014 17:24, Ihar Hrachyshka wrote:
[...]
  Aren't clients supposed to be backwards compatible? Isn't it the exact
  reason why we don't maintain stable branches for client modules?
 
 Supposed, yes. However, it's not ensured/enforced in any way, so it's as
 good as an empty promise.
[...]

We do test changes to the client libraries against currently
supported stable branches. If distros want to perform similar
regression testing against their supported releases this is also
welcome.

See for example, this python-novaclient change last week being
tested against a stable/icehouse branch DevStack environment:

URL: http://logs.openstack.org/81/139381/1/check/gate-tempest-dsvm-neutron-src-python-novaclient-icehouse/bcd210c/

That is a voting job. If it hadn't succeeded the change couldn't
have been approved to merge.
-- 
Jeremy Stanley



Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sean Dague
On 12/09/2014 11:58 AM, Kevin L. Mitchell wrote:
 On Tue, 2014-12-09 at 10:05 -0500, Sean Dague wrote:
 Sure, the H8* group is git commit messages. It's checking for line
 length in the commit message.
 
 I agree the H8* group should be dropped.  It would be appropriate to
 create a new gate check job that validated that, but it should not be
 part of hacking.
 
 H3* are all the module import rules:

 Imports
 ---
 - [H302] Do not import objects, only modules (*)
 - [H301] Do not import more than one module per line (*)
 - [H303] Do not use wildcard ``*`` import (*)
 - [H304] Do not make relative imports
 - Order your imports by the full module path
 - [H305 H306 H307] Organize your imports according to the `Import order
   template`_ and `Real-world Import Order Examples`_ below.

 I think these remain reasonable guidelines, but H302 is exceptionally
 tricky to get right, and we keep not getting it right.

 H305-307 are actually impossible to get right. Things come in and out of
 stdlib in python all the time.


 I think it's time to just decide to be reasonable Humans and that these
 are guidelines.

 The H3* set of rules is also why you have to install *all* of
 requirements.txt and test-requirements.txt in your pep8 tox target,
 because H302 actually inspects the sys.modules to attempt to figure out
 if things are correct.
 
 I agree that dropping H302 and the grouping checks makes sense.  I think
 we should keep the H301, H303, H304, and the basic ordering checks,
 however; it doesn't seem to me that these would be that difficult to
 implement or maintain.

Well, be careful what you think is easy -
https://github.com/openstack-dev/hacking/blob/master/hacking/checks/imports.py
:)

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Jeremy Stanley
On 2014-12-09 11:56:54 -0500 (-0500), Sean Dague wrote:
 Honestly, any hard rejection ends up problematic. For instance, it
 means it's impossible to include actual urls in commit messages to
 reference things without a url shortener much of the time.

Fair enough. I think this makes it a human problem which we're not
going to solve by applying more technology. Drop all of H8XX, make
Gerrit preserve votes on commit-message-only patchset updates,
decree no more commit message -1s from reviewers, and make it
socially acceptable to just edit commit messages of changes you
review to bring them up to acceptable standards.
-- 
Jeremy Stanley



[openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-09 Thread Markus Zoeller
  On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote:
  
  Hello!
  
  There is a feature in HypervisorSupportMatrix
  (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called "Get Guest
  Info". Does anybody know what it means? I haven't found anything like
  this in the nova api, in horizon, or in the nova command line.

I think this maps to the nova driver function get_info:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4054

I believe (and didn't double-check) that this is used e.g. by the 
Nova CLI via `nova show [--minimal] server` command.

I tried to map the features of the hypervisor support matrix to 
specific nova driver functions on this wiki page:
https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DriverAPI

 On Tue Dec 9 15:39:35 UTC 2014, Daniel P. Berrange wrote:
 I've pretty much no idea what the intention was for that field. I've 
 been working on formally documenting all those things, but draw a blank 
 for that
 
 FYI:
 
 https://review.openstack.org/#/c/136380/1/doc/hypervisor-support.ini
 
 Regards, Daniel 

Nice! I will keep an eye on that :)


Regards,
Markus Zoeller
IRC: markus_z
Launchpad: mzoeller




[openstack-dev] [oslo][Keystone] Policy graduation

2014-12-09 Thread Morgan Fainberg
I would like to propose that we keep the policy library under the oslo program. 
As with other graduated projects, we will maintain a core team that, while 
including the oslo-core team, will also comprise the expected individuals 
from the Identity and other security-related teams.

The change in direction is due to the policy library being more generic and not 
exactly a clean fit with the OpenStack Identity program. This is the policy 
rules engine, which is currently used by all (or almost all) OpenStack 
projects. Based on the continued conversation, it doesn’t make sense to take it 
out of the “common” namespace.
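For readers less familiar with it, the kind of evaluation such a rules engine performs can be illustrated with a toy sketch (invented rule syntax and names; this is not the library's actual API):

```python
# Toy policy evaluation: map an action to a rule string and check it
# against the caller's credentials.  Purely illustrative.
def check(rule, creds):
    """Evaluate an 'or'-separated rule like 'role:admin or is_admin:True'."""
    for clause in rule.split(" or "):
        key, _, wanted = clause.partition(":")
        if str(creds.get(key)) == wanted:
            return True
    return False

policy = {"compute:delete": "role:admin or is_admin:True"}
print(check(policy["compute:delete"], {"role": "admin"}))   # True
print(check(policy["compute:delete"], {"role": "member"}))  # False
```

Each service loads a policy file of action-to-rule mappings and evaluates them against request credentials, which is why the engine is useful well beyond Keystone.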

If there are no concerns with this change of direction we will update the 
spec[1] to reflect this proposal and continue with the plans to graduate as 
soon as possible.

[1] https://review.openstack.org/#/c/140161/

-- 
Morgan Fainberg


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Kevin L. Mitchell
On Tue, 2014-12-09 at 12:05 -0500, Sean Dague wrote:
  I agree that dropping H302 and the grouping checks makes sense.  I
 think
  we should keep the H301, H303, H304, and the basic ordering checks,
  however; it doesn't seem to me that these would be that difficult to
  implement or maintain.
 
 Well, be careful what you think is easy -
 https://github.com/openstack-dev/hacking/blob/master/hacking/checks/imports.py
 :)

So, hacking_import_rules() is very complex.  However, it implements H302
as well as H301, H303, and H304.  I feel it can be simplified to just a
textual match rule if we remove the H302 implementation: H301 just needs
to exclude imports with ',', H303 needs to exclude imports with '*', and
H304 is already implemented as a regular expression match.  It looks
like the basic ordering check I was referring to is H306, which isn't
all that complicated.  It seems like the rest of the code is related to
the checks which I just agreed should be dropped :)  Am I missing
anything?
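To make the simplification concrete, the purely textual checks described above might look roughly like this (a hypothetical sketch, not the actual hacking implementation):

```python
import re

# Hypothetical simplifications of H301/H303/H304 as purely textual
# flake8-style checks; the real hacking rules differ in detail.

def check_h301(logical_line):
    """H301 sketch: do not import more than one module per line."""
    if logical_line.startswith("import ") and "," in logical_line:
        yield 0, "H301: one import per line"

def check_h303(logical_line):
    """H303 sketch: do not use wildcard imports."""
    if re.match(r"from\s+\S+\s+import\s+\*", logical_line):
        yield 0, "H303: no wildcard (*) import"

def check_h304(logical_line):
    """H304 sketch: do not make relative imports."""
    if re.match(r"from\s*\.", logical_line):
        yield 0, "H304: no relative imports"
```

Each generator yields an (offset, message) pair the way flake8 checks do; no inspection of sys.modules is needed, which is exactly what dropping H302 buys us.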
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace




Re: [openstack-dev] [oslo][Keystone] Policy graduation

2014-12-09 Thread Adam Young

On 12/09/2014 12:18 PM, Morgan Fainberg wrote:
I would like to propose that we keep the policy library under the oslo 
program. As with other graduated projects we will maintain a core team 
(that while including the oslo-core team) will be comprised of the 
expected individuals from the Identity and other security related teams.


The change in direction is due to the policy library being more 
generic and not exactly a clean fit with the OpenStack Identity 
program. This is the policy rules engine, which is currently used by 
all (or almost all) OpenStack projects. Based on the continued 
conversation, it doesn’t make sense to take it out of the “common” 
namespace.
Agreed. I think the original design was quite clean, and could easily be 
used by projects not even in OpenStack.  While we don't want to challenge 
any of the python-based replacements for Prolog, the name 
should reflect its general purpose.


Is the congress program still planning on using the same rules engine?




If there are no concerns with this change of direction we will update 
the spec[1] to reflect this proposal and continue with the plans to 
graduate as soon as possible.


[1] https://review.openstack.org/#/c/140161/

--
Morgan Fainberg




Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-09 Thread Armando M.
On 9 December 2014 at 09:41, Salvatore Orlando sorla...@nicira.com wrote:

 I would like to chime into this discussion wearing my plugin developer hat.

 We (the VMware team) have looked very carefully at the current proposal
 for splitting off drivers and plugins from the main source code tree.
 Therefore the concerns you've heard from Gary are not just ramblings but
 are the results of careful examination of this proposal.

 While we agree with the final goal, the feeling is that for many plugin
 maintainers this process change might be too much for what can be
 accomplished in a single release cycle.

We actually gave a lot more than a cycle:

https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst
LINE 416

And in all honesty, I can only say that getting this done by such an
experienced team as the Neutron team @VMware shouldn't take that long.

By the way, if Kyle can do it in his teeny tiny time that he has left after
his PTL duties, then anyone can do it! :)

https://review.openstack.org/#/c/140191/

 As a member of the drivers team, I am still very supportive of the split,
 I just want to make sure that it’s made in a sustainable way; I also
 understand that “sustainability” has been one of the requirements of the
 current proposal, and therefore we should all be on the same page on this
 aspect.

 However, we did a simple exercise trying to assess the amount of work
 needed to achieve something which might be acceptable to satisfy the
 process. Without going into too many details, this requires efforts for:

 - refactor the code to achieve a plugin module simple and thin enough to
 satisfy the requirements. Unfortunately a radical approach like the one in
 [1] with a reference to an external library is not pursuable for us

 - maintaining code repositories outside of the neutron scope and the
 necessary infrastructure

 - reinforcing our CI infrastructure, and improve our error detection and
 log analysis capabilities to improve reaction times upon failures triggered
 by upstream changes. As you know, even if the plugin interface is
 solid-ish, the dependency on the db base class increases the chances of
 upstream changes breaking 3rd party plugins.


No-one is advocating for the approach laid out in [1], but a lot of code can be
moved elsewhere (like the nsxlib) without too much effort. Don't forget
that not so long ago I was the maintainer of this plugin and the one who
built the VMware NSX CI; I know very well what it takes to scope this
effort, and I can support you in the process.

 The feedback from our engineering team is that satisfying the requirements
 of this new process might not be feasible in the Kilo timeframe, both for
 existing plugins and for new plugins and drivers that should be upstreamed
 (there are a few proposed on neutron-specs at the moment, which are all in
 -2 status considering the impending approval of the split out).

No new plugins can and will be accepted if they do not adopt the proposed
model, let's be very clear about this.

 The questions I would like to bring to the wider community are therefore
 the following:

 1 - Is there a possibility of making a further concession on the current
 proposal, where maintainers are encouraged to experiment with the plugin
 split in Kilo, but will actually required to do it in the next release?

This is exactly what the spec is proposing: get started now, and it does
not matter if you don't finish in time.

 2 - What could be considered acceptable as a new plugin? I understand
 that they would be accepted only as “thin integration modules”, which
 ideally should just be a pointer to code living somewhere else. I’m not
 questioning the validity of this approach, but it has been brought to my
 attention that this will actually be troubling for teams which have made an
 investment in the previous release cycles to upstream plugins following the
 “old” process

You are not alone. Other efforts went through the same process [1, 2, 3].
Adjusting is a way of life. No-one is advocating for throwing away existing
investment. This proposal actually promotes new and pre-existing investment.

[1] https://review.openstack.org/#/c/104452/
[2] https://review.openstack.org/#/c/103728/
[3] https://review.openstack.org/#/c/136091/

 3 - Regarding the above discussion on ML2 or not ML2. The point on
 co-gating is well taken. Eventually we'd like to remove this binding -
 because I believe the ML2 subteam would also like to have more freedom on
 their plugin. Do we already have an idea about how doing that without
 completely moving away from the db_base class approach?

Sure, if you like to participate in the process, we can only welcome you!

 Thanks for your attention and for reading through this

 Salvatore

 [1]
 http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/vmware/plugin.py#n22

 On 8 December 2014 at 21:51, Maru Newby ma...@redhat.com wrote:


 On Dec 7, 2014, at 10:51 AM, Gary 

Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Doug Hellmann

On Dec 9, 2014, at 10:05 AM, Sean Dague s...@dague.net wrote:

 On 12/09/2014 09:11 AM, Doug Hellmann wrote:
 
 On Dec 9, 2014, at 6:39 AM, Sean Dague s...@dague.net wrote:
 
 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.
 
 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).
 
 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm
 
 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.
 
 I don’t have the hacking rules memorized. Could you describe them briefly?
 
 Sure, the H8* group is git commit messages. It's checking for line
 length in the commit message.
 
 - [H802] First, provide a brief summary of 50 characters or less.  Summaries
  of greater than 72 characters will be rejected by the gate.
 
 - [H801] The first line of the commit message should provide an accurate
  description of the change, not just a reference to a bug or
  blueprint.
 
 
 H802 is mechanically enforced (though not the 50 characters part, so the
 code isn't the same as the rule).
 
 H801 is enforced by a regex that looks to see if the first line is a
 launchpad bug and fails on it. You can't mechanically enforce that
 english provides an accurate description.

Those all seem like things it would be reasonable to drop, especially for the 
reason you gave that they are frequently not tested locally anyway.
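For illustration, an H801-style check of the kind Sean describes might look roughly like this (a hypothetical sketch, not hacking's actual regex):

```python
import re

# Hypothetical sketch of an H801-style check: fail when the commit
# summary is nothing but a bug/blueprint reference.  This is NOT the
# actual regex used by hacking.
BUG_ONLY = re.compile(
    r"^\s*(fix(es|ed)?\s+)?(lp\s*)?(bug|blueprint)\s*[#:]?\s*\d*\s*$",
    re.IGNORECASE)

def summary_ok(first_line):
    """Return True when the first line is an actual description."""
    return not BUG_ONLY.match(first_line)
```

A regex can only catch the degenerate case; as noted above, no check can mechanically verify that English prose is an accurate description.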

 
 
 H3* are all the module import rules:
 
 Imports
 ---
 - [H302] Do not import objects, only modules (*)
 - [H301] Do not import more than one module per line (*)
 - [H303] Do not use wildcard ``*`` import (*)
 - [H304] Do not make relative imports
 - Order your imports by the full module path
 - [H305 H306 H307] Organize your imports according to the `Import order
  template`_ and `Real-world Import Order Examples`_ below.
 
 I think these remain reasonable guidelines, but H302 is exceptionally
 tricky to get right, and we keep not getting it right.

I definitely agree with that. I thought we had it right now, but maybe there’s 
still a case where it’s broken? In any case, I’d like to be able to make the 
Oslo namespace changes API compatible without worrying about if they are 
hacking-rule-compatible. That does get pretty ugly.

 
 H305-307 are actually impossible to get right. Things come in and out of
 stdlib in python all the time.

+1

 
 
 I think it's time to just decide to be reasonable Humans and that these
 are guidelines.

I assume we have the guidelines written down in the review instructions 
somewhere already, if they are implemented in hacking?

 
 The H3* set of rules is also why you have to install *all* of
 requirements.txt and test-requirements.txt in your pep8 tox target,
 because H302 actually inspects the sys.modules to attempt to figure out
 if things are correct.

Yeah, that’s pretty gross.

Doug

 
   -Sean
 
 
 Doug
 
 
 
 -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 
 
 
 -- 
 Sean Dague
 http://dague.net
 


Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Yuriy Zveryanskyy

Vladimir,
IMO there is a more global problem. Anyone who wants to use a baremetal deploy
service should resolve problems with power management, PXE/iPXE support,
DHCP, etc. Or he/she can use Ironic. A user has his own vision of the deploy
workflow and the features needed for it. He hears from Ironic people: "Feature X
should be only after release Y" or "This doesn't fit in Ironic at all."
Fuel Agent + driver is the answer. I see Fuel Agent + driver as a solution
for anyone who wants custom features.

On 12/09/2014 06:24 PM, Vladimir Kozhukalov wrote:


We assume next step will be to put provision data (disk partition
scheme, maybe other data) into driver_info and make Fuel Agent
driver
able to serialize those data (special format) and implement a
corresponding data driver in Fuel Agent for this format. Again
very
simple. Maybe it is time to think of having Ironic metadata
service
(just maybe).


I'm ok with the format, my question is: what and how is going to
collect all the data and put into say driver_info?


Fuel has a web service which stores nodes info in its database. When 
user clicks Deploy button, this web service serializes deployment 
task and puts this task into task runner (another Fuel component). 
Then this task runner parses task and adds a node into Ironic via REST 
API (including driver_info). Then it calls Ironic deploy method and 
Ironic uses Fuel Agent driver to deploy a node. Corresponding Fuel 
spec is here https://review.openstack.org/#/c/138301/. Again it is 
zero step implementation.



Honestly, I think writing a roadmap right now is not very
rational, since I am not even sure people are interested in
widening Ironic's use cases. Some of the comments were not even
constructive, like "I don't understand what your use case is,
please use IPA."

Please don't be offended by this. We did put a lot of effort into
IPA and it's reasonable to look for a good use cases before having
one more smart ramdisk. Nothing personal, just estimating cost vs
value :)
Also why not use IPA is a fair question for me and the answer is
about use cases (as you stated it before), not about missing
features of IPA, right?

You are right, it is a fair question, and the answer is exactly about 
*missing features*.


Nova is not our case. Fuel is totally about deployment. There
is something in common


Here when we have a difficult point. Major use case for Ironic is
to be driven by Nova (and assisted by Neutron). Without these two
it's hard to understand how Fuel Agent is going to fit into the
infrastructure. And hence my question above about where your json
comes from. In the current Ironic world the same data is received
partly from Nova flavor, partly managed by Neutron completely.
I'm not saying it can't change - we do want to become more
stand-alone. E.g. we can do without Neutron right now. I think
specifying the source of input data for Fuel Agent in the Ironic
infrastructure would help a lot understand, how well Ironic and
Fuel Agent could play together.


According to the information I have, correct me if I'm wrong, Ironic 
currently is at the stage of becoming a stand-alone service. That is the 
reason why this spec has been brought up. Again, we need something to 
manage power/tftp/dhcp to substitute Cobbler. Ironic looks like a 
suitable tool, but we need this driver. We are not going to break 
anything. We have resources to test and support this driver. And I can 
not use IPA *right now* because it does not have the features I need. I 
can not wait for the next half a year for these features to be 
implemented. Why can't we add this (Fuel Agent) driver and then, if IPA 
implements what we need, switch to IPA? The only alternative for 
me right now is to implement my own power/tftp/dhcp management 
solution, like I did with Fuel Agent when I did not get approval for 
including advanced disk partitioning.


Questions are: Is Ironic interested in this use case or not? Is Ironic 
interested to get more development resources? The only case when it's 
rational for us to spend our resources to develop Ironic is when we 
get something back. We are totally pragmatic, we just address our 
user's wishes and issues. It is ok for us to use any tool which 
provides what we need (IPA, Fuel Agent, any other).


We need advanced disk partitioning and power/tftp/dhcp management by 
March 2015. Is it possible to get this from Ironic + IPA? I doubt it. 
Is it possible to get this form Ironic + Fuel Agent? Yes it is. Is it 
possible to get this from Fuel power/tftp/dhcp management + Fuel 
Agent? Yes it is. So, I have two options right now: Ironic + Fuel 
Agent or Fuel power/tftp/dhcp management + Fuel Agent.








Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Fox, Kevin M
We've been interested in Ironic as a replacement for Cobbler for some of our 
systems and have been kicking the tires a bit recently.

While initially I thought this thread was probably another "Fuel not playing 
well with the community" kind of thing, I'm not thinking that any more. It's 
deeper than that.

Cloud provisioning is great. I really REALLY like it. But one of the things 
that makes it great is the nice, pretty, cute, uniform, standard hardware the 
vm gives the user. Ideally, the physical hardware would behave the same. But, 
“No Battle Plan Survives Contact With the Enemy”.  The sad reality is, most 
hardware is different from each other. Different drivers, different firmware, 
different different different.

One way the cloud enables this isolation is by forcing the cloud admins to 
install things and deal with the grungy hardware to make the interface nice and 
clean for the user. For example, if you want greater mean time between failures 
of nova compute nodes, you probably use a raid 1. Sure, it's kind of a pet 
thing to do, but it's up to the cloud admin to decide what's better: buying 
more hardware, or paying for more admin/user time. Extra hard drives are dirt 
cheap...

So, in reality Ironic is playing in a space somewhere between "I want to use 
cloud tools to deploy hardware, yay!" and "ewww.., physical hardware's nasty; 
you have to know all these extra things and do all these extra things that you 
don't have to do with a vm..." I believe Ironic is going to need to be able to 
If the team feels its not a valid use case, then we'll just have to use 
something else for our needs. I really really want to be able to use heat to 
deploy whole physical distributed systems though.

Today, we're using software raid over two disks to deploy our nova compute. 
Why? We have some very old disks we recovered for one of our clouds and they 
fail often. nova-compute is pet enough to benefit somewhat from being able to 
swap out a disk without much effort. If we were to use Ironic to provision the 
compute nodes, we need to support a way to do the same.

We're looking into ways of building an image that has software raid pre-set up, 
and expanding it on boot. This requires each image to be customized for this case 
though. I can see Fuel not wanting to provide two different sets of images, 
hardware raid and software raid, that have the same contents in them, with 
just different partitioning layouts... If we want users to not have to care 
about partition layout, this is also not ideal...

Assuming Ironic can be convinced that these features really would be needed, 
perhaps the solution is a middle ground between the pxe driver and the agent?

Associate partition information at the flavor level. The admin can decide the 
best partitioning layout for a given hardware... The user doesn't have to care 
any more. Two flavors for the same hardware could be 4 9's or 5 9's or 
something that way.
Modify the agent to support a pxe style image in addition to full layout, and 
have the agent partition/setup raid and lay down the image into it.
Modify the agent to support running grub2 at the end of deployment.

Or at least make the agent pluggable to support adding these options.
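To make the flavor-level idea above concrete, here is a sketch of what such partition metadata might look like (hypothetical: no such Nova/Ironic field exists today, and all field names are invented for illustration):

```python
# Hypothetical partition metadata that a flavor could carry, letting the
# admin pick the layout for given hardware while the user stops caring.
layout = {
    "raid": {"name": "md0", "level": 1, "devices": ["/dev/sda", "/dev/sdb"]},
    "partitions": [
        {"mount": "/boot", "size_mb": 512, "fs": "ext4"},
        {"mount": "/", "size_mb": "remaining", "fs": "ext4", "on": "md0"},
    ],
}

def fixed_mb(layout):
    """Sum the explicitly sized partitions -- a sketch of the validation
    step an agent could run before touching the disks."""
    return sum(p["size_mb"] for p in layout["partitions"]
               if isinstance(p["size_mb"], int))

print(fixed_mb(layout))  # 512
```

Two flavors for the same hardware could then differ only in this metadata, e.g. a hardware-raid layout versus the software raid 1 described above.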

This does seem a bit backwards from the way the agent has been going. The pxe 
driver was kind of Linux-specific; the agent is not... So maybe that does imply 
a 3rd driver may be beneficial... But it would be nice to have one driver, the 
agent, in the end that supports everything.

Anyway, some things to think over.

Thanks,
Kevin

From: Jim Rollenhagen [j...@jimrollenhagen.com]
Sent: Tuesday, December 09, 2014 7:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] Fuel agent proposal

On Tue, Dec 09, 2014 at 04:01:07PM +0400, Vladimir Kozhukalov wrote:
 Just a short explanation of Fuel use case.

 Fuel use case is not a cloud. Fuel is a deployment tool. We install OS on
 bare metal servers and on VMs
 and then configure this OS using Puppet. We have been using Cobbler as our
 OS provisioning tool since the beginning of Fuel.
 However, Cobbler assumes using native OS installers (Anaconda and
 Debian-installer). For some reasons we decided to
 switch to image based approach for installing OS.

 One of Fuel features is the ability to provide advanced partitioning
 schemes (including software RAIDs, LVM).
 Native installers are quite difficult to customize in the field of
 partitioning
 (that was one of the reasons to switch to image based approach). Moreover,
 we'd like to implement even more
 flexible user experience. We'd like to allow user to choose which hard
 drives to use for root FS, for
 allocating DB. We'd like user to be able to put root FS over LV or MD
 device (including stripe, mirror, multipath).
 We'd like user to be able to choose which hard 

Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-09 Thread Salvatore Orlando
On 9 December 2014 at 17:32, Armando M. arma...@gmail.com wrote:



 On 9 December 2014 at 09:41, Salvatore Orlando sorla...@nicira.com
 wrote:

 I would like to chime into this discussion wearing my plugin developer
 hat.

 We (the VMware team) have looked very carefully at the current proposal
 for splitting off drivers and plugins from the main source code tree.
 Therefore the concerns you've heard from Gary are not just ramblings but
 are the results of careful examination of this proposal.

 While we agree with the final goal, the feeling is that for many plugin
 maintainers this process change might be too much for what can be
 accomplished in a single release cycle.

 We actually gave a lot more than a cycle:


 https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst
 LINE 416

  And in all honesty, I can only say that getting this done by such an
  experienced team as the Neutron team @VMware shouldn't take that long.


We are probably not experienced enough. We always love to learn new things.



 By the way, if Kyle can do it in his teeny tiny time that he has left
 after his PTL duties, then anyone can do it! :)

 https://review.openstack.org/#/c/140191/


I think I should be able to use mv && git push as well - I think however
there's a bit more than that to it.



 As a member of the drivers team, I am still very supportive of the split,
 I just want to make sure that it’s made in a sustainable way; I also
 understand that “sustainability” has been one of the requirements of the
 current proposal, and therefore we should all be on the same page on this
 aspect.

 However, we did a simple exercise trying to assess the amount of work
 needed to achieve something which might be acceptable to satisfy the
 process. Without going into too many details, this requires efforts for:

 - refactor the code to achieve a plugin module simple and thin enough to
 satisfy the requirements. Unfortunately a radical approach like the one in
 [1] with a reference to an external library is not pursuable for us

 - maintaining code repositories outside of the neutron scope and the
 necessary infrastructure

 - reinforcing our CI infrastructure, and improve our error detection and
 log analysis capabilities to improve reaction times upon failures triggered
 by upstream changes. As you know, even if the plugin interface is
 solid-ish, the dependency on the db base class increases the chances of
 upstream changes breaking 3rd party plugins.


 No-one is advocating for approach laid out in [1], but a lot of code can
 be moved elsewhere (like the nsxlib) without too much effort. Don't forget
 that not so long ago I was the maintainer of this plugin and the one who
 built the VMware NSX CI; I know very well what it takes to scope this
 effort, and I can support you in the process.


Thanks for this clarification. I was sure that you guys were not advocating
for a ninja-split thing, but I wanted just to be sure of that.
I'm also pretty sure our engineering team values your support.

 The feedback from our engineering team is that satisfying the requirements
 of this new process might not be feasible in the Kilo timeframe, both for
 existing plugins and for new plugins and drivers that should be upstreamed
 (there are a few proposed on neutron-specs at the moment, which are all in
 -2 status considering the impending approval of the split out).

 No new plugins can and will be accepted if they do not adopt the proposed
 model, let's be very clear about this.


This is also what I gathered from the proposal. It seems that you're
however stating that there might be some flexibility in defining how much a
plugin complies with the new model. I will need to go back to the drawing
board with the rest of my team and see in which way this can work for us.


 The questions I would like to bring to the wider community are therefore
 the following:

 1 - Is there a possibility of making a further concession on the current
 proposal, where maintainers are encouraged to experiment with the plugin
 split in Kilo, but will actually be required to do it in the next release?

 This is exactly what the spec is proposing: get started now, and it does
 not matter if you don't finish in time.


I think the deprecation note at line 416 still scares people off a bit. To
me your word is enough, no change is needed.

 2 - What could be considered acceptable as a new plugin? I understand
 that they would be accepted only as “thin integration modules”, which
 ideally should just be a pointer to code living somewhere else. I’m not
 questioning the validity of this approach, but it has been brought to my
 attention that this will actually be troubling for teams which have made an
 investment in the previous release cycles to upstream plugins following the
 “old” process

 You are not alone. Other efforts went through the same process [1, 2, 3].
 Adjusting is a way of life. No-one is advocating for throwing away existing
 

Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Vladimir Kozhukalov
Kevin,

Just to make sure everyone understands what Fuel Agent is about: Fuel Agent
is agnostic to image format. There are 3 possibilities for image format:
1) DISK IMAGE: contains a GPT/MBR table and all partitions, plus metadata in
the case of md or lvm. That is just what you get when you run 'dd
if=/dev/sda of=disk_image.raw'
2) FS IMAGE: contains a file system. The disk contains some partitions which
can then be used to create an md device, or a volume group containing
logical volumes. We can then put a file system over a plain partition, md
device, or logical volume. This type of image is what you get when you run
'dd if=/dev/sdaN of=fs_image.raw'
3) TAR IMAGE: contains files. It is what you get when you run 'tar cf
tar_image.tar /'

Currently in Fuel we use FS images. Fuel Agent creates partitions, md and
lvm devices, then downloads FS images and puts them on partition devices
(/dev/sdaN), lvm devices (/dev/mapper/vgname/lvname), or md devices
(/dev/md0).
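The FS-image flow described above (create the partitions first, then stream a filesystem image onto each block device) can be sketched roughly as follows. This is an illustration only, not Fuel Agent code; the function name and paths are assumptions.

```python
import shutil

def deploy_fs_image(image_path, device_path, chunk_size=1024 * 1024):
    """Stream a filesystem image (format 2 above) onto a block device.

    Roughly equivalent to 'dd if=fs_image.raw of=/dev/sdaN'; the target
    is assumed to be an already-created partition, md, or lvm device.
    """
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        shutil.copyfileobj(src, dst, chunk_size)

# Illustrative call (device path is an example):
# deploy_fs_image("/tmp/fs_image.raw", "/dev/sda1")
```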

Fuel Agent is also able to install and configure grub.

Here is the code of Fuel Agent
https://github.com/stackforge/fuel-web/tree/master/fuel_agent




Vladimir Kozhukalov

On Tue, Dec 9, 2014 at 8:41 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 We've been interested in Ironic as a replacement for Cobbler for some of
 our systems and have been kicking the tires a bit recently.

 While initially I thought this thread was probably another "Fuel not
 playing well with the community" kind of thing, I'm not thinking that any
 more. It's deeper than that.

 Cloud provisioning is great. I really REALLY like it. But one of the
 things that makes it great is the nice, pretty, cute, uniform, standard
 hardware the vm gives the user. Ideally, the physical hardware would
 behave the same. But,
 “No Battle Plan Survives Contact With the Enemy”.  The sad reality is,
 most hardware is different from each other. Different drivers, different
 firmware, different different different.

 One way the cloud enables this isolation is by forcing the cloud admins
 to install things and deal with the grungy hardware to make the interface
 nice and clean for the user. For example, if you want greater mean time
 between failures of nova compute nodes, you probably use a raid 1. Sure,
 it's kind of a pet thing to do, but it's up to the cloud admin to
 decide what's better: buying more hardware, or paying for more admin/user
 time. Extra hard drives are dirt cheap...

 So, in reality Ironic is playing in a space somewhere between "I want to
 use cloud tools to deploy hardware, yay!" and "ewww... physical hardware's
 nasty. You have to know all these extra things and do all these extra
 things that you don't have to do with a vm." I believe Ironic's going to
 need to be able to deal with this messiness in as clean a way as possible.
 But that's my opinion. If the team feels its not a valid use case, then
 we'll just have to use something else for our needs. I really really want
 to be able to use heat to deploy whole physical distributed systems though.

 Today, we're using software raid over two disks to deploy our nova
 compute nodes. Why? We have some very old disks we recovered for one of our
 clouds and they fail often. nova-compute is pet enough to benefit somewhat
 from being able to swap out a disk without much effort. If we were to use
 Ironic to provision the compute nodes, we'd need it to support a way to do
 the same.

 We're looking into ways of building an image that has a software raid
 presetup, and expand it on boot. This requires each image to be customized
 for this case though. I can see Fuel not wanting to provide two different
 sets of images, hardware raid and software raid, that have the same
 contents in them, with just different partitioning layouts... If we want
 users to not have to care about partition layout, this is also not ideal...

 Assuming Ironic can be convinced that these features really would be
 needed, perhaps the solution is a middle ground between the pxe driver and
 the agent?

 Associate partition information at the flavor level. The admin can decide
 the best partitioning layout for given hardware... The user doesn't have
 to care any more. Two flavors for the same hardware could offer 4 9's or 5
 9's of availability, or something along those lines.
 Modify the agent to support a pxe-style image in addition to a full layout,
 and have the agent partition/set up raid and lay the image down into it.
 Modify the agent to support running grub2 at the end of deployment.

 Or at least make the agent plugable to support adding these options.
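The flavor-level idea above could look something like the sketch below: a partitioning policy hung off flavor extra_specs that a deploy agent could consume. Every key and value here is hypothetical; nothing like this exists in Nova or Ironic today.

```python
# Hypothetical flavor metadata (none of these keys are real Nova/Ironic
# properties): the admin encodes the layout, the user just picks a flavor.
flavor_extra_specs = {
    "deploy:layout": "software-raid1",   # or e.g. "hardware-raid", "plain"
    "deploy:raid_members": "2",
    "deploy:root_fs": "ext4",
}

def layout_for(extra_specs):
    """Return the partitioning policy a deploy agent would act on."""
    # Default to a plain single-disk layout when no policy is set.
    return extra_specs.get("deploy:layout", "plain")
```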

 This does seem a bit backwards from the way the agent has been going. The
 pxe driver was kind of linux-specific; the agent is not... So maybe that
 does imply a 3rd driver may be beneficial... But it would be nice to have
 one driver, the agent, in the end that supports everything.

 Anyway, some things to think over.

 Thanks,
 Kevin
 
 From: Jim Rollenhagen [j...@jimrollenhagen.com]
 Sent: Tuesday, December 09, 2014 7:00 AM
 To: 

[openstack-dev] [Keystone] No Meeting Today

2014-12-09 Thread Morgan Fainberg
This is a quick note that the Keystone team will not be holding a meeting 
today. Based upon last week’s meeting the goals for today are to review open 
specs[1] and blocking code reviews for the k1 milestone[2] instead.

We will continue with the normal meeting schedule next week.

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/keystone-specs,n,z
[2] https://gist.github.com/dolph/651c6a1748f69637abd0


-- 
Morgan Fainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral]

2014-12-09 Thread Nikolay Makhotkin
Guys,

Maybe I misunderstood something here, but what is the difference between
this one and
https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment
?

On Tue, Dec 9, 2014 at 5:35 PM, Dmitri Zimine dzim...@stackstorm.com
wrote:

 Winson,

 thanks for filing the blueprint:
 https://blueprints.launchpad.net/mistral/+spec/mistral-global-context,

 some clarification questions:
 1) how exactly would the user describe these global variables
 syntactically? In DSL? What can we use as syntax? In the initial workflow
 input?
 2) what is the visibility scope: this and child workflows, or truly
 “global”?
 3) What is a good default behavior?

 Let’s detail it a bit more.

 DZ







-- 
Best Regards,
Nikolay


Re: [openstack-dev] [cinder] Code pointer for processing cinder backend config

2014-12-09 Thread Mike Perez
On 09:36 Sat 06 Dec , Pradip Mukhopadhyay wrote:
 Where this config info is getting parsed out in the cinder code?

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/netapp/options.py
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/netapp/common.py#L76

-- 
Mike Perez



[openstack-dev] [neutron] Tap-aaS Spec for Kilo

2014-12-09 Thread Anil Rao
Hi,

The latest version of the Tap-aaS spec is available at:

https://review.openstack.org/#/c/140292/

It was uploaded last night and we are hoping that it will be considered as a 
candidate for the Kilo release.

Thanks,
Anil


Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-09 Thread Maru Newby
On Dec 9, 2014, at 7:04 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Tue, Dec 09, 2014 at 10:53:19AM +0100, Maxime Leroy wrote:
 I have also proposed a blueprint to have a new plugin mechanism in
 nova to load external vif driver. (nova-specs:
 https://review.openstack.org/#/c/136827/ and nova (rfc patch):
 https://review.openstack.org/#/c/136857/)
 
  From my point of view as a developer, having a plugin framework for
  internal/external vif drivers seems to be a good thing.
  It makes the code more modular and introduces a clear api for vif driver
  classes.
 
  So far, it raises legitimate questions concerning API stability and
  the public API that request a wider discussion on the ML (as asked by
  John Garbutt).
 
 I think having a plugin mechanism and a clear api for vif driver is
 not going against this policy:
 http://docs.openstack.org/developer/nova/devref/policies.html#out-of-tree-support.
 
  There is no need to have a stable API. It is up to the owner of the
  external VIF driver to ensure that its driver is supported by the
  latest API, and not the nova community to manage a stable API for this
  external VIF driver. Does that make sense?
 
  Experience has shown that even if it is documented as unsupported, once
  the extension point exists, vendors & users will ignore the small print
  about support status. There will be complaints raised every time it gets
  broken until we end up being forced to maintain it as a stable API whether
  we want to or not. That's not a route we want to go down.

Does the support contract for a given API have to be binary - ‘supported’ vs 
‘unsupported’?  The stability requirements for REST APIs that end users and 
all kinds of tooling consume are arguably different from those of an internal 
API, and recognizing this difference could be useful.


 
 Considering the network V2 API, L2/ML2 mechanism driver and VIF driver
 need to exchange information such as: binding:vif_type and
 binding:vif_details.
 
  From my understanding, 'binding:vif_type' and 'binding:vif_details' are
  fields that are part of the public network API. There are no validation
  constraints for these fields (see
  http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html),
  meaning that any value is accepted by the API. So, the values set in
  'binding:vif_type' and 'binding:vif_details' are not part of the
  public API. Is my understanding correct?
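For illustration, the binding attributes under discussion appear on a Neutron port roughly as in the sketch below. The values are deployment-specific examples ("ovs" is one common vif_type, not the only one), and the port id is a placeholder, not a normative list of accepted values.

```python
# Example port dict as a Nova VIF driver might see it; values are
# illustrative only.
port = {
    "id": "example-port-id",
    "binding:vif_type": "ovs",
    "binding:vif_details": {"port_filter": True},
}

def vif_type(port):
    # By assumption here, a port with no binding is reported as "unbound".
    return port.get("binding:vif_type", "unbound")
```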
 
  The VIF parameters are mapped into the nova.network.model.VIF class,
  which does some crude validation. I would anticipate that this
  validation will increase over time, because this is functional data
  flowing over the API and so needs to be carefully managed for upgrade
  reasons.
 
 Even if the Neutron impl is out of tree, I would still expect both
 Nova and Neutron core to sign off on any new VIF type name and its
 associated details (if any).
 
 What other reasons am I missing to not have VIF driver classes as a
 public extension point ?
 
  Having to find & install VIF driver classes from countless different
  vendors, each hiding their code away on their own obscure website,
  will lead to an awful end-user experience when deploying Nova. Users are
  better served by having it all provided when they deploy Nova IMHO.

Shipping drivers in-tree makes sense for a purely open source solution for the 
reasons you mention.  The logic doesn’t necessarily extend to deployment of a 
proprietary solution, though.  If a given OpenStack deployment is intended to 
integrate with such a solution, it is likely that the distro/operator/deployer 
will have a direct relationship with the solution provider and the required 
software (including VIF driver(s), if necessary) is likely to have a 
well-defined distribution channel.


 
  If every vendor goes off & works in their own isolated world we also
  lose the scope to align the implementations, so that common concepts
  work the same way in all cases and allow us to minimize the number of
 new VIF types required. The proposed vhostuser VIF type is a good
 example of this - it allows a single Nova VIF driver to be capable of
 potentially supporting multiple different impls on the Neutron side.
 If every vendor worked in their own world, we would have ended up with
 multiple VIF drivers doing the same thing in Nova, each with their own
 set of bugs  quirks.

I’m not sure the suggestion is that every vendor go off and do their own thing. 
 Rather, the option for out-of-tree drivers could be made available to those 
that are pursuing initiatives that aren’t found to be in keeping with Nova’s 
current priorities.  I believe that allowing out-of-tree extensions is 
essential to ensuring the long-term viability of OpenStack.  There is only so 
much experimental work that is going to be acceptable in core OpenStack 
projects, if only to ensure stability.  Yes, there is the potential for 
duplicative effort with results of varying quality, but that’s the price of 
competitive innovation whether in the field of ideas or 

Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Clint Byrum
Excerpts from Yuriy Zveryanskyy's message of 2014-12-09 04:05:03 -0800:
 Good day Ironicers.
 
 I do not want to discuss questions like "Is feature X good for release 
 Y?" or "Is feature Z in Ironic scope or not?".
 I want to get an answer for this: "Is Ironic a flexible, easily extendable 
 and user-oriented solution for deployment?"

I surely hope it is.

 Yes, it is I think. IPA is great software, but Fuel Agent proposes a 
 different, alternative way of deploying.

It's not fundamentally different, it is just capable of other things.

 Devananda wrote about pets and cattle, and maybe some want to manage 
 pets rather than cattle? Let
 users make a choice.

IMO this is too high-level of a discussion for Ironic to get bogged
down in. Disks can have partitions and be hosted in RAID controllers,
and these things _MUST_ come before an OS is put on the disks, but after
power control happens. Since Ironic does put OS's on disks, and control
power, I believe it is obligated to provide an interface for rich disk
configuration.

There are valid use cases for _both_ of those things in cattle, which
is a higher level problem that should not cloud the low level interface
discussion.

So IMO, Ironic needs to provide an interface for agents to richly
configure disks, whether IPA supports it or not.

Would I like to see these things in IPA so that there isn't a mismatch
of features? Yes. Does that matter _now_? Not really. The FuelAgent can
prove out the interface while the features migrate into IPA.

 We do not plan to change any Ironic API for the driver, internal or 
 external (as opposed to IPA, for which this was done).
 If there is no one to support Fuel Agent's driver, I think the 
 driver should be removed from the Ironic tree (I heard
 this practice is used in the Linux kernel).
 

We have a _hyperv_ driver in Nova... I think we can have something
we're not entirely 100% on board with in Ironic.

All of that said, I would admonish FuelAgent developers to work to
commit to combine their agent with IPA long term. I would admonish Ironic
developers to be receptive to things that users want. It doesn't always
mean taking responsibility for implementations, but you _do_ need to
consider the pain of not providing interfaces and of forcing people to
remain out of tree (remember when Ironic's driver wasn't in Nova's tree?)



Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sean Dague
On 12/09/2014 12:20 PM, Kevin L. Mitchell wrote:
 On Tue, 2014-12-09 at 12:05 -0500, Sean Dague wrote:
 I agree that dropping H302 and the grouping checks makes sense.  I
 think
 we should keep the H301, H303, H304, and the basic ordering checks,
 however; it doesn't seem to me that these would be that difficult to
 implement or maintain.

 Well, be careful what you think is easy -
 https://github.com/openstack-dev/hacking/blob/master/hacking/checks/imports.py
 :)
 
 So, hacking_import_rules() is very complex.  However, it implements H302
 as well as H301, H303, and H304.  I feel it can be simplified to just a
 textual match rule if we remove the H302 implementation: H301 just needs
 to exclude imports with ',', H303 needs to exclude imports with '*', and
 H304 is already implemented as a regular expression match.  It looks
 like the basic ordering check I was referring to is H306, which isn't
 all that complicated.  It seems like the rest of the code is related to
 the checks which I just agreed should be dropped :)  Am I missing
 anything?
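A minimal sketch of what those textual versions might look like as flake8-style check functions. This is simplified for illustration: real hacking checks receive more arguments and handle many more edge cases, and the function names here are assumptions.

```python
import re

def check_h301(logical_line):
    """H301: one import per line -- reject e.g. 'import os, sys'."""
    if logical_line.startswith("import ") and "," in logical_line:
        yield 0, "H301: one import per line"

def check_h303(logical_line):
    """H303: no wildcard imports."""
    if re.match(r"\s*from\s+\S+\s+import\s+\*", logical_line):
        yield 0, "H303: no wildcard (*) import"

def check_h304(logical_line):
    """H304: no relative imports."""
    if re.match(r"\s*from\s+\.", logical_line):
        yield 0, "H304: no relative import"
```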

Yes, the following fails H305 and H306.

nova/tests/fixtures.py:

"""Fixtures for Nova tests."""

from __future__ import absolute_import

import gettext
import logging
import os
import uuid

import fixtures
from oslo.config import cfg

from nova.db import migration
from nova.db.sqlalchemy import api as session
from nova import service


Because name normalization is hard ('fixtures' is normalized to
'nova.tests.fixtures', so H305 thinks it should be in group 3, and H306
thinks it should come after 'from oslo.config import cfg').

To sort things you have to normalize them.
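The failure mode can be shown with a toy model of the grouping and ordering logic (stdlib, third-party, project); the `group` and `normalize` functions below are illustrative assumptions, not hacking's actual code.

```python
def group(name):
    """H305-style grouping (simplified): 1=stdlib, 2=third-party, 3=project."""
    stdlib = {"os", "logging", "gettext", "uuid", "__future__"}
    if name.split(".")[0] in stdlib:
        return 1
    if name.startswith("nova"):
        return 3
    return 2

def normalize(name):
    # The problematic step: inside nova/tests/fixtures.py, a checker that
    # resolves names against the local package turns 'fixtures' into the
    # shadowing sibling module 'nova.tests.fixtures'.
    return "nova.tests.fixtures" if name == "fixtures" else name

imports = ["fixtures", "oslo.config"]

# Taken as the third-party library, 'fixtures' sits in group 2 ahead of
# 'oslo.config' -- the order the file actually uses.
assert sorted(imports, key=lambda n: (group(n), n)) == ["fixtures", "oslo.config"]

# After naive normalization it lands in group 3, so the checker demands it
# move below 'oslo.config': a false positive.
assert sorted(imports, key=lambda n: (group(normalize(n)), normalize(n))) \
    == ["oslo.config", "fixtures"]
```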

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sean Dague
On 12/09/2014 12:07 PM, Jeremy Stanley wrote:
 On 2014-12-09 11:56:54 -0500 (-0500), Sean Dague wrote:
 Honestly, any hard rejection ends up problematic. For instance, it
 means it's impossible to include actual urls in commit messages to
 reference things without a url shortener much of the time.
 
 Fair enough. I think this makes it a human problem which we're not
 going to solve by applying more technology. Drop all of H8XX, make
 Gerrit preserve votes on commit-message-only patchset updates,
 decree no more commit message -1s from reviewers, and make it
 socially acceptable to just edit commit messages of changes you
 review to bring them up to acceptable standards.

I still think -1 for commit message is fine, but it's a human thing, not
a computer thing. Because the consumers of the commit messages are humans.

And I also think that if a commit message change doesn't retrigger all
the tests, people will be a lot happier updating them.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [hacking] hacking package upgrade dependency with setuptools

2014-12-09 Thread Surojit Pathak

Hi all,

On a RHEL system, as I upgrade the hacking package from 0.8.0 to 0.9.5, I 
see flake8 stop working. Upgrading setuptools resolves the issue, but I 
do not see a change in the reported version for pep8 or setuptools after 
the setuptools upgrade.


Any issue in packaging? Any explanation of this behavior?

Snippet -
[suro@poweredsoured ~]$ pip list | grep hacking
hacking (0.8.0)
[suro@poweredsoured ~]$
[suro@poweredsoured app]$ sudo pip install hacking==0.9.5
... Successful installation
[suro@poweredsoured app]$ flake8 neutron/
...
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 546, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pep8>=1.4.6
[suro@poweredsoured app]$ pip list | grep pep8
pep8 (1.5.6)
[suro@poweredsoured app]$ pip list | grep setuptools
setuptools (0.6c11)
[suro@poweredsoured app]$ sudo pip install -U setuptools
...
Successfully installed setuptools
Cleaning up...
[suro@poweredsoured app]$ pip list | grep pep8
pep8 (1.5.6)
[suro@poweredsoured app]$ pip list | grep setuptools
setuptools (0.6c11)
[suro@poweredsoured app]$ flake8 neutron/
[suro@poweredsoured app]$
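Worth noting: the installed pep8 1.5.6 does satisfy the 'pep8>=1.4.6' floor in the traceback, so this looks like pkg_resources failing to see the distribution's metadata (plausibly because the ancient setuptools 0.6c11 does not understand metadata written by a newer pip) rather than a genuine version conflict. A minimal comparison sketch, handling numeric dotted versions only:

```python
def satisfies(installed, minimum):
    """True if dotted version `installed` is >= `minimum` (numeric parts only)."""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

# pep8 1.5.6 meets the declared 'pep8>=1.4.6' requirement:
assert satisfies("1.5.6", "1.4.6")
```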

--
Regards,
Surojit Pathak




Re: [openstack-dev] [neutron] mid-cycle hot reviews

2014-12-09 Thread Carl Baldwin
On Tue, Dec 9, 2014 at 3:33 AM, Miguel Ángel Ajo majop...@redhat.com wrote:

 Hi all!

   It would be great if you could use this thread to post hot reviews on
  stuff that is being worked on during the mid-cycle, so that others from
  different timezones can participate.

I think we've used the etherpad [1] in the past to put hot reviews.
I've added some reviews.  I don't know if others here are doing the
same.

Carl

[1] https://etherpad.openstack.org/p/neutron-mid-cycle-sprint-dec-2014



Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-09 Thread Stefano Maffulli
On 12/09/2014 06:04 AM, Jeremy Stanley wrote:
 We already have a solution for tracking the contributor-IRC
 mapping--add it to your Foundation Member Profile. For example, mine
 is in there already:
 
 http://www.openstack.org/community/members/profile/5479

I recommend updating the openstack.org member profile and adding your IRC
nickname there (and while you're there, updating your affiliation history).

There is also a search engine on:

http://www.openstack.org/community/members/

/stef



Re: [openstack-dev] [oslo][Keystone] Policy graduation

2014-12-09 Thread Doug Hellmann

On Dec 9, 2014, at 12:18 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote:

 I would like to propose that we keep the policy library under the oslo 
 program. As with other graduated projects we will maintain a core team that 
 (while including the oslo-core team) will be comprised of the expected 
 individuals from the Identity and other security-related teams.
 
 The change in direction is due to the policy library being more generic and 
 not exactly a clean fit with the OpenStack Identity program. This is the 
 policy rules engine, which is currently used by all (or almost all) OpenStack 
 projects. Based on the continued conversation, it doesn’t make sense to take 
 it out of the “common” namespace.
 
 If there are no concerns with this change of direction we will update the 
 spec[1] to reflect this proposal and continue with the plans to graduate as 
 soon as possible.

I know a few of the Oslo cores are already offline, but I think it’s safe to 
update the spec and we can hash out details there. Please make sure to document 
why we changed the plan we discussed at the summit in the spec.

Doug

 
 [1] https://review.openstack.org/#/c/140161/
 
 -- 
 Morgan Fainberg



Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Jeremy Stanley
On 2014-12-09 13:49:00 -0500 (-0500), Sean Dague wrote:
[...]
 And I also think that if a commit message change doesn't retrigger all
 the tests, people will be a lot happier updating them.

Agreed--though this will need a newer Gerrit plus a new feature in
Zuul so it recognizes the difference in the stream.
-- 
Jeremy Stanley



Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Sean Dague
On 12/09/2014 02:46 PM, Jeremy Stanley wrote:
 On 2014-12-09 13:49:00 -0500 (-0500), Sean Dague wrote:
 [...]
 And I also think that if a commit message change doesn't retrigger all
 the tests, people will be a lot happier updating them.
 
 Agreed--though this will need a newer Gerrit plus a new feature in
 Zuul so it recognizes the difference in the stream.
 

Yes, it's not a tomorrow thing (I know we need Gerrit 2.9 first). But I
think it's the way we should evolve the system.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [oslo] updated instructions for creating a library repository

2014-12-09 Thread Doug Hellmann
Now that the infra manual includes the “Project Creator’s Guide”, I have 
updated our wiki page to refer to it. I could use a sanity check to make sure I 
don’t have things in a bad order. If you have a few minutes to help with that, 
please look over https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary and 
http://docs.openstack.org/infra/manual/creators.html together.

Thanks!
Doug




Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Devananda van der Veen
Thank you for explaining in detail what Fuel's use case is. I was lacking
this information, and taking the FuelAgent proposal in isolation. Allow me
to respond to several points inline...

On Tue Dec 09 2014 at 4:08:45 AM Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 Just a short explanation of Fuel use case.

 Fuel use case is not a cloud.


This is a fairly key point, and thank you for bringing it up. Ironic's
primary aim is to better OpenStack, and as such, to be part of an Open
Source Cloud Computing platform. [0]

Meeting a non-cloud use case has not been a priority for the project as a
whole. It is from that perspective that my initial email was written, and I
stand by what I said there -- FuelAgent does not appear to be significantly
different from IPA when used within a cloudy use case. But, as you've
pointed out, that's not your use case :)

Enabling use outside of OpenStack has been generally accepted by the team,
though I don't believe anyone on the core team has put a lot of effort into
developing that yet. As I read this thread, I'm pleased to see more details
about Fuel's architecture and goals -- I think there is a potential fit for
Ironic here, though several points need further discussion.


 Fuel is a deployment tool. We install OS on bare metal servers and on VMs
 and then configure this OS using Puppet. We have been using Cobbler as our
 OS provisioning tool since the beginning of Fuel.
 However, Cobbler assumes the use of native OS installers (Anaconda and
 Debian-installer). For several reasons we decided to
 switch to an image-based approach for installing the OS.

 One of Fuel's features is the ability to provide advanced partitioning
 schemes (including software RAIDs and LVM).
 Native installers are quite difficult to customize in the area of
 partitioning
 (that was one of the reasons to switch to an image-based approach).
 Moreover, we'd like to implement an even more
 flexible user experience.


The degree of customization and flexibility which you describe is very
understandable within traditional IT shops. Don't get me wrong -- there's
nothing inherently bad about wanting to give such flexibility to your
users. However, infinite flexibility is counter-productive to two of the
primary benefits of cloud computing: repeatability, and consistency.

[snip]

Regarding Fuel itself, our nearest plan is to get rid of Cobbler because
 in the case of the image-based approach it is a huge overhead. The question is
 which tool we can use instead of Cobbler. We need power management,
 we need TFTP management, we need DHCP management. That is
 exactly what Ironic is able to do.


You're only partly correct here. Ironic provides a vendor-neutral
abstraction for power management and image deployment, but Ironic does not
implement any DHCP management - Neutron is responsible for that, and Ironic
calls out to Neutron's API only to adjust dhcpboot parameters. At no point
is Ironic responsible for IP or DNS assignment.

This same view is echoed in the spec [1] which I have left comments on:

 Cobbler manages DHCP, DNS, TFTP services ...
 OpenStack has Ironic in its core which is capable to do the same ...
 Ironic can manage DHCP and it is planned to implement dnsmasq plugin.

To reiterate, Ironic does not manage DHCP or DNS, it never has, and such is
not on the roadmap for Kilo [2]. Two specs related to this were proposed
last month [3] -- but a spec proposal does not equal project plans. One of
the specs has been abandoned, and I am still waiting for the author to
rewrite the other one. Neither are approved nor targeted to Kilo.


In summary, if I understand correctly, it seems as though you're trying to
fit Ironic into Cobbler's way of doing things, rather than recognize that
Ironic approaches provisioning in a fundamentally different way.

Your use case:
* is not cloud-like
* does not include Nova or Neutron, but will duplicate functionality of
both (you need a scheduler and all the logic within nova.virt.ironic, and
something to manage DHCP and DNS assignment)
* would use Ironic to manage diverse hardware, which naturally requires
some operator-driven customization, but still exposes the messy
configuration bits^D^Dchoices to users at deploy time
* duplicates some of the functionality already available in other drivers

There are certain aspects of the proposal which I like, though:
* using SSH rather than HTTP for remote access to the deploy agent
* support for putting the root partition on a software RAID
* integration with another provisioning system, without any API changes

Regards,
-Devananda


[0] https://wiki.openstack.org/wiki/Main_Page

[1]
https://review.openstack.org/#/c/138301/8/specs/6.1/substitution-cobbler-with-openstack-ironic.rst

[2] https://launchpad.net/ironic/kilo

[3] https://review.openstack.org/#/c/132511/ and
https://review.openstack.org/#/c/132744/

Re: [openstack-dev] [Nova] question about Get Guest Info row in HypervisorSupportMatrix

2014-12-09 Thread Daniel P. Berrange
On Tue, Dec 09, 2014 at 06:15:01PM +0100, Markus Zoeller wrote:
   On Tue, Dec 09, 2014 at 06:33:47PM +0300, Dmitry Guryanov wrote:
   
   Hello!
   
   There is a feature in HypervisorSupportMatrix 
   (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) called "Get 
 Guest Info". Does anybody know what it means? I haven't found anything 
 like this in the nova api, in horizon, or in the nova command line.
 
 I think this maps to the nova driver function get_info:
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4054
 
 I believe (and didn't double-check) that this is used e.g. by the 
  Nova CLI via the `nova show [--minimal] server` command.

Ah yes, that would make sense

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Mistral] Query on creating multiple resources

2014-12-09 Thread Zane Bitter

On 09/12/14 03:48, Renat Akhmerov wrote:

Hey,

I think it’s a question of what the final goal is. For just creating security 
groups as a resource I think Georgy and Zane are right, just use Heat. If the 
goal is to try Mistral or to have this simple workflow as part of something 
more complex, then it’s totally fine to use Mistral. Sorry, I’m probably biased because 
Mistral is our baby :). Anyway, Nikolay has already answered the question 
technically, this “for-each” feature will be available officially in about 2 
weeks.


:)

They're not mutually exclusive, of course, and to clarify I wasn't 
suggesting replacing Mistral with Heat, I was suggesting replacing a 
bunch of 'create security group' steps in a larger workflow with a 
single 'create stack' step.


In general, though:
- When you are just trying to get to a particular end state and it 
doesn't matter how you get there, Heat is a good solution.
- When you need to carry out a particular series of steps, and it is the 
steps that are well-defined, not the end state, then Mistral is a good 
solution.
- When you have a well-defined end state but some steps need to be done 
in a particular way that isn't supported by Heat, then Mistral can be a 
solution (it's not a _good_ solution, but that isn't a criticism because 
it isn't Mistral's job to make up for deficiencies in Heat).
- Both services are _highly_ complementary. For example, let's say you 
have a batch job to run regularly: you want to provision a server, do 
some work on it, and then remove the server when the work is complete. 
(An example that a lot of people will be doing pretty regularly might be 
building a custom VM image and uploading it to Glance.) This is a 
classic example of a workflow, and you should use Mistral to implement 
it. Now let's say that rather than just a single server you have a 
complex group of resources that need to be set up prior to running the 
job. You could encode all of the steps required to correctly set up and 
tear down all of those resources in the Mistral workflow, but that would 
be a mistake. While the overall process is still a workflow, the desired 
state after creating all of the resources but before running the job is 
known, and it doesn't matter how you get there. Therefore it's better to 
define the resources in a Heat template: unless you are doing something 
really weird it will Just Work(TM) for creating them all in the right 
order with optimal parallelisation, it knows how to delete them 
afterwards too without having to write it again backwards, and you can 
easily test it in isolation from the rest of the workflow. So you would 
replace the steps in the workflow that create and delete the server with 
steps that create and delete a stack.



Create VM workflow was a demo example. Mistral potentially can be used by Heat 
or other orchestration tools to do actual interaction with API, but for user it 
might be easier to use Heat functionality.


I kind of disagree with that statement. Mistral can be used by whoever finds 
it useful for their needs. The standard “create_instance” workflow (which is in 
“resources/workflows/create_instance.yaml”) is not just a demo example either. 
It does a lot of good stuff you may really need in your case (e.g. retry 
policies). Even though it’s true that it has some limitations we’re aware of. 
For example, when it comes to configuring a network for a newly created instance, 
it’s currently missing the network-related parameters needed to alter that behavior.


I agree that it's unlikely that Heat should replace Mistral in many of 
the Mistral demo scenarios. I do think you could make a strong argument 
that Heat should replace *Nova* in many of those scenarios though.


cheers,
Zane.



Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-09 Thread Kevin L. Mitchell
On Tue, 2014-12-09 at 13:46 -0500, Sean Dague wrote:
 Yes, the following fails H305 and H306.
 
 nova/tests/fixtures.py
 
  """Fixtures for Nova tests."""
 from __future__ import absolute_import
 
 import gettext
 import logging
 import os
 import uuid
 
 import fixtures
 from oslo.config import cfg
 
 from nova.db import migration
 from nova.db.sqlalchemy import api as session
 from nova import service
 
 
 Because name normalization is hard (fixtures is normalized to
 nova.tests.fixtures so H305 thinks it should be in group 3, and H306
 thinks it should be after oslo.config import cfg).
 
 To sort things you have to normalize them.

I agree you have to normalize imports to sort them, but to my mind the
appropriate normalization here is purely textual; we shouldn't be
expecting any relative imports (and should raise an error if there are
any).  Still, that does show that some work needs to be done to the
simpler H306 test (probably involving changes to the core import
normalization)…
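Kevin's point about purely textual normalization can be sketched in a few lines: build a sort key from the words of each import statement, without ever resolving module paths, so a local module such as `fixtures` is never rewritten to `nova.tests.fixtures`. This is only an illustration of the idea, not hacking's actual implementation:

```python
def import_sort_key(line):
    """Build a purely textual sort key for an import line.

    'import os'                  -> ('os',)
    'from nova import service'   -> ('nova', 'service')

    No module resolution is attempted, so no local module is ever
    rewritten to its fully qualified package path.
    """
    words = line.split()
    if words[0] == "from":                 # from <pkg> import <name>
        return tuple(words[1].split(".")) + (words[3],)
    return tuple(words[1].split("."))      # import <module>

lines = [
    "import fixtures",
    "from oslo.config import cfg",
    "from nova.db import migration",
    "from nova import service",
]
# Textual ordering keeps 'import fixtures' first, exactly as in Sean's
# example file: fixtures < nova.db.migration < nova.service < oslo.config.cfg
ordered = sorted(lines, key=import_sort_key)
print(ordered)
```

With this key the H306 ordering of Sean's example is already correct, which is the point: the breakage comes from H305's semantic normalization, not from sorting itself.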
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace




Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Devananda van der Veen
On Tue Dec 09 2014 at 7:49:32 AM Yuriy Zveryanskyy 
yzveryans...@mirantis.com wrote:

 On 12/09/2014 05:00 PM, Jim Rollenhagen wrote:
  On Tue, Dec 09, 2014 at 04:01:07PM +0400, Vladimir Kozhukalov wrote:

  Many many various cases are possible. If you ask why we'd like to
 support
  all those cases, the answer is simple:
  because our users want us to support all those cases.
  Obviously, many of those cases can not be implemented as image
 internals,
  some cases can not be also implemented on
  configuration stage (placing root fs on lvm device).
 
  As far as those use cases were rejected to be implemented in term of
 IPA,
  we implemented so called Fuel Agent.
  This is *precisely* why I disagree with adding this driver.
 
  Nearly every feature that is listed here has been talked about before,
  within the Ironic community. Software RAID, LVM, user choosing the
  partition layout. These were rejected from IPA because they do not fit in
  *Ironic*, not because they don't fit in IPA.

 Yes, they do not fit in Ironic *core* but this is a *driver*.
  There is an iLO driver, for example. Is iLO management technology good or
  bad? I don't know. But it is an existing vendor's solution. I would have to
  buy or rent an HP server for tests or experiments with the iLO driver. Fuel
  is a widely used deployment solution, and it is open-source. I think having
  a Fuel Agent driver in Ironic would be better than a driver for some rare
  hardware XYZ, for example.


This argument is completely hollow. Fuel is not a vendor-specific
hardware-enablement driver. It *is* an open-source deployment driver
providing much the same functionality as another open-source deployment
driver which is already integrated with the project.

To make my point another way, could I use Fuel with HP iLO driver? (the
answer should be yes because they fill different roles within Ironic).
But, on the other hand, could I use Fuel with the IPA driver? (nope -
definitely not - they do the same thing.)

-Deva


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-09 Thread Ivar Lazzaro
I agree with Salvatore that the split is not an easy thing to achieve for
vendors, and I would like to bring up my case to see if there are ways to
make this at least a bit simpler.

At some point I had the need to backport vendor code from Juno to Icehouse
(see first attempt here [0]). The approach in [0] was rather awkward and put
unnecessary burden on infra, neutron cores and even packagers, so I decided
to move to a more decoupled approach that was basically completely
splitting my code from Neutron. You can find the result here [1].
The focal points of this approach are:

* **all** the vendor code is removed;
* Neutron is used as a dependency, pulled directly from github for UTs (see
test-requirements [2]) and explicitly required when installing the plugin;
* The database schema is the same as Neutron's;
* A migration script exists for this driver, which uses a different (and
unique) version_table (see env.py [3]);
* Entry points are properly used in setup.cfg [4] in order to provide
migration scripts and Driver/Plugin shortcuts for Neutron;
* UTs are run by including Neutron in the venv [2].
* The boilerplate is taken care of by cookiecutter [5].

The advantage of the above approach, is that it's very simple to pull off
(only thing you need is cookiecutter, a repo, and then you can just
replicate the same tree structure that existed in Neutron for your own
vendor code). Also it has the advantage to remove all the vendor code from
Neutron (did I say that already?). As far as the CI is concerned, it just
needs to learn how to install the new plugin, which requires Neutron
to be installed beforehand.

The typical installation workflow would be:
- Install Neutron normally;
- pull from pypi the vendor driver;
- run the vendor db migration script;
- Do everything else (configuration and execution) just like it was done
before.

Note that this same satellite approach is used by GBP (I know this is a bad
word that once brought hundreds of ML replies, but that's just an example
:) ) for the Juno timeframe [6]. This shows that the very same thing can be
easily done for services.

As far as ML2 is concerned, I think we should split it as well in order to
treat all the plugins equally, but with the following caveats:

* ML2 will be in a openstack repo under the networking program (kind of
obvious);
* The drivers can decide whether to stay in tree with ML2 or not (for a
better community effort, but they will definitely evolve more slowly);
* Don't care about the governance, Neutron will be in charge of this repo
and will have the ability to promote whoever they want when needed.

As far as co-gating is concerned, I think that with the above approach the
breakage will only last until the infra job understands how to
install the ML2 driver from its own repo. I don't see it as a big issue,
but maybe it's just me, and my fabulous world where stuff works for no good
reason. We could at least ask the infra team whether it's feasible.
Moreover, this is work that we may need to do anyway! So it's better to
just start it now, thus creating an example for all the vendors that have to
go through the split (back to Gary's point).

Appreciate your feedback,
Ivar.

[0] https://review.openstack.org/#/c/123596/
[1] https://github.com/noironetworks/apic-ml2-driver/tree/icehouse
[2]
https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/test-requirements.txt
[3]
https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/apic_ml2/neutron/db/migration/alembic_migrations/env.py
[4] https://github.com/noironetworks/apic-ml2-driver/blob/icehouse/setup.cfg
[5] https://github.com/openstack-dev/cookiecutter
[6] https://github.com/stackforge/group-based-policy

On Tue, Dec 9, 2014 at 9:53 AM, Salvatore Orlando sorla...@nicira.com
wrote:



 On 9 December 2014 at 17:32, Armando M. arma...@gmail.com wrote:



 On 9 December 2014 at 09:41, Salvatore Orlando sorla...@nicira.com
 wrote:

  I would like to chime in on this discussion wearing my plugin developer
  hat.

 We (the VMware team) have looked very carefully at the current proposal
 for splitting off drivers and plugins from the main source code tree.
 Therefore the concerns you've heard from Gary are not just ramblings but
 are the results of careful examination of this proposal.

 While we agree with the final goal, the feeling is that for many plugin
 maintainers this process change might be too much for what can be
 accomplished in a single release cycle.

 We actually gave a lot more than a cycle:


 https://review.openstack.org/#/c/134680/16/specs/kilo/core-vendor-decomposition.rst
 LINE 416

 And in all honestly, I can only tell that getting this done by such an
 experienced team like the Neutron team @VMware shouldn't take that long.


 We are probably not experienced enough. We always love to learn new things.



 By the way, if Kyle can do it in his teeny tiny time that he has left
 after his PTL duties, then anyone can do it! :)

 

Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Devananda van der Veen
On Tue Dec 09 2014 at 10:13:52 AM Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 Kevin,

 Just to make sure everyone understands what Fuel Agent is about. Fuel
 Agent is agnostic to image format. There are 3 possibilities for image
 format
 1) DISK IMAGE contains GPT/MBR table and all partitions and metadata in
 case of md or lvm. That is just something like what you get when run 'dd
 if=/dev/sda of=disk_image.raw'


This is what IPA driver does today.


 2) FS IMAGE contains fs. Disk contains some partitions which then could be
 used to create md device or volume group contains logical volumes. We then
 can put a file system over plain partition or md device or logical volume.
 This type of image is what you get when run 'dd if=/dev/sdaN
 of=fs_image.raw'


This is what PXE driver does today, but it does so over a remote iSCSI
connection.

Work is being done to add support for this to IPA [0]


 3) TAR IMAGE contains files. It is when you run 'tar cf tar_image.tar /'

 Currently in Fuel we use FS images. Fuel Agent creates partitions, md and
 lvm devices and then downloads FS images and put them on partition devices
 (/dev/sdaN) or on lvm device (/dev/mapper/vgname/lvname) or md device
 (/dev/md0)
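For readers unfamiliar with the three image styles described above: they can usually be told apart by well-known magic bytes — the MBR boot-sector signature, the POSIX tar magic, and (for one common filesystem) the ext superblock magic. The following is a toy classifier to make the distinction concrete; it is not Fuel Agent or IPA code:

```python
def sniff_image(data):
    """Very rough classifier for the three image styles above.

    DISK: MBR boot-sector signature 0x55AA at byte offset 510.
    TAR:  POSIX 'ustar' magic at byte offset 257.
    FS:   ext2/3/4 superblock magic 0xEF53 at byte offset 1080
          (little-endian on disk, hence bytes 0x53 0xEF).
    """
    if len(data) > 511 and data[510:512] == b"\x55\xaa":
        return "disk"
    if len(data) > 261 and data[257:262] == b"ustar":
        return "tar"
    if len(data) > 1081 and data[1080:1082] == b"\x53\xef":
        return "fs"
    return "unknown"

# Demo: a fake 'dd if=/dev/sda' style disk image with an MBR signature.
mbr = bytearray(1024)
mbr[510:512] = b"\x55\xaa"
print(sniff_image(bytes(mbr)))   # disk
```

A real agent would of course check GPT headers, other filesystem magics, and compressed wrappers as well; the point is only that "image format" is detectable, which is why an agent can afford to be agnostic about it.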


I believe the IPA team would welcome contributions that add support for
software RAID for the root partition.


 Fuel Agent is also able to install and configure grub.


Again, I think this would be welcomed by the IPA team...

If this is what FuelAgent is about, why is there so much resistance to
contributing that functionality to the component which is already
integrated with Ironic? Why complicate matters for both users and
developers by adding *another* deploy agent that does (or will soon do) the
same things?

-Deva

[0]
https://blueprints.launchpad.net/ironic/+spec/partition-image-support-for-agent-driver
https://review.openstack.org/137363


Re: [openstack-dev] [hacking] hacking package upgrade dependency with setuptools

2014-12-09 Thread Joe Gordon
On Tue, Dec 9, 2014 at 11:05 AM, Surojit Pathak suro.p...@gmail.com wrote:

 Hi all,

 On a RHEL system, as I upgrade hacking package from 0.8.0 to 0.9.5, I see
 flake8 stops working. Upgrading setuptools resolves the issue. But I do not
 see a change in version for pep8 or setuptools, with the upgrade in
 setuptools.

 Any issue in packaging? Any explanation of this behavior?

 Snippet -
 [suro@poweredsoured ~]$ pip list | grep hacking
 hacking (0.8.0)
 [suro@poweredsoured ~]$
 [suro@poweredsoured app]$ sudo pip install hacking==0.9.5
 ... Successful installation
 [suro@poweredsoured app]$ flake8 neutron/
 ...
   File /usr/lib/python2.6/site-packages/pkg_resources.py, line 546, in
 resolve
 raise DistributionNotFound(req)
  pkg_resources.DistributionNotFound: pep8>=1.4.6
 [suro@poweredsoured app]$ pip list | grep pep8
 pep8 (1.5.6)
 [suro@poweredsoured app]$ pip list | grep setuptools
 setuptools (0.6c11)
 [suro@poweredsoured app]$ sudo pip install -U setuptools
 ...
 Successfully installed setuptools
 Cleaning up...
 [suro@poweredsoured app]$ pip list | grep pep8
 pep8 (1.5.6)
 [suro@poweredsoured app]$ pip list | grep setuptools
 setuptools (0.6c11)
 [suro@poweredsoured app]$ flake8 neutron/
 [suro@poweredsoured app]$


Could this be pbr related?

-pbr>=0.5.21,<1.0
+pbr>=0.6,!=0.7,<1.0
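(The root cause here is unclear — quite possibly the ancient setuptools 0.6c11 mishandling newer requirement metadata — but for readers unfamiliar with requirement strings like the pbr line above, the check a resolver performs is roughly the following. This is a toy re-implementation that ignores PEP 440 subtleties such as "0.7.0" matching "!=0.7"; it is not setuptools code.)

```python
def _ver(v):
    # "1.4.6" -> (1, 4, 6); good enough for this illustration.
    return tuple(int(p) for p in v.split("."))

def satisfies(installed, spec):
    # Evaluate a comma-separated specifier string supporting only the
    # operators that appear in the pbr lines above: >=, !=, <.
    for clause in spec.split(","):
        if clause.startswith(">="):
            ok = _ver(installed) >= _ver(clause[2:])
        elif clause.startswith("!="):
            ok = _ver(installed) != _ver(clause[2:])
        elif clause.startswith("<"):
            ok = _ver(installed) < _ver(clause[1:])
        else:
            raise ValueError("unsupported clause: %r" % clause)
        if not ok:
            return False
    return True

print(satisfies("1.5.6", ">=1.4.6"))          # True: pep8 1.5.6 is new enough
print(satisfies("0.7", ">=0.6,!=0.7,<1.0"))   # False: excluded by !=0.7
```

So pep8 1.5.6 plainly satisfies `pep8>=1.4.6`; the DistributionNotFound suggests the old setuptools never even got as far as a correct comparison, which is consistent with the upgrade fixing it.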





 --
 Regards,
 Surojit Pathak





Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?

2014-12-09 Thread Joe Gordon
On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) dannc...@cisco.com
wrote:

  Hi,

  I have a VM which is in ERROR state.


 +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 | ID                                   | Name                                         | Status | Task State | Power State | Networks |
 +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |          |
 +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+

  I tried both the CLI “nova delete” and Horizon “terminate instance”.
 Both accepted the delete command without any error.
 However, the VM never got deleted.

  Is there a way to remove the VM?


What version of nova are you using? This is definitely a serious bug, you
should be able to delete an instance in error state. Can you file a bug
that includes steps on how to reproduce the bug along with all relevant
logs.

bugs.launchpad.net/nova



  Thanks,
 Danny





Re: [openstack-dev] [Ironic] Fuel agent proposal

2014-12-09 Thread Devananda van der Veen
On Tue Dec 09 2014 at 9:45:51 AM Fox, Kevin M kevin@pnnl.gov wrote:

 We've been interested in Ironic as a replacement for Cobbler for some of
 our systems and have been kicking the tires a bit recently.

 While initially I thought this thread was probably another Fuel not
  playing well with the community kind of thing, I'm not thinking that any
  more. It's deeper than that.


There are aspects to both conversations here, and you raise many valid
points.

Cloud provisioning is great. I really REALLY like it. But one of the things
 that makes it great is the nice, pretty, cute, uniform, standard hardware
 the vm gives the user. Ideally, the physical hardware would behave the
 same. But,
 “No Battle Plan Survives Contact With the Enemy”.  The sad reality is,
 most hardware is different from each other. Different drivers, different
 firmware, different different different.


Indeed, hardware is different. And no matter how homogeneous you *think* it
is, at some point, some hardware is going to fail^D^D^Dbehave differently
than some other piece of hardware.

One of the primary goals of Ironic is to provide a common *abstraction* to
all the vendor differences, driver differences, and hardware differences.
There's no magic in that -- underneath the covers, each driver is going to
have to deal with the unpleasant realities of actual hardware that is
actually different.


 One way the cloud enables this isolation is by forcing the cloud admin's
 to install things and deal with the grungy hardware to make the interface
 nice and clean for the user. For example, if you want greater mean time
 between failures of nova compute nodes, you probably use a raid 1. Sure,
  it's kind of a pet thing to do, but it's up to the cloud admin to
  decide what's better, buying more hardware, or paying for more admin/user
  time. Extra hard drives are dirt cheap...

  So, in reality Ironic is playing in a space somewhere between "I want to
  use cloud tools to deploy hardware, yay!" and "ewww.., physical hardware's
  nasty. You have to know all these extra things and do all these extra
  things that you don't have to do with a vm..." I believe Ironic's going to
  need to be able to deal with this messiness in as clean a way as possible.


If by "clean" you mean "expose a common abstraction on top of all those
messy differences" -- then we're on the same page. I would welcome any
feedback as to where that abstraction leaks today, and on both spec and
code reviews that would degrade or violate that abstraction layer. I think
it is one of, if not *the*, defining characteristic of the project.


 But that's my opinion. If the team feels its not a valid use case, then
 we'll just have to use something else for our needs. I really really want
 to be able to use heat to deploy whole physical distributed systems though.

 Today, we're using software raid over two disks to deploy our nova
 compute. Why? We have some very old disks we recovered for one of our
 clouds and they fail often. nova-compute is pet enough to benefit somewhat
 from being able to swap out a disk without much effort. If we were to use
 Ironic to provision the compute nodes, we need to support a way to do the
 same.


I have made the (apparently incorrect) assumption that anyone running
anything sensitive to disk failures in production would naturally have a
hardware RAID, and that, therefor, Ironic should be capable of setting up
that RAID in accordance with a description in the Nova flavor metadata --
but did not need to be concerned with software RAIDs.

Clearly, there are several folks who have the same use-case in mind, but do
not have hardware RAID cards in their servers, so my initial assumption was
incorrect :)

I'm fairly sure that the IPA team would welcome contributions to this
effect.
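As a purely hypothetical sketch of what "RAID in accordance with a description in the Nova flavor metadata" could look like: the key names `raid:level` and `raid:disks` below are invented for illustration and are not Nova's or Ironic's real metadata keys.

```python
def raid_request_from_flavor(extra_specs):
    # Translate hypothetical flavor extra_specs into a RAID request a
    # deploy agent could act on. Key names are invented for this sketch;
    # Ironic's real RAID configuration interface was still being designed
    # when this thread was written.
    level = extra_specs.get("raid:level")
    if level is None:
        return None          # no RAID requested; plain single-disk deploy
    return {
        "raid_level": level,
        "disk_count": int(extra_specs.get("raid:disks", "2")),
    }

print(raid_request_from_flavor({"raid:level": "1"}))   # mirror across 2 disks
print(raid_request_from_flavor({}))                    # None
```

The point of such a mapping is that the *operator* encodes the RAID choice once in the flavor, and end-users never see the messy disk layout at deploy time.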

We're looking into ways of building an image that has a software raid
 pre-set up, and expanding it on boot.


Awesome! I hope that work will make its way into diskimage-builder ;)

(As an aside, I suggested this to the Fuel team back in Atlanta...)


 This requires each image to be customized for this case though. I can see
 Fuel not wanting to provide two different sets of images, hardware raid
 and software raid, that have the same contents in them, with just
 different partitioning layouts... If we want users to not have to care
 about partition layout, this is also not ideal...


End-users are probably not generating their own images for bare metal
(unless user == operator, in which case, it should be fine).


 Assuming Ironic can be convinced that these features really would be
 needed, perhaps the solution is a middle ground between the pxe driver and
 the agent?


I've been rallying for a convergence between the feature sets of these
drivers -- specifically, that the agent should support partition-based
images, and also support copy-over-iscsi as a deployment model. In
parallel, Lucas had started working on splitting the deploy interface into
both boot and deploy, at which point we 

Re: [openstack-dev] [keystone][all] Max Complexity Check Considered Harmful

2014-12-09 Thread Joe Gordon
On Mon, Dec 8, 2014 at 5:03 PM, Brant Knudson b...@acm.org wrote:


 Not too long ago projects added a maximum complexity check to tox.ini, for
 example keystone has max-complexity=24. Seemed like a good idea at the
 time, but in a recent attempt to lower the maximum complexity check in
 keystone[1][2], I found that the maximum complexity check can actually lead
 to less understandable code. This is because the check includes an embedded
 function's complexity in the function that it's in.


This behavior is expected.

Nested functions cannot be unit tested on their own.  Part of the issue is
that nested functions can access variables scoped to the outer function, so
the following function is valid:

 def outer():
     var = "outer"
     def inner():
         print var
     inner()


Because nested functions cannot easily be unit tested, and can be harder to
reason about since they can access variables that are part of the outer
function, I don't think they are easier to understand (there are still
cases where a nested function makes sense though).


 The way I would have lowered the complexity of the function in keystone is
 to extract the complex part into a new function. This can make the existing
 function much easier to understand for all the reasons that one defines a
 function for code. Since this new function is obviously only called from
 the function it's currently in, it makes sense to keep the new function
 inside the existing function. It's simpler to think about an embedded
 function because then you know it's only called from one place. The problem
 is, because of the existing complexity check behavior, this doesn't lower
 the complexity according to the complexity check, so you wind up putting
 the function as a new top-level one, and now a reader has to assume that the
 function could be called from anywhere and has to be much more cautious
 about changes to the function.


 Since the complexity check can lead to code that's harder to understand,
 it must be considered harmful and should be removed, at least until the
 incorrect behavior is corrected.


Why do you think the max complexity check is harmful? Because it prevents
large numbers of nested functions?
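The behavior Brant describes — an inner function's branches being charged to the enclosing function — can be reproduced with a rough `ast`-based proxy. flake8 actually uses the mccabe package's control-flow graph, so this walk is only an approximation of the idea, not the real check:

```python
import ast

SRC = """
def outer(x):
    def inner(y):
        if y > 0:
            return y
        return -y
    if x:
        return inner(x)
    return 0
"""

def branch_proxy(func_node):
    # Rough stand-in for McCabe complexity: 1 + branching constructs,
    # counted over *all* descendants, so a nested def's branches are
    # charged to the enclosing function as well.
    branches = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.BoolOp))
                   for n in ast.walk(func_node))
    return 1 + branches

tree = ast.parse(SRC)
outer = tree.body[0]            # def outer
inner = outer.body[0]           # nested def inner
print(branch_proxy(outer))      # 3: outer's own if, plus inner's if
print(branch_proxy(inner))      # 2: only inner's if
```

Hoisting `inner` to module level would drop `outer` from 3 to 2, which is exactly why nesting a helper doesn't reduce the reported complexity of the function it lives in.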




 [1] https://review.openstack.org/#/c/139835/
 [2] https://review.openstack.org/#/c/139836/
 [3] https://review.openstack.org/#/c/140188/

 - Brant






Re: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches

2014-12-09 Thread Joe Gordon
On Fri, Dec 5, 2014 at 11:43 AM, Ian Main im...@redhat.com wrote:

 Sean Dague wrote:
  On 12/04/2014 05:38 PM, Matt Riedemann wrote:
  
  
   On 12/4/2014 4:06 PM, Michael Still wrote:
   +Eric and Ian
  
   On Fri, Dec 5, 2014 at 8:31 AM, Matt Riedemann
   mrie...@linux.vnet.ibm.com wrote:
   This came up in the nova meeting today, I've opened a bug [1] for it.
   Since
   this isn't maintained by infra we don't have log indexing so I can't
 use
    logstash to see how pervasive it is, but multiple people are
   reporting the
   same thing in IRC.
  
   Who is maintaining the nova-docker CI and can look at this?
  
   It also looks like the log format for the nova-docker CI is a bit
   weird, can
   that be cleaned up to be more consistent with other CI log results?
  
   [1] https://bugs.launchpad.net/nova-docker/+bug/1399443
  
   --
  
   Thanks,
  
   Matt Riedemann
  
  
  
  
  
  
   Also, according to the 3rd party CI requirements [1] I should see
   nova-docker CI in the third party wiki page [2] so I can get details on
   who to contact when this fails but that's not done.
  
   [1] http://ci.openstack.org/third_party.html#requirements
   [2] https://wiki.openstack.org/wiki/ThirdPartySystems
 
  It's not the 3rd party CI job we are talking about, it's the one in the
  check queue which is run by infra.
 
  But, more importantly, jobs in those queues need shepards that will fix
  them. Otherwise they will get deleted.
 
  Clarkb provided the fix for the log structure right now -
  https://review.openstack.org/#/c/139237/1 so at least it will look
  vaguely sane on failures
 
-Sean

 This is one of the reasons we might like to have this in nova core.
 Otherwise
 we will just keep addressing issues as they come up.  We would likely be
  involved in doing this if it were part of nova core anyway.


While gating on nova-docker will prevent patches that break nova-docker 100%
of the time from landing, it won't do much to prevent transient failures. To fix
those we need people dedicated to making sure nova-docker is working.



 Ian

  --
  Sean Dague
  http://dague.net
 






Re: [openstack-dev] [keystone][all] Max Complexity Check Considered Harmful

2014-12-09 Thread Angus Salkeld
On Wed, Dec 10, 2014 at 8:43 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Mon, Dec 8, 2014 at 5:03 PM, Brant Knudson b...@acm.org wrote:


 Not too long ago projects added a maximum complexity check to tox.ini,
 for example keystone has max-complexity=24. Seemed like a good idea at
 the time, but in a recent attempt to lower the maximum complexity check in
 keystone[1][2], I found that the maximum complexity check can actually lead
 to less understandable code. This is because the check includes an embedded
 function's complexity in the function that it's in.


 This behavior is expected.

  Nested functions cannot be unit tested on their own.  Part of the issue is
 that nested functions can access variables scoped to the outer function, so
 the following function is valid:

   def outer():
       var = "outer"
       def inner():
           print var
       inner()


 Because nested functions cannot easily be unit tested, and can be harder
 to reason about since they can access variables that are part of the outer
 function, I don't think they are easier to understand (there are still
 cases where a nested function makes sense though).


I think the improvement in ease of unit testing is a huge plus from my
point of view (when splitting the function out at the same level).
This seems in the balance to be far more helpful than harmful.

-Angus



 The way I would have lowered the complexity of the function in keystone
 is to extract the complex part into a new function. This can make the
 existing function much easier to understand for all the reasons that one
 defines a function for code. Since this new function is obviously only
 called from the function it's currently in, it makes sense to keep the new
 function inside the existing function. It's simpler to think about an
 embedded function because then you know it's only called from one place.
 The problem is, because of the existing complexity check behavior, this
 doesn't lower the complexity according to the complexity check, so you
  wind up putting the function as a new top-level one, and now a reader has to
 assume that the function could be called from anywhere and has to be much
 more cautious about changes to the function.


 Since the complexity check can lead to code that's harder to understand,
 it must be considered harmful and should be removed, at least until the
 incorrect behavior is corrected.


 Why do you think the max complexity check is harmful? because it prevents
 large amounts of nested functions?




 [1] https://review.openstack.org/#/c/139835/
 [2] https://review.openstack.org/#/c/139836/
 [3] https://review.openstack.org/#/c/140188/

 - Brant









Re: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches

2014-12-09 Thread Eric Windisch


 While gating on nova-docker will prevent patches that break nova-docker 100%
 of the time from landing, it won't do much to prevent transient failures. To fix
 those we need people dedicated to making sure nova-docker is working.



What would be helpful for me is a way to know that our tests are breaking
without manually checking Kibana, such as an email.

Regards,
Eric Windisch


Re: [openstack-dev] [NFV][Telco] pxe-boot

2014-12-09 Thread Joe Gordon
On Wed, Dec 3, 2014 at 1:16 AM, Pasquale Porreca 
pasquale.porr...@dektech.com.au wrote:

 The use case we were thinking about is a Network Function (e.g. IMS Nodes)
 implementation in which the high availability is based on OpenSAF. In this
 scenario there is an Active/Standby cluster of 2 System Controllers (SC)
 plus several Payloads (PL) that boot from network, controlled by the SC.
 The logic of which service to deploy on each payload is inside the SC.

 In OpenStack both SCs and PLs will be instances running in the cloud;
 nevertheless the PLs should still boot from the network under the control
 of the SC. In fact, to use Glance to store the image for the PLs while
 keeping control of the PLs in the SC, the SC would have to trigger the
 boot of the PLs with requests to Nova/Glance, but an application running
 inside an instance should not directly interact with a cloud
 infrastructure service like Glance or Nova.


Why not? This is a fairly common practice.



 We know that it is already possible to achieve network booting in
 OpenStack using an image stored in Glance that acts like a pxe client, but
 this workaround has some drawbacks, mainly that it is not possible to
 choose the specific virtual NIC on which the network boot will happen,
 causing DHCP requests to flow on networks where they don't belong and
 possible delays in the boot of the instances.


 On 11/27/14 00:32, Steve Gordon wrote:

 - Original Message -

 From: Angelo Matarazzo angelo.matara...@dektech.com.au
 To: OpenStack Development Mailing openstack-dev@lists.openstack.org,
 openstack-operat...@lists.openstack.org


 Hi all,
 my team and I are working on pxe boot feature very similar to the
 Discless VM one  in Active blueprint list[1]
 The blueprint [2] is no longer active and we created a new spec [3][4].

 Nova core reviewers commented our spec and the first and the most
 important objection is that there is not a compelling reason to
 provide this kind of feature : booting from network.

 Aside from the specific implementation, I think that some members of the
 TelcoWorkingGroup could be interested in this and could provide a use case.
 I would also like to add this item to the agenda of the next meeting.

 Any thoughts?

 We did discuss this today, and granted it is listed as a blueprint that
 someone in the group had expressed interest in at some point in time,
 though I don't believe any further work was done. The general feeling was
 that there isn't anything really NFV or Telco specific about this over and
 above the more generic use case of legacy applications. Are you able to
 elaborate further on the reason it's NFV or Telco specific, other than
 because of who is requesting it in this instance?

 Thanks!

 -Steve



 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr






Re: [openstack-dev] People of OpenStack (and their IRC nicks)

2014-12-09 Thread Angus Salkeld
On Wed, Dec 10, 2014 at 5:11 AM, Stefano Maffulli stef...@openstack.org
wrote:

 On 12/09/2014 06:04 AM, Jeremy Stanley wrote:
  We already have a solution for tracking the contributor-IRC
  mapping--add it to your Foundation Member Profile. For example, mine
  is in there already:
 
  http://www.openstack.org/community/members/profile/5479

 I recommend updating the openstack.org member profile and add IRC
 nickname there (and while you're there, update your affiliation history).

 There is also a search engine on:

 http://www.openstack.org/community/members/


Except that info doesn't appear nicely in review. Some people put their
nick in their Full Name in gerrit. Hopefully Clint doesn't mind:

https://review.openstack.org/#/q/owner:%22Clint+%27SpamapS%27+Byrum%22+status:open,n,z

I *think* that's done here: https://review.openstack.org/#/settings/contact

At least with that it is really obvious what your nick is, without having
to go to another site.

-Angus


 /stef




Re: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches

2014-12-09 Thread Joe Gordon
On Tue, Dec 9, 2014 at 3:18 PM, Eric Windisch e...@windisch.us wrote:


 While gating on nova-docker will prevent patches that break nova-docker
 100% of the time from landing, it won't do much to prevent transient
 failures. To fix those we need people dedicated to making sure nova-docker
 is working.



 What would be helpful for me is a way to know that our tests are breaking
 without manually checking Kibana, such as an email.



There is also graphite [0], but since the docker job is running in the
check queue the data we are producing is very dirty, since check jobs
often run on broken patches.

[0]
http://graphite.openstack.org/render/?from=-10daysheight=500until=nowwidth=1200bgcolor=fffgcolor=00yMax=100yMin=0target=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.check-tempest-dsvm-docker.FAILURE,sum(stats.zuul.pipeline.check.job.check-tempest-dsvm-docker.{SUCCESS,FAILURE})),%2736hours%27),%20%27check-tempest-dsvm-docker%27),%27orange%27)title=Docker%20Failure%20Rates%20(10%20days)_t=0.3702208176255226
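
For reference, the percentage that the Graphite target above computes, asPercent(FAILURE, sum(SUCCESS, FAILURE)), can be reproduced in a few lines. A minimal sketch with invented sample counts (not real job data):

```python
# Sketch of the failure-rate percentage computed by the Graphite query
# above: FAILURE as a percent of all completed runs. Counts are invented.

def failure_rate(success, failure):
    total = success + failure
    if total == 0:
        return 0.0  # avoid division by zero when no runs completed
    return 100.0 * failure / total

print(failure_rate(75, 25))  # 25.0
```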



 Regards,
 Eric Windisch





