[openstack-dev] [nova][all] Tracking bug and patch statuses

2014-06-06 Thread Joe Gordon
Hi All,

In the nova meeting this week, we discussed some of the shortcomings of our
recent bug day. One of the ideas that was brought up was to do a better job
of keeping track of stale bugs (assigned but not worked on) [0]. To that
end I put something together, based on what infra uses for their bug days, to
go through all the open bugs in a project and list the related gerrit
patches and their state [1].

I ran this on nova [2] (just the first 750 bugs or so) and
python-novaclient [3].  From the looks of it, we could be doing a much better
job of keeping bug states in sync with patches.

[0]
http://eavesdrop.openstack.org/meetings/nova/2014/nova.2014-06-05-21.01.log.html
[1] https://github.com/jogo/openstack-infra-scripts
[2] http://paste.openstack.org/show/83055/
[3] http://paste.openstack.org/show/83057
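For what it's worth, the core cross-check such a script performs can be sketched in a few lines: pair each bug's status with the states of its Gerrit patches and flag the mismatches. This is an illustrative sketch, not the actual script [1]; the status names are assumptions based on common Launchpad/Gerrit conventions.

```python
# Illustrative sketch (not the actual script [1]): flag "stale" bugs,
# i.e. bugs marked In Progress on Launchpad whose Gerrit patches are
# all merged/abandoned, or missing entirely.

ACTIVE_PATCH_STATES = {"NEW"}  # open and still under review

def is_stale(bug_status, patch_states):
    """True if the bug claims work in progress but no open patch backs it."""
    if bug_status != "In Progress":
        return False
    return not any(state in ACTIVE_PATCH_STATES for state in patch_states)

print(is_stale("In Progress", []))                 # True: no patch at all
print(is_stale("In Progress", ["ABANDONED"]))      # True: only dead patches
print(is_stale("In Progress", ["NEW", "MERGED"]))  # False: a review is open
```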
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Can nova manage both ironic bare metal nodes and kvm VMs ?

2014-06-06 Thread Jander lu
Not yet -- I have not successfully set up my Ironic environment.

Is your Ironic environment running well now?


2014-06-06 11:53 GMT+08:00 严超 yanchao...@gmail.com:

 Hi, Jander:
 Thank you very much .
 Does this work after you follow these steps?

 Best Regards!

 Chao Yan
 My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
 My Weibo: http://weibo.com/herewearenow


 2014-06-06 11:40 GMT+08:00 Jander lu lhcxx0...@gmail.com:

 Hi Chao,
 I have met the same problem. I read this article:
 http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/


 2014-06-05 19:26 GMT+08:00 严超 yanchao...@gmail.com:

  Hi, All:
 In deploying with devstack and Ironic+Nova, we set:
     compute_driver = nova.virt.ironic.IronicDriver
 This means we can no longer use nova to boot VMs.
 Is there a way to manage both Ironic bare metal nodes and KVM VMs in
 Nova?
  I followed this Link:
 https://etherpad.openstack.org/p/IronicDeployDevstack


 Best Regards!

 Chao Yan
 My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
 My Weibo: http://weibo.com/herewearenow



Re: [openstack-dev] [Ironic] two confused part about Ironic

2014-06-06 Thread Jander lu
Hi Devananda

I have 16 compute nodes. Per your suggestion (you should use host aggregates
to differentiate the nova-compute services configured to use different
hypervisor drivers (eg, nova.virt.libvirt vs nova.virt.ironic)):

(1) Can I set 4 of them to nova.virt.ironic (for bare metal provisioning) and
leave the other 12 with nova.virt.libvirt (for VM provisioning), and will they
work well together for both VM and Ironic provisioning? Of course I would use
host aggregates to put the 4 nodes in one aggregate and the remaining 12
nodes in another.
(2) Should I replace the nova scheduler, or can the default scheduler (Filter
Scheduler) support this?
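For context, the aggregate-based separation suggested above boils down to metadata matching between a host's aggregate and a flavor's extra specs, in the spirit of nova's AggregateInstanceExtraSpecsFilter. A hand-rolled sketch of the check (not nova's actual filter code; the metadata keys below are illustrative):

```python
# Toy version of aggregate-metadata scheduling: a host is eligible for a
# flavor only if its aggregate metadata satisfies the flavor's extra specs.

def host_passes(aggregate_metadata, flavor_extra_specs):
    return all(aggregate_metadata.get(key) == value
               for key, value in flavor_extra_specs.items())

ironic_hosts = {"hypervisor": "ironic"}    # the 4 bare metal nodes
libvirt_hosts = {"hypervisor": "libvirt"}  # the other 12 nodes
baremetal_flavor = {"hypervisor": "ironic"}

print(host_passes(ironic_hosts, baremetal_flavor))   # True
print(host_passes(libvirt_hosts, baremetal_flavor))  # False
```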


2014-06-06 1:27 GMT+08:00 Devananda van der Veen devananda@gmail.com:

 There is documentation available here:
   http://docs.openstack.org/developer/ironic/deploy/install-guide.html

 On Thu, Jun 5, 2014 at 1:25 AM, Jander lu lhcxx0...@gmail.com wrote:
  Hi, Devvananda
 
  I searched a lot about the installation of Ironic, but there is little
  material about this; there is only devstack with
  ironic (
 http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html)
 
  Are there any docs about how to deploy Ironic on a production physical node
  environment?
 
  Thanks
 
 
 
  2014-05-30 1:49 GMT+08:00 Devananda van der Veen 
 devananda@gmail.com:
 
  On Wed, May 28, 2014 at 8:14 PM, Jander lu lhcxx0...@gmail.com wrote:
 
  Hi guys, I have two confusing parts about Ironic.
 
 
 
  (1) If I use the nova boot API to launch a physical instance, how does
  the nova boot command differentiate between VM and physical node provisioning?
  From this article, nova bare metal uses a PlacementFilter instead of the
  FilterScheduler, so does Ironic use the same method?
  (http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/)
 
 
  That blog post is now more than three releases old. I would strongly
  encourage you to use Ironic, instead of nova-baremetal, today. To my
  knowledge, that PlacementFilter was not made publicly available. There
 are
  filters available for the FilterScheduler that work with Ironic.
 
  As I understand it, you should use host aggregates to differentiate the
  nova-compute services configured to use different hypervisor drivers
 (eg,
  nova.virt.libvirt vs nova.virt.ironic).
 
 
 
  (2) Does Ironic only support flat networking? If not, how does Ironic
  implement tenant isolation in a virtual network? Say, if one tenant has
  two virtual network namespaces, how does the created bare metal
  instance send its DHCP request to the right namespace?
 
 
  Ironic does not yet perform tenant isolation when using the PXE driver,
  and should not be used in an untrusted multitenant environment today. There
  are other issues with untrusted tenants as well (such as firmware exploits)
  that make it generally unsuitable for untrusted multitenancy (though
  specialized hardware platforms may mitigate this).
 
  There have been discussions with Neutron, and work is being started to
  perform physical network isolation, but this is still some ways off.
 
  Regards,
  Devananda
 
 


Re: [openstack-dev] [Glance][TC] Glance Functional API and Cross-project API Consistency

2014-06-06 Thread Mark McLoughlin
On Fri, 2014-05-30 at 18:22 +, Hemanth Makkapati wrote:
 Hello All,
 I'm writing to notify you of the approach the Glance community has
 decided to take for its functional API, and to
 solicit your feedback on this approach in the light of cross-project
 API consistency.
 
 At the Atlanta Summit, the Glance team discussed introducing a
 functional API in Glance so as to be able to expose operations/actions
 that do not fit naturally into the CRUD style. A few approaches were
 proposed and discussed here. We have all converged on the approach of
 including 'actions' and the action type in the URL. For instance,
 'POST /images/{image_id}/actions/{action_type}'.
 
 However, this differs from the way Nova does actions. Nova
 includes the action type in the payload. For instance,
 'POST /servers/{server_id}/action {type: action_type, ...}'. At
 this point, we hit the cross-project API consistency issue mentioned
 here (under the heading 'How to act on resource - cloud perform on
 resources'). Though we are differing from the way Nova does actions,
 and hence adding another source of cross-project API inconsistency, we
 have a few reasons to believe that Glance's way is helpful.
 
 The reasons are as follows:
 1. Discoverability of operations. It'll be easier to expose permitted
 actions through schemas or a JSON home document living
 at /images/{image_id}/actions/.
 2. More conducive to rate-limiting. It'll be easier to rate-limit
 actions in different ways if the action type is available in the URL.
 3. Makes more sense for functional actions that don't require a
 request body (e.g., image deactivation).
 
 At this point we are curious to see if the API conventions group
 believes this is a valid and reasonable approach.

It's obviously preferable if new APIs follow conventions established by
existing APIs, but I think you've laid out pretty compelling rationale
for not following Nova's lead on this.

The question is whether Nova should plan to adopt this approach in a
future version of its API.

Mark.
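For concreteness, the two conventions under discussion can be sketched side by side. The payload shapes below are illustrative only, mirroring the examples quoted above rather than either service's exact API:

```python
# Glance's proposed style: the action type is part of the URL.
def glance_action(image_id, action_type, body=None):
    return ("POST", "/images/%s/actions/%s" % (image_id, action_type),
            body or {})

# Nova's existing style: one URL per resource, action type in the payload.
def nova_action(server_id, action_type, body=None):
    payload = {"type": action_type}
    payload.update(body or {})
    return ("POST", "/servers/%s/action" % server_id, payload)

print(glance_action("42", "deactivate"))   # action visible in the URL, empty body
print(nova_action("42", "reboot", {"reboot_type": "SOFT"}))
```

The URL-based form is what makes per-action rate limiting and discoverability straightforward: the action name is available before the body is even parsed.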





[openstack-dev] [Nova] Thoughts about how to enforce DB and object models mapping

2014-06-06 Thread Sylvain Bauza
Hi,

By working on providing a new scheduler client for compute nodes, I
began to use the ComputeNode object instead of placing a call to
conductor directly.

Unfortunately, I recently discovered that some changes have been done in
the DB model for ComputeNode that haven't been populated on the
corresponding Object model.

As there is no current code in Nova using ComputeNode objects, I can
understand that, as the first user of the object, I have to find the
gaps and fix them, so I'm OK with that.

That said, I'm thinking about how we could make sure that any change in
a DB model would have to be also done on the corresponding Object model.
A possible approach would be to create a test class for each object and
provide a check against the DB model to make sure that at least the
fields are all there.

Does that sound reasonable to you? Should we provide another way to
gate this?

Thanks,
-Sylvain
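A sketch of what such a test class could assert, reduced to plain sets here (the real test would pull column names from the SQLAlchemy model and field names from the object; the ComputeNode field names below are hypothetical):

```python
# Report DB columns that the corresponding object model does not expose.
def missing_object_fields(db_columns, object_fields):
    return set(db_columns) - set(object_fields)

# Hypothetical ComputeNode shapes for illustration only:
db_compute_node = {"id", "vcpus", "memory_mb", "metrics"}
obj_compute_node = {"id", "vcpus", "memory_mb"}

gaps = missing_object_fields(db_compute_node, obj_compute_node)
print(gaps)  # a field added to the DB model but missing from the object

# A gate test would then simply do: assert not gaps
```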




Re: [openstack-dev] [Glance] [TC] Program Mission Statement and the Catalog

2014-06-06 Thread Mark McLoughlin
On Wed, 2014-06-04 at 18:03 -0700, Mark Washenberger wrote:
 Hi folks,
 
 
 I'd like to propose the Images program to adopt a mission statement
 [1] and then change it to reflect our new aspirations of acting as a
 Catalog that works with artifacts beyond just disk images [2]. 
 
 
 Since the Glance mini summit early this year, momentum has been
 building significantly behind the catalog effort and I think it's time we
 recognized it officially, to ensure further growth can proceed and to
 clarify the interactions the Glance Catalog will have with other
 OpenStack projects.
 
 
 Please see the linked openstack/governance changes, and provide your
 feedback either in this thread, on the changes themselves, or in the
 next TC meeting when we get a chance to discuss.
 
 
 Thanks to Georgy Okrokvertskhov for coming up with the new mission
 statement.

Just quoting the proposal here to make the idea slightly more
accessible, perhaps triggering some discussion here:

  https://review.openstack.org/98002

  Artifact Repository Service:
    codename: Glance
    mission:
      To provide services to store, browse, share, distribute, and manage
      artifacts consumable by OpenStack services in a unified manner. An
      artifact is any strongly-typed, versioned collection of document and
      bulk, unstructured data and is immutable once the artifact is
      published in the repository.

Thanks,
Mark.





Re: [openstack-dev] [Nova] Mid cycle meetup

2014-06-06 Thread Thierry Carrez
Michael Still wrote:
 Nova will hold its Juno mid cycle meetup between July 28 and 30, at an
 Intel campus in Beaverton, OR (near Portland). There is a wiki page
 with more details here:
 
 https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint

On https://wiki.openstack.org/wiki/Sprints it shows that this would be a
Nova+Ironic thing, and there is a TripleO+Ironic thing in Raleigh over
the same dates.

Should one of those drop its Ironic focus to encourage convergence of
Ironic devs on a single area?

cheers,

-- 
Thierry Carrez (ttx)



[openstack-dev] [Neutron] Too much shim rest proxy mechanism drivers in ML2

2014-06-06 Thread henry hly
ML2 mechanism drivers are becoming another kind of plugin. Although they
can be loaded together, they cannot work with each other.

Today, there are more and more drivers, supporting all kinds of networking
hardware and middleware (sdn controller). Unfortunately, they are designed
exclusively as chimney REST proxies.

A very general use case of heterogeneous networking: we have OVS controlled
by the ovs agent, and switches from different vendors; some of them are
controlled by their own driver/agent directly, others are controlled by an
sdn controller middleware. Can we create a vxlan network across all these
software/hardware switches?

It's not so easy: neutron ovs uses the l2 population mech driver, sdn
controllers have their own population way, and today most dedicated switch
drivers can only support vlan. sdn controller people may say: it's ok, just
put everything under the control of my controller, leaving the ml2 plugin as a
shim rest proxy layer. But shouldn't OpenStack Neutron itself be the first
class citizen even if there is no controller involved?

Could we move all device-related adaptation (rest/ssh/netconf/of... proxy)
from these mechanism drivers to the agent side, leaving only the necessary code
in the plugin? Heterogeneous networking may become easier; ofagent
gives a good example: it can co-exist with the native neutron OVS agent in vxlan
l2 population. And with the help of the coming ML2 agent framework, hardware
device or middleware controller adaptation agents could be further simplified.


[openstack-dev] [Nova] [Ironic] vendor_passthru testing

2014-06-06 Thread Gopi Krishna Saripuri
Hi,

I'm using the icehouse devstack version. I'm testing the vendor_passthru
methods' behavior using curl, but it is failing with a 404 Not Found error.
Here is the query/response:

curl -H "X-Auth-Token: ${token}" \
http://10.105.214.179:6385/v1/nodes/2d70d135-85b5-4f75-b741-0ead90a42b29/vendor_passthru/get_firmware_info

fails with

{"error_message": "<html>\n <head>\n  <title>404 Not Found</title>\n </head>\n
<body>\n  <h1>404 Not Found</h1>\n  The resource could not be
found.<br /><br />\n\n\n\n </body>\n</html>"}


Is there a way to test vendor_passthru from the ironic CLI? I didn't see any
support for this in python-ironicclient.

I'm able to retrieve chassis/nodes/ports, but while testing the
vendor_passthru method, it fails with the 404 error.

Can someone help me with testing the vendor_passthru methods?

Regards
Gopi Krishna S


Re: [openstack-dev] [Nova] [Ironic] vendor_passthru testing

2014-06-06 Thread Lucas Alvares Gomes
Hi,



On Fri, Jun 6, 2014 at 9:44 AM, Gopi Krishna Saripuri
saripurig...@outlook.com wrote:
 Hi,

 I'm using icehouse devstack version. I'm testing the vendor_passthru methods
 behavior using curl , But it is failing with 404 not found error.
 Here is the query/response.

 curl -H "X-Auth-Token: ${token}" \
 http://10.105.214.179:6385/v1/nodes/2d70d135-85b5-4f75-b741-0ead90a42b29/vendor_passthru/get_firmware_info

 fails with

 {"error_message": "<html>\n <head>\n  <title>404 Not Found</title>\n
 </head>\n <body>\n  <h1>404 Not Found</h1>\n  The resource could not be
 found.<br /><br />\n\n\n\n </body>\n</html>"}

The vendor passthru methods only support POST right now [1].



 Is there a way to test vendor_passthru from the ironic CLI? I didn't see
 any support for this in python-ironicclient.

Unfortunately not, the CLI doesn't support vendor_passthru.


 I'm able to retrieve chassis/nodes/ports. But while testing vendor_passthru
 method, it is failing with 404 error.

 Can someone help me with testing the vendor_passthru methods.

 Regards
 Gopi Krishna S



[1] 
https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L452-L479



Re: [openstack-dev] [Nova] [Ironic] vendor_passthru testing

2014-06-06 Thread Ramakrishnan G
Hi,

vendor_passthru is a POST request, and that might be what's missing here:

curl -i -X POST -H 'Content-Type: application/json' -H 'Accept:
application/json' -H "X-Auth-Token: ${token}" \
http://10.105.214.179:6385/v1/nodes/2d70d135-85b5-4f75-b741-0ead90a42b29/vendor_passthru/get_firmware_info

Can you check whether the above works for you?

If not, you can try a GET on the node to check that GET listing at least
works:

curl -i -X GET -H 'Content-Type: application/json' -H 'Accept:
application/json' -H "X-Auth-Token: ${token}" \
http://10.105.214.179:6385/v1/nodes/2d70d135-85b5-4f75-b741-0ead90a42b29




On Fri, Jun 6, 2014 at 2:14 PM, Gopi Krishna Saripuri 
saripurig...@outlook.com wrote:

 Hi,

  I'm using the icehouse devstack version. I'm testing the vendor_passthru
  methods' behavior using curl, but it is failing with a 404 Not Found error.
  Here is the query/response:

  curl -H "X-Auth-Token: ${token}" \
  http://10.105.214.179:6385/v1/nodes/2d70d135-85b5-4f75-b741-0ead90a42b29/vendor_passthru/get_firmware_info

  fails with

  {"error_message": "<html>\n <head>\n  <title>404 Not Found</title>\n
  </head>\n <body>\n  <h1>404 Not Found</h1>\n  The resource could not be
  found.<br /><br />\n\n\n\n </body>\n</html>"}


  Is there a way to test vendor_passthru from the ironic CLI? I didn't see
  any support for this in python-ironicclient.

  I'm able to retrieve chassis/nodes/ports, but while testing
  vendor_passthru, it fails with the 404 error.

  Can someone help me with testing the vendor_passthru methods?

 Regards
 Gopi Krishna S





-- 
Ramesh


Re: [openstack-dev] [Neutron] Too much shim rest proxy mechanism drivers in ML2

2014-06-06 Thread Kevin Benton
Could we remove all device related adaption(rest/ssh/netconf/of... proxy)
from these mechanism driver to the agent side, leaving only necessary code
in the plugin?

I'm not sure I understand what you are advocating removing vs leaving in
the plugin. I know that some of the drivers are used to configure the
physical fabric to push vlans down to the port that the hypervisor is
plugged into and are meant to be used in conjunction with the openvswitch
driver.

While it might make sense for the port-level operations to be done at an
agent level, it doesn't work very well if they require a lot of queries to
driver specific DBs. It makes less sense when it comes to network and
subnet-level operations because there isn't really a mapping between a
network and specific agent, so it would just have to be fulfilled by a
random agent which again may require several DB calls back to the central
neutron DB.

Could you elaborate a little more on what types of code should be stripped
out of drivers and moved to agents?

--
Kevin Benton


On Fri, Jun 6, 2014 at 1:17 AM, henry hly henry4...@gmail.com wrote:

 ML2 mechanism drivers are becoming another kind of plugin. Although
 they can be loaded together, they cannot work with each other.

 Today, there are more and more drivers, supporting all kinds of networking
 hardware and middleware (sdn controller). Unfortunately, they are designed
 exclusively as chimney REST proxies.

 A very general use case of heterogeneous networking: we have OVS
 controlled by the ovs agent, and switches from different vendors; some of
 them are controlled by their own driver/agent directly, others are
 controlled by an sdn controller middleware. Can we create a vxlan network
 across all these software/hardware switches?

 It's not so easy: neutron ovs uses the l2 population mech driver, sdn
 controllers have their own population way, and today most dedicated switch
 drivers can only support vlan. sdn controller people may say: it's ok, just
 put everything under the control of my controller, leaving the ml2 plugin as
 a shim rest proxy layer. But shouldn't OpenStack Neutron itself be the first
 class citizen even if there is no controller involved?

 Could we move all device-related adaptation (rest/ssh/netconf/of... proxy)
 from these mechanism drivers to the agent side, leaving only the necessary
 code in the plugin? Heterogeneous networking may become easier; ofagent
 gives a good example: it can co-exist with the native neutron OVS agent in
 vxlan l2 population. And with the help of the coming ML2 agent framework,
 hardware device or middleware controller adaptation agents could be further
 simplified.





-- 
Kevin Benton


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-06-06 Thread Stig Telfer
Apologies for a late response. I agree, it’s an unfortunate aspect of Ironic’s
hardware-focused role in the project that CI test coverage is hard to
achieve.  And it is misleading to present our drivers as continuously tested
when they are not.

Nevertheless, I add a +1 in favour of keeping the drivers within Ironic’s tree.
Testing may be only on a best-effort basis, but taking drivers out of tree
would result in an Ironic that by default would only support hardware fitting
the profile of the test systems, and that could be quite limiting.

Can we combine the segregation of untested/unmaintained/unloved drivers with
counterbalancing efforts to promote better testing, maintenance and status
reporting for third-party drivers?  For example:

-  Promotion of third-party CI testing within Ironic

-  Associating a maintainer with each driver

-  On each release cycle, compiling a support matrix of driver status based
on data submitted by the driver maintainers, with drivers that submit no data
relegated from the table

I have seen pages on third-party CI and Gerrit’s interface to support this, but
I found nothing specific about Ironic driver contributors making use of it.
Is there anyone out there who has done this and can comment on their experience?

Best wishes,
Stig Telfer
Cray


From: Devananda van der Veen [mailto:devananda@gmail.com]
Sent: Thursday, May 22, 2014 1:03 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Ironic] handling drivers that will not be third-party 
tested

I'd like to bring up the topic of drivers which, for one reason or another, are 
probably never going to have third party CI testing.

Take for example the iBoot driver proposed here:
  https://review.openstack.org/50977

I would like to encourage this type of driver as it enables individual 
contributors, who may be using off-the-shelf or home-built systems, to benefit 
from Ironic's ability to provision hardware, even if that hardware does not 
have IPMI or another enterprise-grade out-of-band management interface. 
However, I also don't expect the author to provide a full third-party CI 
environment, and as such, we should not claim the same level of test coverage 
and consistency as we would like to have with drivers in the gate.

As it is, Ironic already supports out-of-tree drivers. A python module that 
registers itself with the appropriate entrypoint will be made available if the 
ironic-conductor service is configured to load that driver. For what it's 
worth, I recall Nova going through a very similar discussion over the last few 
cycles...

So, why not just put the driver in a separate library on github or stackforge?


-Devananda
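To make the out-of-tree route concrete: since drivers are discovered through entry points, an external package only needs to advertise one. A minimal sketch, assuming the `ironic.drivers` entry point namespace and a hypothetical package layout:

```ini
# setup.cfg of a hypothetical out-of-tree driver package
[entry_points]
ironic.drivers =
    my_iboot = my_iboot_pkg.driver:IBootDriver
```

The operator would then add the driver's name to the conductor's enabled-drivers configuration (again, an assumption about the option of the time) and restart ironic-conductor.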


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-06 Thread Day, Phil

From: Scott Devoid [mailto:dev...@anl.gov]
Sent: 04 June 2014 17:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation 
ratio out of scheduler

Not only live upgrades but also dynamic reconfiguration.

Overcommitting affects the quality of service delivered to the cloud user.  In 
this situation in particular, as in many situations in general, I think we want 
to enable the service provider to offer multiple qualities of service.  That 
is, enable the cloud provider to offer a selectable level of overcommit.  A 
given instance would be placed in a pool that is dedicated to the relevant 
level of overcommit (or, possibly, a better pool if the selected one is 
currently full).  Ideally the pool sizes would be dynamic.  That's the dynamic 
reconfiguration I mentioned preparing for.

+1 This is exactly the situation I'm in as an operator. You can do different 
levels of overcommit with host-aggregates and different flavors, but this has 
several drawbacks:

  1.  The nature of this is slightly exposed to the end-user, through 
extra-specs and the fact that two flavors cannot have the same name. One 
scenario we have is that we want to be able to document our flavor names--what 
each name means, but we want to provide different QoS standards for different 
projects. Since flavor names must be unique, we have to create different 
flavors for different levels of service. Sometimes you do want to lie to your 
users!
[Day, Phil] I agree that there is a problem with having every new option we add
in extra_specs leading to a new set of flavors.    There are a number of
changes up for review to expose more hypervisor capabilities via extra_specs
that also have this potential problem.    What I’d really like to be able to
ask for as a user is something like “a medium instance with a side order of
overcommit”, rather than have to choose from a long list of variations.    I
did spend some time trying to think of a more elegant solution – but as the
user wants to know what combinations are available, it pretty much comes down
to needing that full list of combinations somewhere.    So maybe the problem
isn’t having the flavors so much as in how the user currently has to specify
an exact match from that list.
If the user could say “I want a flavor with these attributes” and then the
system would find a “best match” based on criteria set by the cloud admin (for
example, I might or might not want to allow a request for an overcommitted
instance to use my not-overcommitted flavor, depending on the roles of the
tenant), then would that be a more user-friendly solution?


  2.  If I have two pools of nova-compute HVs with different overcommit
settings, I have to manage the pool sizes manually. Even if I use puppet to
change the config and flip an instance into a different pool, that requires me
to restart nova-compute. Not an ideal situation.
[Day, Phil] If the pools are aggregates, and the overcommit is defined by
aggregate metadata, then I don’t see why you need to restart nova-compute.
  3.  If I want to do anything complicated, like 3 overcommit tiers with
good, better, best performance, and allow the scheduler to pick better
for a good instance if the good pool is full, this is very hard and
complicated to do with the current system.
[Day, Phil]  Yep, a combination of filters and weighting functions would allow
you to do this – it’s not really tied to whether the overcommit is defined in
the scheduler or the host, though, as far as I can see.

I'm looking forward to seeing this in nova-specs!
~ Scott
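As background for readers, the arithmetic behind an allocation ratio is simple: the scheduler treats a host as having its physical capacity multiplied by the ratio. A sketch mirroring the kind of check nova's CPU/RAM filters apply (the numbers are illustrative):

```python
def can_fit(requested, used, total, allocation_ratio):
    """True if a request fits within the overcommitted capacity."""
    return used + requested <= total * allocation_ratio

# A 16-core host with cpu_allocation_ratio=16.0 exposes 256 schedulable vCPUs.
print(can_fit(requested=8, used=240, total=16, allocation_ratio=16.0))  # True
# The same host with no overcommit (ratio 1.0) is long since full.
print(can_fit(requested=8, used=240, total=16, allocation_ratio=1.0))   # False
```

Whether the ratio lives in the scheduler's config or the host's is exactly what the proposal debates; the check itself is unchanged either way.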


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-06 Thread Day, Phil


From: Scott Devoid [mailto:dev...@anl.gov]
Sent: 04 June 2014 17:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation 
ratio out of scheduler

Not only live upgrades but also dynamic reconfiguration.

Overcommitting affects the quality of service delivered to the cloud user.  In 
this situation in particular, as in many situations in general, I think we want 
to enable the service provider to offer multiple qualities of service.  That 
is, enable the cloud provider to offer a selectable level of overcommit.  A 
given instance would be placed in a pool that is dedicated to the relevant 
level of overcommit (or, possibly, a better pool if the selected one is 
currently full).  Ideally the pool sizes would be dynamic.  That's the dynamic 
reconfiguration I mentioned preparing for.

+1 This is exactly the situation I'm in as an operator. You can do different 
levels of overcommit with host-aggregates and different flavors, but this has 
several drawbacks:

  1.  The nature of this is slightly exposed to the end-user, through 
extra-specs and the fact that two flavors cannot have the same name. One 
scenario we have is that we want to be able to document our flavor names--what 
each name means, but we want to provide different QoS standards for different 
projects. Since flavor names must be unique, we have to create different 
flavors for different levels of service. Sometimes you do want to lie to your 
users!
[Day, Phil] BTW you might be able to (nearly) do this already if you define
aggregates for the two QoS pools, and limit which projects can be scheduled
into those pools using the AggregateMultiTenancyIsolation filter.    I say
“nearly” because, as pointed out by this spec, that filter currently only
excludes tenants from each aggregate – it doesn’t actually constrain them to
only be in a specific aggregate.


  2.  If I have two pools of nova-compute HVs with different overcommit
settings, I have to manage the pool sizes manually. Even if I use puppet to
change the config and flip an instance into a different pool, that requires me
to restart nova-compute. Not an ideal situation.
  3.  If I want to do anything complicated, like 3 overcommit tiers with
good, better, best performance and allow the scheduler to pick better
for a good instance if the good pool is full, this is very hard and
complicated to do with the current system.

I'm looking forward to seeing this in nova-specs!
~ Scott


[openstack-dev] [openstack-sdk-php] Reasons to use Behat/behavior driven development in an SDK?

2014-06-06 Thread Choi, Sam
Hello all,
During our most recent PHP SDK team meeting, a suggestion to use the PHP
framework Behat (http://behat.org/) was brought up. I'd like to begin a
discussion around the pros/cons of using Behat to create tests. For those not
familiar with the PHP SDK or Behat: we are currently using PHPUnit for all of
our tests and are considering Behat for writing human-readable stories that can
potentially generate tests to run against the SDK.

A couple of general questions for all devs:

- Is a similar framework being used for any OpenStack CLI/SDK project? 
I'd love to see an example.

- What kind of pros/cons have you all seen when using Behat or similar 
frameworks for testing?

Jamie

- Would the suggested Behat framework be intended to supplement our 
existing PHPUnit tests? In effect, giving us a set of Behat tests and PHPUnit 
tests. Or was the suggestion to have all tests rely on the Behat framework?

After reviewing Behat and some related technologies, I do have a few concerns 
at the moment. Any thoughts of the concerns would be appreciated.


- We are introducing another dependency for the PHP SDK. What happens if 
there is a reason for us to switch to another BDD framework or ditch it to go 
back to PHPUnit? It seems like we would likely be faced with significant test 
refactoring.

- Contributors will have to be proficient with Behat/BDD in order to 
write tests for the PHP SDK. This adds to what may already be a steep learning 
curve for a new contributor who has to learn our code base, Guzzle for the 
transport layer, how to use and develop various cloud services, etc.

- For more complicated tests, writing out features, scenarios, step 
definitions, and testing logic would almost certainly take longer than writing 
a traditional test in pure PHP.


Thanks,
--
Sam Choi
Hewlett-Packard Co.
HP Cloud Services
+1 650 316 1652 / Office

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Thoughts about how to enforce DB and object models mapping

2014-06-06 Thread Murray, Paul (HP Cloud)
Hi Sylvain,

I think the idea is that anyone making changes to the db data model has to make 
changes to the corresponding object and that is captured in the data model 
implications of nova-specs (or it should be). Then reviewers check it is done. 
Tests on the objects will necessarily exercise the database fields by writing 
to and reading from them, I think.

In the case of ComputeNode, it is still just getting there. I should be putting 
fields in as part of the resource-tracker-objects work track, and you will have 
seen by now that there are patches up for review doing that. Any help making it 
complete is welcome.

The philosophy is to only add fields that are being used - so as it is being 
built out, the existence of a field in db model does not necessarily imply it 
should be added to the ComputeNode object. It is easy to remove an unused field 
from the db, but more complicated to remove it from the object. Likewise for 
fields that want to change their format. So for my part, the fields that were 
not added yet are fields that were not used in the resource tracker - even if 
they are used in the scheduler.

The one exception is the pci_stats - which was undergoing change.

Paul.

-Original Message-
From: Sylvain Bauza [mailto:sba...@redhat.com] 
Sent: 06 June 2014 08:52
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova] Thoughts about how to enforce DB and object 
models mapping

Hi,

By working on providing a new scheduler client for compute nodes, I began to 
use the ComputeNode object instead of placing a call to conductor directly.

Unfortunately, I recently discovered that some changes have been done in the DB 
model for ComputeNode that haven't been populated on the corresponding Object 
model.

As there is no current code in Nova using ComputeNode objects, I can understand 
that, as the first user of the object, I have to find the gaps and fix them, 
so I'm OK with that.

That said, I'm thinking about how we could make sure that any change in a DB 
model would have to be also done on the corresponding Object model.
A possible approach would be to create a test class for each object and provide 
a check against the DB model to make sure that at least the fields are all 
there.

Does that sound reasonable to you ? Should we provide another way for gating 
this ?

Thanks,
-Sylvain
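The gating test Sylvain proposes could look roughly like the sketch below. 
The stand-in classes are invented for illustration; a real test would compare 
the column names of the SQLAlchemy ComputeNode model against the declared 
fields of the versioned ComputeNode object, with an explicit ignore list for 
deliberate omissions.

```python
# Illustrative sketch of a DB-model/object field-parity check.
class FakeDBModel:
    # stand-in for a SQLAlchemy model's column names
    columns = {"id", "hypervisor_hostname", "vcpus", "memory_mb"}

class FakeObject:
    # stand-in for a versioned object's declared fields
    fields = {"id", "hypervisor_hostname", "vcpus", "memory_mb"}

def missing_object_fields(db_columns, object_fields, ignored=()):
    """Columns present on the DB model but absent from the object."""
    return sorted(set(db_columns) - set(object_fields) - set(ignored))

# The gate test would simply assert the difference is empty:
assert missing_object_fields(FakeDBModel.columns, FakeObject.fields) == []
```

An explicit `ignored` list keeps the philosophy Paul describes workable: a 
field deliberately left off the object has to be listed, so the omission is 
visible to reviewers rather than silent.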


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [ironic] Can nova manage both ironic bare metal nodes and kvm VMs ?

2014-06-06 Thread 严超
Yes, I followed this link
https://etherpad.openstack.org/p/IronicDeployDevstack
So Ironic can automatically install ubuntu to bare metal.

Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727 (https://twitter.com/yanchao727)
My Weibo: http://weibo.com/herewearenow
--


2014-06-06 15:27 GMT+08:00 Jander lu lhcxx0...@gmail.com:

 still not, I did not successfully set up my Ironic environment.

 has your Ironic environment run well now?


 2014-06-06 11:53 GMT+08:00 严超 yanchao...@gmail.com:

 Hi, Jander:
 Thank you very much .
 Does this work after you follow these steps ?



 2014-06-06 11:40 GMT+08:00 Jander lu lhcxx0...@gmail.com:

 hi chao
 I have met the same problem, I read this article
 http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/


 2014-06-05 19:26 GMT+08:00 严超 yanchao...@gmail.com:

  Hi, All:
 In deploying with devstack and Ironic+Nova, we set:
 compute_driver = nova.virt.ironic.IronicDriver
 This means we can no longer use nova to boot VMs.
 Is there a way to manage both ironic bare metal nodes and kvm VMs
 in Nova ?
  I followed this Link:
 https://etherpad.openstack.org/p/IronicDeployDevstack





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Our gate status // getting better at rechecks

2014-06-06 Thread Lucas Alvares Gomes
On Fri, Jun 6, 2014 at 3:18 AM, David Shrewsbury
shrewsbury.d...@gmail.com wrote:
 FYI for all,

 I have posted http://review.openstack.org/98201 in an attempt to at least
 give *some* warning to developers that a change to one of the public
 methods on classes we derive from may have consequences to Ironic.
 I have WIP'd it until 97757 lands.

Thanks David, that will help a lot until we get the driver merged into nova.

 On Thu, Jun 5, 2014 at 7:51 PM, Devananda van der Veen
 devananda@gmail.com wrote:

 Quick update for those who are following along but may not be on IRC right
 now.

 The gate (not just ours -- the gate for all of openstack, which Ironic
 is a part of) is having issues right now. See Sean's email for details
 on that, and what you can do to help

   http://lists.openstack.org/pipermail/openstack-dev/2014-June/036810.html

 Also, a patch landed in Nova which completely broke Ironic's unit and
 tempest tests two days ago. Blame me if you need to blame someone -- I

No one to blame here; this is a problem with having an out-of-tree
driver, and it's better that it happened now than later. At least we
are at the beginning of the cycle, which gives us enough time to fix
it and add some sanity checks, like @David is doing, to prevent it from
happening again.

 looked at the patch and thought it was fine, and so did a couple
 nova-core. That is why all your Ironic patches are failing unit tests
 and tempest tests, and they will keep failing until 97757 lands.
 Unfortunately, the whole gate queue is backed up, so this fix has
 already been in the queue for ~24hrs, and will probably take another
 day, at least, to land.

 In the meantime, what can you do to help? Keep working on bug fixes in
 Ironic and in the nova.virt.ironic driver, and help review incoming
 specifications. See Sean's email, look at the elastic recheck status
 page, write E-R queries, and help fix those bugs. If you're not sure
 how to help with rechecks, join #openstack-qa.

 If you're an ironic-core member, please don't approve any patches
 until after 97757 lands -- and then, I think we should only be
 approving important bug fixes until the gate stabilizes.

 Thanks,
 Devananda



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Thoughts about how to enforce DB and object models mapping

2014-06-06 Thread Sylvain Bauza
Hi Paul,

Thanks for replying quickly,

Agreed with the fact it should be managed by a spec, now we have it.
The problem is that we can't guarantee that an unused field in the DB won't
be used someday without an approved blueprint, if there is still a
possibility to use the conductor directly (take a bugfix, for example).
How can we be sure that a DB field is not in use, and how, as reviewers,
can we -1 someone using it now?

Once 100% of Nova code will be migrated to Objects, then yes, it will be
easy to track down such changes.
Take the reserved_instances field in compute_nodes. By looking at the
compute manager, we can't find where it is defined so we could think
this DB field is not in use, but that's wrong. This field is being set
somewhere else, and is used by the scheduler in a filter.

So, indeed, I can understand how hard it is to provide new Object
versions, but IMHO we should assume that if no code is using a DB field,
that's a good fit for removing it from DB.

-Sylvain




Le 06/06/2014 12:46, Murray, Paul (HP Cloud) a écrit :
 Hi Sylvain,

 I think the idea is that anyone making changes to the db data model has to 
 make changes to the corresponding object and that is captured in the data 
 model implications of nova-specs (or it should be). Then reviewers check it 
 is done. Tests on the objects will necessarily exercise the database fields 
 by writing to and reading from them, I think.

 In the case of ComputeNode, it is still just getting there. I should be 
 putting fields in as part of the resource-tracker-objects work track, and you 
 will have seen by now that there are patches up for review doing that. Any 
 help making it complete is welcome.

 The philosophy is to only add fields that are being used - so as it is being 
 built out, the existence of a field in db model does not necessarily imply it 
 should be added to the ComputeNode object. It is easy to remove an unused 
 field from the db, but more complicated to remove it from the object. 
 Likewise for fields that want to change their format. So for my part, the 
 fields that were not added yet are fields that were not used in the resource 
 tracker - even if they are used in the scheduler.

 The one exception is the pci_stats - which was undergoing change.

 Paul.

 -Original Message-
 From: Sylvain Bauza [mailto:sba...@redhat.com] 
 Sent: 06 June 2014 08:52
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Nova] Thoughts about how to enforce DB and object 
 models mapping

 Hi,

 By working on providing a new scheduler client for compute nodes, I began to 
 use the ComputeNode object instead of placing a call to conductor directly.

 Unfortunately, I recently discovered that some changes have been done in the 
 DB model for ComputeNode that haven't been populated on the corresponding 
 Object model.

 As there is no current code in Nova using ComputeNode objects, I can 
 understand that, as the first user of the object, I have to find the gaps 
 and fix them, so I'm OK with that.

 That said, I'm thinking about how we could make sure that any change in a DB 
 model would have to be also done on the corresponding Object model.
 A possible approach would be to create a test class for each object and 
 provide a check against the DB model to make sure that at least the fields 
 are all there.

 Does that sound reasonable to you ? Should we provide another way for gating 
 this ?

 Thanks,
 -Sylvain




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-06 Thread Murray, Paul (HP Cloud)
Forcing an instance to a specific host is very useful for the operator - it 
fulfills a valid use case for monitoring and testing purposes. I am not 
defending a particular way of doing this, just bringing up that it has to be 
handled. The effect on limits is purely implementation - no limits get set so 
it by-passes any resource constraints, which is deliberate.

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: 04 June 2014 19:17
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation 
ratio out of scheduler

On 06/04/2014 06:10 AM, Murray, Paul (HP Cloud) wrote:
 Hi Jay,

 This sounds good to me. You left out the part about limits from the 
 discussion - these filters set the limits used at the resource tracker.

Yes, and that is, IMO, bad design. Allocation ratios are the domain of the 
compute node and the resource tracker. Not the scheduler. The allocation ratios 
simply adjust the amount of resources that the compute node advertises to 
others. Allocation ratios are *not* scheduler policy, and they aren't related 
to flavours.

 You also left out the force-to-host and its effect on limits.

force-to-host is definitively non-cloudy. It was a bad idea that should never 
have been added to Nova in the first place.

That said, I don't see how force-to-host has any effect on limits. 
Limits should not be output from the scheduler. In fact, they shouldn't be 
anything other than an *input* to the scheduler, provided in each host state 
struct that gets built from records updated in the resource tracker and the 
Nova database.

  Yes, I
 would agree with doing this at the resource tracker too.

 And of course the extensible resource tracker is the right way to do 
 it J

:) Yes, clearly this is something that I ran into while brainstorming around 
the extensible resource tracker patches.

Best,
-jay

 Paul.

 *From:*Jay Lau [mailto:jay.lau@gmail.com]
 *Sent:* 04 June 2014 10:04
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova] Proposal: Move CPU and memory 
 allocation ratio out of scheduler

 Does there is any blueprint related to this? Thanks.

 2014-06-03 21:29 GMT+08:00 Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com:

 Hi Stackers,

 tl;dr
 =

 Move CPU and RAM allocation ratio definition out of the Nova scheduler 
 and into the resource tracker. Remove the calculations for overcommit 
 out of the core_filter and ram_filter scheduler pieces.

 Details
 ===

 Currently, in the Nova code base, the thing that controls whether or 
 not the scheduler places an instance on a compute host that is already 
 full (in terms of memory or vCPU usage) is a pair of configuration
 options* called cpu_allocation_ratio and ram_allocation_ratio.

 These configuration options are defined in, respectively, 
 nova/scheduler/filters/core_filter.py and 
 nova/scheduler/filters/ram_filter.py.

 Every time an instance is launched, the scheduler loops through a 
 collection of host state structures that contain resource consumption 
 figures for each compute node. For each compute host, the core_filter 
 and ram_filter's host_passes() method is called. In the host_passes() 
 method, the host's reported total amount of CPU or RAM is multiplied 
 by this configuration option, and the product is then subtracted from 
 the reported used amount of CPU or RAM. If the result is greater than 
 or equal to the number of vCPUs needed by the instance being launched, 
 True is returned and the host continues to be considered during 
 scheduling decisions.
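 The check described above can be sketched in a few lines. The function name 
 mirrors the filters' host_passes(), but the code below is only an 
 illustration of the arithmetic, not the actual nova implementation; the 
 16.0 default is the illustrative CPU ratio.

```python
# Sketch of the overcommit check: total vCPUs are multiplied by the
# allocation ratio, used vCPUs are subtracted, and the host passes if
# what remains covers the request.
def host_passes(total_vcpus, used_vcpus, requested_vcpus,
                cpu_allocation_ratio=16.0):
    limit = total_vcpus * cpu_allocation_ratio  # advertised capacity
    free = limit - used_vcpus
    return free >= requested_vcpus

# A 16-core host with a 16x ratio advertises 256 vCPUs:
assert host_passes(total_vcpus=16, used_vcpus=250, requested_vcpus=4)
assert not host_passes(total_vcpus=16, used_vcpus=255, requested_vcpus=4)
```

 Jay's point is that the `limit` line — the only place the ratio is used — 
 belongs in the resource tracker, so the scheduler just receives the 
 already-adjusted capacity.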

 I propose we move the definition of the allocation ratios out of the 
 scheduler entirely, as well as the calculation of the total amount of 
 resources each compute node contains. The resource tracker is the most 
 appropriate place to define these configuration options, as the 
 resource tracker is what is responsible for keeping track of total and 
 used resource amounts for all compute nodes.

 Benefits:

   * Allocation ratios determine the amount of resources that a compute 
 node advertises. The resource tracker is what determines the amount of 
 resources that each compute node has, and how much of a particular 
 type of resource have been used on a compute node. It therefore makes 
 sense to put calculations and definition of allocation ratios where 
 they naturally belong.
   * The scheduler currently needlessly re-calculates total resource 
 amounts on every call to the scheduler. This isn't necessary. The 
 total resource amounts don't change unless either a configuration 
 option is changed on a compute node (or host aggregate), and this 
 calculation can be done more efficiently once in the resource tracker.
   * Move more logic out of the scheduler
   * With the move to an extensible resource tracker, we can more 
 easily evolve to defining all resource-related options in the same 
 place (instead of in 

Re: [openstack-dev] [Nova] Mid cycle meetup

2014-06-06 Thread David Shrewsbury
On Fri, Jun 6, 2014 at 4:04 AM, Thierry Carrez thie...@openstack.org
wrote:

 Michael Still wrote:
  Nova will hold its Juno mid cycle meetup between July 28 and 30, at an
  Intel campus in Beaverton, OR (near Portland). There is a wiki page
  with more details here:
 
  https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint

 On https://wiki.openstack.org/wiki/Sprints it shows that this would be a
 Nova+Ironic thing, and there is a TripleO+Ironic thing in Raleigh over
 the same dates.

 Should one of those drop its Ironic focus to encourage convergence of
 Ironic devs on a single area ?


Devananda has already stated his preference for meeting with the Nova
team in order to help streamline the process of getting the Ironic driver
merged back into Nova this cycle. So it's unlikely we'll have any Ironic
folks in Raleigh.

--
David Shrewsbury (IRC: Shrews)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] OpenStack races piling up in the gate - please stop approving patches unless they are fixing a race condition

2014-06-06 Thread Jeremy Stanley
On 2014-06-06 00:19:15 -0400 (-0400), Kevin Benton wrote:
 So if I'm understanding you correctly, a job can fail and will be
 held onto in case one of the parent jobs causes a reset in which
 case it will be retested?

Correct. If change E suffers a job failure and so does change B
ahead of it, then it's assumed that change B may be at fault and
change E gets retested without B. That was one of the primary design
requirements (and arguably THE defining characteristic) of Zuul's
speculative parallelization.

http://ci.openstack.org/zuul/gating.html#testing-in-parallel
http://docs.openstack.org/infra/publications/zuul/#(18)

-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [TC] Program Mission Statement and the Catalog

2014-06-06 Thread Russell Bryant
On 06/06/2014 04:00 AM, Mark McLoughlin wrote:
 On Wed, 2014-06-04 at 18:03 -0700, Mark Washenberger wrote:
 Hi folks,


 I'd like to propose the Images program to adopt a mission statement
 [1] and then change it to reflect our new aspirations of acting as a
 Catalog that works with artifacts beyond just disk images [2]. 


 Since the Glance mini summit early this year, momentum has been
 building significantly behind catalog effort and I think its time we
 recognize it officially, to ensure further growth can proceed and to
 clarify the interactions the Glance Catalog will have with other
 OpenStack projects.


 Please see the linked openstack/governance changes, and provide your
 feedback either in this thread, on the changes themselves, or in the
 next TC meeting when we get a chance to discuss.


 Thanks to Georgy Okrokvertskhov for coming up with the new mission
 statement.
 
 Just quoting the proposal here to make the idea slightly more
 accessible, perhaps triggering some discussion here:
 
   https://review.openstack.org/98002
 
    Artifact Repository Service:
      codename: Glance
      mission:
        To provide services to store, browse, share, distribute, and manage
        artifacts consumable by OpenStack services in a unified manner. An
        artifact is any strongly-typed, versioned collection of document and
        bulk, unstructured data, and is immutable once the artifact is
        published in the repository.

As I noted on the review, I like this direction for Glance.  However, I
have context in my head from other discussions about this.

One concern is the broad definition of artifacts.  Perhaps that's what
we want though, and we'll just have exceptions when warranted, such as
key storage.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Too much shim rest proxy mechanism drivers in ML2

2014-06-06 Thread Mathieu Rohon
hi henry,

On Fri, Jun 6, 2014 at 10:17 AM, henry hly henry4...@gmail.com wrote:
 ML2 mechanism drivers are becoming another kind of plugin. Although they
 can be loaded together, they cannot work with each other.

 Today, there are more and more drivers, supporting all kinds of networking
 hardware and middleware (SDN controllers). Unfortunately, they are designed
 exclusively as chimney REST proxies.

 A very general use case of heterogeneous networking: we have OVS controlled
 by the OVS agent, and switches from different vendors; some of them are
 controlled by their own driver/agent directly, others are controlled by an
 SDN controller middleware. Can we create a vxlan network across all these
 sw/hw switches?

 It's not so easy: neutron OVS uses the l2population mech driver, SDN
 controllers have their own population mechanisms, and today most dedicated
 switch drivers can only support vlan. SDN controller people may say: it's
 OK, just put everything under the control of my controller, leaving the ML2
 plugin as a shim REST proxy layer. But shouldn't OpenStack Neutron itself be
 the first-class citizen even when there is no controller involved?

I totally agree. By using l2population with tunnel networks (vxlan,
gre), you will not be able to plug in an external device which could
possibly terminate your tunnel. The ML2 plugin has to be aware of a new
port in the vxlan segment. I think this is the scope of this bp:
https://blueprints.launchpad.net/neutron/+spec/neutron-switch-port-extension

Mixing several SDN controllers (when used with the ovs/of/lb agents,
neutron could itself be considered an SDN controller) could be achieved the
same way, with the SDN controller sending notifications to neutron for the
ports that it manages.


 Could we remove all device-related adaptation (rest/ssh/netconf/of... proxy)
 from these mechanism drivers to the agent side, leaving only the necessary
 code in the plugin? Heterogeneous networking may become easier, and ofagent
 gives a good example: it can co-exist with the native neutron OVS agent in
 vxlan l2population.

linuxbridge agent can coexist too

 And with the help of the coming ML2 agent framework, hardware device or
 middleware controller adaptation agents could be simplified further.

I don't understand the reason why you want to move middleware
controller to the agent.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Use of final and private keywords to limit extending

2014-06-06 Thread Jamie Hannaford
So this is an issue that’s been heavily discussed recently in the PHP community.

Based on personal opinion, I heavily favor and use private properties in 
software I write. I haven’t, however, used the “final” keyword that much. But 
the more I read about and see it being used, the more inclined I am to use it 
in projects. Here’s a great overview of why it’s useful for public APIs: 
http://verraes.net/2014/05/final-classes-in-php/

Here’s a tl;dr executive summary:

- Open/Closed principle. It’s important to understand that “Open for 
extension”, does not mean “Open for inheritance”. Composition, strategies, 
callbacks, plugins, event listeners, … are all valid ways to extend without 
inheritance. And usually, they are much preferred to inheritance – hence the 
conventional recommendation in OOP to “favour composition over inheritance”. 
Inheritance creates more coupling, that can be hard to get rid of, and that can 
make understanding the code quite tough.

- Providing an API is a responsibility: by allowing end-users to access 
features of our SDK, we need to give certain guarantees of stability or low 
change frequency. The behavior of classes should be deterministic - i.e. we 
should be able to trust that a class does a certain thing. There’s no trust 
whatsoever if that behavior can be edited and overridden from external code.

- Future-proofing: the fewer behaviours and extension points we expose, the 
more freedom we have to change system internals. This is the idea behind 
encapsulation.

You said that we should only use private and final keywords if there’s an 
overwhelming reason to do so. I completely disagree. I actually want to flip 
the proposition here: I think we should only use public keywords if we’re 
CERTAIN we want to encourage and allow the inheritance of that class. By making 
a class inheritable, you are saying to the outside world: this class is meant 
to be extended. And the majority of times this is not what we want. Sure there 
are times when inheritance may well be the best option - but you can support 
extension points in different, often better, ways. Declaring explicitly what 
the extension points are is part of the contract your code has with the rest of 
the system. Final classes help to enforce this contract.

To summarize, we have nothing to lose by favoring private and final keywords. 
We gain the above advantages, and if we later decide to open up that class as 
an extension point we can remove the keywords without any issues. Should a 
valid reason come up to open it up, it will be easy to do so, because nothing 
depends on it being closed. On the other hand, if you start by making 
everything open or inheritable, it will be very hard to close it later.
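A rough Python analogue of the idea Jamie is arguing for (the class and its
strategy hook are invented for illustration; PHP's final keyword maps to
typing.final here):

```python
# "Open for extension, closed for inheritance": the class is marked
# @final, and the one intended extension point is an injected strategy
# callable rather than an overridable method.
from typing import Callable, final

@final
class Uploader:
    """Not designed for subclassing; extend via the `transform` hook."""
    def __init__(self, transform: Callable[[bytes], bytes] = lambda b: b):
        self._transform = transform  # explicit, documented extension point

    def upload(self, payload: bytes) -> int:
        body = self._transform(payload)
        return len(body)  # stand-in for the actual network call

plain = Uploader()
suffixed = Uploader(transform=lambda b: b + b"!")  # illustrative strategy
assert plain.upload(b"abc") == 3
assert suffixed.upload(b"abc") == 4
```

The contract is explicit: internals can change freely, while `transform`
is the one behaviour users may rely on extending.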

Jamie


On June 5, 2014 at 6:24:52 PM, Matthew Farina (m...@mattfarina.com) wrote:

Some recent reviews have started to include the use of the private
keyword for methods and talk of using final on classes. I don't think
we have consistent agreement on how we should do this.

My take is that we should not use private or final unless we can
articulate the design decision to intentionally do so.

To limit the public API for a class we can use protected.
Moving from protected to private, or the use of final, should have a
good reason.

In open source software code is extended in ways we often don't think
of up front. Using private and final limits how those things can
happen. When we use them we are intentionally limiting extending so we
should be able to articulate why we want to put that limitation in
place.

Given the reviews that have been put forth I think there is a
different stance. If there is one please share it.

- Matt



Jamie Hannaford
Software Developer III - CH

Tel:+41434303908
Mob:+41791009767



Rackspace International GmbH a company registered in the Canton of Zurich, 
Switzerland (company identification number CH-020.4.047.077-1) whose registered 
office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland. Rackspace 
International GmbH privacy policy can be viewed at 
www.rackspace.co.uk/legal/swiss-privacy-policy
-
Rackspace Hosting Australia PTY LTD a company registered in the state of 
Victoria, Australia (company registered number ACN 153 275 524) whose 
registered office is at Suite 3, Level 7, 210 George Street, Sydney, NSW 2000, 
Australia. Rackspace Hosting Australia PTY LTD privacy policy can be viewed at 
www.rackspace.com.au/company/legal-privacy-statement.php
-
Rackspace US, Inc, 5000 Walzem Road, San Antonio, Texas 78218, United States of 
America
Rackspace US, Inc privacy policy can be viewed at 
www.rackspace.com/information/legal/privacystatement
-
Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ.
Rackspace Limited privacy policy can be viewed at 

Re: [openstack-dev] Missing button to send Ctrl+Alt+Del for SPICE Console

2014-06-06 Thread Ben Nemec
This sounds like a reasonable thing to open a bug against Horizon on
Launchpad.

On 06/04/2014 06:25 PM, Martinx - ジェームズ wrote:
 Hello Stackers!
 
 I'm using SPICE Consoles now but, there is no button to send Ctrl + Alt +
 Del to a Windows Instance, so, it becomes very hard to log in into those
 guests...
 
 Can you guys enable it at Horizon?!
 
 Tks!
 Thiago
 
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Mehdi Abaakouk added to oslo.messaging-core

2014-06-06 Thread Mark McLoughlin
Mehdi has been making great contributions and reviews on oslo.messaging
for months now, so I've added him to oslo.messaging-core.

Thank you for all your hard work Mehdi!

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mid-cycle sprints: a tracking page to rule them all

2014-06-06 Thread Ben Nemec
On 06/05/2014 04:48 AM, Thierry Carrez wrote:
 Hey everyone,
 
 With all those mid-cycle sprints being proposed I found it a bit hard to
 track them all (and, like, see if I could just attend one). I tried to
 list them all at:
 
 https://wiki.openstack.org/wiki/Sprints
 
 I'm pretty sure I missed some (in particular I couldn't find a Nova
 mid-cycle meetup), so if you have one set up and participation is still
 open, please add it there !
 

Thanks for pulling this all together.  I moved TripleO to "Under
Discussion" because I don't believe we've selected a final date yet.

-Ben



Re: [openstack-dev] [oslo] Mehdi Abaakouk added to oslo.messaging-core

2014-06-06 Thread Anita Kuno
On 06/06/2014 10:57 AM, Mark McLoughlin wrote:
 Mehdi has been making great contributions and reviews on oslo.messaging
 for months now, so I've added him to oslo.messaging-core.
 
 Thank you for all your hard work Mehdi!
 
 Mark.
 
 
 
Congratulations Mehdi!

Anita.



Re: [openstack-dev] [nova] Please help to review https://review.openstack.org/#/c/96679/

2014-06-06 Thread Ben Nemec
Please don't make review requests on openstack-dev.  See
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html
for details.

Thanks.

-Ben

On 06/05/2014 10:51 PM, Jian Hua Geng wrote:
 
 
 Hi All,
 
 Can anyone help to review this patch
 https://review.openstack.org/#/c/96679/ and provide me your comments?
 thanks a lot!
 
 
 --
 Best regard,
 David Geng
 --
 
 
 
 




Re: [openstack-dev] [oslo] Mehdi Abaakouk added to oslo.messaging-core

2014-06-06 Thread Ben Nemec
Great!  I know I'm not much use reviewing oslo.messaging patches, so now
I can feel slightly less guilty about it. :-)

-Ben

On 06/06/2014 09:57 AM, Mark McLoughlin wrote:
 Mehdi has been making great contributions and reviews on oslo.messaging
 for months now, so I've added him to oslo.messaging-core.
 
 Thank you for all your hard work Mehdi!
 
 Mark.
 
 
 




[openstack-dev] [neutron][L3] Please review blueprint..

2014-06-06 Thread Paul Michali (pcm)
https://review.openstack.org/#/c/88406/

Thanks!


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83







Re: [openstack-dev] [oslo] Mehdi Abaakouk added to oslo.messaging-core

2014-06-06 Thread Doug Hellmann
Welcome to the team, Mehdi!

On Fri, Jun 6, 2014 at 10:57 AM, Mark McLoughlin mar...@redhat.com wrote:
 Mehdi has been making great contributions and reviews on oslo.messaging
 for months now, so I've added him to oslo.messaging-core.

 Thank you for all your hard work Mehdi!

 Mark.





Re: [openstack-dev] [neutron][L3] Please review blueprint..

2014-06-06 Thread Carl Baldwin
I have it on my list to review today.  Thanks, Paul.

Carl

On Fri, Jun 6, 2014 at 9:11 AM, Paul Michali (pcm) p...@cisco.com wrote:
 https://review.openstack.org/#/c/88406/

 Thanks!


 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83








Re: [openstack-dev] [neutron][L3] Please review blueprint..

2014-06-06 Thread Anita Kuno
On 06/06/2014 11:29 AM, Carl Baldwin wrote:
 I have it on my list to review today.  Thanks, Paul.
 
 Carl
 
 On Fri, Jun 6, 2014 at 9:11 AM, Paul Michali (pcm) p...@cisco.com wrote:
 https://review.openstack.org/#/c/88406/

 Thanks!


 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




 
Let's remember that the mailing list is not the place to request reviews.

People read these emails and then copy behaviour. Let's take review
requests to irc, and also remember the best way to get reviews is to
give them.

Thanks,
Anita.



[openstack-dev] [horizon] Help debugging js files

2014-06-06 Thread Abishek Subramanian (absubram)
Hi,

I need to make some changes to the horizon.instances.js file and I was 
wondering what the best method would be to help debug issues in this file? Is 
there a debugger or something available to debug this file separately?

Thanks!
Abishek


Re: [openstack-dev] [Sahara] Spark plugin: EDP and Spark jobs

2014-06-06 Thread Trevor McKay
Thanks Daniele,

  This is a good summary (also pasted more or less on the etherpad).

 There is also a third possibility of bypassing the job-server problem
 and call directly Spark commands on the master node of the cluster.

  I am starting to play around with this idea as a simple
proof-of-concept. I think it can be done along with some refactoring in
the Sahara job manager.

  I think the refactoring is viable and can give us something where the
details of run, status, kill can be hidden behind another common
interface.  If this proves to be viable, we can pursue more capable
spark job models next.

We shall see!  Learn by doing.  I should have a CR in a few days.
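
As a rough illustration, the "call Spark commands directly on the master node" idea could amount to building a spark-submit invocation to run over SSH. The install path, port, and helper name below are assumptions for the sketch, not Sahara code:

```python
# Sketch of the "third possibility": assembling a spark-submit command
# to execute on the cluster's master node. Paths are assumed, not real
# Sahara configuration.
def build_submit_command(master_host, app_jar, main_class, job_args):
    return [
        "ssh", master_host,
        "/opt/spark/bin/spark-submit",            # assumed Spark location
        "--class", main_class,
        "--master", "spark://%s:7077" % master_host,
        app_jar,
    ] + list(job_args)

cmd = build_submit_command("master-node", "/tmp/job.jar",
                           "org.example.WordCount", ["hdfs:///input"])
# run/status/kill would then be layered on top of the spawned process.
```
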

Best,

Trevor

On Fri, 2014-06-06 at 10:54 +0200, Daniele Venzano wrote:
 Dear all,
 
 A short while ago the Spark plugin for Sahara was merged, opening up the
 possibility of deploying Spark clusters with one click from OpenStack.
 Since Spark is quite different from Hadoop, we need to take a number of
 decisions on how to proceed implementing important features, like, in
 particular, EDP. Spark does not have a built-in job-server and EDP needs
 a way to have a very generic and high level interface to submit, check
 the basic status and kill a job.
 
 In summary, this is our understanding of the current situation:
 1. a quick hack is to use Oozie for application submission (this mimics
 what Cloudera did by the end of last year, when preparing to announce
 the integration of Spark in CDH)
 2. an alternative is to use a spark job-server, which should replace Oozie
 (there is a repo on github from ooyala that implements an instance of a
 job-server)
 
 Here's our view on the points above:
 1. clearly, the first approach is an ugly hack, that creates
 dependencies with Oozie. Oozie requires mapreduce, and uses tiny
 map-only jobs to submit part of a larger workflow. Besides dependencies,
 this is a bazooka to kill a fly, as we're not addressing spark
 application workflows right now
 2. the spark job-server idea is cleaner, but the current project from
 Ooyala supports an old version of spark. Spark 1.0.0 (which we have
 already tested in Sahara and that we will commit soon) offers some new
 methods to submit and package applications, that can drastically
 simplify the job-server
 
 As a consequence, the doubt is: do we contribute to that project, create
 a new one, or contribute directly to spark?
 
 A few more points:
 - assuming we have a working prototype of 2), we need to modify the
 Sahara setup such that it deploys, in addition to the usual suspects
 (master and slaves) one more service, the spark job-server
 
 There is also a third possibility of bypassing the job-server problem
 and call directly Spark commands on the master node of the cluster.



 One last observation: currently, spark in standalone mode (that we use
 in the plugin) does not support other schedulers than FIFO, when
 multiple spark applications/jobs are submitted to the cluster. Hence,
 the spark job-server could be a good place to integrate a better job
 scheduler.
 
 Trevor McKay opened a pad here:
 https://etherpad.openstack.org/p/sahara_spark_edp
 
 to gather ideas and feedback. This email is based on the very
 preliminary discussion that happened yesterday via IRC, email and the
 above-mentioned etherpad and has the objective of starting a public
 discussion on how to proceed.
 





Re: [openstack-dev] [horizon] Help debugging js files

2014-06-06 Thread Ana Krivokapic

On 06/06/2014 05:39 PM, Abishek Subramanian (absubram) wrote:
 Hi,

 I need to make some changes to the horizon.instances.js file and I was
 wondering what the best method would be to help debug issues in this
 file? Is there a debugger or something available to debug this file
 separately?

 Thanks!
 Abishek

Hi Abishek,

A couple of debugging tips with regard to Horizon front-end development:

* Use Chrome's Developer Tools or Firefox's Firebug plugin. They include
very powerful tools including a debugger, a console, the ability to edit
static files on the fly, etc.

* Put COMPRESS_ENABLED = False into your local_settings.py to disable
automatic compression of static files. This will help you see which
files the errors are actually occurring in.

* If you are using devstack or similar for your development environment,
use Django's built-in web server, rather than a full fledged web server
like Apache. It will make your life much easier - you will not have to
run collectstatic after changing static files, and you will have the log
output in the console instead of having to dig through log files, etc.
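
For instance, the compression tip is a one-line change in local_settings.py (in a typical Horizon checkout the file lives under openstack_dashboard/local/ — adjust for your install):

```python
# Disable static-file compression so JS errors and stack traces point
# at the real, individual source files rather than a compressed bundle.
COMPRESS_ENABLED = False
```
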

HTH

-- 
Regards,

Ana Krivokapic
Software Engineer
OpenStack team
Red Hat Inc.




[openstack-dev] [sahara] Re: Spark plugin: EDP and Spark jobs

2014-06-06 Thread Trevor McKay
(resend with proper subject markers)



[openstack-dev] [Mistral] Refine engine - executor protocol

2014-06-06 Thread W Chan
Renat,

Regarding blueprint
https://blueprints.launchpad.net/mistral/+spec/mistral-engine-executor-protocol,
can you clarify what it means by worker parallelism and
engine-executor parallelism?
 Currently, the engine and executor are launched with the eventlet driver
in oslo.messaging.  Once a message arrives over transport, a new green
thread is spawned and passed to the dispatcher.  In the case of executor,
the function being dispatched to is handle_task.  I'm unclear what
additional parallelism this blueprint is referring to.  The context isn't
clear from the summit notes.
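
For reference, the per-message dispatch described above can be pictured roughly like this — a plain-threads stand-in for oslo.messaging's eventlet green threads, with handle_task reduced to a dummy (not Mistral code):

```python
import threading

# One thread per incoming message, each running the executor's
# handle_task(), so tasks already execute concurrently within a single
# executor process. handle_task here is a stand-in for the real executor.
results = []
lock = threading.Lock()

def handle_task(task):
    with lock:
        results.append("done:%s" % task)

def dispatch(messages):
    threads = [threading.Thread(target=handle_task, args=(m,))
               for m in messages]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

dispatch(["t1", "t2", "t3"])
```
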

Thanks.
Winson


Re: [openstack-dev] Designate Incubation Request

2014-06-06 Thread Mac Innes, Kiall
Several of the TC members requested we have an openstack-infra-managed DevStack
gate enabled before they would cast their vote - I'm happy to say, we've
got it :)

With the merge of [1], Designate now has voting devstack /
requirements / docs jobs. An example of the DevStack run is at [2].

Vote Designate @ [3] :)

Thanks,
Kiall

[1]: https://review.openstack.org/#/c/98439/
[2]: https://review.openstack.org/#/c/98442/
[3]: https://review.openstack.org/#/c/97609/

On Sat, 2014-05-24 at 17:24 +, Hayes, Graham wrote:
 Hi all,
 
 Designate would like to apply for incubation status in OpenStack.
 
 Our application is here: 
 https://wiki.openstack.org/wiki/Designate/Incubation_Application 
 
 As part of our application we would like to apply for a new program. Our
 application for the program is here:
 
 https://wiki.openstack.org/wiki/Designate/Program_Application 
 
 Designate is a DNS as a Service project, providing end users,
 developers, and administrators with an easy to use REST API to manage
 their DNS Zones and Records.
 
 Thanks,
 
 Graham



[openstack-dev] use of the word certified

2014-06-06 Thread Anita Kuno
So there are certain words that mean certain things, most don't, some do.

If words that mean certain things are used then some folks start using
the word and have expectations around the word and the OpenStack
Technical Committee and other OpenStack programs find themselves on the
hook for behaviours that they didn't agree to.

Currently the word under discussion is certified and its derivatives:
certification, certifying, and others with root word certificate.

This came to my attention at the summit with a cinder summit session
with one of the certificate words in the title. I had thought my
point had been made, but it appears that there needs to be more
discussion on this. So let's discuss.

Let's start with the definition of certify:
cer·ti·fy
verb (used with object), cer·ti·fied, cer·ti·fy·ing.
1. to attest as certain; give reliable information of; confirm: He
certified the truth of his claim.
2. to testify to or vouch for in writing: The medical examiner will
certify his findings to the court.
3. to guarantee; endorse reliably: to certify a document with an
official seal.
4. to guarantee (a check) by writing on its face that the account
against which it is drawn has sufficient funds to pay it.
5. to award a certificate to (a person) attesting to the completion of a
course of study or the passing of a qualifying examination.
Source: http://dictionary.reference.com/browse/certify

The issue I have with the word certify is that it requires someone or a
group of someones to attest to something. The thing attested to is only
as credible as the someone or the group of someones doing the attesting.
We have no process, nor do I feel we want to have a process for
evaluating the reliability of the someones or groups of someones doing
the attesting.

I think that having testing in place in line with other programs testing
of patches (third party ci) in cinder should be sufficient to address
the underlying concern, namely reliability of opensource hooks to
proprietary code and/or hardware. I would like the use of the word
certificate and all its roots to no longer be used in OpenStack
programs with regard to testing. This won't happen until we get some
discussion and agreement on this, which I would like to have.

Thank you for your participation,
Anita.



Re: [openstack-dev] [Neutron][LBaaS] Requirements around statistics and billing

2014-06-06 Thread Jorge Miramontes
Hey Stephen,

What we really care about are the following:

- Inbound bandwidth (bytes)
- Outbound bandwidth (bytes)
- Instance Uptime (requires create/delete events)

Just to note our current LBaaS implementation at Rackspace keeps track of
when features are enabled/disabled. For example, we have markers for when
SSL is turned on/off, markers for when we suspend/unsuspend load
balancers, etc. Some of this stuff is used for tracking purposes, some of
it is used for billing purposes and some of it used for both purposes. We
also keep track of all user initiated API requests to help us out when
issues arise.

From my experience building usage collection systems just know it is not a
trivial task, especially if we need to track events. One good tip is to be
as explicit as possible and as granular as possible. Being implicit causes
bad things to happen. Also, if we didn't have UDP as a protocol I would
recommend using Hadoop's map reduce functionality to get accurate
statistics by map-reducing request logs.
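
For a flavor of that log aggregation, summing per-load-balancer bandwidth from request-log records might look like the following (the record layout is invented for the example, not a real LBaaS log format):

```python
from collections import defaultdict

# Toy illustration of map-reducing request logs: summing bytes in and
# out per load balancer in a single pass over the records.
records = [
    ("lb-1", 512, 2048),   # (loadbalancer_id, bytes_in, bytes_out)
    ("lb-1", 256, 1024),
    ("lb-2", 128, 4096),
]

totals = defaultdict(lambda: [0, 0])
for lb_id, bytes_in, bytes_out in records:   # "map" and "reduce" in one pass
    totals[lb_id][0] += bytes_in
    totals[lb_id][1] += bytes_out
```
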

I would not advocate tracking per node statistics as the user can track
that information by themselves if they really want to. We currently, don't
have any customers that have asked for this feature.

If you want to tackle the usage collection problem for Neutron LBaaS I
would be glad to help as I've got quite a bit of experience in this
subject matter.

Cheers,
--Jorge




From:  Eichberger, German german.eichber...@hp.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Tuesday, June 3, 2014 5:20 PM
To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Neutron][LBaaS] Requirements around
statistics and  billing


Hi Stephen,
 
We would like all those numbers as well :)
 
Additionally, we measure:
- When a lb instance was created, deleted, etc.
- For monitoring we "ping" a load balancer's health check and report/act on
the results
- For users' troubleshooting we make the haproxy logs available, which
contain connection information like from, to, duration, protocol, status
(though we frequently have been told that this is not really useful for
debugging...) and of course having that more gui-fied would be neat
 
German
 
 
 
From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]

Sent: Tuesday, May 27, 2014 8:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Requirements around statistics
and billing

 
Hi folks!
 

We have yet to have any kind of meaningful discussion on this list around
load balancer stats (which, I presume to include data that will
eventually need to be consumed by a billing system). I'd like to get the
discussion started here,
 as this will have significant meaning for how we both make this data
available to users, and how we implement back-end systems to be able to
provide this data.

 

So!  What kinds of data are people looking for, as far as load balancer
statistics.

 

For our part, as an absolute minimum we need the following per
loadbalancer + listener combination:

 

* Total bytes transferred in for a given period

* Total bytes transferred out for a given period

 

Our product and billing people I'm sure would like the following as well:

 

* Some kind of peak connections / second data (95th percentile or average
over a period, etc.)

* Total connections for a given period

* Total HTTP / HTTPS requests served for a given period

 

And the people who work on UIs and put together dashboards would like:

 

* Current requests / second (average for last X seconds, either
on-demand, or simply dumped regularly).

* Current In/Out bytes throughput

 

And our monitoring people would like this:

 

* Errors / second

* Current connections / second and bytes throughput secant slope (ie.
like derivative but easier to calculate from digital data) for last X
seconds (ie. detecting massive spikes or drops in traffic, potentially
useful for detecting a problem
 before it becomes critical)

 

And some of our users would like all of the above data per pool, and not
just for loadbalancer + listener. Some would also like to see it per
member (though I'm less inclined to make this part of our standard).

 

I'm also interested in hearing vendor capabilities here, as it doesn't
make sense to design stats that most can't implement, and I imagine
vendors also have valuable data on what their customer ask for / what
stats are most useful in troubleshooting.

 

What other statistics data for load balancing are meaningful and
hopefully not too arduous to calculate? What other data are your users
asking for or accustomed to seeing?

 

Thanks,

Stephen

 

-- 
Stephen Balukoff 
Blue Box Group, LLC
(800)613-4305 x807 



Re: [openstack-dev] [Nova] Mid cycle meetup

2014-06-06 Thread Devananda van der Veen
I have just announced the Ironic mid-cycle in Beaverton, co-located
with Nova. That's the main one for Ironic.

However, there are many folks working on both TripleO and Ironic, so I
wouldn't be surprised if there is a (small?) group at the TripleO
sprint hacking on Ironic, even if there's nothing official, and even
if the dates overlap (which I really hope they don't). I'm going to
try to attend the TripleO sprint if at all possible, as that project
remains one of the largest users of Ironic that I'm aware of.

-D


On Fri, Jun 6, 2014 at 5:17 AM, David Shrewsbury
shrewsbury.d...@gmail.com wrote:
 On Fri, Jun 6, 2014 at 4:04 AM, Thierry Carrez thie...@openstack.org
 wrote:

 Michael Still wrote:
  Nova will hold its Juno mid cycle meetup between July 28 and 30, at an
  Intel campus in Beaverton, OR (near Portland). There is a wiki page
  with more details here:
 
  https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint

 On https://wiki.openstack.org/wiki/Sprints it shows that this would be a
 Nova+Ironic thing, and there is a TripleO+Ironic thing in Raleigh over
 the same dates.

 Should one of those drop its Ironic focus to encourage convergence of
 Ironic devs on a single area ?


 Devananda has already stated his preference for meeting with the Nova
 team in order to help streamline the process of getting the driver for
 Ironic
 merged back into Nova for this cycle. So it's unlikely we'll have any Ironic
 folks in Raleigh.

 --
 David Shrewsbury (IRC: Shrews)





[openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-06 Thread Jorge Miramontes
Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on
how Barbican and Neutron LBaaS will interact. There are currently two
ideas in play and both will work. If you have another idea please feel free to
add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets from
Barbican. For those that aren't up to date with the Neutron LBaaS API
Revision, the project/tenant/user provides a secret (container?) id when
enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican then
Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will be
supported.
 - Decisions are made on behalf of the user which lessens the amount of
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2... I
think.

2. Push orchestration decisions to API users. This idea comes with two
assumptions. The first assumption is that most providers' customers use
the cloud via a GUI, which in turn can handle any orchestration decisions
that need to be made. The second assumption is that power API users are
savvy and can handle their decisions as well. Using this method requires
services, such as LBaaS, to register in the form of metadata to a
barbican container.

* Example: If a user makes a change to a secret the GUI can see which
services are registered and opt to warn the user of consequences. Power
users can look at the registered services and make decisions how they see
fit.

PROS:
 - Very simple to implement. The only code needed to make this a reality
is at the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in this
case?
 - Pushes complexity of decision making on to GUI engineers and power API
users.


I would like to get a consensus on which option to move forward with ASAP
since the hackathon is coming up and delivering Barbican to Neutron LBaaS
integration is essential to exposing SSL/TLS functionality, which almost
everyone has stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My
reason for choosing option #2 has mostly to do with the simplicity of
implementing such a mechanism. Simplicity also means we can implement the
necessary code and get it approved much faster which seems to be a concern
for everyone. What option does everyone else want to move forward with?
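
To make option #2 concrete, here is a toy in-memory sketch of the registration idea — none of these class or method names are real Barbican or Neutron APIs:

```python
# Services register themselves against a Barbican container id; a GUI
# (or power API user) consults the registrations before acting on a
# secret change. Purely illustrative.
class ContainerRegistry(object):
    def __init__(self):
        self._consumers = {}  # container_id -> set of registered services

    def register(self, container_id, service):
        self._consumers.setdefault(container_id, set()).add(service)

    def unregister(self, container_id, service):
        self._consumers.get(container_id, set()).discard(service)

    def consumers_of(self, container_id):
        return sorted(self._consumers.get(container_id, set()))

registry = ContainerRegistry()
registry.register("cont-123", "neutron-lbaas")
# A GUI could now warn: "cont-123 is in use by ['neutron-lbaas']"
```

The obvious gap is exactly the first CON above: nothing forces a service to register or unregister, so stale entries are possible.
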



Cheers,
--Jorge




Re: [openstack-dev] use of the word certified

2014-06-06 Thread Mark McLoughlin
On Fri, 2014-06-06 at 13:29 -0400, Anita Kuno wrote:
 The issue I have with the word certify is that it requires someone or a
 group of someones to attest to something. The thing attested to is only
 as credible as the someone or the group of someones doing the attesting.
 We have no process, nor do I feel we want to have a process for
 evaluating the reliability of the somones or groups of someones doing
 the attesting.
 
 I think that having testing in place in line with other programs testing
 of patches (third party ci) in cinder should be sufficient to address
 the underlying concern, namely reliability of opensource hooks to
 proprietary code and/or hardware. I would like the use of the word
 certificate and all its roots to no longer be used in OpenStack
 programs with regard to testing. This won't happen until we get some
 discussion and agreement on this, which I would like to have.

Thanks for bringing this up Anita. I agree that certified driver or
similar would suggest something other than I think we mean.

And, for whatever its worth, the topic did come up at a Foundation board
meeting and some board members expressed similar concerns, although I
guess that was more precisely about the prospect of the Foundation
calling drivers certified.

Mark.




Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-06 Thread Qin Zhao
Yuriy,

And I think if we use the proxy object of multiprocessing, the green thread
will not switch while we call libguestfs.  Is that correct?


On Fri, Jun 6, 2014 at 2:44 AM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Yuriy,

 I read multiprocessing source code just now.  Now I feel it may not solve
 this problem very easily.  For example, let us assume that we will use the
 proxy object in Manager's process to call libguestfs.  In manager.py, I see
  it needs to create a pipe before forking the child process. The write end of
  this pipe is required by the child process.


 http://sourcecodebrowser.com/python-multiprocessing/2.6.2.1/classmultiprocessing_1_1managers_1_1_base_manager.html#a57fe9abe7a3d281286556c4bf3fbf4d5

  And in Process._bootstrap(), I think we will need to register a function to
  be called by _run_after_forkers(), in order to close the fds inherited
  from the Nova process.


 http://sourcecodebrowser.com/python-multiprocessing/2.6.2.1/classmultiprocessing_1_1process_1_1_process.html#ae594800e7bdef288d9bfbf8b79019d2e

  And we also cannot close the write-end fd created by Manager in
  _run_after_forkers(). One feasible way may be getting that fd from the 5th
  element of the _args attribute of the Process object, then skipping the
  close of this fd. I have not investigated whether Manager needs to use
  other fds besides this pipe. Personally, I feel such an implementation will
  be a little tricky and risky, because it tightly depends on Manager code. If
  Manager opens other files, or changes the argument order, our code will fail
  to run. Am I wrong?  Is there any other safer way?


 On Thu, Jun 5, 2014 at 11:40 PM, Yuriy Taraday yorik@gmail.com
 wrote:

 Please take a look at
 https://docs.python.org/2.7/library/multiprocessing.html#managers -
 everything is already implemented there.
 All you need is to start one manager that would serve all your requests
 to libguestfs. The implementation in stdlib will provide you with all
 exceptions and return values with minimum code changes on Nova side.
 Create a new Manager, register a libguestfs endpoint in it and call
 start(). It will spawn a separate process that will speak with the calling
 process over a very simple RPC.
 From the looks of it, all you need to do is replace the tpool.Proxy calls in
 the VFSGuestFS.setup method with calls to this new Manager.
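As a rough sketch of this approach (the GuestFSStub class, its methods, and the image path below are illustrative stand-ins, not Nova's actual VFS code or the real libguestfs API), the pattern might look like:

```python
import multiprocessing
from multiprocessing.managers import BaseManager


# Hypothetical stand-in for the libguestfs-backed helper; real Nova code
# would wrap a guestfs.GuestFS handle here. Any fds the appliance opens
# stay confined to the manager's child process.
class GuestFSStub(object):
    def setup(self, image_path):
        # Real code would launch the libguestfs appliance here.
        return "appliance ready for %s" % image_path

    def teardown(self):
        return "appliance shut down"


class VFSManager(BaseManager):
    pass


# Register the endpoint: manager.GuestFSStub() will return a proxy whose
# method calls are executed in the manager's separate server process.
VFSManager.register('GuestFSStub', GuestFSStub)

# Use an explicit fork context so this sketch stays self-contained.
manager = VFSManager(ctx=multiprocessing.get_context('fork'))
manager.start()                      # spawns the helper process
vfs = manager.GuestFSStub()          # proxy object; calls go over simple RPC
setup_msg = vfs.setup('/tmp/disk.img')
teardown_msg = vfs.teardown()
manager.shutdown()
```

Replacing the tpool.Proxy wrapper with such a proxy would keep libguestfs's fds out of the Nova process entirely, at the cost of one long-lived helper process.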


 On Thu, Jun 5, 2014 at 7:21 PM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Yuriy,

 Thanks for reading my bug!  You are right. Python 3.3 or 3.4 should not
 have this issue, since they can secure the file descriptors. Before
 OpenStack moves to Python 3, we may still need a solution. Calling
 libguestfs in a separate process seems to be a way. This way, Nova code can
 close those fds by itself, not depending upon CLOEXEC. However, that will be
 an expensive solution, since it requires a lot of code change. At least we
 need to write code to pass return values and exceptions between these two
 processes. That will make this solution very complex. Do you agree?


 On Thu, Jun 5, 2014 at 9:39 PM, Yuriy Taraday yorik@gmail.com
 wrote:

 This behavior of os.pipe() has changed in Python 3.x so it won't be an
 issue on newer Python (if only it was accessible for us).

 From the looks of it you can mitigate the problem by running libguestfs
 requests in a separate process (multiprocessing.managers comes to mind).
 This way the only descriptors child process could theoretically inherit
 would be long-lived pipes to main process although they won't leak because
 they should be marked with CLOEXEC before any libguestfs request is run.
 The other benefit is that this separate process won't be busy opening and
 closing tons of fds so the problem with inheriting will be avoided.
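For illustration, the close-on-exec marking that Python 3 applies by default (PEP 446) can be reproduced by hand on Python 2; this is only a sketch of the mitigation pattern, not Nova's code:

```python
import fcntl
import os


def set_cloexec(fd):
    # Mark the descriptor close-on-exec, as Python 3 now does by default.
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)


r, w = os.pipe()            # on Python 2 these fds are inheritable
for fd in (r, w):
    set_cloexec(fd)

# With FD_CLOEXEC set, a child that exec()s (e.g. a qemu helper spawned
# by libguestfs) can no longer inherit and hold these pipe ends open.
cloexec_set = all(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC
                  for fd in (r, w))
os.close(r)
os.close(w)
```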


 On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com
 wrote:

   Will this patch to Python fix your problem?
 http://bugs.python.org/issue7213


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this
 problem does not occur during data injection.  Before creating the ISO, 
 the
 driver code will extend the disk. Libguestfs is invoked in that time 
 frame.

 And now I think this problem may occur at any time, if the code use
 tpool to invoke libguestfs, and one external commend is executed in 
 another
 green thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs
 routine in a green thread, rather than in another native thread. But it will
 impact the performance very much. So I do not think that is an acceptable
 solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and the analysis. According to the issue
 description and the scenario in which it happens (
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720
 ), if that's the case, concurrent 

Re: [openstack-dev] use of the word certified

2014-06-06 Thread Doug Hellmann
On Fri, Jun 6, 2014 at 1:29 PM, Anita Kuno ante...@anteaya.info wrote:
 So there are certain words that mean certain things, most don't, some do.

 If words that mean certain things are used then some folks start using
 the word and have expectations around the word and the OpenStack
 Technical Committee and other OpenStack programs find themselves on the
 hook for behaviours that they didn't agree to.

 Currently the word under discussion is certified and its derivatives:
 certification, certifying, and others with root word certificate.

 This came to my attention at the summit with a cinder summit session
 with one of the certificate words in the title. I had thought my
 point had been made but it appears that there needs to be more
 discussion on this. So let's discuss.

 Let's start with the definition of certify:
 cer·ti·fy
 verb (used with object), cer·ti·fied, cer·ti·fy·ing.
 1. to attest as certain; give reliable information of; confirm: He
 certified the truth of his claim.
 2. to testify to or vouch for in writing: The medical examiner will
 certify his findings to the court.
 3. to guarantee; endorse reliably: to certify a document with an
 official seal.
 4. to guarantee (a check) by writing on its face that the account
 against which it is drawn has sufficient funds to pay it.
 5. to award a certificate to (a person) attesting to the completion of a
 course of study or the passing of a qualifying examination.
 Source: http://dictionary.reference.com/browse/certify

 The issue I have with the word certify is that it requires someone or a
 group of someones to attest to something. The thing attested to is only
 as credible as the someone or the group of someones doing the attesting.
 We have no process, nor do I feel we want to have a process for
 evaluating the reliability of the someones or groups of someones doing
 the attesting.

 I think that having testing in place in line with other programs testing
 of patches (third party ci) in cinder should be sufficient to address
 the underlying concern, namely reliability of opensource hooks to
 proprietary code and/or hardware. I would like the use of the word
 certificate and all its roots to no longer be used in OpenStack
 programs with regard to testing. This won't happen until we get some
 discussion and agreement on this, which I would like to have.

 Thank you for your participation,
 Anita.

I didn't see that summit session. Is someone claiming that a driver is
being certified? Or asking that someone certify a driver?

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-06 Thread Russell Bryant
On 06/06/2014 03:29 PM, Doug Hellmann wrote:
 On Fri, Jun 6, 2014 at 1:29 PM, Anita Kuno ante...@anteaya.info wrote:
 So there are certain words that mean certain things, most don't, some do.

 If words that mean certain things are used then some folks start using
 the word and have expectations around the word and the OpenStack
 Technical Committee and other OpenStack programs find themselves on the
 hook for behaviours that they didn't agree to.

 Currently the word under discussion is certified and its derivatives:
 certification, certifying, and others with root word certificate.

 This came to my attention at the summit with a cinder summit session
 with one of the certificate words in the title. I had thought my
 point had been made but it appears that there needs to be more
 discussion on this. So let's discuss.

 Let's start with the definition of certify:
 cer·ti·fy
 verb (used with object), cer·ti·fied, cer·ti·fy·ing.
 1. to attest as certain; give reliable information of; confirm: He
 certified the truth of his claim.
 2. to testify to or vouch for in writing: The medical examiner will
 certify his findings to the court.
 3. to guarantee; endorse reliably: to certify a document with an
 official seal.
 4. to guarantee (a check) by writing on its face that the account
 against which it is drawn has sufficient funds to pay it.
 5. to award a certificate to (a person) attesting to the completion of a
 course of study or the passing of a qualifying examination.
 Source: http://dictionary.reference.com/browse/certify

 The issue I have with the word certify is that it requires someone or a
 group of someones to attest to something. The thing attested to is only
 as credible as the someone or the group of someones doing the attesting.
 We have no process, nor do I feel we want to have a process for
 evaluating the reliability of the someones or groups of someones doing
 the attesting.

 I think that having testing in place in line with other programs testing
 of patches (third party ci) in cinder should be sufficient to address
 the underlying concern, namely reliability of opensource hooks to
 proprietary code and/or hardware. I would like the use of the word
 certificate and all its roots to no longer be used in OpenStack
 programs with regard to testing. This won't happen until we get some
 discussion and agreement on this, which I would like to have.

 Thank you for your participation,
 Anita.
 
 I didn't see that summit session. Is someone claiming that a driver is
 being certified? Or asking that someone certify a driver?

The Cinder project has been using that terminology for testing of their
drivers for a while.  It's something worth discussing, though.  Maybe we
can put it on the agenda for an upcoming TC meeting?

https://wiki.openstack.org/wiki/Cinder/certified-drivers

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-06 Thread Youcef Laribi
+1 for option 2. 

In addition, as a safeguard, the LBaaS service could check with
Barbican when it fails to use an existing secret, to see if the secret has
changed (lazy detection).

Youcef
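A minimal sketch of that lazy-detection idea (every class and method name here is a hypothetical stand-in; the real Barbican and LBaaS APIs are not shown): the LBaaS service remembers a fingerprint of the secret it deployed, and only consults Barbican again when the deployed secret stops working.

```python
import hashlib


class FakeBarbicanClient(object):
    """Stand-in for a Barbican client: stores secret payloads by id."""
    def __init__(self):
        self._store = {}

    def put(self, secret_id, payload):
        self._store[secret_id] = payload

    def get_payload(self, secret_id):
        return self._store[secret_id]


def fingerprint(payload):
    return hashlib.sha256(payload.encode('utf-8')).hexdigest()


class LoadBalancer(object):
    def __init__(self, barbican):
        self.barbican = barbican
        self.deployed = {}  # secret_id -> fingerprint of deployed payload

    def deploy(self, secret_id):
        payload = self.barbican.get_payload(secret_id)
        self.deployed[secret_id] = fingerprint(payload)

    def on_secret_failure(self, secret_id):
        # Lazy detection: only now do we ask Barbican for the secret again.
        current = fingerprint(self.barbican.get_payload(secret_id))
        if current != self.deployed.get(secret_id):
            self.deploy(secret_id)   # secret changed behind our back
            return 'redeployed'
        return 'unchanged'


barbican = FakeBarbicanClient()
barbican.put('s1', 'CERT-v1')
lb = LoadBalancer(barbican)
lb.deploy('s1')
barbican.put('s1', 'CERT-v2')        # user rotates the secret
action = lb.on_secret_failure('s1')  # detects the change lazily
```

This keeps the user in control of when the new secret is picked up, at the cost of one failed use before the change is noticed.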

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Friday, June 06, 2014 12:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration 
Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on how 
Barbican and Neutron LBaaS will interact. There are currently two ideas in play 
and both will work. If you have another idea please feel free to add it so that we 
may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets from 
Barbican. For those that aren't up to date with the Neutron LBaaS API Revision, 
the project/tenant/user provides a secret (container?) id when enabling SSL/TLS 
functionality.

* Example: If a user makes a change to a secret/container in Barbican then 
Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will be 
supported.
 - Decisions are made on behalf of the user which lessens the amount of calls 
the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to ensure 
delivery of an event.
 - Implementing an eventing system will take more time than option #2, I think.

2. Push orchestration decisions to API users. This idea comes with two 
assumptions. The first assumption is that most providers' customers use the 
cloud via a GUI, which in turn can handle any orchestration decisions that need 
to be made. The second assumption is that power API users are savvy and can 
handle their decisions as well. Using this method requires services, such as 
LBaaS, to register in the form of metadata to a barbican container.

* Example: If a user makes a change to a secret the GUI can see which services 
are registered and opt to warn the user of consequences. Power users can look 
at the registered services and make decisions how they see fit.

PROS:
 - Very simple to implement. The only code needed to make this a reality is at 
the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in this case?
 - Pushes complexity of decision making on to GUI engineers and power API users.


I would like to get a consensus on which option to move forward with ASAP since 
the hackathon is coming up and delivering Barbican to Neutron LBaaS integration 
is essential to exposing SSL/TLS functionality, which almost everyone has 
stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My reason 
for choosing option #2 has to deal mostly with the simplicity of implementing 
such a mechanism. Simplicity also means we can implement the necessary code and 
get it approved much faster which seems to be a concern for everyone. What 
option does everyone else want to move forward with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: Re: [openstack-tc] use of the word certified

2014-06-06 Thread Anita Kuno
Sorry missed the -dev list on my response.


 Original Message 
Subject: Re: [openstack-tc] [openstack-dev] use of the word certified
Date: Fri, 06 Jun 2014 15:45:02 -0400
From: Anita Kuno ante...@anteaya.info
To: openstack...@lists.openstack.org

On 06/06/2014 03:29 PM, Doug Hellmann wrote:
 On Fri, Jun 6, 2014 at 1:29 PM, Anita Kuno ante...@anteaya.info wrote:
 So there are certain words that mean certain things, most don't, some do.

 If words that mean certain things are used then some folks start using
 the word and have expectations around the word and the OpenStack
 Technical Committee and other OpenStack programs find themselves on the
 hook for behaviours that they didn't agree to.

 Currently the word under discussion is certified and its derivatives:
 certification, certifying, and others with root word certificate.

 This came to my attention at the summit with a cinder summit session
 with one of the certificate words in the title. I had thought my
 point had been made but it appears that there needs to be more
 discussion on this. So let's discuss.

 Let's start with the definition of certify:
 cer·ti·fy
 verb (used with object), cer·ti·fied, cer·ti·fy·ing.
 1. to attest as certain; give reliable information of; confirm: He
 certified the truth of his claim.
 2. to testify to or vouch for in writing: The medical examiner will
 certify his findings to the court.
 3. to guarantee; endorse reliably: to certify a document with an
 official seal.
 4. to guarantee (a check) by writing on its face that the account
 against which it is drawn has sufficient funds to pay it.
 5. to award a certificate to (a person) attesting to the completion of a
 course of study or the passing of a qualifying examination.
 Source: http://dictionary.reference.com/browse/certify

 The issue I have with the word certify is that it requires someone or a
 group of someones to attest to something. The thing attested to is only
 as credible as the someone or the group of someones doing the attesting.
 We have no process, nor do I feel we want to have a process for
 evaluating the reliability of the someones or groups of someones doing
 the attesting.

 I think that having testing in place in line with other programs testing
 of patches (third party ci) in cinder should be sufficient to address
 the underlying concern, namely reliability of opensource hooks to
 proprietary code and/or hardware. I would like the use of the word
 certificate and all its roots to no longer be used in OpenStack
 programs with regard to testing. This won't happen until we get some
 discussion and agreement on this, which I would like to have.

 Thank you for your participation,
 Anita.
 
 I didn't see that summit session. Is someone claiming that a driver is
 being certified? Or asking that someone certify a driver?
 
 Doug
 
 ___
 OpenStack-TC mailing list
 openstack...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc
 
The title of this etherpad was the title of the summit session:
https://etherpad.openstack.org/p/juno-cinder-3rd-party-cert-and-verification
http://junodesignsummit.sched.org/event/56eae44976e986f39c858d784344c7d0#.U5IaFnKZg5k

As you can see there has been some editing on the etherpad of the use of
the word certified, so I had thought we had an agreement - the cinder
devs and I, but was just told in cinder channel that the intent is to
continue to use the word unless told not to.

I had asked in cinder channel for the commit message of this patch to be
edited: https://review.openstack.org/#/c/84244/ and I was disagreed
with, so I thought I would bring my concerns to a wider audience.

Unfortunately #openstack-cinder is not a logged channel so I can't point
you to a public url for you to read channel logs (a link to the channel
logs would go here, if there was one).

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-06 Thread John Griffith
On Fri, Jun 6, 2014 at 1:23 PM, Mark McLoughlin mar...@redhat.com wrote:

 On Fri, 2014-06-06 at 13:29 -0400, Anita Kuno wrote:
  The issue I have with the word certify is that it requires someone or a
  group of someones to attest to something. The thing attested to is only
  as credible as the someone or the group of someones doing the attesting.
  We have no process, nor do I feel we want to have a process for
  evaluating the reliability of the somones or groups of someones doing
  the attesting.
 
  I think that having testing in place in line with other programs testing
  of patches (third party ci) in cinder should be sufficient to address
  the underlying concern, namely reliability of opensource hooks to
  proprietary code and/or hardware. I would like the use of the word
  certificate and all its roots to no longer be used in OpenStack
  programs with regard to testing. This won't happen until we get some
  discussion and agreement on this, which I would like to have.

 Thanks for bringing this up, Anita. I agree that certified driver or
 similar would suggest something other than what I think we mean.


Can you expand on the above comment?  In other words, a bit more about what
you mean.  I think from the perspective of a number of people who
participate in Cinder, the intent is in fact to say exactly that.  Maybe it
would help clear some things up for folks who don't see why this has become
a debatable issue.

By running CI tests successfully, we are in fact certifying that our device
and driver function appropriately and provide the same level of API and
behavioral compatibility as the default components, as demonstrated by
running CI tests on each submitted patch.

Personally, I believe the contesting of the phrases and terms is partly
due to the fact that a number of organizations have their own certification
programs and tests.  I think that's great, and they do in fact provide some
form of certification that a device works in their environment and to
their expectations.

Doing this from a general OpenStack integration perspective doesn't seem
all that different to me.  For the record, my initial response to this was
that I didn't have too much preference on what it was called (verification,
certification etc etc), however there seems to be a large number of people
(not product vendors for what it's worth) that feel differently.





 And, for whatever its worth, the topic did come up at a Foundation board
 meeting and some board members expressed similar concerns, although I
 guess that was more precisely about the prospect of the Foundation
 calling drivers certified.

 Mark.


 ___
 OpenStack-TC mailing list
 openstack...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-06 Thread John Griffith
On Fri, Jun 6, 2014 at 1:55 PM, John Griffith john.griff...@solidfire.com
wrote:




 On Fri, Jun 6, 2014 at 1:23 PM, Mark McLoughlin mar...@redhat.com wrote:

 On Fri, 2014-06-06 at 13:29 -0400, Anita Kuno wrote:
  The issue I have with the word certify is that it requires someone or a
  group of someones to attest to something. The thing attested to is only
  as credible as the someone or the group of someones doing the attesting.
  We have no process, nor do I feel we want to have a process for
  evaluating the reliability of the somones or groups of someones doing
  the attesting.
 
  I think that having testing in place in line with other programs testing
  of patches (third party ci) in cinder should be sufficient to address
  the underlying concern, namely reliability of opensource hooks to
  proprietary code and/or hardware. I would like the use of the word
  certificate and all its roots to no longer be used in OpenStack
  programs with regard to testing. This won't happen until we get some
  discussion and agreement on this, which I would like to have.

 Thanks for bringing this up, Anita. I agree that certified driver or
 similar would suggest something other than what I think we mean.


 Can you expand on the above comment?  In other words, a bit more about
 what you mean.  I think from the perspective of a number of people who
 participate in Cinder, the intent is in fact to say exactly that.  Maybe it
 would help clear some things up for folks who don't see why this has become
 a debatable issue.

 By running CI tests successfully, we are in fact certifying that our device
 and driver function appropriately and provide the same level of API and
 behavioral compatibility as the default components, as demonstrated by
 running CI tests on each submitted patch.

 Personally, I believe the contesting of the phrases and terms is partly
 due to the fact that a number of organizations have their own certification
 programs and tests.  I think that's great, and they do in fact provide some
 form of certification that a device works in their environment and to
 their expectations.

 Doing this from a general OpenStack integration perspective doesn't seem
 all that different to me.  For the record, my initial response to this was
 that I didn't have too much preference on what it was called (verification,
 certification etc etc), however there seems to be a large number of people
 (not product vendors for what it's worth) that feel differently.





 And, for whatever its worth, the topic did come up at a Foundation board
 meeting and some board members expressed similar concerns, although I
 guess that was more precisely about the prospect of the Foundation
 calling drivers certified.

 Mark.


 ___
 OpenStack-TC mailing list
 openstack...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-tc




By the way, has anybody thought about this in a general OpenStack
context?  I mean, are we saying that we don't offer any sort of
certification or verification that the various OpenStack components or
services actually work?

I realize there are significantly different levels of certification and
that's an important distinction as well in my opinion.

Anyway, I'm not necessarily arguing one view over another here, but there
are valid points of view being raised.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-06 Thread John Wood
Hello Jorge,

Just noting that for option #2, it seems to me that the registration feature in 
Barbican would not be required for the first version of this integration 
effort, but we should create a blueprint for it nonetheless. 

As for your question about services not registering/unregistering, I don't see 
an issue as long as the presence or absence of registered services on a 
Container/Secret does not **block** actions from happening, but rather is 
information that can be used to warn clients through their processes. For 
example, Barbican would still delete a Container/Secret even if it had 
registered services.

Does that all make sense though?

Thanks,
John
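The registration metadata John describes might be sketched roughly as follows (the Container class and the consumer record format are illustrative assumptions, not Barbican's real API); note that nothing here blocks deletion, matching the non-blocking behavior described above:

```python
# Hedged sketch of option #2's registration idea: services attach
# themselves as metadata ("consumers") on a Barbican container, so a GUI
# or power API user can see who depends on a secret before changing it.
class Container(object):
    def __init__(self, container_id):
        self.id = container_id
        self.consumers = []   # registered services, e.g. LBaaS listeners

    def register(self, service_name, resource_url):
        self.consumers.append({'service': service_name, 'url': resource_url})

    def unregister(self, service_name, resource_url):
        self.consumers = [c for c in self.consumers
                          if not (c['service'] == service_name
                                  and c['url'] == resource_url)]


c = Container('cont-123')
c.register('lbaas', '/v2/lbaas/listeners/42')

# Before letting a user modify or delete the secret, a GUI could warn:
warnings = ['%(service)s uses this secret at %(url)s' % consumer
            for consumer in c.consumers]
# Deletion itself would still proceed regardless of registered consumers.
```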


From: Youcef Laribi [youcef.lar...@citrix.com]
Sent: Friday, June 06, 2014 2:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

+1 for option 2.

In addition, as a safeguard, the LBaaS service could check with
Barbican when it fails to use an existing secret, to see if the secret
has changed (lazy detection).

Youcef

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 12:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration 
Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on how 
Barbican and Neutron LBaaS will interact. There are currently two ideas in play 
and both will work. If you have another idea please feel free to add it so that we 
may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets from 
Barbican. For those that aren't up to date with the Neutron LBaaS API Revision, 
the project/tenant/user provides a secret (container?) id when enabling SSL/TLS 
functionality.

* Example: If a user makes a change to a secret/container in Barbican then 
Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will be 
supported.
 - Decisions are made on behalf of the user which lessens the amount of calls 
the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to ensure 
delivery of an event.
 - Implementing an eventing system will take more time than option #2, I think.

2. Push orchestration decisions to API users. This idea comes with two 
assumptions. The first assumption is that most providers' customers use the 
cloud via a GUI, which in turn can handle any orchestration decisions that need 
to be made. The second assumption is that power API users are savvy and can 
handle their decisions as well. Using this method requires services, such as 
LBaaS, to register in the form of metadata to a barbican container.

* Example: If a user makes a change to a secret the GUI can see which services 
are registered and opt to warn the user of consequences. Power users can look 
at the registered services and make decisions how they see fit.

PROS:
 - Very simple to implement. The only code needed to make this a reality is at 
the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in this case?
 - Pushes complexity of decision making on to GUI engineers and power API users.


I would like to get a consensus on which option to move forward with ASAP since 
the hackathon is coming up and delivering Barbican to Neutron LBaaS integration 
is essential to exposing SSL/TLS functionality, which almost everyone has 
stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My reason 
for choosing option #2 has to deal mostly with the simplicity of implementing 
such a mechanism. Simplicity also means we can implement the necessary code and 
get it approved much faster which seems to be a concern for everyone. What 
option does everyone else want to move forward with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-06 Thread Jorge Miramontes
Hey John,

Correct, I was envisioning that the Barbican request would not be
affected, but rather that the GUI operator or API user could use the
registration information to act should they want to.

Cheers,
--Jorge




On 6/6/14 4:53 PM, John Wood john.w...@rackspace.com wrote:

Hello Jorge,

Just noting that for option #2, it seems to me that the registration
feature in Barbican would not be required for the first version of this
integration effort, but we should create a blueprint for it nonetheless.

As for your question about services not registering/unregistering, I
don't see an issue as long as the presence or absence of registered
services on a Container/Secret does not **block** actions from happening,
but rather is information that can be used to warn clients through their
processes. For example, Barbican would still delete a Container/Secret
even if it had registered services.

Does that all make sense though?

Thanks,
John


From: Youcef Laribi [youcef.lar...@citrix.com]
Sent: Friday, June 06, 2014 2:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

+1 for option 2.

In addition, as a safeguard, the LBaaS service could check
with Barbican when it fails to use an existing secret, to see if the
secret has changed (lazy detection).

Youcef

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 12:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on
how Barbican and Neutron LBaaS will interact. There are currently two
ideas in play and both will work. If you have another idea please feel free to
add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets from
Barbican. For those that aren't up to date with the Neutron LBaaS API
Revision, the project/tenant/user provides a secret (container?) id when
enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican
then Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will
be supported.
 - Decisions are made on behalf of the user which lessens the amount of
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2, I
think.

2. Push orchestration decisions to API users. This idea comes with two
assumptions. The first assumption is that most providers' customers use
the cloud via a GUI, which in turn can handle any orchestration decisions
that need to be made. The second assumption is that power API users are
savvy and can handle their decisions as well. Using this method requires
services, such as LBaaS, to register themselves in the form of metadata on a
Barbican container.

* Example: If a user makes a change to a secret the GUI can see which
services are registered and opt to warn the user of consequences. Power
users can look at the registered services and make decisions how they see
fit.

PROS:
 - Very simple to implement. The only code needed to make this a reality
is at the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in
this case?
 - Pushes complexity of decision making on to GUI engineers and power API
users.
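A rough sketch of the registration idea behind option #2 -- the container dict layout and the register/unregister helpers are illustrative, not the actual Barbican data model:

```python
# Sketch of option #2: services register themselves as metadata on a
# Barbican container so a GUI (or power user) can see who consumes a
# secret before changing it. All names here are illustrative.

container = {"id": "c1", "secret_refs": ["s1"], "consumers": []}

def register(container, service_name, resource_url):
    entry = {"service": service_name, "resource": resource_url}
    if entry not in container["consumers"]:
        container["consumers"].append(entry)

def unregister(container, service_name, resource_url):
    entry = {"service": service_name, "resource": resource_url}
    if entry in container["consumers"]:
        container["consumers"].remove(entry)

def affected_services(container):
    # A GUI would surface these to warn the user of consequences
    # before letting the secret change go through.
    return [c["service"] for c in container["consumers"]]

register(container, "neutron-lbaas", "/v2/lbaas/loadbalancers/lb-1")
print(affected_services(container))  # ['neutron-lbaas']
```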


I would like to get a consensus on which option to move forward with ASAP
since the hackathon is coming up and delivering Barbican to Neutron LBaaS
integration is essential to exposing SSL/TLS functionality, which almost
everyone has stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My
reason for choosing option #2 has mostly to do with the simplicity of
implementing such a mechanism. Simplicity also means we can implement the
necessary code and get it approved much faster which seems to be a
concern for everyone. What option does everyone else want to move forward
with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-06 Thread Eichberger, German
Jorge + John,

I am most concerned with a user changing his secret in barbican and then the LB 
trying to update and causing downtime. Some users like to control when the 
downtime occurs.

For #1 it was suggested that once the event is delivered it would be up to a 
user to enable an auto-update flag.

In the case of #2 I am a bit worried about error cases: e.g. uploading the 
certificates succeeds but registering the loadbalancer(s) fails. So using the 
barbican system for those warnings might not be as foolproof as we are hoping. 

One thing I like about #2 over #1 is that it pushes a lot of the information to 
Barbican. I think a user would expect that when he uploads a new certificate to 
Barbican, the system warns him right away about load balancers using the 
old cert. With #1 he might get an e-mail from LBaaS telling him things changed 
(and we helpfully updated all affected load balancers) -- which isn't as 
immediate as #2. 

If we implement an auto-update flag for #1 we can have both: users who like 
#2 just hit the flag. Then the discussion changes to what we should implement 
first, and I agree with Jorge + John that this should likely be #2.
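A toy sketch of that auto-update flag -- the listener dicts and handler below are hypothetical, just to show the opt-in split between immediate updates and deferred, user-controlled ones:

```python
# Sketch of the auto-update flag: when a secret-changed event arrives,
# opted-in load balancers update immediately, while the rest record a
# pending change so the user controls when the downtime occurs.
# The listener dict layout is illustrative.

listeners = [
    {"lb_id": "lb-1", "auto_update": True, "pending": None},
    {"lb_id": "lb-2", "auto_update": False, "pending": None},
]

applied = []

def handle_secret_update(secret_id, listeners):
    for lb in listeners:
        if lb["auto_update"]:
            applied.append((lb["lb_id"], secret_id))  # update right away
        else:
            lb["pending"] = secret_id  # defer to the user's window

handle_secret_update("s1", listeners)
print(applied, listeners[1]["pending"])  # [('lb-1', 's1')] s1
```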

German

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Friday, June 06, 2014 3:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Hey John,

Correct, I was envisioning that the Barbican request would not be affected, but 
rather, the GUI operator or API user could use the registration information to 
act on it, should they want to do so.

Cheers,
--Jorge




On 6/6/14 4:53 PM, John Wood john.w...@rackspace.com wrote:

Hello Jorge,

Just noting that for option #2, it seems to me that the registration 
feature in Barbican would not be required for the first version of this 
integration effort, but we should create a blueprint for it nonetheless.

As for your question about services not registering/unregistering, I 
don't see an issue as long as the presence or absence of registered 
services on a Container/Secret does not **block** actions from 
happening, but rather is information that can be used to warn clients 
through their processes. For example, Barbican would still delete a 
Container/Secret even if it had registered services.

Does that all make sense though?

Thanks,
John


From: Youcef Laribi [youcef.lar...@citrix.com]
Sent: Friday, June 06, 2014 2:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

+1 for option 2.

Additionally, as a safeguard, the LBaaS service could check 
with Barbican when it fails to use an existing secret, to see if the 
secret has changed (lazy detection).

Youcef

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 12:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on 
how Barbican and Neutron LBaaS will interact. There are currently two 
ideas in play and both will work. If you have another idea, please feel free 
to add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when secrets have been updated or deleted 
in Barbican. For those that aren't up to date with the Neutron LBaaS 
API Revision, the project/tenant/user provides a secret (container?) id 
when enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican 
then Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will 
be supported.
 - Decisions are made on behalf of the user which lessens the amount of 
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to 
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2... I 
think.

2. Push orchestration decisions to API users. This idea comes with two 
assumptions. The first assumption is that most providers' customers use 
the cloud via a GUI, which in turn can handle any orchestration 
decisions that need to be made. The second assumption is that power API 
users are savvy and can handle their decisions as well. Using this 
method requires services, such as LBaaS, to register in the form of 
metadata to a barbican container.

* Example: If a user makes a change to a secret the GUI can see which 
services are registered and opt to warn the user of consequences. Power 
users can look at the 

[openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-06 Thread Devananda van der Veen
I think some things are broken in the oslo-incubator db migration code.

Ironic moved to this when Juno opened and things seemed fine, until
recently when Lucas tried to add a DB migration and noticed that it didn't
run... So I looked into it a bit today. Below are my findings.

Firstly, I filed this bug and proposed a fix, because I think that tests
that don't run any code should not report that they passed -- they should
report that they were skipped.
  https://bugs.launchpad.net/oslo/+bug/1327397
  No notice given when db migrations are not run due to missing engine

Then, I edited the test_migrations.conf file appropriately for my local
mysql service, ran the tests again, and verified that migration tests ran
-- and they passed. Great!

Now, a little background... Ironic's TestMigrations class inherits from
oslo's BaseMigrationTestCase, then opportunistically checks each
back-end, if it's available. This opportunistic checking was inherited from
Nova so that tests could pass on developer workstations where not all
backends are present (eg, I have mysql installed, but not postgres), and
still transparently run on all backends in the gate. I couldn't find such
opportunistic testing in the oslo db migration test code, unfortunately -
but maybe it's well hidden.

Anyhow. When I stopped the local mysql service (leaving the configuration
unchanged), I expected the tests to be skipped, but instead I got two
surprise failures:
- test_mysql_opportunistically() failed because setUp() raises an exception
before the test code could call _have_mysql()
- test_mysql_connect_fail() actually failed! Again, because setUp() raises
an exception before running the test itself

Unfortunately, there's one more problem... when I run the tests in
parallel, they fail randomly because sometimes two test threads run
different migration tests, and the setUp() for one thread (remember, it
calls _reset_databases) blows up the other test.

Out of 10 runs, it failed three times, each with different errors:
  NoSuchTableError: `chassis`
  ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
database exists
  ProgrammingError: (ProgrammingError) (1146, Table
'test_migrations.alembic_version' doesn't exist)

As far as I can tell, this is all coming from:

https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/test_migrations.py#L86;L111


So, Ironic devs -- if you see a DB migration proposed, pay extra attention
to it. We aren't running migration tests in our check or gate queues right
now, and we shouldn't enable them until this is fixed.

Regards,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Spec review request

2014-06-06 Thread Kanzhe Jiang
The serviceBase and insertion spec has been up for review for a while. It
would be great if it could be reviewed and moved forward.

https://review.openstack.org/#/c/93128/

Thanks,
Kanzhe
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev