[openstack-dev] [Neutron][ServiceVM] servicevm IRC meeting reminder (June 10 Tuesday 5:00(AM)UTC-)

2014-06-09 Thread Isaku Yamahata
Hi. This is a reminder mail for the servicevm IRC meeting
June 10, 2014 (Tuesday), 5:00 AM UTC
#openstack-meeting on freenode
https://wiki.openstack.org/wiki/Meetings/ServiceVM

agenda: (feel free to add your items)
* project incubation
* NFV meeting follow up
* open discussion

-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Samuel Bercovici
Hi,

I think that option 2 should be preferred at this stage.
I also think that certificates should be immutable: if you want a new one, 
create a new one and update the listener to use it. 
This removes any chance of mistakes, the need for versioning, etc.

-Sam.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Friday, June 06, 2014 10:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration 
Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on how 
Barbican and Neutron LBaaS will interact. There are currently two ideas in play 
and both will work. If you have another idea please feel free to add it so that 
we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets from 
Barbican. For those that aren't up to date with the Neutron LBaaS API Revision, 
the project/tenant/user provides a secret (container?) id when enabling SSL/TLS 
functionality.

* Example: If a user makes a change to a secret/container in Barbican then 
Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless, so it will be 
supported.
 - Decisions are made on behalf of the user, which lessens the number of calls 
the user has to make.

CONS:
 - An eventing framework can become complex, especially since we need to ensure 
delivery of an event.
 - Implementing an eventing system will take more time than option #2, I think.

2. Push orchestration decisions to API users. This idea comes with two 
assumptions. The first assumption is that most providers' customers use the 
cloud via a GUI, which in turn can handle any orchestration decisions that need 
to be made. The second assumption is that power API users are savvy and can 
handle their decisions as well. Using this method requires services, such as 
LBaaS, to register themselves in the form of metadata on a barbican container.

* Example: If a user makes a change to a secret the GUI can see which services 
are registered and opt to warn the user of consequences. Power users can look 
at the registered services and make decisions how they see fit.

PROS:
 - Very simple to implement. The only code needed to make this a reality is at 
the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in this case?
 - Pushes complexity of decision making on to GUI engineers and power API users.
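
(To make option #2 slightly more concrete: the registration could conceptually look 
like the toy sketch below. This is not a real Barbican API - the names are invented - 
it only illustrates services attaching themselves as metadata to a container.)

    # Toy illustration of option #2 only; not an actual Barbican interface.
    # Services record themselves against a container so that a GUI or a power
    # user can later see which services consume the secret/container.
    registrations = {}

    def register_consumer(container_id, service_name, resource_ref):
        registrations.setdefault(container_id, []).append(
            {'service': service_name, 'resource': resource_ref})

    def consumers_of(container_id):
        return registrations.get(container_id, [])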


I would like to get a consensus on which option to move forward with ASAP since 
the hackathon is coming up and delivering Barbican to Neutron LBaaS integration 
is essential to exposing SSL/TLS functionality, which almost everyone has 
stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My reason 
for choosing option #2 has to deal mostly with the simplicity of implementing 
such a mechanism. Simplicity also means we can implement the necessary code and 
get it approved much faster which seems to be a concern for everyone. What 
option does everyone else want to move forward with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron] Too much shim rest proxy mechanism drivers in ML2

2014-06-09 Thread henry hly
hi mathieu,


 I totally agree. By using l2population with tunnel networks (vxlan,
 gre), you will not be able to plug an external device which could
 possibly terminate your tunnel. The ML2 plugin has to be aware of a new
 port in the vxlan segment. I think this is the scope of this bp:

https://blueprints.launchpad.net/neutron/+spec/neutron-switch-port-extension

 mixing several SDN controllers (when used with the ovs/of/lb agent, neutron
 could be considered as an SDN controller) could be achieved the same
 way, with the SDN controller sending notifications to neutron for the
 ports that it manages.

I agree with the basic idea of this BP, especially being controller-agnostic with
no vendor-specific code to handle segment IDs. Since Neutron already has all the
information about ports and a standard way to populate it (l2 pop), why not
just reuse it?

  And with the help of coming ML2 agent framework, hardware
  device or middleware controller adaption agent could be more simplified.

 I don't understand the reason why you want to move middleware
 controller to the agent.

This BP suggests a driver-side hook for the plug; my idea is that the existing
agent-side router VIF plug processing should be OK. Suppose we have a hardware
router with VTEP termination: just keep the L3 plugin unchanged, and for the L2
part maybe a very thin device-specific mechanism driver is there (just like the
OVS mech driver, doing the necessary validation with tens of lines of code). Most
of the work is on the agent side: when a router interface is created, the
device-specific L3 agent will interact with the router (either configuring it
directly with netconf/CLI, or indirectly via some controller middleware), and then
hook into the device-specific L2 agent co-located with it, doing a virtual VIF
plug-in. Exactly as the OVS agent does, this L2 agent scans the newly plugged VIF,
then makes an RPC call back to the ML2 plugin with port-update and standard l2 pop.

While an OVS/linux bridge agent VIF plug is identified by the port name in br-int,
these appliance-specific L3 & L2 agents may need a new virtual plug hook.
Any producer/consumer pattern is OK: a shared file in tmpfs, a named pipe, etc.
Anyway, this work shouldn't happen on the plugin side; just leave it on the agent
side, to keep the same framework as the existing ovs/bridge agents.

Today a device-specific L2 agent can be forked from the OVS agent, just as ofagent
does. In the future, a modularized ML2 agent can reduce the work needed to write
code for a new switch engine.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Fuel] relationship btw TripleO and Fuel

2014-06-09 Thread LeslieWang
Dear all,
It seems like both Fuel and TripleO are designed to solve the problem of complex 
OpenStack installation and deployment. TripleO is using Heat for orchestration. 
If we can define network creation, OS provisioning and deployment in Heat 
templates, it seems they can achieve similar goals. So can anyone explain the 
difference between these two projects, and the future roadmap of each of them? Thanks!
TripleO is a program aimed at installing, upgrading and operating OpenStack 
clouds using OpenStack's own cloud facilities as the foundations - building on 
nova, neutron and heat to automate fleet management at datacentre scale (and 
scaling down to as few as 2 machines).
Fuel is an all-in-one control plane for automated hardware discovery, network 
verification, operating system provisioning and deployment of OpenStack. It 
provides a user-friendly Web interface for installation management, simplifying 
OpenStack installation down to a few clicks.
Best Regards
Leslie
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-09 Thread Eoghan Glynn


 So there are certain words that mean certain things, most don't, some do.
 
 If words that mean certain things are used then some folks start using
 the word and have expectations around the word and the OpenStack
 Technical Committee and other OpenStack programs find themselves on the
 hook for behaviours that they didn't agree to.
 
 Currently the word under discussion is certified and its derivatives:
 certification, certifying, and others with root word certificate.
 
 This came to my attention at the summit with a cinder summit session
 with one of the certificate words in the title. I had thought my
 point had been made but it appears that there needs to be more
 discussion on this. So let's discuss.
 
 Let's start with the definition of certify:
 cer·ti·fy
 verb (used with object), cer·ti·fied, cer·ti·fy·ing.
 1. to attest as certain; give reliable information of; confirm: He
 certified the truth of his claim.
 2. to testify to or vouch for in writing: The medical examiner will
 certify his findings to the court.
 3. to guarantee; endorse reliably: to certify a document with an
 official seal.
 4. to guarantee (a check) by writing on its face that the account
 against which it is drawn has sufficient funds to pay it.
 5. to award a certificate to (a person) attesting to the completion of a
 course of study or the passing of a qualifying examination.
 Source: http://dictionary.reference.com/browse/certify
 
 The issue I have with the word certify is that it requires someone or a
 group of someones to attest to something. The thing attested to is only
 as credible as the someone or the group of someones doing the attesting.
 We have no process, nor do I feel we want to have a process, for
 evaluating the reliability of the someones or groups of someones doing
 the attesting.
 
 I think that having testing in place in line with other programs' testing
 of patches (third-party CI) in cinder should be sufficient to address
 the underlying concern, namely the reliability of open-source hooks to
 proprietary code and/or hardware. I would like the word
 certificate and all its roots to no longer be used in OpenStack
 programs with regard to testing. This won't happen until we get some
 discussion and agreement on this, which I would like to have.
 
 Thank you for your participation,
 Anita.

Hi Anita,

Just a note on cross-posting to both the os-dev and os-tc lists.

Anyone not on the TC who hits reply-all is likely to see their
post rejected by the TC list moderator, but go through to the
more open dev list.

As a result, the thread diverges (as we saw with the recent election
stats/turnout thread).

Also, moderation rejects are an unpleasant user experience.

So if a post is intended to reach out for input from the wider dev
community, it's better to post *only* to the -dev list, or vice versa
if you want to interact with a narrower audience.

Thanks,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-09 Thread Eoghan Glynn


- Original Message -
 
 
 
 On Fri, Jun 6, 2014 at 1:23 PM, Mark McLoughlin  mar...@redhat.com  wrote:
 
 
 
 On Fri, 2014-06-06 at 13:29 -0400, Anita Kuno wrote:
  The issue I have with the word certify is that it requires someone or a
  group of someones to attest to something. The thing attested to is only
  as credible as the someone or the group of someones doing the attesting.
  We have no process, nor do I feel we want to have a process, for
  evaluating the reliability of the someones or groups of someones doing
  the attesting.
  
  I think that having testing in place in line with other programs' testing
  of patches (third-party CI) in cinder should be sufficient to address
  the underlying concern, namely the reliability of open-source hooks to
  proprietary code and/or hardware. I would like the word
  certificate and all its roots to no longer be used in OpenStack
  programs with regard to testing. This won't happen until we get some
  discussion and agreement on this, which I would like to have.
 
 Thanks for bringing this up Anita. I agree that certified driver or
 similar would suggest something other than what I think we mean.
 Can you expand on the above comment? In other words, a bit more about what
 you mean. I think from the perspective of a number of people that
 participate in Cinder the intent is in fact to say the following. Maybe it
 would help clear some things up for folks that don't see why this has become
 a debatable issue.
 
 By running CI tests successfully, it is in fact a way of certifying that
 our device and driver are 'certified' to function appropriately and
 provide the same level of API and behavioral compatibility as the default
 components, as demonstrated by running CI tests on each submitted patch.
 
 Personally I believe part of the contesting of the phrases and terms is
 partly due to the fact that a number of organizations have their own
 certification programs and tests. I think that's great, and they in fact
 provide some form of certification that a device works in their
 environment and to their expectations.
 
 Doing this from a general OpenStack integration perspective doesn't seem all
 that different to me. For the record, my initial response to this was that I
 didn't have too much preference on what it was called (verification,
 certification etc etc), however there seems to be a large number of people
 (not product vendors for what it's worth) that feel differently.

Since certification seems to be quite an overloaded term
already, I wonder would a more back-to-basics phrase such as
quality assured better capture the Cinder project's use of
the word?

It does exactly what it says on the tin ... i.e. captures the
fact that a vendor has run an agreed battery of tests against
their driver and the harness has reported green-ness with a
meaning that is well understood upstream (as the Tempest test
cases are in the public domain). 

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Fuel] relationship btw TripleO and Fuel

2014-06-09 Thread Robert Collins
Well, fuel devs are also hacking on TripleO :) I don't know the exact
timelines but I'm certainly hopeful that we'll see long term
convergence - as TripleO gets more capable, more and more of Fuel
could draw on TripleO facilities, for instance.

-Rob

On 9 June 2014 19:41, LeslieWang wqyu...@hotmail.com wrote:
 Dear all,

 It seems like both Fuel and TripleO are designed to solve the problem of complex
 OpenStack installation and deployment. TripleO is using Heat for
 orchestration. If we can define network creation, OS provisioning and
 deployment in Heat templates, it seems they can achieve similar goals. So
 can anyone explain the difference between these two projects, and the future
 roadmap of each of them? Thanks!

 TripleO is a program aimed at installing, upgrading and operating OpenStack
 clouds using OpenStack's own cloud facilities as the foundations - building
 on nova, neutron and heat to automate fleet management at datacentre scale
 (and scaling down to as few as 2 machines).

 Fuel is an all-in-one control plane for automated hardware discovery,
 network verification, operating system provisioning and deployment of
 OpenStack. It provides a user-friendly Web interface for installation
 management, simplifying OpenStack installation down to a few clicks.

 Best Regards
 Leslie

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Mehdi Abaakouk added to oslo.messaging-core

2014-06-09 Thread Victor Stinner
On Friday, June 6, 2014 at 15:57:09, Mark McLoughlin wrote:
 Mehdi has been making great contributions and reviews on oslo.messaging
 for months now, so I've added him to oslo.messaging-core.
 
 Thank you for all your hard work Mehdi!

Congrats Mehdi :-)

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy mechanism drivers in ML2

2014-06-09 Thread Luke Gorrie
On 6 June 2014 10:17, henry hly henry4...@gmail.com wrote:

 ML2 mechanism drivers are becoming another kind of plugin. Although
 they can be loaded together, they cannot work with each other.

[...]

 Could we remove all device-related adaptation (rest/ssh/netconf/of... proxy)
 from these mechanism drivers to the agent side, leaving only the necessary code
 in the plugin?


In the Snabb NFV mech driver [*] we are trying a design that you might find
interesting.

We stripped the mech driver down to bare bones and declared that the agent
has to access the Neutron configuration independently.

In practice this means that our out-of-tree agent is connecting to
Neutron's MySQL database directly instead of being fed config changes by
custom sync code in ML2. This means there are very little work for the mech
driver to do (in our case check configuration and perform special port
binding).

We are also trying to avoid running an OVS/LinuxBridge-style agent on the
compute hosts in order to keep the code footprint small. I hope we will
succeed -- I'd love to hear if somebody else is running agent-less?
Currently we depend on a really ugly workaround to make VIF binding succeed
and we are looking for a clean alternative:
https://github.com/lukego/neutron/commit/31d6d0657aeae9fd97a63e4d53da34fb86be92f7

[*] Snabb NFV mech driver code: https://review.openstack.org/95711

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Arbitrary extra specs for compute nodes?

2014-06-09 Thread Day, Phil
Hi Joe,

Can you give some examples of what that data would be used for?

It sounds on the face of it that what you're looking for is pretty similar to 
what the Extensible Resource Tracker sets out to do 
(https://review.openstack.org/#/c/86050 and
https://review.openstack.org/#/c/71557).

Phil

From: Joe Cropper [mailto:cropper@gmail.com]
Sent: 07 June 2014 07:30
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Arbitrary extra specs for compute nodes?

Hi Folks,
I was wondering if there was any such mechanism in the compute node structure 
to hold arbitrary key-value pairs, similar to flavors' extra_specs concept?
It appears there are entries for things like pci_stats, stats and the recently 
added extra_resources -- but these all tend to have more specific usages vs. 
just arbitrary data that one may want to maintain about the compute node over 
the course of its lifetime.
Unless I'm overlooking an existing construct for this, would this be something 
that folks would welcome a Juno blueprint for -- i.e., adding an extra_specs-style 
column with a JSON-formatted string that could be loaded as a dict of key-value 
pairs?
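
(Just to illustrate the idea - the column name and keys below are hypothetical - the 
stored value would simply be a JSON string loaded back into a dict:)

    import json

    # Hypothetical contents of an extra_specs-style column on the compute node.
    extra_specs_json = '{"rack": "r42", "hw_vendor": "acme", "phase": "prod"}'

    # Loaded back as a plain dict of key-value pairs.
    extra_specs = json.loads(extra_specs_json)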

Thoughts?
Thanks,
Joe
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Debugging Devstack Neutron with Pycharm

2014-06-09 Thread Gal Sagie
Hello all, 

I am trying to debug devstack Neutron with PyCharm. I found here ( 
https://wiki.openstack.org/wiki/NeutronDevelopment#How_to_debug_Neutron_.28and_other_OpenStack_projects_probably_.29
 ) that I need to change the neutron server code from: eventlet.monkey_patch() 
to: eventlet.monkey_patch(os=False, thread=False) 
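
(For clarity, the change described on the wiki amounts to the following in the 
neutron server startup code; the exact file may differ between releases:)

    import eventlet

    # Original:
    # eventlet.monkey_patch()

    # Modified so the IDE debugger's threading keeps working:
    eventlet.monkey_patch(os=False, thread=False)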

I have done so and debugging seems to run, but when I try to issue commands 
from the CLI I get this: 


gal@ubuntu:~/devstack$ neutron net-list 
Connection to neutron failed: Maximum attempts reached 

(the server seems to run ok...) 

Any help is appreciated, as I am trying to learn and understand the main flows 
by debugging the code locally. 

Thanks 

Gal. 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] AggregateMultiTenancyIsolation scheduler filter - bug, or new feature proposal?

2014-06-09 Thread Jesse Pretorius
Hi everyone,

We have a need to be able to dedicate a specific host aggregate to a list
of tenants/projects. If the aggregate is marked as such, the aggregate may
only be used by that specified list of tenants and those tenants may only
be scheduled to that aggregate.

The AggregateMultiTenancyIsolation filter almost does what we need - it
pushes all new instances created by a specified tenant to the designated
aggregate. However, it also seems to still see that aggregate as available
for other tenants.

The description in the documentation [1] states: If a host is in an
aggregate that has the metadata key filter_tenant_id it only creates
instances from that tenant (or list of tenants).

This would seem to us either as a code bug, or a documentation bug?

If the filter is working as intended, then I'd like to propose working on a
patch to the filter which adds an additional metadata field (something like
'filter_tenant_exclusive') which - when 'true' - will consider the
filter_tenant_id list to be the only projects/tenants which may be
scheduled onto the host aggregate, and that aggregate to be the only host
aggregate onto which those projects/tenants may be scheduled.
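
(A rough sketch of the proposed behaviour follows. It uses plain dicts rather than 
real nova objects, and 'filter_tenant_exclusive' is the metadata key proposed above, 
not an existing option.)

    def host_passes(aggregate_metadata, request_tenant_id):
        # Tenants listed for this host's aggregate, comma-separated.
        tenant_ids = [t.strip() for t in
                      aggregate_metadata.get('filter_tenant_id', '').split(',')
                      if t.strip()]
        exclusive = (aggregate_metadata.get('filter_tenant_exclusive', '')
                     .lower() == 'true')
        if exclusive:
            # The aggregate is reserved for the listed tenants only.
            return request_tenant_id in tenant_ids
        # Otherwise fall back to today's non-exclusive behaviour.
        return True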

Note that there has been some similar work done with [2] and [3]. [2]
actually works as we expect, but as is noted in the gerrit comments it
seems rather wasteful to add a new filter when we could use the existing
filter as a base. [3] is a much larger framework to facilitate end-users
being able to request a whole host allocation - while this could be a nice
addition, it's overkill for what we're looking for. We're happy to
facilitate this with a simple admin-only allocation.

So - should I work on a nova-specs proposal for a change, or should I just
log a bug against either nova or docs? :) Guidance would be appreciated.

[1]
http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html
[2]
https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
[3] https://blueprints.launchpad.net/nova/+spec/whole-host-allocation
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread hossein zabolzadeh
Hi OpenStack Development Community,
I know that OpenStack's ambition is to become a cloud computing operating
system. And this simple sentence means: say goodbye to stateful
applications.
But, as you know, we are in the transition phase from stateful apps to
stateless apps (remember the Pets and Cattle example). Legacy apps are still in
use, so how can OpenStack address the problems of running stateful
applications (e.g. HA, DR, FT, R, ...)?
HA: High Availability
DR: Disaster Recovery
FT: Fault Tolerance
R: Resiliency!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Designate Incubation Request

2014-06-09 Thread Sean Dague
On 06/06/2014 12:06 PM, Mac Innes, Kiall wrote:
 Several of the TC requested we have an openstack-infra managed DevStack
 gate enabled before they would cast their vote - I'm happy to say, we've
 got it :)
 
 With the merge of [1], Designate now has voting devstack /
 requirements / docs jobs. An example of the DevStack run is at [2].
 
 Vote Designate @ [3] :)
 
 Thanks,
 Kiall
 
 [1]: https://review.openstack.org/#/c/98439/
 [2]: https://review.openstack.org/#/c/98442/
 [3]: https://review.openstack.org/#/c/97609/

I'm seeing in [2] api logs that something was run (at least 1 API
request was processed), but it's hard to see where that is in the
console logs. Pointers?

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-09 Thread Duncan Thomas
On 9 June 2014 09:44, Eoghan Glynn egl...@redhat.com wrote:

 Since certification seems to be quite an overloaded term
 already, I wonder would a more back-to-basics phrase such as
 quality assured better capture the Cinder project's use of
 the word?

 It does exactly what it says on the tin ... i.e. captures the
 fact that a vendor has run an agreed battery of tests against
 their driver and the harness has reported green-ness with a
 meaning that is well understood upstream (as the Tempest test
 cases are in the public domain).


I think 'quality-assured' makes a far stronger statement than
'certified'. 'Certified' indicates that some configuration has been
shown to work for some set of features, and that some organisation is
attesting to the fact that this is true. This is /exactly/ what the cinder
team is attesting to, and this program was brought in
_because_a_large_number_of_drivers_didn't_work_in_the_slightest_.
Since it is the cinder team who are going to end up fielding support
for cinder code, and the cinder team whose reputation is on the line
over the quality of cinder code, I think we are exactly the people who
can design a certification program, and that is exactly what we have
done.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-09 Thread Evgeny Fedoruk
Hi All,



A spec RST document for LBaaS TLS support was added to Gerrit for review:

https://review.openstack.org/#/c/98640



You are welcome to start commenting on it for any open discussions.

I tried to address each aspect being discussed; please add comments about 
anything that is missing.



Thanks,

Evgeny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-09 Thread Duncan Thomas
On 6 June 2014 18:29, Anita Kuno ante...@anteaya.info wrote:
 So there are certain words that mean certain things, most don't, some do.

 If words that mean certain things are used then some folks start using
 the word and have expectations around the word and the OpenStack
 Technical Committee and other OpenStack programs find themselves on the
 hook for behaviours that they didn't agree to.

 Currently the word under discussion is certified and its derivatives:
 certification, certifying, and others with root word certificate.

 This came to my attention at the summit with a cinder summit session
 with one of the certificate words in the title. I had thought my
 point had been made but it appears that there needs to be more
 discussion on this. So let's discuss.

 Let's start with the definition of certify:
 cer·ti·fy
 verb (used with object), cer·ti·fied, cer·ti·fy·ing.
 1. to attest as certain; give reliable information of; confirm: He
 certified the truth of his claim.

So the cinder team are attesting that a set of tests have been run
against a driver: a certified driver.

 3. to guarantee; endorse reliably: to certify a document with an
 official seal.

We (the cinder team) are guaranteeing that the driver has been
tested, in at least one configuration, and found to pass all of the
tempest tests. This is a far better state than we were in 6 months
ago, when many drivers didn't even pass a smoke test.

 5. to award a certificate to (a person) attesting to the completion of a
 course of study or the passing of a qualifying examination.

The cinder cert process is pretty much an exam.


I think the word certification covers exactly what we are doing. Given that
cinder-core are the people on the hook for any cinder problems
(including vendor-specific ones), and the cinder core are the people
who get bad-mouthed when there are problems (including vendor-specific
ones), I think this level of certification gives us value.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Evgeny Fedoruk
Hi All,

A spec RST document was added to Gerrit for review:
https://review.openstack.org/#/c/98640

You are welcome to start commenting on it for any open discussions.
I tried to address each aspect being discussed;
please add comments about anything that is missing.

Thanks,
Evgeny


-Original Message-
From: Samuel Bercovici 
Sent: Monday, June 09, 2014 9:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici; Evgeny Fedoruk
Subject: RE: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Hi,

I think that option 2 should be preferred at this stage.
I also think that certificates should be immutable: if you want a new one, 
create a new one and update the listener to use it. 
This removes any chance of mistakes, the need for versioning, etc.

-Sam.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Friday, June 06, 2014 10:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration 
Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on how 
Barbican and Neutron LBaaS will interact. There are currently two ideas in play 
and both will work. If you have another idea please feel free to add it so that 
we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets from 
Barbican. For those that aren't up to date with the Neutron LBaaS API Revision, 
the project/tenant/user provides a secret (container?) id when enabling SSL/TLS 
functionality.

* Example: If a user makes a change to a secret/container in Barbican then 
Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless, so it will be 
supported.
 - Decisions are made on behalf of the user, which lessens the number of calls 
the user has to make.

CONS:
 - An eventing framework can become complex, especially since we need to ensure 
delivery of an event.
 - Implementing an eventing system will take more time than option #2, I think.

2. Push orchestration decisions to API users. This idea comes with two 
assumptions. The first assumption is that most providers' customers use the 
cloud via a GUI, which in turn can handle any orchestration decisions that need 
to be made. The second assumption is that power API users are savvy and can 
handle their decisions as well. Using this method requires services, such as 
LBaaS, to register themselves in the form of metadata on a barbican container.

* Example: If a user makes a change to a secret the GUI can see which services 
are registered and opt to warn the user of consequences. Power users can look 
at the registered services and make decisions how they see fit.

PROS:
 - Very simple to implement. The only code needed to make this a reality is at 
the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in this case?
 - Pushes complexity of decision making on to GUI engineers and power API users.


I would like to get a consensus on which option to move forward with ASAP since 
the hackathon is coming up and delivering Barbican to Neutron LBaaS integration 
is essential to exposing SSL/TLS functionality, which almost everyone has 
stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My reason 
for choosing option #2 has to deal mostly with the simplicity of implementing 
such a mechanism. Simplicity also means we can implement the necessary code and 
get it approved much faster which seems to be a concern for everyone. What 
option does everyone else want to move forward with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-09 Thread Eoghan Glynn


 On 9 June 2014 09:44, Eoghan Glynn egl...@redhat.com wrote:
 
  Since certification seems to be quite an overloaded term
  already, I wonder would a more back-to-basics phrase such as
  quality assured better capture the Cinder project's use of
  the word?
 
  It does exactly what it says on the tin ... i.e. captures the
  fact that a vendor has run an agreed battery of tests against
  their driver and the harness has reported green-ness with a
  meaning that is well understood upstream (as the Tempest test
  cases are in the public domain).
 
 
 I think 'quality-assured' makes a far stronger statement than
 'certified'.

Hmmm, what kind of statement is made by the title of the program
under which the Tempest harness falls:

  
https://github.com/openstack/governance/blob/master/reference/programs.yaml#L247

The purpose of Quality Assurance is to assure quality, no?

So essentially anything that passes such QA tests has had its
quality assured in a well-understood sense?

 'Certified' indicates that some configuration has been
 shown to work for some set of features, and that some organisation is
 attesting to the fact that this is true. This is /exactly/ what the cinder
 team is attesting to, and this program was brought in
 _because_a_large_number_of_drivers_didn't_work_in_the_slightest_.
 Since it is the cinder team who are going to end up fielding support
 for cinder code, and the cinder team whose reputation is on the line
 over the quality of cinder code, I think we are exactly the people who
 can design a certification program, and that is exactly what we have
 done.

Sure, no issue at all with the Cinder team being best placed to
judge what works and what doesn't in terms of Cinder backends.

Just gently suggesting that due to the terminology-overload, it
might be wise to choose a term with fewer connotations.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-09 Thread Dmitriy Shulyak
Hi folks,

I know that some time ago saltstack was evaluated for use as the orchestrator
in fuel, so I've prepared an initial specification that addresses the basic
points of integration and the general requirements for an orchestrator.

In my opinion saltstack fits our needs perfectly, and we can benefit from
using a mature orchestrator that has its own community. I still don't have
all the answers, but I would like to ask all of you to start reviewing the
specification:

https://docs.google.com/document/d/1uOHgxM9ZT_2IdcmWvgpEfCMoV8o0Fk7BoAlsGHEoIfs/edit?usp=sharing

I will place it in the fuel-docs repo as soon as the specification is complete
enough to start a POC; if you think the spec should be placed there as is, I
can do it now.

Thank you
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] AggregateMultiTenancyIsolation scheduler filter - bug, or new feature proposal?

2014-06-09 Thread Belmiro Moreira
Hi Jesse,

I would say that is a documentation bug for the
“AggregateMultiTenancyIsolation” filter.


When this was implemented, the objective was to schedule only instances from
the specified tenants onto those aggregates, but not to make the aggregates exclusive.


That’s why the work on
https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
started but was left on hold because it was believed
https://blueprints.launchpad.net/nova/+spec/whole-host-allocation had some
similarities and eventually could solve the problem in a more generic way.


However, the p-clouds implementation is marked as “slow progress” and I believe
there is no active work on it at the moment.


It is probably a good time to review the ProjectsToAggregateFilter filter
again. The implementation and reviews are available at
https://review.openstack.org/#/c/28635/


One of the problems raised was a performance concern, considering the number
of DB queries required. However, this can be documented if people intend to
enable the filter.

In the review there was also a discussion about a config option for the
old filter.


cheers,

Belmiro


--

Belmiro Moreira

CERN

Email: belmiro.more...@cern.ch

IRC: belmoreira



On Mon, Jun 9, 2014 at 1:12 PM, Jesse Pretorius jesse.pretor...@gmail.com
wrote:

 Hi everyone,

 We have a need to be able to dedicate a specific host aggregate to a list
 of tenants/projects. If the aggregate is marked as such, the aggregate may
 only be used by that specified list of tenants and those tenants may only
 be scheduled to that aggregate.

 The AggregateMultiTenancyIsolation filter almost does what we need - it
 pushes all new instances created by a specified tenant to the designated
 aggregate. However, it also seems to still see that aggregate as available
 for other tenants.

 The description in the documentation [1] states: If a host is in an
 aggregate that has the metadata key filter_tenant_id it only creates
 instances from that tenant (or list of tenants).

 This would seem to us either as a code bug, or a documentation bug?

 If the filter is working as intended, then I'd like to propose working on
 a patch to the filter which adds an additional metadata field (something
 like 'filter_tenant_exclusive') which - when 'true' - will consider the
 filter_tenant_id list to be the only projects/tenants which may be
 scheduled onto the host aggregate, and that aggregate to be the only host
 aggregate onto which those projects/tenants may be scheduled.

 Note that there has been some similar work done with [2] and [3]. [2]
 actually works as we expect, but as is noted in the gerrit comments it
 seems rather wasteful to add a new filter when we could use the existing
 filter as a base. [3] is a much larger framework to facilitate end-users
 being able to request a whole host allocation - while this could be a nice
 addition, it's overkill for what we're looking for. We're happy to
 facilitate this with a simple admin-only allocation.

 So - should I work on a nova-specs proposal for a change, or should I just
 log a bug against either nova or docs? :) Guidance would be appreciated.

 [1]
 http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html
 [2]
 https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
 [3] https://blueprints.launchpad.net/nova/+spec/whole-host-allocation

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-09 Thread Eugene Nikanorov
Mike,

Thanks a lot for your response!
Some comments:
 There’s some in-Python filtering following it which does not seem
 necessary; the “alloc.vxlan_vni not in vxlan_vnis” phrase
 could just as well be a SQL “NOT IN” expression.
There we have to do a specific set intersection between the configured ranges and
the existing allocations. That could be done in SQL,
but that would certainly lead to a huge SQL query text, as the full vxlan range
could consist of 16 million IDs.

  The synchronize_session=“fetch” is certainly a huge part of the time
spent here
You've actually made a good point about synchronize_session=“fetch”, which
was obviously misused by me.
Fixing that seems to save up to 40% of the plain deletion time.

I've fixed that and got some speedup with deletes for both MySQL and
PostgreSQL, which reduced the difference between the chunked and non-chunked versions:

50k vnis to add/delete | Pg adding vnis | Pg deleting vnis | Pg total | Mysql adding vnis | Mysql deleting vnis | Mysql total
non-chunked sql        | 22             | 15               | 37       | 15                | 15                  | 30
chunked in 100         | 20             | 13               | 33       | 14                | 14                  | 28

The results of the chunked and non-chunked versions look closer, but the gap
increases with the vni range size (based on a few tests with a 150k vni range).

So I'm going to fix the chunked version that is on review now. If you think
that the benefit isn't worth the complexity, please let me know.
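
For reference, the chunked delete essentially boils down to something like the 
following (the model and session names are placeholders rather than the exact 
Neutron code):

    CHUNK_SIZE = 100

    def delete_vnis_in_chunks(session, vnis_to_remove):
        # VxlanAllocation stands in for the ML2 VXLAN allocation model
        # (import omitted here).
        vnis = list(vnis_to_remove)
        for i in range(0, len(vnis), CHUNK_SIZE):
            chunk = vnis[i:i + CHUNK_SIZE]
            session.query(VxlanAllocation).\
                filter(VxlanAllocation.vxlan_vni.in_(chunk)).\
                delete(synchronize_session=False)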

Thanks,
Eugene.

On Mon, Jun 9, 2014 at 1:33 AM, Mike Bayer mba...@redhat.com wrote:


 On Jun 7, 2014, at 4:38 PM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

 Hi folks,

 There was a small discussion about the better way of doing sql operations
 for vni synchronization with the config.
 Initial proposal was to handle those in chunks. Carl also suggested to
 issue a single sql query.
 I've done some testing with MySQL and PostgreSQL.
 I've tested the following scenario: vxlan range is changed from
 5:15 to 0:10 and vice versa.
 That involves adding and deleting 5 vni in each test.

 Here are the numbers:
 50k vnis to add/delete | Pg adding vnis | Pg deleting vnis | Pg total | Mysql adding vnis | Mysql deleting vnis | Mysql total
 non-chunked sql        | 23             | 22               | 45       | 14                | 20                  | 34
 chunked in 100         | 20             | 17               | 37       | 14                | 17                  | 31

 I've done about 5 tries to get each number, to minimize random floating
 factors (due to swaps, disc or cpu activity or other factors).
 It might be surprising that issuing multiple sql statements instead of one
 big one is a little bit more efficient, so I would appreciate it if someone
 could reproduce those numbers.
 Also I'd like to note that the part of the code that iterates over vnis fetched
 from the db takes 10 seconds on both mysql and postgres and is part of the
 deleting vnis numbers.
 In other words, the difference between multiple DELETE sql statements and a
 single one is even bigger (in percent) than these numbers show.

 The code which I used to test is here:
 http://paste.openstack.org/show/83298/
 Right now the chunked version is commented out, so to switch between
 versions some lines should be commented and some - uncommented.


 I’ve taken a look at this, though I’m not at the point where I have things
 set up to run things like this within full context, and I don’t know that I
 have any definitive statements to make, but I do have some suggestions:

 1. I do tend to chunk things a lot, selects, deletes, inserts, though the
 chunk size I work with is typically more like 1000, rather than 100.   When
 chunking, we’re looking to select a size that doesn’t tend to overload the
 things that are receiving the data (query buffers, structures internal to
 both SQLAlchemy as well as the DBAPI and the relational database), but at
 the same time doesn’t lead to too much repetition on the Python side (where
 of course there’s a lot of slowness).

 2. Specifically regarding “WHERE x IN (…..)”, I always chunk those.  When
 we use IN with a list of values, we’re building an actual SQL string that
 becomes enormous.  This puts strain on the database’s query engine that is
 not optimized for SQL strings that are hundreds of thousands of characters
 long, and on some backends this size is limited; on Oracle, there’s a limit
 of 1000 items.   So I’d always chunk this kind of thing.

 3. I’m not sure of the broader context of this code, but in fact placing a
 literal list of items in the IN in this case seems unnecessary; the
 “vmis_to_remove” list itself was just SELECTed two lines above.   There’s
 some in-Python filtering following it which does not seem necessary; the
 “alloc.vxlan_vni not in vxlan_vnis” phrase could just as well be a SQL
 “NOT IN” expression.  Not sure if determination of the “.allocated” flag
 can be done in SQL, if that’s a plain column, then certainly.Again not
 sure if this is just an artifact of how the test is done here, but if the
 goal is to optimize this code for speed, doing a DELETE…WHERE .. IN (SELECT
 ..) is probably better.   I see that the SELECT is using a lockmode, but it
 would seem that if just the rows we care to DELETE are inlined within the
 DELETE itself this wouldn’t be needed either.

 It’s likely that everything in #3 is pretty 

[openstack-dev] [all] Gate still backed up - need assistance with nova-network logging enhancements

2014-06-09 Thread Sean Dague
Based on some back of envelope math the gate is basically processing 2
changes an hour, failing one of them. So if you want to know how long
the gate is, take the length / 2 in hours.

Right now we're doing a lot of revert roulette, trying to revert things
that we think landed about the time things went bad. I call this
roulette because in many cases the actual issue isn't well understood. A
key reason for this is:

*nova network is a blackhole*

There is no work unit logging in nova-network, and no attempted
verification that the commands it ran did a thing. Most of these
failures that we don't have good understanding of are the network not
working under nova-network.

So we could *really* use a volunteer or two to prioritize getting that
into nova-network. Without it we might manage to turn down the failure
rate by reverting things (or we might not) but we won't really know why,
and we'll likely be here again soon.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Constraint validation and property list filtering in Murano

2014-06-09 Thread Alexander Tivelkov
Hi folks,

There is an important topic which I would like to discuss: it seems like
there is room for improvement in UI validation and filtering in Murano.

The reason for writing this is change-set [1] (an implementation of
blueprint [2]), which allows package developers to specify the constraints
for Flavor fields in dynamic UI definitions, and a little controversy about
this commit among the core team.
In my opinion, the change itself is great (thanks, Ryan!) and I am going to
put my +2 on it, but I would like to say that there may be a better and
more complete approach, which we should probably adopt in the future.


The main idea is that in Murano we have a concept of Application
Definitions, and these definitions should be complete enough to specify all
the properties, dependencies, constraints and limitations for each
application in the Catalog.
Currently we write these definitions in MuranoPL, and the constraints and
limitations are defined as its Contracts.

For example, imagine we have an application which should be run on a server
with some specific hardware spec, e.g. not less than 2 CPU cores
and at least 8 GB of RAM.
In this case, these limits may be expressed as a Contract on the property
defining the reference to the VM. The contract may look like this:

$.class(Instance).check($.flavor.cpuCores >= 2 and $.flavor.ramMb >= 8192)

(this will require us to create a data structure for flavors: currently we
use plain string names - but this is quite an easy and straightforward change)
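
(On the UI side, this particular contract would amount to a filter over the flavor 
list, roughly like the sketch below; the flavor dicts are only illustrative:)

    # Illustrative only: the dropdown filtering implied by the contract above,
    # expressed over plain dicts standing in for flavor objects.
    def allowed_flavors(flavors):
        return [f for f in flavors
                if f['cpuCores'] >= 2 and f['ramMb'] >= 8192]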

Defining filter constraints on the UI side without having them in MuranoPL
constraints is not enough: even if the UI is used to restrict the values of
some properties, these restrictions may be ignored if the input object model
is composed manually and sent to MuranoAPI without using the UI. This means
that the MuranoPL contract should be the primary source of
constraints/limitations, while the UI-side properties only supplement them.

This creates the need to define constraints in two locations: in MuranoPL
for runtime validation and in UI definitions for client-side checks and
filtering. These two have different notations: MuranoPL uses flexible
yaql-based contracts, which allow constructing and enforcing almost any
expression, while DynamicUI has a limited number of available properties
for each type of input field. If some field does not have the ability to
enforce some check, then it has to be added in python code and committed to
Murano's codebase, which contradicts the mission of the Application
Catalog.
This approach is overcomplicated, as it requires the package developer to
learn two different notations. Also it is error-prone, as there is no
automatic way to ensure that the UI-side constraint definitions really
match the MuranoPL contracts.


So, I would prefer to have a single location for constraint definitions -
MuranoPL contracts. These contracts (in their yaql form) should be
processable by the dynamic UI and should be used for both field value
checks and dropdown list filtering.
Also, the UI form for each component of the environment should be displayed
and validated in the context of the contract applied to this component.
In the example given above, the virtual machine contract is defined for the
application class, while the UI form for it is defined for the Instance
class. While this form should be the same in all usages of this class, its
context (availability and possible values of different fields) should be
defined by the contracts of the class which uses it, i.e. the
Application.



As a bottom line, I would suggest accepting commit [1] for now (we need
flavor filtering anyway), but agreeing that this should be a temporary
workaround. Meanwhile, we need to design and implement a way of passing
contracts from MuranoPL classes to the UI engine and use these contracts for
both API-side validation and list filtering.


[1] https://review.openstack.org/#/c/97904/
[2]
https://blueprints.launchpad.net/murano/+spec/filter-flavor-for-each-service

--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Jay Pipes

On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

Forcing an instance to a specific host is very useful for the
operator - it fulfills a valid use case for monitoring and testing
purposes.


Pray tell, what is that valid use case?


I am not defending a particular way of doing this, just
bringing up that it has to be handled. The effect on limits is purely
implementation - no limits get set so it by-passes any resource
constraints, which is deliberate.

-Original Message- From: Jay Pipes
[mailto:jaypi...@gmail.com] Sent: 04 June 2014 19:17 To:
openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [nova]
Proposal: Move CPU and memory allocation ratio out of scheduler

On 06/04/2014 06:10 AM, Murray, Paul (HP Cloud) wrote:

Hi Jay,

This sounds good to me. You left out the part of limits from the
discussion - these filters set the limits used at the resource
tracker.


Yes, and that is, IMO, bad design. Allocation ratios are the domain
of the compute node and the resource tracker. Not the scheduler. The
allocation ratios simply adjust the amount of resources that the
compute node advertises to others. Allocation ratios are *not*
scheduler policy, and they aren't related to flavours.


You also left out the force-to-host and its effect on limits.


force-to-host is definitively non-cloudy. It was a bad idea that
should never have been added to Nova in the first place.

That said, I don't see how force-to-host has any affect on limits.
Limits should not be output from the scheduler. In fact, they
shouldn't be anything other than an *input* to the scheduler,
provided in each host state struct that gets built from records
updated in the resource tracker and the Nova database.


Yes, I would agree with doing this at the resource tracker too.

And of course the extensible resource tracker is the right way to
do it J


:) Yes, clearly this is something that I ran into while brainstorming
around the extensible resource tracker patches.

Best, -jay


Paul.

*From:*Jay Lau [mailto:jay.lau@gmail.com] *Sent:* 04 June 2014
10:04 *To:* OpenStack Development Mailing List (not for usage
questions) *Subject:* Re: [openstack-dev] [nova] Proposal: Move CPU
and memory allocation ratio out of scheduler

Does there is any blueprint related to this? Thanks.

2014-06-03 21:29 GMT+08:00 Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com:

Hi Stackers,

tl;dr =

Move CPU and RAM allocation ratio definition out of the Nova
scheduler and into the resource tracker. Remove the calculations
for overcommit out of the core_filter and ram_filter scheduler
pieces.

Details ===

Currently, in the Nova code base, the thing that controls whether
or not the scheduler places an instance on a compute host that is
already full (in terms of memory or vCPU usage) is a pair of
configuration options* called cpu_allocation_ratio and
ram_allocation_ratio.

These configuration options are defined in, respectively,
nova/scheduler/filters/core_filter.py and
nova/scheduler/filters/ram_filter.py.

Every time an instance is launched, the scheduler loops through a
collection of host state structures that contain resource
consumption figures for each compute node. For each compute host,
the core_filter and ram_filter's host_passes() method is called. In
the host_passes() method, the host's reported total amount of CPU
or RAM is multiplied by this configuration option, and the reported used
amount of CPU or RAM is then subtracted from that product. If
the result is greater than or equal to the number of vCPUs needed
by the instance being launched, True is returned and the host
continues to be considered during scheduling decisions.
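
(Concretely, that check amounts to something like the simplified stand-in below for 
the filters' host_passes() logic - not the actual nova code; 16.0 is the default 
cpu_allocation_ratio.)

    def host_passes(total_vcpus, used_vcpus, requested_vcpus,
                    cpu_allocation_ratio=16.0):
        # Capacity advertised after applying the overcommit ratio.
        limit = total_vcpus * cpu_allocation_ratio
        # Subtract what is already used to get the remaining headroom.
        free = limit - used_vcpus
        return free >= requested_vcpus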

I propose we move the definition of the allocation ratios out of
the scheduler entirely, as well as the calculation of the total
amount of resources each compute node contains. The resource
tracker is the most appropriate place to define these configuration
options, as the resource tracker is what is responsible for keeping
track of total and used resource amounts for all compute nodes.

Benefits:

* Allocation ratios determine the amount of resources that a
compute node advertises. The resource tracker is what determines
the amount of resources that each compute node has, and how much of
a particular type of resource have been used on a compute node. It
therefore makes sense to put calculations and definition of
allocation ratios where they naturally belong. * The scheduler
currently needlessly re-calculates total resource amounts on every
call to the scheduler. This isn't necessary. The total resource
amounts don't change unless either a configuration option is
changed on a compute node (or host aggregate), and this calculation
can be done more efficiently once in the resource tracker. * Move
more logic out of the scheduler * With the move to an extensible
resource tracker, we can more easily evolve to defining all
resource-related options in the same place (instead of in different
filter files 

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Jay Pipes

On 06/05/2014 09:54 AM, Day, Phil wrote:

-Original Message- From: Jay Pipes
[mailto:jaypi...@gmail.com] Sent: 04 June 2014 19:23 To:
openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
[nova] Proposal: Move CPU and memory allocation ratio out of
scheduler

On 06/04/2014 11:56 AM, Day, Phil wrote:

Hi Jay,


* Host aggregates may also have a separate allocation ratio
that overrides any configuration setting that a particular host
may have


So with your proposal would the resource tracker be responsible
for picking and using override values defined as part of an
aggregate that includes the host ?


Not quite sure what you're asking, but I *think* you are asking
whether I am proposing that the host aggregate's allocation ratio
that a compute node might be in would override any allocation ratio
that might be set on the compute node? I would say that no, the
idea would be that the compute node's allocation ratio would
override any host aggregate it might belong to.



I'm not sure why you would want it that way round - aggregates let
me set/change the value for a number of hosts, and change the set of
hosts that the values apply to. That in general seems a much
better model for operators than having to manage things on a per-host
basis.

Why not keep the current model where an aggregate setting overrides
the default - which will now come from the host config rather than the
scheduler config?


That's actually exactly what I proposed in the blueprint spec:

https://review.openstack.org/#/c/98664/

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Use of final and private keywords to limit extending

2014-06-09 Thread Matthew Farina
If you don't mind I'd like to step back for a moment and talk about
the end users of this codebase and the types of code it will be used in.

We're looking to make application developers successful in PHP. The
top 10% of PHP application developers aren't an issue. Whether they have an
SDK or not, they will build amazing things. It's the long tail of app
devs. Many of these developers don't know things we might take for
granted, like dependency injection. A lot of them may be writing
spaghetti procedural code. I use these examples because I've run into
them in the past couple of months. We need to make these folks successful
in a cost-effective and low-barrier-to-entry manner.

When I've gotten into the world of closed source PHP (or any other
language for that matter) and work that's not in the popular space
I've seen many things that aren't clean or pretty. But, they work.

That means this SDK needs to be useful in the modern frameworks (which
vary widely on opinions) and in environments we may not like.

The other thing I'd like to talk about is the protected keyword. I use
this a lot. Using protected means an outside caller can't access the
method. Only other methods on the class or classes that extend it.
This is an easy way to have an API and internals.

Private is different. Private means it's part of the class but not
there for extended classes. It's not just about controlling the public
API for callers but not letting classes that extend this one have
access to the functionality.

Given the scope of who our users are...

- Any place we use the `final` scoping we need to explain how to
extend it properly. It's a teaching moment for someone who might not
come to a direction on what to do very quickly. Think about the long
tail of developers and projects, most of which are not open source.

Note, I said I'm not opposed to using final. It's an intentional
decision. For the kinds of things we're doing I can't see all too many
use cases for using final. We need to enable users to be successful
without controlling how they write applications, because this is an
add-on to help them, not a driver for their architecture.

- For scoping private and public APIs, `protected` is a better keyword
unless we intend to block extension. If we block extension
we should explain how to handle overriding things that are likely to
happen in real-world applications that are not ideally written or
architected.

At the end of the day, applications that successfully do what they
need to do while using OpenStack on the backend are what will make
OpenStack more successful. We need to help make it easy for the
developers, no matter how they choose to code, to be successful. I
find it useful to focus on end users and their practical cases over
the theory of how to design something.

Thoughts,
Matt


On Fri, Jun 6, 2014 at 10:01 AM, Jamie Hannaford
jamie.hannaf...@rackspace.com wrote:
 So this is an issue that’s been heavily discussed recently in the PHP
 community.

 Based on personal opinion, I heavily favor and use private properties in
 software I write. I haven’t, however, used the “final” keyword that much.
 But the more I read about and see it being used, the more inclined I am to
 use it in projects. Here’s a great overview of why it’s useful for public
 APIs: http://verraes.net/2014/05/final-classes-in-php/

 Here’s a tl;dr executive summary:

 - Open/Closed principle. It’s important to understand that “Open for
 extension”, does not mean “Open for inheritance”. Composition, strategies,
 callbacks, plugins, event listeners, … are all valid ways to extend without
 inheritance. And usually, they are much preferred to inheritance – hence the
 conventional recommendation in OOP to “favour composition over inheritance”.
 Inheritance creates more coupling, that can be hard to get rid of, and that
 can make understanding the code quite tough.

 - Providing an API is a responsibility: by allowing end-users to access
 features of our SDK, we need to give certain guarantees of stability or low
 change frequency. The behavior of classes should be deterministic - i.e. we
 should be able to trust that a class does a certain thing. There’s no trust
 whatsoever if that behavior can be edited and overridden from external code.

 - Future-proofing: the fewer behaviours and extension points we expose, the
 more freedom we have to change system internals. This is the idea behind
 encapsulation.

 You said that we should only use private and final keywords if there’s an
 overwhelming reason to do so. I completely disagree. I actually want to flip
 the proposition here: I think we should only use public keywords if we’re
 CERTAIN we want to encourage and allow the inheritance of that class. By
 making a class inheritable, you are saying to the outside world: this class
 is meant to be extended. And the majority of times this is not what we want.
 Sure there are times when inheritance may well be the best option - but you
 can support extension 

[openstack-dev] Promoting healing script to scheme migration script?

2014-06-09 Thread Jakub Libosvar
Hi all,

I'd like to get some opinions on following idea:

Because we currently have (thanks to Ann) a WIP healing script capable
of changing the database schema by comparing the tables in the database to
the models in the current codebase, I started to think whether it could be
used generally for db upgrades instead of generating migration scripts.

If I understand correctly, the purpose of migration scripts used to be to:
1) separate changes according to plugins
2) upgrade the database schema
3) migrate data according to the changed schema

Since we dropped conditional migrations, we can cross out no.1).
The healing script is capable of doing no.2) without any manual effort
and without adding a migration script.

That means that if we decide to go along with using the script for updating
the database schema, migration scripts will be needed only for data
migrations (no.3)), which in my experience are rare.

Another benefit would be that we won't need to store all the database
models from the Icehouse release, which we probably will need in case we
want to heal a database in order to achieve an idempotent Icehouse database
schema with the Juno codebase.
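
For illustration, a minimal sketch of the table-comparison idea described
above (purely illustrative, not the actual healing script; the Neutron model
base import is an assumption):

    from sqlalchemy import create_engine, inspect
    from neutron.db import model_base  # assumption: Neutron's declarative base

    engine = create_engine('mysql://user:pass@localhost/neutron')
    db_tables = set(inspect(engine).get_table_names())
    model_tables = set(model_base.BASEV2.metadata.tables)

    # Tables the models define but the live schema lacks, and vice versa
    print('missing from db:', model_tables - db_tables)
    print('unknown to models:', db_tables - model_tables)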

Please share your ideas and reveal potential glitches in the proposal.

Thank you,
Kuba

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread Matthew Farina
In my experience building apps that run in OpenStack, you don't give
up state. You shift how you handle state.

For example, instead of always routing a user to the same instance and
having that instance hold the session data, there is a common session store
for the app (possibly synced between regions). If you store sessions on
each instance and lose an instance, you'll run into problems. If
sessions are more of a service for each instance, then an instance
coming and going isn't a big deal.
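
As a purely hypothetical sketch of that shift (assuming a Redis-backed store
via the redis-py client; any shared store such as memcached or a
database-as-a-service works the same way):

    import json
    import redis  # hypothetical choice of shared session store

    store = redis.Redis(host='session-store.example.com')

    def save_session(session_id, data, ttl=3600):
        # Any instance can write the session; no instance holds it locally
        store.setex(session_id, ttl, json.dumps(data))

    def load_session(session_id):
        raw = store.get(session_id)
        return json.loads(raw) if raw else None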

A good database as a service, swift (object storage), and maybe a
microservice architecture may be helpful.

Legacy applications might have some issues with the architecture
changes and some may not be a good fit for cloud architectures. One
way to help legacy applications is to use block storage, keep the
latest snapshot of the instance in glance (image service), and monitor
an instance. If an instance goes offline you can easily create a new
one from the image and mount block storage with the data.

- Matt



On Mon, Jun 9, 2014 at 7:27 AM, hossein zabolzadeh zabolza...@gmail.com wrote:
 Hi OpenStack Development Community,
 I know that the OpenStack interest is to become a cloud computing operating
 system. And this simple sentence means: Say goodbye to Stateful
 Applications.
 But, as you know, we are in the transition phase from stateful apps to
 stateless apps (remember the Pets and Cattle example). Legacy apps are still in
 use, and how can OpenStack address the problems of running stateful
 applications (e.g. HA, DR, FT, R, ...)?
 HA: High Availability
 DR: Disaster Recovery
 FT: Fault Tolerance
 R: Resiliency!

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][db] Promoting healing script to scheme migration script?

2014-06-09 Thread Jakub Libosvar
Forgot to add tags, sorry

On 06/09/2014 04:18 PM, Jakub Libosvar wrote:
 Hi all,
 
 I'd like to get some opinions on following idea:
 
 Because currently we have (thanks to Ann) WIP of healing script capable
 of changing database scheme by comparing tables in the database to
 models in current codebase, I started to think whether it could be used
 generally to db upgrades instead of generating migration scripts.
 
 If I understand correctly the purpose of migration scripts used to be to:
 1) separate changes according plugins
 2) upgrade database scheme
 3) migrate data according the changed scheme
 
 Since we dropped on conditional migrations, we can cross out no.1).
 The healing script is capable of doing no.2) without any manual effort
 and without adding migration script.
 
 That means if we will decide to go along with using script for updating
 database scheme, migration scripts will be needed only for data
 migration (no.3)) which are from my experience rare.
 
 Also other benefit would be that we won't need to store all database
 models from Icehouse release which we probably will need in case we want
 to heal database in order to achieve idempotent Icehouse database
 scheme with Juno codebase.
 
 Please share your ideas and reveal potential glitches in the proposal.
 
 Thank you,
 Kuba
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mid-cycle questions for folks

2014-06-09 Thread Rossella Sblendido
I had to call too. I got the same conditions as Carl.

cheers,

Rossella

On 06/05/2014 04:45 PM, Kyle Mestery wrote:
 It would be ideal if folks could use the room block I reserved when
 booking, if their company policy allows it. I've gotten word from the
 hotel they may release the block if more people don't use it, just
 FYI.

 On Thu, Jun 5, 2014 at 5:46 AM, Paul Michali (pcm) p...@cisco.com wrote:
 I booked through our company travel and got a comparable rate ($111 or $114, 
 I can’t recall the exact price).

 Regards,

 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



 On Jun 5, 2014, at 12:48 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Yes, I was able to book it for $114 a night with no prepayment.  I had
 to call.  The agent found the block under Cisco and the date range.

 Carl

 On Wed, Jun 4, 2014 at 4:43 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 I think it's even cheaper than that. Try calling the hotel to get the
 better rate, I think Carl was able to successfully acquire the room at
 the cheaper rate (something like $115 a night or so).

 On Wed, Jun 4, 2014 at 4:56 PM, Edgar Magana Perdomo (eperdomo)
 eperd...@cisco.com wrote:
 I tried to book online and it seems that the pre-payment is 
 non-refundable:

 Hyatt.Com Rate - Rate Rules: Full prepayment required, non-refundable, no
 date changes.


 The price is $149 USD per night. Is that what you have blocked?

 Edgar

 On 6/4/14, 2:47 PM, Kyle Mestery mest...@noironetworks.com wrote:

 Hi all:

 I was curious if people are having issues booking the room from the
 block I have setup. I received word from the hotel that only one (1!)
 person has booked yet. Given the mid-cycle is approaching in a month,
 I wanted to make sure that people are making plans for travel. Are
 people booking in places other than the one I had setup as reserved?
 If so, I'll remove the room block. Keep in mind the hotel I had a
 block reserved at is very convenient in that it's literally walking
 distance to the mid-cycle location at the Bloomington, MN Cisco
 offices.

 Thanks!
 Kyle



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Reasons to use Behat/behavior driven development in an SDK?

2014-06-09 Thread Matthew Farina
Jamie, thanks for sharing those links. They are quite useful and led
me to a couple questions.

1. To quote the first link, To do this, we need a way to describe the
requirement such that everyone – the business folks, the analyst, the
developer and the tester – have a common understanding of the scope of
the work. Where are the business folks, the analyst, and the tester?
behat does things in human readable language that's really useful for
the non-developer. Where do we have these in the development of this
SDK?

I ask because in that first post the idea of working with these
different types of people is a central point. If you're working on a
client project for non-technical clients, which is common in
consulting circles, and in enterprise apps where you have analysts,
product managers, and others this is definitely useful for them. Where
are they in the loop on developing this SDK?

2. Can you point to end users of the SDK who aren't developers or
engineers? That's not to say that someone developing an application
that uses the SDK isn't working with non-developers. If they have a
story about uploading a file to persistent storage the implementation
by the developer might use the SDK. But, us using BDD doesn't help
that process.

This is really about the people involved in this project and consuming
it. It's different from typical client consulting work or general
consumer facing production. The audience is different. Can you explain
how this technology is useful to this specific audience in a practical
way?

Thanks,
Matt


On Fri, Jun 6, 2014 at 12:55 PM, Jamie Hannaford
jamie.hannaf...@rackspace.com wrote:
 Hey all,

 Sorry for the length of reply - but I want to provide as much information as
 possible about this topic.

 Instead of enumerating pros and cons, I want to give a bit of context first
 (about what “feature stories” actually are), and then respond to some common
 misconceptions about them. This way, the pros/cons of Behat are a bit more
 substantiated.


 Would Behat replace PHPUnit?

 No - they’re completely different. We’d still use phpunit for unit testing
 because it’s way better at xunit-like assertions. We’d use behat instead for
 functional testing - making sure that features work against a production
 API.


 Who’s using Behat and is it suitable for us?

 From what I’ve heard, we’re using it for some projects at Rackspace and
 possibly some OpenStack projects - but I need to double check that. I’ve
 reached out to some folks about their experiences with it - so I’ll post the
 findings when I hear back.


 What are BDD feature stories?

 Here’s a link to a fantastic article which explains the benefits of BDD
 feature stories: http://dannorth.net/whats-in-a-story/

 tl;dr:

 BDD takes the position that you can turn an idea for a requirement into
 implemented, tested, production-ready code simply and effectively, as long
 as the requirement is specific enough that everyone knows what’s going on.
 To do this, we need a way to describe the requirement such that everyone –
 end-user, contributor, manager, technical lead (in short, anyone interested
 in using our SDK in their business) – have a common understanding of the
 scope of the work. You are showing them, in human-readable language, the
 features of the SDK and what it offers them. The result is that everyone —
 regardless of proficiency, skill level and familiarity with the codebase —
 is on the same level of understanding. From this they can agree a common
 definition of “done”, and we escape the dual gumption traps of “that’s not
 what I asked for” or “I forgot to tell you about this other thing”.

 This, then, is the role of a Story. It is a description of a requirement and
 a set of criteria by which we all agree that it is “done”. It helps us
 understand and satisfy customer use-cases in a well expressed and clear way.
 It also helps us track project progress by having well-established
 acceptance criteria for feature sets.


 3 misconceptions about BDD

 (Inspired by
 http://www.thoughtworks.com/insights/blog/3-misconceptions-about-bdd)

 1. End-users don’t care about this! They want code

 This is actually a completely misdirected point. The purpose of behat is not
 to serve as a public-facing repository of sample code. Its actual purpose is
 twofold: to serve as a functional test suite (i.e. make sure our SDK works
 against an API), and secondly to serve as a communication device - to codify
 features in a human-readable way.

 It’s the role of documentation to explain the concepts of the SDK with
 detailed code samples. Another good idea is to provide a “samples” folder
 that contains standalone scripts for common use-cases - this is what we
 offer for our current SDK, and users appreciate it. Both of these will allow
 developers to copy and paste working code for their requirements.

 2. Contributors don’t want to write these specifications!

 My response is this: how can you implement a piece of functionality if you
 

Re: [openstack-dev] Promoting healing script to scheme migration script?

2014-06-09 Thread Johannes Erdfelt
On Mon, Jun 09, 2014, Jakub Libosvar libos...@redhat.com wrote:
 I'd like to get some opinions on following idea:
 
 Because currently we have (thanks to Ann) WIP of healing script capable
 of changing database scheme by comparing tables in the database to
 models in current codebase, I started to think whether it could be used
 generally to db upgrades instead of generating migration scripts.

Do you have a link to these healing scripts?

 If I understand correctly the purpose of migration scripts used to be to:
 1) separate changes according plugins
 2) upgrade database scheme
 3) migrate data according the changed scheme
 
 Since we dropped on conditional migrations, we can cross out no.1).
 The healing script is capable of doing no.2) without any manual effort
 and without adding migration script.
 
 That means if we will decide to go along with using script for updating
 database scheme, migration scripts will be needed only for data
 migration (no.3)) which are from my experience rare.
 
 Also other benefit would be that we won't need to store all database
 models from Icehouse release which we probably will need in case we want
 to heal database in order to achieve idempotent Icehouse database
 scheme with Juno codebase.
 
 Please share your ideas and reveal potential glitches in the proposal.

I'm actually working on a project to implement declarative schema
migrations for Nova using the existing model we currently maintain.

The main goals for our project are to reduce the amount of work
maintaining the database schema but also to reduce the amount of
downtime during software upgrades by doing schema changes online (where
possible).

I'd like to see what others have done and are working on for the future so
we don't unnecessarily duplicate work :)

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Arbitrary extra specs for compute nodes?

2014-06-09 Thread Chris Friesen

On 06/07/2014 12:30 AM, Joe Cropper wrote:

Hi Folks,

I was wondering if there was any such mechanism in the compute node
structure to hold arbitrary key-value pairs, similar to flavors'
extra_specs concept?

It appears there are entries for things like pci_stats, stats and
recently added extra_resources -- but these all tend to have more
specific usages vs. just arbitrary data that may want to be maintained
about the compute node over the course of its lifetime.

Unless I'm overlooking an existing construct for this, would this be
something that folks would welcome a Juno blueprint for--i.e., adding
extra_specs style column with a JSON-formatted string that could be
loaded as a dict of key-value pairs?


If nothing else, you could put the compute node in a host aggregate and 
assign metadata to it.
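
A rough sketch of that workaround with python-novaclient (illustrative; the
credentials, aggregate name and metadata values are made up):

    from novaclient import client

    # Assumes admin credentials are available in these variables
    nova = client.Client('2', USERNAME, PASSWORD, PROJECT, AUTH_URL)

    # Group the compute node into an aggregate and hang arbitrary
    # key-value pairs off the aggregate's metadata
    agg = nova.aggregates.create('rack-42', None)
    nova.aggregates.add_host(agg, 'compute-node-01')
    nova.aggregates.set_metadata(agg, {'asset_tag': 'A12345',
                                       'maintenance_window': 'sunday'})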


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-09 Thread Anita Kuno
On 06/09/2014 03:38 AM, Eoghan Glynn wrote:
 
 
 So there are certain words that mean certain things, most don't, some do.

 If words that mean certain things are used then some folks start using
 the word and have expectations around the word and the OpenStack
 Technical Committee and other OpenStack programs find themselves on the
 hook for behaviours that they didn't agree to.

 Currently the word under discussion is certified and its derivatives:
 certification, certifying, and others with root word certificate.

 This came to my attention at the summit with a cinder summit session
 with one of the certificate words in the title. I had thought my
 point had been made but it appears that there needs to be more
 discussion on this. So let's discuss.

 Let's start with the definition of certify:
 cer·ti·fy
 verb (used with object), cer·ti·fied, cer·ti·fy·ing.
 1. to attest as certain; give reliable information of; confirm: He
 certified the truth of his claim.
 2. to testify to or vouch for in writing: The medical examiner will
 certify his findings to the court.
 3. to guarantee; endorse reliably: to certify a document with an
 official seal.
 4. to guarantee (a check) by writing on its face that the account
 against which it is drawn has sufficient funds to pay it.
 5. to award a certificate to (a person) attesting to the completion of a
 course of study or the passing of a qualifying examination.
 Source: http://dictionary.reference.com/browse/certify

 The issue I have with the word certify is that it requires someone or a
 group of someones to attest to something. The thing attested to is only
 as credible as the someone or the group of someones doing the attesting.
 We have no process, nor do I feel we want to have a process for
 evaluating the reliability of the somones or groups of someones doing
 the attesting.

 I think that having testing in place in line with other programs testing
 of patches (third party ci) in cinder should be sufficient to address
 the underlying concern, namely reliability of opensource hooks to
 proprietary code and/or hardware. I would like the use of the word
 certificate and all its roots to no longer be used in OpenStack
 programs with regard to testing. This won't happen until we get some
 discussion and agreement on this, which I would like to have.

 Thank you for your participation,
 Anita.
 
 Hi Anita,
 
 Just a note on cross-posting to both the os-dev and os-tc lists.
 
 Anyone not on the TC who hits reply-all is likely to see their
 post be rejected by the TC list moderator, but go through to the
 more open dev list.
 
 As a result, the thread diverges (as we saw with the recent election
 stats/turnout thread).
 
 Also, moderation rejects are an unpleasant user experience.
 
 So if a post is intended to reach out for input from the wider dev
 community, it's better to post *only* to the -dev list, or vice versa
 if you want to interact with a narrower audience.
My post was intended to include the TC list in the discussion.

I have no say in which posts the TC email list moderator accepts or does
not, or in how senders of rejected posts are informed of their status.

Thanks Eoghan,
Anita.
 
 Thanks,
 Eoghan
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The possible ways of high availability for non-cloud-ready apps running on openstack

2014-06-09 Thread hossein zabolzadeh
Hi there.
I am dealing with a large number of legacy applications (MediaWiki, Joomla,
...) running on OpenStack. I am looking for the best way to improve the high
availability of my instances. None of these applications are designed for
failure (non-cloud-ready apps). So, what is the best way of improving HA for my
non-clustered instances (stateful instances)?
Thanks in advance.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for Capabilities and Tags

2014-06-09 Thread Tripp, Travis S
FYI: We now have the initial Glance spec up for review.  
https://review.openstack.org/#/c/98554/

We generalized a few concepts and will look at how to bring a few of those 
concepts back in potentially via a future spec.

Thanks,
Travis

From: Tripp, Travis S
Sent: Friday, May 30, 2014 4:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for 
Capabilities and Tags
Importance: High

Thanks, Zane and Georgy!

We’ll begin getting all the expected sections for the new Glance spec repo into 
this document next week and then will upload in RST format for formal review. 
That is a bit more expedient since there are still several people editing. In 
the meantime, we’ll take any additional comments in the google doc.

Thanks,
Travis

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: Friday, May 30, 2014 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for 
Capabilities and Tags
Importance: High

I think this is a great feature to have in Glance. A tagging mechanism for
objects which are not owned by Glance is complementary to the artifact
catalog/repository in Glance. As long as we keep tags and artifact metadata
close to each other, the end-user will be able to use them seamlessly.
Artifacts can also use tags to find objects outside of the artifact repository,
which is always good to have.
In the Murano project we use Glance tags to find the correct images required
by specific applications. It would be great to extend this to other objects like
networks, routers and flavors so that an application writer can specify the kind
of objects required by their application.

Thanks,
Georgy

On Fri, May 30, 2014 at 11:45 AM, Zane Bitter 
zbit...@redhat.commailto:zbit...@redhat.com wrote:
On 29/05/14 18:42, Tripp, Travis S wrote:
Hello everyone!

At the summit in Atlanta we demonstrated the “Graffiti” project
concepts.  We received very positive feedback from members of multiple
dev projects as well as numerous operators.  We were specifically asked
multiple times about getting the Graffiti metadata catalog concepts into
Glance so that we can start to officially support the ideas we
demonstrated in Horizon.

After a number of additional meetings at the summit and working through
ideas the past week, we’ve created the initial proposal for adding a
Metadata Catalog to Glance for capabilities and tags.  This is distinct
from the “Artifact Catalog”, but we do see that capability and tag
catalog can be used with the artifact catalog.

We’ve detailed our initial proposal in the following Google Doc.  Mark
Washenberger agreed that this was a good place to capture the initial
proposal and we can later move it over to the Glance spec repo which
will be integrated with Launchpad blueprints soon.

https://docs.google.com/document/d/1cS2tJZrj748ZsttAabdHJDzkbU9nML5S4oFktFNNd68

Please take a look and let’s discuss!

Also, the following video is a brief recap of what was demo’ d at the
summit.  It should help to set a lot of understanding behind the ideas
in the proposal.

https://www.youtube.com/watch?v=Dhrthnq1bnw

Thank you!

Travis Tripp (HP)

Murali Sundar (Intel)
*A Few Related Blueprints *


https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering

https://blueprints.launchpad.net/horizon/+spec/tagging

https://blueprints.launchpad.net/horizon/+spec/faceted-search

https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata

https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata

+1, this is something that will be increasingly important to orchestration. The 
folks working on the TOSCA (and others) - HOT translator project might be able 
to comment in more detail, but basically as people start wanting to write 
templates that run on multiple clouds (potentially even non-OpenStack clouds) 
some sort of catalog for capabilities will become crucial.

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.comhttp://www.mirantis.com/
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread Georgy Okrokvertskhov
Hi,

You can still run legacy applications on OpenStack with HA and DR using the
same good old-school tools like pacemaker, heartbeat, DRBD etc. All the
necessary features are available in the latest OpenStack. The most important
feature for HA - secondary IP addresses - was implemented in Havana. Now you
can assign multiple IP addresses to a single VM port. A secondary IP can be
used as a VIP in pacemaker, so it is possible to create a classic
Active-Passive setup for any application. HAProxy is still there and you can
use it for any application which uses IP-based transport for communication.
This secondary IP feature allows you to run even Windows cluster
applications without any significant changes in setup compared to
running the cluster on physical nodes.
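
As a rough sketch of that secondary-IP setup (illustrative only; assumes
python-neutronclient and made-up port IDs, with the VIP itself still managed
by pacemaker inside the guests):

    from neutronclient.v2_0 import client

    neutron = client.Client(username=USER, password=PASS,
                            tenant_name=TENANT, auth_url=AUTH_URL)

    vip = '10.0.0.100'  # the address pacemaker moves between the two VMs

    # Allow the VIP as an additional address on each cluster member's port
    for port_id in ('port-uuid-active', 'port-uuid-passive'):
        neutron.update_port(port_id, {
            'port': {'allowed_address_pairs': [{'ip_address': vip}]}
        })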

There is no shared volumes (yet as I know) but you can use DRBD on VM to
sync two volumes attached to two different VMs and shared network
filesystems as a service is almost there. Using these approaches it is
possible to have data resilience for legacy applications too.

There is no automagic things which make legacy apps resilient, but it is
still possible to do with using known tools as there are no limitations
from OpenStack infrastructure side for that. As I know there were
discussions about exposing HA clusters on hypervisors that will allow some
kind of resilience automatically (through automatic migrations or
evacuation) but there is no active work on it visible.

Thanks
Georgy





On Mon, Jun 9, 2014 at 7:16 AM, Matthew Farina m...@mattfarina.com wrote:

 In my experience building apps that run in OpenStack, you don't give
 up state. You shift how you handle state.

 For example, instead of always routing a user to the same instance and
 that instance holding the session data there is a common session store
 for the app (possibly synced between regions). If you store session on
 each instance and loose an instance you'll run into problems. If
 sessions is more of a service for each instance than an instance
 coming and going isn't a big deal.

 A good database as a service, swift (object storage), and maybe a
 microservice architecture may be helpful.

 Legacy applications might have some issues with the architecture
 changes and some may not be a good fit for cloud architectures. One
 way to help legacy applications is to use block storage, keep the
 latest snapshot of the instance in glance (image service), and monitor
 an instance. If an instance goes offline you can easily create a new
 one from the image and mount block storage with the data.

 - Matt



 On Mon, Jun 9, 2014 at 7:27 AM, hossein zabolzadeh zabolza...@gmail.com
 wrote:
  Hi OpenStack Development Community,
  I know that the OpenStack interest is to become a cloud computing
 operating
  system. And this simple sentence means: Say goodbye to Statefull
  Applications.
  But, as you know we are in the transition phase from stateful apps to
  stateless apps(Remember Pets and Cattle Example). Legacy apps are still
 in
  used and how openstack can address the problems of running stateful
  applications(e.g. HA, DR, FT, R,...)?
  HA: High Availability
  DR: Disaster Recovery
  FT: Fault Tolerance
  R: Resiliancy!
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Jorge Miramontes
Hey German,

I agree with you. I don't really want to go with option #1 because making
decisions on behalf of the user (especially when security is involved) can
be quite tricky and dangerous. Your concerns are valid for option #2 but I
still think it is the better option to go with. I believe Carlos and Adam
are working with our Barbican team on a blueprint for option #2 so it
would be nice if you could take a look at that and see how we can
implement it to mitigate the concerns you laid out. While it would be nice
for us to figure out how to ensure registration/unregistration at least
the API user has the necessary info to ensure it themselves if need be.
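
Purely as a hypothetical illustration of what option #2's registration could
look like (none of this is existing Barbican or LBaaS API; it only sketches
the "registration as non-blocking metadata" idea):

    # Hypothetical registration info attached to a Barbican container
    container = {
        'container_ref': 'https://barbican.example.com/v1/containers/abc123',
        'registered_services': [
            {'service': 'neutron-lbaas', 'resource': 'loadbalancer/lb-42'},
        ],
    }

    def warn_before_change(container):
        # Registration only warns the GUI/API user; it never blocks the action
        for svc in container['registered_services']:
            print('warning: %(service)s still references this container '
                  'via %(resource)s' % svc)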

I'm not sure if I like the auto-update flag concept after all, as it adds
a layer of complexity depending on what the user has set. I'd prefer
either an "LBaaS makes all decisions on behalf of the user" or an "LBaaS
makes no decisions on behalf of the user" approach, with the latter being my
preference. In one of my earlier emails I asked the fundamental question
of whether flexibility is worthwhile at the cost of complexity. I prefer
to start off simple since we don't have any real validation on whether
these flexible features will actually be used. Only once we have a product
that is widely deployed should the necessity of flexible features become
evident.

Cheers,
--Jorge




On 6/6/14 5:52 PM, Eichberger, German german.eichber...@hp.com wrote:

Jorge + John,

I am most concerned with a user changing his secret in barbican and then
the LB trying to update and causing downtime. Some users like to control
when the downtime occurs.

For #1 it was suggested that once the event is delivered it would be up
to a user to enable an auto-update flag.

In the case of #2 I am a bit worried about error cases: e.g. uploading
the certificates succeeds but registering the loadbalancer(s) fails. So
using the barbican system for those warnings might not be as foolproof as
we are hoping.

One thing I like about #2 over #1 is that it pushes a lot of the
information to Barbican. I think a user would expect when he uploads a
new certificate to Barbican that the system warns him right away about
load balancers using the old cert. With #1 he might get an e-mails from
LBaaS telling him things changed (and we helpfully updated all affected
load balancers) -- which isn't as immediate as #2.

If we implement an auto-update flag for #1 we can have both. Users who
like #2 just hit the flag. Then the discussion changes to what we should
implement first and I agree with Jorge + John that this should likely be
#2.

German

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 3:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

Hey John,

Correct, I was envisioning that the Barbican request would not be
affected, but rather, the GUI operator or API user could use the
registration information to do so should they want to do so.

Cheers,
--Jorge




On 6/6/14 4:53 PM, John Wood john.w...@rackspace.com wrote:

Hello Jorge,

Just noting that for option #2, it seems to me that the registration
feature in Barbican would not be required for the first version of this
integration effort, but we should create a blueprint for it nonetheless.

As for your question about services not registering/unregistering, I
don't see an issue as long as the presence or absence of registered
services on a Container/Secret does not **block** actions from
happening, but rather is information that can be used to warn clients
through their processes. For example, Barbican would still delete a
Container/Secret even if it had registered services.

Does that all make sense though?

Thanks,
John


From: Youcef Laribi [youcef.lar...@citrix.com]
Sent: Friday, June 06, 2014 2:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

+1 for option 2.

In addition as an additional safeguard, the LBaaS service could check
with Barbican when failing to use an existing secret to see if the
secret has changed (lazy detection).

Youcef

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 12:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on
how Barbican and Neutron LBaaS will interact. There are currently two
ideas in play and both will work. If you have another idea please free
to add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to 

Re: [openstack-dev] [Neutron] Spec review request

2014-06-09 Thread Ben Nemec
Please don't send review requests to the list.  The preferred methods of
requesting reviews are explained here:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thanks.

-Ben

On 06/07/2014 12:31 AM, Kanzhe Jiang wrote:
 The serviceBase and insertion spec has been up for review for a while. It
 would be great if it can be reviewed and moved forward.
 
 https://review.openstack.org/#/c/93128/
 
 Thanks,
 Kanzhe
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread hossein zabolzadeh
Thanks a lot, Georgy, for your complete answer. My major concern with
OpenStack was HA for my legacy apps (I wanted to use CloudStack instead of
OpenStack because of its greater attention to legacy apps and more HA
features). But now, I will check your listed HA solutions on OpenStack and
come back as soon as possible.


On Mon, Jun 9, 2014 at 8:53 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi,

 You still can run legacy application on OpenStack with HA and DR using the
 same good old school tools like pacemaker, heartbeat, DRBD etc. There are
 all necessary features available in latest OpenStack. The most important
 feature for HA - secondary IP address was implemented in Havana. Now you
 can assign multiple IP addresses to the single VM port. Secondary IP can be
 used as a VIP in pacemaker so it is possible to create classic
 Active-Passive setup for any application. HAProxy is still there an you can
 use it for any application which uses IP based transport for communication.
 This secondary IP feature allows you to run even Windows cluster
 applications without any significant changes in setup in comparison to the
 running cluster on physical nodes.

 There is no shared volumes (yet as I know) but you can use DRBD on VM to
 sync two volumes attached to two different VMs and shared network
 filesystems as a service is almost there. Using these approaches it is
 possible to have data resilience for legacy applications too.

 There is no automagic things which make legacy apps resilient, but it is
 still possible to do with using known tools as there are no limitations
 from OpenStack infrastructure side for that. As I know there were
 discussions about exposing HA clusters on hypervisors that will allow some
 kind of resilience automatically (through automatic migrations or
 evacuation) but there is no active work on it visible.

 Thanks
 Georgy





 On Mon, Jun 9, 2014 at 7:16 AM, Matthew Farina m...@mattfarina.com
 wrote:

 In my experience building apps that run in OpenStack, you don't give
 up state. You shift how you handle state.

 For example, instead of always routing a user to the same instance and
 that instance holding the session data there is a common session store
 for the app (possibly synced between regions). If you store session on
 each instance and loose an instance you'll run into problems. If
 sessions is more of a service for each instance than an instance
 coming and going isn't a big deal.

 A good database as a service, swift (object storage), and maybe a
 microservice architecture may be helpful.

 Legacy applications might have some issues with the architecture
 changes and some may not be a good fit for cloud architectures. One
 way to help legacy applications is to use block storage, keep the
 latest snapshot of the instance in glance (image service), and monitor
 an instance. If an instance goes offline you can easily create a new
 one from the image and mount block storage with the data.

 - Matt



 On Mon, Jun 9, 2014 at 7:27 AM, hossein zabolzadeh zabolza...@gmail.com
 wrote:
  Hi OpenStack Development Community,
  I know that the OpenStack interest is to become a cloud computing
 operating
  system. And this simple sentence means: Say goodbye to Statefull
  Applications.
  But, as you know we are in the transition phase from stateful apps to
  stateless apps(Remember Pets and Cattle Example). Legacy apps are still
 in
  used and how openstack can address the problems of running stateful
  applications(e.g. HA, DR, FT, R,...)?
  HA: High Availability
  DR: Disaster Recovery
  FT: Fault Tolerance
  R: Resiliancy!
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Chris Friesen

On 06/09/2014 07:59 AM, Jay Pipes wrote:

On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

Forcing an instance to a specific host is very useful for the
operator - it fulfills a valid use case for monitoring and testing
purposes.


Pray tell, what is that valid use case?


I find it useful for setting up specific testcases when trying to 
validate things... put *this* instance on *this* host, put *those*
instances on *those* hosts, now pull the power plug on *this* host...etc.
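
For what it's worth, a minimal sketch of pinning an instance for such a test
(illustrative; assumes admin credentials and the 'zone:host' availability-zone
syntax, with made-up names):

    from novaclient import client

    nova = client.Client('2', USERNAME, PASSWORD, PROJECT, AUTH_URL)  # assumed creds

    # 'nova:compute-01' pins the instance to host compute-01 (admin-only)
    nova.servers.create(name='failover-probe',
                        image=IMAGE_ID, flavor=FLAVOR_ID,
                        availability_zone='nova:compute-01')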


I wouldn't expect the typical openstack end-user to need it though.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Joe Cropper
There may also be specific software entitlement issues that make it useful
to deterministically know which host your VM will be placed on.  This can
be quite common in large organizations that have certain software that can
be tied to certain hardware or hardware with certain # of CPU capacity, etc.

Regards,
Joe


On Mon, Jun 9, 2014 at 11:32 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 06/09/2014 07:59 AM, Jay Pipes wrote:

 On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

 Forcing an instance to a specific host is very useful for the
 operator - it fulfills a valid use case for monitoring and testing
 purposes.


 Pray tell, what is that valid use case?


 I find it useful for setting up specific testcases when trying to validate
 thingsput *this* instance on *this* host, put *those* instances on
 *those* hosts, now pull the power plug on *this* host...etc.

 I wouldn't expect the typical openstack end-user to need it though.

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Devananda van der Veen
There may be some problems with MySQL when testing parallel writes in
different non-committing transactions, even in READ COMMITTED mode,
due to InnoDB locking, if the queries use non-unique secondary indexes
for UPDATE or SELECT..FOR UPDATE queries. This is done by the
with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
in Nova. So I would not recommend this approach, even though, in
principle, I agree it would be a much more efficient way of testing
database reads/writes.

More details here:
http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html
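
For context, a small sketch of the pattern being referred to (illustrative;
assumes an existing SQLAlchemy session and a Nova-style Instance model):

    def pick_instance_for_update(session, host):
        # Emits SELECT ... FOR UPDATE. If 'host' is only covered by a
        # non-unique secondary index, InnoDB may lock a range of index
        # records rather than a single row, which is where parallel,
        # non-committing test transactions can block or deadlock.
        return (session.query(Instance)
                .filter_by(host=host)
                .with_lockmode('update')
                .first())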

On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:
 Hi Mike,

 However, when testing an application that uses a fixed set of tables, as 
 should be the case for the majority if not all Openstack apps, there’s no 
 reason that these tables need to be recreated for every test.

 This is a very good point. I tried to use the recipe from SQLAlchemy
 docs to run Nova DB API tests (yeah, I know, this might sound
 confusing, but these are actually methods that access the database in
 Nova) on production backends (MySQL and PostgreSQL). The abandoned
 patch is here [1]. Julia Varlamova has been working on rebasing this
 on master and should upload a new patch set soon.

 Overall, the approach with executing a test within a transaction and
 then emitting ROLLBACK worked quite well. The only problem I ran into
 was tests doing ROLLBACK on purpose. But you've updated the recipe
 since then and this can probably be solved by using savepoints. I
 used a separate DB per test-running process to prevent race
 conditions, but we should definitely give the READ COMMITTED approach a
 try. If it works, that will be awesome.
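
For reference, a condensed sketch of that "run the test inside a transaction,
then ROLLBACK" recipe (simplified from the pattern in the SQLAlchemy docs;
not the actual Nova/oslo fixture, and it omits the SAVEPOINT handling
mentioned above):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine('mysql://user:pass@localhost/test_nova')  # assumed URL

    def setup_test():
        conn = engine.connect()
        trans = conn.begin()                 # outermost transaction, never committed
        session = sessionmaker(bind=conn)()  # the code under test uses this session
        return conn, trans, session

    def teardown_test(conn, trans, session):
        session.close()
        trans.rollback()   # throw away everything the test wrote
        conn.close()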

 With a few tweaks of PostgreSQL config I was able to run Nova DB API
 tests in 13-15 seconds, while SQLite in memory took about 7s.

 Action items for me and Julia probably: [2] needs a spec with [1]
 updated accordingly. Using of this 'test in a transaction' approach
 seems to be a way to go for running all db related tests except the
 ones using DDL statements (as any DDL statement commits the current
 transaction implicitly on MySQL and SQLite AFAIK).

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/33236/
 [2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

 On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer mba...@redhat.com wrote:

 On Jun 6, 2014, at 8:12 PM, Devananda van der Veen devananda@gmail.com
 wrote:

 I think some things are broken in the oslo-incubator db migration code.

 Ironic moved to this when Juno opened and things seemed fine, until recently
 when Lucas tried to add a DB migration and noticed that it didn't run... So
 I looked into it a bit today. Below are my findings.

 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they should
 report that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine

 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests ran --
 and they passed. Great!

 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end,
 if it's available. This opportunistic checking was inherited from Nova so
 that tests could pass on developer workstations where not all backends are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately -
 but maybe it's well hidden.

 Anyhow. When I stopped the local mysql service (leaving the configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception
 before the test code could call calling _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises
 an exception before running the test itself

 Unfortunately, there's one more problem... when I run the tests in parallel,
 they fail randomly because sometimes two test threads run different
 migration tests, and the setUp() for one thread (remember, it calls
 _reset_databases) blows up the other test.

 Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
 database exists
   ProgrammingError: (ProgrammingError) (1146, Table
 'test_migrations.alembic_version' doesn't exist)

 As far as I can tell, this is all coming from:

 

Re: [openstack-dev] use of the word certified

2014-06-09 Thread Asselin, Ramy
Based on the discussion I'd like to propose these options:
1. Cinder-certified driver - This is an attempt to move the certification to 
the project level.
2. CI-tested driver - This is probably the most accurate, at least for what 
we're trying to achieve for Juno: Continuous Integration of Vendor-specific 
Drivers.

Ramy

-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Monday, June 09, 2014 4:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] use of the word certified

On 6 June 2014 18:29, Anita Kuno ante...@anteaya.info wrote:
 So there are certain words that mean certain things, most don't, some do.

 If words that mean certain things are used then some folks start using 
 the word and have expectations around the word and the OpenStack 
 Technical Committee and other OpenStack programs find themselves on 
 the hook for behaviours that they didn't agree to.

 Currently the word under discussion is certified and its derivatives:
 certification, certifying, and others with root word certificate.

 This came to my attention at the summit with a cinder summit session
 with one of the certificate words in the title. I had thought my
 point had been made but it appears that there needs to be more 
 discussion on this. So let's discuss.

 Let's start with the definition of certify:
 cer·ti·fy
 verb (used with object), cer·ti·fied, cer·ti·fy·ing.
 1. to attest as certain; give reliable information of; confirm: He 
 certified the truth of his claim.

So the cinder team are attesting that a set of tests have been run against a 
driver: a certified driver.

 3. to guarantee; endorse reliably: to certify a document with an 
 official seal.

We (the cinder team) are guaranteeing that the driver has been tested, in at
least one configuration, and found to pass all of the tempest tests. This is a 
far better state than we were at 6 months ago, where many drivers didn't even 
pass a smoke test.

 5. to award a certificate to (a person) attesting to the completion of 
 a course of study or the passing of a qualifying examination.

The cinder cert process is pretty much an exam.


I think the word certification covers exactly what we are doing. Given
cinder-core are the people on the hook for any cinder problems (including
vendor-specific ones), and the cinder core are the people who get bad-mouthed
when there are problems (including vendor-specific ones), I think this level of
certification gives us value.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread Georgy Okrokvertskhov
Hi Hossein,

In addition, you may check the following:
Heat OS::Heat::HARestarter resource
http://docs.openstack.org/developer/heat/template_guide/openstack.html
This blog entry about clustering:
http://vmtrooper.com/openstack-your-windows-cluster-with-neutron-allowed-address-pairs/
Mistral project, specifically for Live migration:
https://wiki.openstack.org/wiki/Mistral#Live_migration
Murano project for legacy app management and composing:
https://wiki.openstack.org/wiki/Murano/ProjectOverview

Thanks,
Georgy


On Mon, Jun 9, 2014 at 9:30 AM, hossein zabolzadeh zabolza...@gmail.com
wrote:

 Thanks very much, Georgy, for your complete answer. My major concern with
 OpenStack was HA for my legacy apps (I wanted to use CloudStack instead of
 OpenStack because of its greater attention to legacy apps and more HA
 features). But now I will check your listed HA solutions on OpenStack and
 come back as soon as possible.


 On Mon, Jun 9, 2014 at 8:53 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:

 Hi,

 You can still run legacy applications on OpenStack with HA and DR using
 the same good old-school tools like pacemaker, heartbeat, DRBD etc. All
 the necessary features are available in the latest OpenStack. The most
 important feature for HA - secondary IP addresses - was implemented in Havana.
 Now you can assign multiple IP addresses to a single VM port. The secondary
 IP can be used as a VIP in pacemaker, so it is possible to create a classic
 Active-Passive setup for any application. HAProxy is still there and you can
 use it for any application which uses IP-based transport for communication.
 This secondary IP feature allows you to run even Windows cluster
 applications without any significant changes in setup compared to
 running the cluster on physical nodes.
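
 For example, with python-neutronclient and the allowed-address-pairs
 extension, wiring a pacemaker VIP onto both cluster members' ports looks
 roughly like this (credentials, port IDs and the VIP are placeholders):

     from neutronclient.v2_0 import client

     # placeholder credentials / endpoint
     neutron = client.Client(username='admin', password='secret',
                             tenant_name='demo',
                             auth_url='http://controller:5000/v2.0')

     vip = '10.0.0.100'  # the address pacemaker will move between nodes
     for port_id in ('PORT_ID_NODE_A', 'PORT_ID_NODE_B'):  # placeholders
         # allow the VIP to be carried by this port in addition to its own IP
         neutron.update_port(
             port_id,
             {'port': {'allowed_address_pairs': [{'ip_address': vip}]}})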

 There are no shared volumes (yet, as far as I know), but you can use DRBD in
 a VM to sync two volumes attached to two different VMs, and shared network
 filesystems as a service are almost there. Using these approaches it is
 possible to have data resilience for legacy applications too.

 There are no automagic features which make legacy apps resilient, but it is
 still possible to achieve using well-known tools, as there are no limitations
 on the OpenStack infrastructure side for that. As far as I know there were
 discussions about exposing HA clusters on hypervisors, which would allow some
 kind of resilience automatically (through automatic migration or
 evacuation), but there is no visible active work on it.

 Thanks
 Georgy





 On Mon, Jun 9, 2014 at 7:16 AM, Matthew Farina m...@mattfarina.com
 wrote:

 In my experience building apps that run in OpenStack, you don't give
 up state. You shift how you handle state.

 For example, instead of always routing a user to the same instance and
 that instance holding the session data, there is a common session store
 for the app (possibly synced between regions). If you store sessions on
 each instance and lose an instance, you'll run into problems. If
 session storage is more of a service for each instance, then an instance
 coming and going isn't a big deal.

 A good database as a service, swift (object storage), and maybe a
 microservice architecture may be helpful.

 Legacy applications might have some issues with the architecture
 changes and some may not be a good fit for cloud architectures. One
 way to help legacy applications is to use block storage, keep the
 latest snapshot of the instance in glance (image service), and monitor
 an instance. If an instance goes offline you can easily create a new
 one from the image and mount block storage with the data.
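
 For instance, roughly (all names and IDs below are placeholders):

     # keep a current snapshot of the instance in glance
     nova image-create my-instance my-instance-snap

     # if the instance is lost, recreate it from that snapshot image...
     nova boot --image my-instance-snap --flavor m1.small my-instance-2

     # ...and re-attach the block storage device that holds the data
     nova volume-attach my-instance-2 VOLUME_ID /dev/vdb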

 - Matt



 On Mon, Jun 9, 2014 at 7:27 AM, hossein zabolzadeh zabolza...@gmail.com
 wrote:
  Hi OpenStack Development Community,
  I know that the OpenStack goal is to become a cloud computing operating
  system. And this simple sentence means: say goodbye to stateful
  applications.
  But, as you know, we are in the transition phase from stateful apps to
  stateless apps (remember the pets and cattle example). Legacy apps are
  still in use, so how can OpenStack address the problems of running stateful
  applications (e.g. HA, DR, FT, R, ...)?
  HA: High Availability
  DR: Disaster Recovery
  FT: Fault Tolerance
  R: Resiliency!
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev 

[openstack-dev] [Mistral] Mistral weekly meeting - meeting minutes

2014-06-09 Thread Timur Nurlygayanov
Hi team,

Thank you all for participating in Mistral weekly meeting today,

meeting minutes are available by the following links:
Minutes:
http://eavesdrop.openstack.org/meetings/mistral_weekly_meeting/2014/mistral_weekly_meeting.2014-06-09-15.59.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/mistral_weekly_meeting/2014/mistral_weekly_meeting.2014-06-09-15.59.txt
Log:
http://eavesdrop.openstack.org/meetings/mistral_weekly_meeting/2014/mistral_weekly_meeting.2014-06-09-15.59.log.html


-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Mike Bayer

On Jun 9, 2014, at 12:50 PM, Devananda van der Veen devananda@gmail.com 
wrote:

 There may be some problems with MySQL when testing parallel writes in
 different non-committing transactions, even in READ COMMITTED mode,
 due to InnoDB locking, if the queries use non-unique secondary indexes
 for UPDATE or SELECT..FOR UPDATE queries. This is done by the
 with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
 in Nova. So I would not recommend this approach, even though, in
 principle, I agree it would be a much more efficient way of testing
 database reads/writes.
 
 More details here:
 http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
 http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html
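
 For reference, that phrase looks roughly like this in use (the model and
 filter below are illustrative, not a specific Nova call site):

     # emits SELECT ... FOR UPDATE, which takes InnoDB row (and possibly
     # gap) locks on whichever index satisfies the filter
     instance = (session.query(models.Instance).
                 filter_by(uuid=instance_uuid).
                 with_lockmode('update').
                 first())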

OK, but just to clarify my understanding: what is the approach to testing 
writes in parallel right now? Are we doing CREATE DATABASE for two entirely 
distinct databases with some kind of generated name for each one?  Otherwise, 
if the parallel tests are against the same database, this issue exists 
regardless (unless autocommit mode is used -- is FOR UPDATE accepted under 
those conditions?).




 
 On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka rpodoly...@mirantis.com 
 wrote:
 Hi Mike,
 
 However, when testing an application that uses a fixed set of tables, as 
 should be the case for the majority if not all Openstack apps, there’s no 
 reason that these tables need to be recreated for every test.
 
 This is a very good point. I tried to use the recipe from SQLAlchemy
 docs to run Nova DB API tests (yeah, I know, this might sound
 confusing, but these are actually methods that access the database in
 Nova) on production backends (MySQL and PostgreSQL). The abandoned
 patch is here [1]. Julia Varlamova has been working on rebasing this
 on master and should upload a new patch set soon.
 
 Overall, the approach of executing a test within a transaction and
 then emitting ROLLBACK worked quite well. The only problem I ran into
 was tests doing ROLLBACK on purpose. But you've updated the recipe
 since then and this can probably be solved by using savepoints. I
 used a separate DB per test-running process to prevent race
 conditions, but we should definitely give the READ COMMITTED approach a
 try. If it works, that will be awesome.
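
 The recipe itself boils down to roughly the following (the connection
 string and base test class are placeholders):

     import sqlalchemy as sa
     from sqlalchemy import orm
     import testtools

     # placeholder DSN for the test database
     engine = sa.create_engine('mysql://citest:secret@localhost/citest')

     class RollbackTestCase(testtools.TestCase):
         def setUp(self):
             super(RollbackTestCase, self).setUp()
             self.connection = engine.connect()
             self.trans = self.connection.begin()   # outer transaction
             self.session = orm.Session(bind=self.connection)

         def tearDown(self):
             self.session.close()
             self.trans.rollback()   # discard everything the test wrote
             self.connection.close()
             super(RollbackTestCase, self).tearDown()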
 
 With a few tweaks of PostgreSQL config I was able to run Nova DB API
 tests in 13-15 seconds, while SQLite in memory took about 7s.
 
 Action items for me and Julia probably: [2] needs a spec with [1]
 updated accordingly. Using this 'test in a transaction' approach
 seems to be the way to go for running all DB-related tests except the
 ones using DDL statements (as any DDL statement commits the current
 transaction implicitly on MySQL and SQLite, AFAIK).
 
 Thanks,
 Roman
 
 [1] https://review.openstack.org/#/c/33236/
 [2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends
 
 On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer mba...@redhat.com wrote:
 
 On Jun 6, 2014, at 8:12 PM, Devananda van der Veen devananda@gmail.com
 wrote:
 
 I think some things are broken in the oslo-incubator db migration code.
 
 Ironic moved to this when Juno opened and things seemed fine, until recently
 when Lucas tried to add a DB migration and noticed that it didn't run... So
 I looked into it a bit today. Below are my findings.
 
 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they should
 report that they were skipped.
  https://bugs.launchpad.net/oslo/+bug/1327397
  No notice given when db migrations are not run due to missing engine
 
 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests ran --
 and they passed. Great!
 
 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end,
 if it's available. This opportunistic checking was inherited from Nova so
 that tests could pass on developer workstations where not all backends are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately -
 but maybe it's well hidden.
 
 Anyhow. When I stopped the local mysql service (leaving the configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception
 before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises
 an exception before running the test itself
 
 Unfortunately, there's one more problem... when I run the tests in parallel,
 they fail randomly because sometimes two test threads run different
 migration tests, and the setUp() for one thread 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Carlos Garza
I understand this concern and was advocating that a configuration option be 
available to disable or enable auto-updating of SSL certificates. But since 
everyone is in favor of storing metadata on the Barbican container directly, I 
guess this is a moot point now.

On Jun 6, 2014, at 5:52 PM, Eichberger, German german.eichber...@hp.com 
wrote:

 Jorge + John,
 
 I am most concerned with a user changing his secret in barbican and then the 
 LB trying to update and causing downtime. Some users like to control when the 
 downtime occurs.
 
 For #1 it was suggested that once the event is delivered it would be up to a 
 user to enable an auto-update flag.
 
 In the case of #2 I am a bit worried about error cases: e.g. uploading the 
 certificates succeeds but registering the load balancer(s) fails. So using the 
 Barbican system for those warnings might not be as foolproof as we are hoping. 
 
 One thing I like about #2 over #1 is that it pushes a lot of the information 
 to Barbican. I think a user would expect when he uploads a new certificate to 
 Barbican that the system warns him right away about load balancers using the 
 old cert. With #1 he might get an e-mail from LBaaS telling him things 
 changed (and we helpfully updated all affected load balancers) -- which isn't 
 as immediate as #2. 
 
 If we implement an auto-update flag for #1 we can have both. Users who 
 like #2 just hit the flag. Then the discussion changes to what we should 
 implement first and I agree with Jorge + John that this should likely be #2.
 
 German
 
 -Original Message-
 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
 Sent: Friday, June 06, 2014 3:05 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Hey John,
 
 Correct, I was envisioning that the Barbican request would not be affected, 
 but rather, the GUI operator or API user could use the registration 
 information to do so, should they want to.
 
 Cheers,
 --Jorge
 
 
 
 
 On 6/6/14 4:53 PM, John Wood john.w...@rackspace.com wrote:
 
 Hello Jorge,
 
 Just noting that for option #2, it seems to me that the registration 
 feature in Barbican would not be required for the first version of this 
 integration effort, but we should create a blueprint for it nonetheless.
 
 As for your question about services not registering/unregistering, I 
 don't see an issue as long as the presence or absence of registered 
 services on a Container/Secret does not **block** actions from 
 happening, but rather is information that can be used to warn clients 
 through their processes. For example, Barbican would still delete a 
 Container/Secret even if it had registered services.
 
 Does that all make sense though?
 
 Thanks,
 John
 
 
 From: Youcef Laribi [youcef.lar...@citrix.com]
 Sent: Friday, June 06, 2014 2:47 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 +1 for option 2.
 
 In addition, as an extra safeguard, the LBaaS service could check 
 with Barbican when failing to use an existing secret to see if the 
 secret has changed (lazy detection).
 
 Youcef
 
 -Original Message-
 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
 Sent: Friday, June 06, 2014 12:16 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Hey everyone,
 
 Per our IRC discussion yesterday I'd like to continue the discussion on 
 how Barbican and Neutron LBaaS will interact. There are currently two 
 ideas in play and both will work. If you have another idea please free 
 to add it so that we may evaluate all the options relative to each other.
 Here are the two current ideas:
 
 1. Create an eventing system for Barbican that Neutron LBaaS (and other
 services) consumes to identify when to update/delete updated secrets 
 from Barbican. For those that aren't up to date with the Neutron LBaaS 
 API Revision, the project/tenant/user provides a secret (container?) id 
 when enabling SSL/TLS functionality.
 
 * Example: If a user makes a change to a secret/container in Barbican 
 then Neutron LBaaS will see an event and take the appropriate action.
 
 PROS:
 - Barbican is going to create an eventing system regardless so it will 
 be supported.
 - Decisions are made on behalf of the user which lessens the amount of 
 calls the user has to make.
 
 CONS:
 - An eventing framework can become complex especially since we need to 
 ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2... I 
 think.
 
 2. Push orchestration decisions to API users. This idea comes with two 
 assumptions. The first assumption is that most providers' customers use 
 the cloud via 

[openstack-dev] [Cinder] Third-Party CI Issue: direct access to review.openstack.org port 29418 required

2014-06-09 Thread Asselin, Ramy
All,

I've been working on setting up our Cinder 3rd party CI setup.
I ran into an issue where Zuul requires direct access to review.openstack.org 
port 29418, which is currently blocked in my environment. It should be 
unblocked around the end of June.

Since this will likely affect other vendors, I encourage you to take a few 
minutes and check if this affects you in order to allow sufficient time to 
resolve.

Please follow the instructions in the section "Reading the Event Stream" here: [1]
Make sure you can get the event stream ~without~ any tunnels or proxies, etc., 
such as corkscrew [2].
(Double-check that any such configurations are commented out in ~/.ssh/config 
and /etc/ssh/ssh_config.)
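
For example, the following should return the raw JSON event stream with no
tunnels or proxies involved (replace USERNAME with your Gerrit username):

    ssh -p 29418 USERNAME@review.openstack.org gerrit stream-events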

Ramy (irc: asselin)

[1] http://ci.openstack.org/third_party.html
[2] http://en.wikipedia.org/wiki/Corkscrew_(program)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] AggregateMultiTenancyIsolation scheduler filter - bug, or new feature proposal?

2014-06-09 Thread Joe Gordon
On Jun 9, 2014 4:12 AM, Jesse Pretorius jesse.pretor...@gmail.com wrote:

 Hi everyone,

 We have a need to be able to dedicate a specific host aggregate to a list
of tenants/projects. If the aggregate is marked as such, the aggregate may
only be used by that specified list of tenants and those tenants may only
be scheduled to that aggregate.

 The AggregateMultiTenancyIsolation filter almost does what we need - it
pushes all new instances created by a specified tenant to the designated
aggregate. However, it also seems to still see that aggregate as available
for other tenants.

 The description in the documentation [1] states: If a host is in an
aggregate that has the metadata key filter_tenant_id it only creates
instances from that tenant (or list of tenants).

 This would seem to us either as a code bug, or a documentation bug?

 If the filter is working as intended, then I'd like to propose working on
a patch to the filter which adds an additional metadata field (something
like 'filter_tenant_exclusive') which - when 'true' - will consider the
filter_tenant_id list to be the only projects/tenants which may be
scheduled onto the host aggregate, and that aggregate to be the only host
aggregate onto which those projects/tenants may be scheduled.
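
Roughly, the proposed behaviour would look something like the sketch below.
This is not the existing Nova code: 'filter_tenant_exclusive' is the new
metadata key being proposed here, and the helper names are from memory and
may differ from what is in tree.

    from nova import db
    from nova.scheduler import filters


    class AggregateMultiTenancyIsolation(filters.BaseHostFilter):

        def host_passes(self, host_state, filter_properties):
            props = filter_properties.get('request_spec', {}).get(
                'instance_properties', {})
            tenant_id = props.get('project_id')
            context = filter_properties['context'].elevated()

            # metadata aggregated over all aggregates this host belongs to
            metadata = db.aggregate_metadata_get_by_host(context,
                                                         host_state.host)
            tenant_ids = metadata.get('filter_tenant_id', set())
            exclusive = 'true' in metadata.get('filter_tenant_exclusive',
                                               set())

            if exclusive:
                # dedicated aggregate: only the listed tenants may land here
                return tenant_id in tenant_ids
            if tenant_ids and tenant_id not in tenant_ids:
                return False
            # NOTE: the second half of the proposal ("those tenants may only
            # be scheduled to that aggregate") additionally requires hosts
            # *outside* the exclusive aggregate to reject the listed tenants.
            return True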

 Note that there has been some similar work done with [2] and [3]. [2]
actually works as we expect, but as is noted in the gerrit comments it
seems rather wasteful to add a new filter when we could use the existing
filter as a base. [3] is a much larger framework to facilitate end-users
being able to request a whole host allocation - while this could be a nice
addition, it's overkill for what we're looking for. We're happy to
facilitate this with a simple admin-only allocation.

 So - should I work on a nova-specs proposal for a change, or should I
just log a bug against either nova or docs? :) Guidance would be
appreciated.

This sounds like a very reasonable idea, and we already have precedent for
doing things like this.

As for bug vs blueprint, it's more of a new feature, and something good to
document, so I'd say this should be a very small blueprint that is very
restricted in scope.


 [1]
http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html
 [2]
https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
 [3] https://blueprints.launchpad.net/nova/+spec/whole-host-allocation

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Arbitrary extra specs for compute nodes?

2014-06-09 Thread Joe Cropper
On Mon, Jun 9, 2014 at 10:07 AM, Chris Friesen
chris.frie...@windriver.com wrote:
 On 06/07/2014 12:30 AM, Joe Cropper wrote:

 Hi Folks,

 I was wondering if there was any such mechanism in the compute node
 structure to hold arbitrary key-value pairs, similar to flavors'
 extra_specs concept?

 It appears there are entries for things like pci_stats, stats and
 recently added extra_resources -- but these all tend to have more
 specific usages vs. just arbitrary data that may want to be maintained
 about the compute node over the course of its lifetime.

 Unless I'm overlooking an existing construct for this, would this be
 something that folks would welcome a Juno blueprint for--i.e., adding
 extra_specs style column with a JSON-formatted string that could be
 loaded as a dict of key-value pairs?


 If nothing else, you could put the compute node in a host aggregate and
 assign metadata to it.

Yeah, I recognize this could be done, but I think that would be using
the host aggregate metadata a little too loosely since the metadata
I'm after is really tied explicitly to the compute node.  This would
present too many challenges when someone wants to use both host
aggregates and the compute-node-specific metadata.


 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Jain, Vivek
+1 for the idea of making certificates immutable.
However, if Barbican allows updating certs/containers then versioning is a
must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:

Hi,

I think that option 2 should be preferred at this stage.
I also think that certificate should be immutable, if you want a new one,
create a new one and update the listener to use it.
This removes any chance of mistakes, need for versioning etc.

-Sam.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 10:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on
how Barbican and Neutron LBaaS will interact. There are currently two
ideas in play and both will work. If you have another idea please free to
add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets from
Barbican. For those that aren't up to date with the Neutron LBaaS API
Revision, the project/tenant/user provides a secret (container?) id when
enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican
then Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will
be supported.
 - Decisions are made on behalf of the user which lessens the amount of
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2... I
think.

2. Push orchestration decisions to API users. This idea comes with two
assumptions. The first assumption is that most providers' customers use
the cloud via a GUI, which in turn can handle any orchestration decisions
that need to be made. The second assumption is that power API users are
savvy and can handle their decisions as well. Using this method requires
services, such as LBaaS, to register in the form of metadata to a
barbican container.

* Example: If a user makes a change to a secret the GUI can see which
services are registered and opt to warn the user of consequences. Power
users can look at the registered services and make decisions how they see
fit.

PROS:
 - Very simple to implement. The only code needed to make this a reality
is at the control plane (API) level.
 - This option is more loosely coupled that option #1.

CONS:
 - Potential for services to not register/unregister. What happens in
this case?
 - Pushes complexity of decision making on to GUI engineers and power API
users.


I would like to get a consensus on which option to move forward with ASAP
since the hackathon is coming up and delivering Barbican to Neutron LBaaS
integration is essential to exposing SSL/TLS functionality, which almost
everyone has stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My
reason for choosing option #2 has to deal mostly with the simplicity of
implementing such a mechanism. Simplicity also means we can implement the
necessary code and get it approved much faster which seems to be a
concern for everyone. What option does everyone else want to move forward
with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-09 Thread Liz Blanchard
Hi all,

Thanks again for the great comments on the initial cut of wireframes. I’ve 
updated them a fair amount based on feedback in this e-mail thread along with 
the feedback written up here:
https://etherpad.openstack.org/p/alarm-management-page-design-discussion

Here is a link to the new version:
http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-06-05.pdf

And a quick explanation of the updates that I made from the last version:

1) Removed severity.

2) Added Status column. I also added details around the fact that users can 
enable/disable alerts.

3) Updated the Alarm creation workflow to include choosing the project and user 
(optionally for filtering the resource list), choosing the resource, and allowing 
the user to choose the amount of time to monitor for alarming.
 -Perhaps we could be even more sophisticated about how we let users filter 
down to find the right resources that they want to monitor for alarms?

4) As for notifying users…I’ve updated the “Alarms” section to be “Alarms 
History”. The point here is to show any Alarms that have occurred to notify the 
user. Other notification ideas could be to allow users to get notified of 
alerts via e-mail (perhaps a user setting?). I’ve added a wireframe for this 
update in User Settings. Then the Alarms Management section would just be where 
the user creates, deletes, enables, and disables alarms. Do you still think we 
don’t need the “alarms” tab? Perhaps this just becomes iteration 2 and is left 
out for now as you mention in your etherpad.

5) Question about combined alarms…currently I’ve designed it so that a user 
could create multiple levels in the “Alarm When…” section. They could combine 
these with AND/ORs. Is this going far enough? Or do we actually need to allow 
users to combine Alarms that might watch different resources?

6) I updated the Actions column to have the “More” drop down which is 
consistent with other tables in Horizon.

7) Added a section in the "Add Alarm" workflow for "Actions after Alarm". 
I'm thinking we could have some sort of "if state is X, do Y" type selections, 
but I'm looking to understand more details about how the backend works for this 
feature. Eoghan gave examples of logging and potentially scaling out via Heat. 
Would simple drop-downs support these events?

8) I can definitely add in a “scheduling” feature with respect to Alarms. I 
haven’t added it in yet, but I could see this being very useful in future 
revisions of this feature.

9) Another thought is that we could add in some padding for outlier data, as 
Eoghan mentioned. Perhaps a setting for "this has happened 3 times over the 
last minute, so now send an alarm"?
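
For what it's worth, several of the items above map fairly closely onto what
the existing Ceilometer alarm CLI already exposes. A rough sketch from memory
(flag spellings may differ slightly; IDs are placeholders):

    # items 3/9: only fire after 3 consecutive 60-second periods above 70%
    ceilometer alarm-threshold-create --name cpu_high \
        --meter-name cpu_util --statistic avg \
        --period 60 --evaluation-periods 3 \
        --comparison-operator gt --threshold 70.0 \
        --query resource_id=INSTANCE_ID \
        --alarm-action 'log://'

    # item 5: combine two existing alarms with a boolean operator
    ceilometer alarm-combination-create --name cpu_and_net_high \
        --alarm_ids ALARM_ID_1 --alarm_ids ALARM_ID_2 --operator and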

A new round of feedback is of course welcome :)

Best,
Liz

On Jun 4, 2014, at 1:27 PM, Liz Blanchard lsure...@redhat.com wrote:

 Thanks for the excellent feedback on these, guys! I’ll be working on making 
 updates over the next week and will send a fresh link out when done. Anyone 
 else with feedback, please feel free to fire away.
 
 Best,
 Liz
 On Jun 4, 2014, at 12:33 PM, Eoghan Glynn egl...@redhat.com wrote:
 
 
 Hi Liz,
 
 Two further thoughts occurred to me after hitting send on
 my previous mail.
 
 First is the concept of alarm dimensioning; see my RDO Ceilometer
 getting started guide [1] for an explanation of that notion.
 
 A key associated concept is the notion of dimensioning which defines the 
 set of matching meters that feed into an alarm evaluation. Recall that 
 meters are per-resource-instance, so in the simplest case an alarm might be 
 defined over a particular meter applied to all resources visible to a 
 particular user. More useful however would be the option to explicitly select 
 which specific resources we're interested in alarming on. On one extreme we 
 would have narrowly dimensioned alarms where this selection would have only 
 a single target (identified by resource ID). On the other extreme, we'd have 
 widely dimensioned alarms where this selection identifies many resources 
 over which the statistic is aggregated, for example all instances booted 
 from a particular image or all instances with matching user metadata (the 
 latter is how Heat identifies autoscaling groups).
 
 We'd have to think about how that concept is captured in the
 UX for alarm creation/update.
 
 Second, there are a couple of more advanced alarming features 
 that were added in Icehouse:
 
 1. The ability to constrain alarms on time ranges, such that they
  would only fire say during 9-to-5 on a weekday. This would
  allow for example different autoscaling policies to be applied
  out-of-hours, when resource usage is likely to be cheaper and
  manual remediation less straight-forward.
 
 2. The ability to exclude low-quality datapoints with anomalously
  low sample counts. This allows the leading edge of the trend of
  widely dimensioned alarms not to be skewed by eagerly-reporting
  outliers.
 
 Perhaps not in a first iteration, but at some point it may 

Re: [openstack-dev] [nova] Arbitrary extra specs for compute nodes?

2014-06-09 Thread Joe Cropper
On Mon, Jun 9, 2014 at 5:17 AM, Day, Phil philip@hp.com wrote:
 Hi Joe,



 Can you give some examples of what that data would be used for ?

Sure!  For example, in the PowerKVM world, hosts can be dynamically
configured to run in split-core processor mode.  This setting can be
dynamically changed and it'd be nice to allow the driver to track this
somehow -- and it probably doesn't warrant its own explicit field in
compute_node.  Likewise, PowerKVM also has a concept of the maximum
SMT level in which its guests can run (which can also vary dynamically
based on the split-core setting) and it would also be nice to tie such
settings to the compute node.

Overall, this would give folks writing compute drivers the ability to
attach the extra spec style data to a compute node for a variety of
purposes -- two simple examples provided above, but there are many
more.  :-)




 It sounds on the face of it that what you’re looking for is pretty similar
 to what Extensible Resource Tracker sets out to do
 (https://review.openstack.org/#/c/86050
 https://review.openstack.org/#/c/71557)

Thanks for pointing this out.  I actually ran across these while I was
searching the code to see what might already exist in this space.
Actually, the compute node 'stats' was always a first guess, but these
are clearly heavily reserved for the resource tracker and wind up
getting purged/deleted over time since the 'extra specs' I reference
above aren't necessarily tied to the spawning/deleting of instances.
In other words, they're not really consumable resources, per-se.
Unless I'm overlooking a way (perhaps I am) to use this
extensible-resource-tracker blueprint for arbitrary key-value pairs
**not** related to instances, I think we need something additional?

I'd happily create a new blueprint for this as well.




 Phil



 From: Joe Cropper [mailto:cropper@gmail.com]
 Sent: 07 June 2014 07:30
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] Arbitrary extra specs for compute nodes?



 Hi Folks,

 I was wondering if there was any such mechanism in the compute node
 structure to hold arbitrary key-value pairs, similar to flavors'
 extra_specs concept?

 It appears there are entries for things like pci_stats, stats and recently
 added extra_resources -- but these all tend to have more specific usages vs.
 just arbitrary data that may want to be maintained about the compute node
 over the course of its lifetime.

 Unless I'm overlooking an existing construct for this, would this be
 something that folks would welcome a Juno blueprint for--i.e., adding
 extra_specs style column with a JSON-formatted string that could be loaded
 as a dict of key-value pairs?

 Thoughts?

 Thanks,

 Joe


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-09 Thread David Kranz

On 06/02/2014 06:57 AM, Sean Dague wrote:

Towards the end of the summit there was a discussion about us using a
shared review dashboard to see if a common view by the team would help
accelerate people looking at certain things. I spent some time this
weekend working on a tool to make building custom dashboard urls much
easier.

My current proposal is the following, and would like comments on it:
https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash

All items in the dashboard are content that you've not voted on in the
current patch revision, that you don't own, and that have passing
Jenkins test results.

1. QA Specs - these need more eyes, so we highlight them at top of page
2. Patches that are older than 5 days, with no code review
3. Patches that you are listed as a reviewer on, but haven't voting on
current version
4. Patches that already have a +2, so should be landable if you agree.
5. Patches that have no negative code review feedback on them
6. Patches older than 2 days, with no code review
Thanks, Sean. This is working great for me, but I think there is another 
important item that is missing and I hope it is possible to add, perhaps 
even among the most important items:


Patches that you gave a -1, but the response is a comment explaining why 
the -1 should be withdrawn rather than a new patch.


 -David


These are definitely judgement calls on what people should be looking
at, but this seems like a pretty reasonable triaging list. I'm happy to have
a discussion on changes to this list.

The url for this is -  http://goo.gl/g4aMjM

(the long url is very long:
https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade+OR+project%3Aopenstack%2Fqa-specs%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%2Cjenkins+NOT+label%3ACode-Review%3C%3D-1%2Cself+NOT+label%3ACode-Review%3E%3D1%2Cselftitle=QA+Review+InboxQA+Specs=project%3Aopenstack%2Fqa-specsNeeds+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode-Review%3C%3D2+age%3A5dYour+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3AselfNeeds+final+%2B2=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade%29+label%3ACode-Review%3E%3D2+limit%3A50Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review%3C%3D-1+limit%3A50Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+label%3ACode-Review%3C%3D2+age%3A2d
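
For readability, the foreach filter encoded in that URL expands to:

    (project:openstack/tempest OR project:openstack-dev/grenade OR
     project:openstack/qa-specs) status:open NOT owner:self
    NOT label:Workflow<=-1 label:Verified>=1,jenkins
    NOT label:Code-Review<=-1,self NOT label:Code-Review>=1,self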

The url can be regenerated easily using the gerrit-dash-creator.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Jay Pipes

On 06/09/2014 12:50 PM, Devananda van der Veen wrote:

There may be some problems with MySQL when testing parallel writes in
different non-committing transactions, even in READ COMMITTED mode,
due to InnoDB locking, if the queries use non-unique secondary indexes
for UPDATE or SELECT..FOR UPDATE queries. This is done by the
with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
in Nova. So I would not recommend this approach, even though, in
principle, I agree it would be a much more efficient way of testing
database reads/writes.

More details here:
http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html


Hi Deva,

MySQL/InnoDB's default isolation mode is REPEATABLE_READ, not 
READ_COMMITTED... are you saying that somewhere in the Ironic codebase 
we are setting the isolation mode manually to READ_COMMITTED for some 
reason?


Best,
-jay


On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:

Hi Mike,


However, when testing an application that uses a fixed set of tables, as should 
be the case for the majority if not all Openstack apps, there’s no reason that 
these tables need to be recreated for every test.


This is a very good point. I tried to use the recipe from SQLAlchemy
docs to run Nova DB API tests (yeah, I know, this might sound
confusing, but these are actually methods that access the database in
Nova) on production backends (MySQL and PostgreSQL). The abandoned
patch is here [1]. Julia Varlamova has been working on rebasing this
on master and should upload a new patch set soon.

Overall, the approach of executing a test within a transaction and
then emitting ROLLBACK worked quite well. The only problem I ran into
was tests doing ROLLBACK on purpose. But you've updated the recipe
since then and this can probably be solved by using savepoints. I
used a separate DB per test-running process to prevent race
conditions, but we should definitely give the READ COMMITTED approach a
try. If it works, that will be awesome.

With a few tweaks of PostgreSQL config I was able to run Nova DB API
tests in 13-15 seconds, while SQLite in memory took about 7s.

Action items for me and Julia probably: [2] needs a spec with [1]
updated accordingly. Using this 'test in a transaction' approach
seems to be the way to go for running all DB-related tests except the
ones using DDL statements (as any DDL statement commits the current
transaction implicitly on MySQL and SQLite, AFAIK).

Thanks,
Roman

[1] https://review.openstack.org/#/c/33236/
[2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer mba...@redhat.com wrote:


On Jun 6, 2014, at 8:12 PM, Devananda van der Veen devananda@gmail.com
wrote:

I think some things are broken in the oslo-incubator db migration code.

Ironic moved to this when Juno opened and things seemed fine, until recently
when Lucas tried to add a DB migration and noticed that it didn't run... So
I looked into it a bit today. Below are my findings.

Firstly, I filed this bug and proposed a fix, because I think that tests
that don't run any code should not report that they passed -- they should
report that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine

Then, I edited the test_migrations.conf file appropriately for my local
mysql service, ran the tests again, and verified that migration tests ran --
and they passed. Great!

Now, a little background... Ironic's TestMigrations class inherits from
oslo's BaseMigrationTestCase, then opportunistically checks each back-end,
if it's available. This opportunistic checking was inherited from Nova so
that tests could pass on developer workstations where not all backends are
present (eg, I have mysql installed, but not postgres), and still
transparently run on all backends in the gate. I couldn't find such
opportunistic testing in the oslo db migration test code, unfortunately -
but maybe it's well hidden.

Anyhow. When I stopped the local mysql service (leaving the configuration
unchanged), I expected the tests to be skipped, but instead I got two
surprise failures:
- test_mysql_opportunistically() failed because setUp() raises an exception
before the test code could call _have_mysql()
- test_mysql_connect_fail() actually failed! Again, because setUp() raises
an exception before running the test itself

Unfortunately, there's one more problem... when I run the tests in parallel,
they fail randomly because sometimes two test threads run different
migration tests, and the setUp() for one thread (remember, it calls
_reset_databases) blows up the other test.

Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
database exists
   ProgrammingError: 

Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and context

2014-06-09 Thread Steve Gordon
- Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com, OpenStack 
 Development Mailing List (not for usage
 
 Just adding openstack-dev to the CC for now :).
 
 - Original Message -
  From: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com
  Subject: Re: NFV in OpenStack use cases and context
  
  Can we look at them one by one?
  
  Use case 1 - It's pure IaaS
  Use case 2 - Virtual network function as a service. It's actually about
  exposing services to end customers (enterprises) by the service provider.
  Use case 3 - VNPaaS - is similar to #2 but at the service level. At larger
  scale and not at the app level only.
  Use case 4 - VNF forwarding graphs. It's actually about dynamic
  connectivity between apps.
  Use case 5 - vEPC and vIMS - Those are very specific (good) examples of SP
  services to be deployed.
  Use case 6 - virtual mobile base station. Another very specific example,
  with different characteristics than the other two above.
  Use case 7 - Home virtualisation.
  Use case 8 - Virtual CDN
  
  As I see it those have totally different relevance to OpenStack.
  Assuming we don't want to boil the ocean here...
  
  1-3 seem to me less relevant here.
  4 seems to be a Neutron area.
  5-8 seem to be useful for understanding the needs of the NFV apps. The use
  cases can help to map those needs.
  
  For 4 I guess the main part is about chaining and Neutron between DCs.
  Some may call it SDN in the WAN...
  
  For 5-8, in the end, an option is to map all those into:
  - performance (net BW, storage BW mainly). That can be mapped to SR-IOV,
  NUMA, etc.
  - determinism, i.e. especially minimising noisy neighbours. Not sure
  how NFV is special here, but for sure it's a major concern for a lot of SPs.
  That can be mapped to huge pages, cache QoS, etc.
  - overcoming short-term hurdles (just because of app migration
  issues). A small example is the need to define the tick policy of KVM just
  because that's what the app needs. Again, not sure how NFV-specific it is,
  and again a major concern mainly of application owners in the NFV domain.
  
  Make sense?

Hi Itai,

This makes sense to me. I think what we need to expand upon, with the ETSI NFV 
documents as a reference, is a two-to-three-paragraph explanation of each use 
case at a more basic level - ideally on the wiki page. It seems that 
use case 5 might make a particularly good initial target to work on fleshing 
out as an example. We could then look at linking the use case to concrete 
requirements based on this; I suspect we might want to break them down into:

a) The bare minimum requirements for OpenStack to support the use case at all. 
That is, requirements that without which the VNF simply can not function.

b) The requirements that are not mandatory but would be beneficial for 
OpenStack to support for the use case. In particular, these might be requirements 
that would improve VNF performance or reliability by some margin (possibly 
significantly) but which the VNF can function without if absolutely necessary.

Thoughts?

Steve



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Arbitrary extra specs for compute nodes?

2014-06-09 Thread Jay Pipes

On 06/09/2014 01:38 PM, Joe Cropper wrote:

On Mon, Jun 9, 2014 at 5:17 AM, Day, Phil philip@hp.com wrote:

Hi Joe,



Can you give some examples of what that data would be used for ?


Sure!  For example, in the PowerKVM world, hosts can be dynamically
configured to run in split-core processor mode.  This setting can be
dynamically changed and it'd be nice to allow the driver to track this
somehow -- and it probably doesn't warrant its own explicit field in
compute_node.  Likewise, PowerKVM also has a concept of the maximum
SMT level in which its guests can run (which can also vary dynamically
based on the split-core setting) and it would also be nice to tie such
settings to the compute node.


That information is typically stored in the compute_node.cpu_info field.


Overall, this would give folks writing compute drivers the ability to
attach the extra spec style data to a compute node for a variety of
purposes -- two simple examples provided above, but there are many
more.  :-)


If it's something that the driver can discover on its own and that the 
driver can/should use in determining the capabilities that the 
hypervisor offers, then at this point, I believe compute_node.cpu_info 
is the place to put that information. It's probably worth renaming the 
cpu_info field to just capabilities instead, to be more generic and 
indicate that it's a place the driver stores discoverable capability 
information about the node...
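
For example, the libvirt driver already serializes a JSON blob along these
lines into cpu_info. This is a sketch only; the split_core and max_smt_level
keys are hypothetical additions in the spirit of this thread, not existing
driver fields:

    import json

    cpu_info = json.dumps({
        'arch': 'ppc64',
        'model': 'POWER8',
        'vendor': 'IBM',
        'topology': {'sockets': 1, 'cores': 10, 'threads': 8},
        'features': [],
        # hypothetical driver-discovered capabilities discussed above
        'split_core': True,
        'max_smt_level': 4,
    })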


Now, for *user-defined* taxonomies, I'm a big fan of simple string 
tagging, as is proposed for the server instance model in this spec:


https://review.openstack.org/#/c/91444/

Best,
jay





It sounds on the face of it that what you’re looking for is pretty similar
to what Extensible Resource Tracker sets out to do
(https://review.openstack.org/#/c/86050
https://review.openstack.org/#/c/71557)


Thanks for pointing this out.  I actually ran across these while I was
searching the code to see what might already exist in this space.
Actually, the compute node 'stats' was always a first guess, but these
are clearly heavily reserved for the resource tracker and wind up
getting purged/deleted over time since the 'extra specs' I reference
above aren't necessarily tied to the spawning/deleting of instances.
In other words, they're not really consumable resources, per-se.
Unless I'm overlooking a way (perhaps I am) to use this
extensible-resource-tracker blueprint for arbitrary key-value pairs
**not** related to instances, I think we need something additional?

I'd happily create a new blueprint for this as well.





Phil



From: Joe Cropper [mailto:cropper@gmail.com]
Sent: 07 June 2014 07:30
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Arbitrary extra specs for compute nodes?



Hi Folks,

I was wondering if there was any such mechanism in the compute node
structure to hold arbitrary key-value pairs, similar to flavors'
extra_specs concept?

It appears there are entries for things like pci_stats, stats and recently
added extra_resources -- but these all tend to have more specific usages vs.
just arbitrary data that may want to be maintained about the compute node
over the course of its lifetime.

Unless I'm overlooking an existing construct for this, would this be
something that folks would welcome a Juno blueprint for--i.e., adding
extra_specs style column with a JSON-formatted string that could be loaded
as a dict of key-value pairs?

Thoughts?

Thanks,

Joe


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Jay Pipes

On 06/09/2014 12:32 PM, Chris Friesen wrote:

On 06/09/2014 07:59 AM, Jay Pipes wrote:

On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

Forcing an instance to a specific host is very useful for the
operator - it fulfills a valid use case for monitoring and testing
purposes.


Pray tell, what is that valid use case?


I find it useful for setting up specific testcases when trying to
validate things... put *this* instance on *this* host, put *those*
instances on *those* hosts, now pull the power plug on *this* host...etc.


So, violating the main design tenet of cloud computing: thou shalt not 
care what physical machine your virtual machine lives on. :)



I wouldn't expect the typical openstack end-user to need it though.


Me either :)

I will point out, though, that it is indeed possible to achieve the same 
use case using host aggregates in a way that would not break the main design 
tenet of cloud computing... just make two host aggregates, one for each 
compute node involved in your testing, and then simply supply scheduler 
hints that would only match one aggregate or the other.
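
For instance, assuming the AggregateInstanceExtraSpecsFilter is enabled,
something along these lines expresses the placement through aggregate
metadata and flavor extra specs rather than naming hosts in the boot request
(all names are illustrative):

    nova aggregate-create test-agg-a
    nova aggregate-add-host test-agg-a compute-node-1
    nova aggregate-set-metadata test-agg-a testgroup=a

    nova flavor-create m1.test-a auto 2048 20 2
    nova flavor-key m1.test-a set aggregate_instance_extra_specs:testgroup=a

    nova boot --flavor m1.test-a --image IMAGE_ID test-vm-a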


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Jay Pipes

On 06/09/2014 12:47 PM, Joe Cropper wrote:

There may also be specific software entitlement issues that make it
useful to deterministically know which host your VM will be placed on.
This can be quite common in large organizations that have certain
software that is tied to specific hardware, or to hardware with a certain
amount of CPU capacity, etc.


Sure, agreed. However the cloudy way of doing things (as opposed to 
the enterprise IT/managed hosting way of doing things) is to rely on 
abstractions like host aggregates and not allow details of the physical 
host machine to leak out of the public cloud API.


Best,
-jay



On Mon, Jun 9, 2014 at 11:32 AM, Chris Friesen
chris.frie...@windriver.com mailto:chris.frie...@windriver.com wrote:

On 06/09/2014 07:59 AM, Jay Pipes wrote:

On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

Forcing an instance to a specific host is very useful for the
operator - it fulfills a valid use case for monitoring and
testing
purposes.


Pray tell, what is that valid use case?


I find it useful for setting up specific testcases when trying to
 validate things... put *this* instance on *this* host, put *those*
instances on *those* hosts, now pull the power plug on *this*
host...etc.

I wouldn't expect the typical openstack end-user to need it though.

Chris

_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.__org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/__cgi-bin/mailman/listinfo/__openstack-dev 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Infra] Mid-Cycle Meet-up

2014-06-09 Thread Matthew Treinish
On Thu, May 29, 2014 at 12:07:07PM -0400, Matthew Treinish wrote:
 
 Hi Everyone,
 
 So we'd like to announce to everyone that we're going to be doing a combined
 Infra and QA program mid-cycle meet-up. It will be the week of July 14th in
 Darmstadt, Germany at Deutsche Telekom who has graciously offered to sponsor 
 the
 event. The plan is to use the week as both a time for face to face 
 collaboration
 for both programs respectively as well as having a couple days of 
 bootstrapping
 for new users/contributors. The intent was that this would be useful for 
 people
 who are interested in contributing to either Infra or QA, and those who are
 running third party CI systems.
 
 The current break down for the week that we're looking at is:
 
 July 14th: Infra
 July 15th: Infra
 July 16th: Bootstrapping for new users
 July 17th: More bootstrapping
 July 18th: QA
 
 We still have to work out more details, and will follow up once we have them.
 But, we thought it would be better to announce the event earlier so people can
 start to plan travel if they need it.
 
 
 Thanks,
 
 Matt Treinish
 Jim Blair


Just a quick follow-up, the agenda has changed slightly based on room
availability since I first sent out the announcement. You can find up-to-date
information on the meet-up wiki page:

https://wiki.openstack.org/wiki/Qa_Infra_Meetup_2014

Once we work out a detailed agenda of discussion topics/work items for the 3
discussion days I'll update the wiki page.

Also, if you're intending to attend please put your name on the wiki page's
registration section.

Thanks,

Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday June 10th at 19:00 UTC

2014-06-09 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday June 10th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-09 Thread Sean Dague
On 06/09/2014 01:38 PM, David Kranz wrote:
 On 06/02/2014 06:57 AM, Sean Dague wrote:
 Towards the end of the summit there was a discussion about us using a
 shared review dashboard to see if a common view by the team would help
 accelerate people looking at certain things. I spent some time this
 weekend working on a tool to make building custom dashboard urls much
 easier.

 My current proposal is the following, and would like comments on it:
 https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash

 All items in the dashboard are content that you've not voted on in the
 current patch revision, that you don't own, and that have passing
 Jenkins test results.

 1. QA Specs - these need more eyes, so we highlight them at top of page
 2. Patches that are older than 5 days, with no code review
 3. Patches that you are listed as a reviewer on, but haven't voting on
 current version
 4. Patches that already have a +2, so should be landable if you agree.
 5. Patches that have no negative code review feedback on them
 6. Patches older than 2 days, with no code review
 Thanks, Sean. This is working great for me, but I think there is another
 important item that is missing and I hope it is possible to add, perhaps
 even among the most important items:
 
 Patches that you gave a -1, but the response is a comment explaining why
 the -1 should be withdrawn rather than a new patch.

So how does one automatically detect those using the gerrit query language?

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] New extra spec operator proposed

2014-06-09 Thread Maldonado, Facundo N
Hi folks,
I submitted a new blueprint proposing the addition of a new operator to the 
existing ones.

BP: 
https://blueprints.launchpad.net/nova/+spec/add-all-in-list-operator-to-extra-spec-ops
Spec review: https://review.openstack.org/#/c/98179/

What do you think?

Thanks,
Facundo.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Devananda van der Veen
On Mon, Jun 9, 2014 at 10:49 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 06/09/2014 12:50 PM, Devananda van der Veen wrote:

 There may be some problems with MySQL when testing parallel writes in
 different non-committing transactions, even in READ COMMITTED mode,
 due to InnoDB locking, if the queries use non-unique secondary indexes
 for UPDATE or SELECT..FOR UPDATE queries. This is done by the
 with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
 in Nova. So I would not recommend this approach, even though, in
 principle, I agree it would be a much more efficient way of testing
 database reads/writes.

 More details here:
 http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
 http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html


 Hi Deva,

 MySQL/InnoDB's default isolation mode is REPEATABLE_READ, not
 READ_COMMITTED... are you saying that somewhere in the Ironic codebase we
 are setting the isolation mode manually to READ_COMMITTED for some reason?

 Best,
 -jay


Jay,

Not saying that at all. I was responding to Mike's suggested approach
for testing DB changes (which was actually off topic from my original
post), in which he suggested using READ_COMMITTED.

-Deva


 On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka rpodoly...@mirantis.com
 wrote:

 Hi Mike,

 However, when testing an application that uses a fixed set of tables,
 as should be the case for the majority if not all Openstack apps, 
 there’s no
 reason that these tables need to be recreated for every test.


 This is a very good point. I tried to use the recipe from SQLAlchemy
 docs to run Nova DB API tests (yeah, I know, this might sound
 confusing, but these are actually methods that access the database in
 Nova) on production backends (MySQL and PostgreSQL). The abandoned
 patch is here [1]. Julia Varlamova has been working on rebasing this
 on master and should upload a new patch set soon.

 Overall, the approach with executing a test within a transaction and
 then emitting ROLLBACK worked quite well. The only problem I ran into
 were tests doing ROLLBACK on purpose. But you've updated the recipe
 since then and this can probably be solved by using savepoints. I
 used a separate DB per test-running process to prevent race
 conditions, but we should definitely give the READ COMMITTED approach a
 try. If it works, that will be awesome.

 With a few tweaks of PostgreSQL config I was able to run Nova DB API
 tests in 13-15 seconds, while SQLite in memory took about 7s.

 Action items for me and Julia probably: [2] needs a spec with [1]
 updated accordingly. Using of this 'test in a transaction' approach
 seems to be a way to go for running all db related tests except the
 ones using DDL statements (as any DDL statement commits the current
 transaction implicitly on MySQL and SQLite AFAIK).

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/33236/
 [2]
 https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

 On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer mba...@redhat.com wrote:


 On Jun 6, 2014, at 8:12 PM, Devananda van der Veen
 devananda@gmail.com
 wrote:

 I think some things are broken in the oslo-incubator db migration code.

 Ironic moved to this when Juno opened and things seemed fine, until
 recently
 when Lucas tried to add a DB migration and noticed that it didn't run...
 So
 I looked into it a bit today. Below are my findings.

 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they
 should
 report that they were skipped.
https://bugs.launchpad.net/oslo/+bug/1327397
No notice given when db migrations are not run due to missing
 engine

 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests
 ran --
 and they passed. Great!

 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then opportunistically checks each
 back-end,
 if it's available. This opportunistic checking was inherited from Nova
 so
 that tests could pass on developer workstations where not all backends
 are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately
 -
 but maybe it's well hidden.

 Anyhow. When I stopped the local mysql service (leaving the
 configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an
 exception
  before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp()
 raises
 an exception before running the test itself

 Unfortunately, there's one more problem... when I run the tests in
 parallel,
 they fail randomly because 
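
For readers unfamiliar with the locking pattern referred to earlier in this
thread, the with_lockmode('update') idiom looks roughly like the sketch below
(SQLAlchemy 0.9-era API; the model, column and connection URL are placeholders
rather than actual Nova or Ironic code):

# Rough illustration of the SELECT ... FOR UPDATE pattern discussed above.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Node(Base):
    __tablename__ = 'nodes'
    id = Column(Integer, primary_key=True)
    reservation = Column(String(255), index=True)  # non-unique secondary index


engine = create_engine('mysql://user:password@localhost/test')  # placeholder DSN
Session = sessionmaker(bind=engine)
session = Session()

node = (session.query(Node)
        .filter_by(reservation=None)
        .with_lockmode('update')  # emits SELECT ... FOR UPDATE
        .first())
# On MySQL/InnoDB the SELECT ... FOR UPDATE locks the index records scanned to
# locate the row; because 'reservation' is a non-unique secondary index, two
# concurrent, uncommitted transactions running the same query can block or
# deadlock -- which is why parallel non-committing test transactions can
# interfere with each other.
session.rollback()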

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Jay Pipes

On 06/09/2014 02:57 PM, Devananda van der Veen wrote:

On Mon, Jun 9, 2014 at 10:49 AM, Jay Pipes jaypi...@gmail.com wrote:

On 06/09/2014 12:50 PM, Devananda van der Veen wrote:


There may be some problems with MySQL when testing parallel writes in
different non-committing transactions, even in READ COMMITTED mode,
due to InnoDB locking, if the queries use non-unique secondary indexes
for UPDATE or SELECT..FOR UPDATE queries. This is done by the
with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
in Nova. So I would not recommend this approach, even though, in
principle, I agree it would be a much more efficient way of testing
database reads/writes.

More details here:
http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html



Hi Deva,

MySQL/InnoDB's default isolation mode is REPEATABLE_READ, not
READ_COMMITTED... are you saying that somewhere in the Ironic codebase we
are setting the isolation mode manually to READ_COMMITTED for some reason?

Best,
-jay



Jay,

Not saying that at all. I was responding to Mike's suggested approach
for testing DB changes (which was actually off topic from my original
post), in which he suggested using READ_COMMITTED.


Apologies, thx for the clarification, Deva,

-jay


-Deva




On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka rpodoly...@mirantis.com
wrote:


Hi Mike,


However, when testing an application that uses a fixed set of tables,
as should be the case for the majority if not all Openstack apps, there’s no
reason that these tables need to be recreated for every test.



This is a very good point. I tried to use the recipe from SQLAlchemy
docs to run Nova DB API tests (yeah, I know, this might sound
confusing, but these are actually methods that access the database in
Nova) on production backends (MySQL and PostgreSQL). The abandoned
patch is here [1]. Julia Varlamova has been working on rebasing this
on master and should upload a new patch set soon.

Overall, the approach with executing a test within a transaction and
then emitting ROLLBACK worked quite well. The only problem I ran into
were tests doing ROLLBACK on purpose. But you've updated the recipe
since then and this can probably be solved by using savepoints. I
used a separate DB per test-running process to prevent race
conditions, but we should definitely give the READ COMMITTED approach a
try. If it works, that will be awesome.

With a few tweaks of PostgreSQL config I was able to run Nova DB API
tests in 13-15 seconds, while SQLite in memory took about 7s.

Action items for me and Julia probably: [2] needs a spec with [1]
updated accordingly. Using of this 'test in a transaction' approach
seems to be a way to go for running all db related tests except the
ones using DDL statements (as any DDL statement commits the current
transaction implicitly on MySQL and SQLite AFAIK).

Thanks,
Roman

[1] https://review.openstack.org/#/c/33236/
[2]
https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer mba...@redhat.com wrote:



On Jun 6, 2014, at 8:12 PM, Devananda van der Veen
devananda@gmail.com
wrote:

I think some things are broken in the oslo-incubator db migration code.

Ironic moved to this when Juno opened and things seemed fine, until
recently
when Lucas tried to add a DB migration and noticed that it didn't run...
So
I looked into it a bit today. Below are my findings.

Firstly, I filed this bug and proposed a fix, because I think that tests
that don't run any code should not report that they passed -- they
should
report that they were skipped.
https://bugs.launchpad.net/oslo/+bug/1327397
No notice given when db migrations are not run due to missing
engine

Then, I edited the test_migrations.conf file appropriately for my local
mysql service, ran the tests again, and verified that migration tests
ran --
and they passed. Great!

Now, a little background... Ironic's TestMigrations class inherits from
oslo's BaseMigrationTestCase, then opportunistically checks each
back-end,
if it's available. This opportunistic checking was inherited from Nova
so
that tests could pass on developer workstations where not all backends
are
present (eg, I have mysql installed, but not postgres), and still
transparently run on all backends in the gate. I couldn't find such
opportunistic testing in the oslo db migration test code, unfortunately
-
but maybe it's well hidden.

Anyhow. When I stopped the local mysql service (leaving the
configuration
unchanged), I expected the tests to be skipped, but instead I got two
surprise failures:
- test_mysql_opportunistically() failed because setUp() raises an
exception
before the test code could call _have_mysql()
- test_mysql_connect_fail() actually failed! Again, because setUp()
raises
an exception before running the test itself

Unfortunately, there's one more problem... when I run the tests in
parallel,
they 

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Alex Glikson
 So maybe the problem isn't having the flavors so much, but in how the 
user currently has to specify an exact match from that list.
If the user could say "I want a flavor with these attributes" and then the 
system would find a "best match" based on criteria set by the cloud admin, 
then would that be a more user-friendly solution? 

Interesting idea. Any thoughts on how this can be achieved?

Alex




From:   Day, Phil philip@hp.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   06/06/2014 12:38 PM
Subject:Re: [openstack-dev] [nova] Proposal: Move CPU and memory 
allocation ratio out of scheduler



 
From: Scott Devoid [mailto:dev...@anl.gov] 
Sent: 04 June 2014 17:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory 
allocation ratio out of scheduler
 
Not only live upgrades but also dynamic reconfiguration. 

Overcommitting affects the quality of service delivered to the cloud user. 
 In this situation in particular, as in many situations in general, I 
think we want to enable the service provider to offer multiple qualities 
of service.  That is, enable the cloud provider to offer a selectable 
level of overcommit.  A given instance would be placed in a pool that is 
dedicated to the relevant level of overcommit (or, possibly, a better pool 
if the selected one is currently full).  Ideally the pool sizes would be 
dynamic.  That's the dynamic reconfiguration I mentioned preparing for. 
 
+1 This is exactly the situation I'm in as an operator. You can do 
different levels of overcommit with host-aggregates and different flavors, 
but this has several drawbacks:
1.  The nature of this is slightly exposed to the end-user, through 
extra-specs and the fact that two flavors cannot have the same name. One 
scenario we have is that we want to be able to document our flavor 
names--what each name means, but we want to provide different QoS 
standards for different projects. Since flavor names must be unique, we 
have to create different flavors for different levels of service. 
Sometimes you do want to lie to your users!
[Day, Phil] I agree that there is a problem with having every new option 
we add in extra_specs leading to a new set of flavors. There are a 
number of changes up for review to expose more hypervisor capabilities via 
extra_specs that also have this potential problem. What I'd really like 
to be able to ask for as a user is something like "a medium instance with 
a side order of overcommit", rather than have to choose from a long list 
of variations. I did spend some time trying to think of a more elegant 
solution, but as the user wants to know what combinations are available 
it pretty much comes down to needing that full list of combinations 
somewhere. So maybe the problem isn't having the flavors so much, but 
in how the user currently has to specify an exact match from that list.
If the user could say "I want a flavor with these attributes" and then the 
system would find a "best match" based on criteria set by the cloud admin 
(for example, I might or might not want to allow a request for an 
overcommitted instance to use my not-overcommitted flavor, depending on the 
roles of the tenant), then would that be a more user-friendly solution? 
 
2.  If I have two pools of nova-compute HVs with different overcommit 
settings, I have to manage the pool sizes manually. Even if I use puppet 
to change the config and flip an instance into a different pool, that 
requires me to restart nova-compute. Not an ideal situation.
[Day, Phil] If the pools are aggregates, and the overcommit is defined by 
aggregate metadata, then I don't see why you need to restart 
nova-compute.
3.  If I want to do anything complicated, like 3 overcommit tiers with 
good, better, best performance and allow the scheduler to pick 
better for a good instance if the good pool is full, this is very 
hard and complicated to do with the current system.
[Day, Phil] Yep, a combination of filters and weighting functions would 
allow you to do this - it's not really tied to whether the overcommit is 
defined in the scheduler or the host though, as far as I can see. 
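
As a concrete illustration of the aggregate-based approach described above,
here is a rough sketch using python-novaclient; it assumes the scheduler has
AggregateCoreFilter enabled (so that cpu_allocation_ratio aggregate metadata
overrides the global setting), and all names, credentials and ratios are
placeholders:

from novaclient import client

# Placeholders only -- credentials, endpoint, hosts and ratios are examples.
nova = client.Client('2', 'admin', 'secret', 'admin',
                     'http://keystone.example.com:5000/v2.0')

# One aggregate per quality-of-service tier, each with its own overcommit.
best = nova.aggregates.create('tier-best', None)
nova.aggregates.set_metadata(best, {'cpu_allocation_ratio': '1.0'})

good = nova.aggregates.create('tier-good', None)
nova.aggregates.set_metadata(good, {'cpu_allocation_ratio': '4.0'})

# Moving a compute host between tiers is then just an API call; no
# nova-compute restart is required for the new ratio to take effect.
nova.aggregates.add_host(good, 'compute-01')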
 
I'm looking forward to seeing this in nova-specs!
~ Scott
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-09 Thread Eoghan Glynn


  So there are certain words that mean certain things, most don't, some do.
 
  If words that mean certain things are used then some folks start using
  the word and have expectations around the word and the OpenStack
  Technical Committee and other OpenStack programs find themselves on the
  hook for behaviours that they didn't agree to.
 
  Currently the word under discussion is certified and its derivatives:
  certification, certifying, and others with root word certificate.
 
  This came to my attention at the summit with a cinder summit session
  with one of the certificate words in the title. I had thought my
  point had been made but it appears that there needs to be more
  discussion on this. So let's discuss.
 
  Let's start with the definition of certify:
  cer·ti·fy
  verb (used with object), cer·ti·fied, cer·ti·fy·ing.
  1. to attest as certain; give reliable information of; confirm: He
  certified the truth of his claim.
  2. to testify to or vouch for in writing: The medical examiner will
  certify his findings to the court.
  3. to guarantee; endorse reliably: to certify a document with an
  official seal.
  4. to guarantee (a check) by writing on its face that the account
  against which it is drawn has sufficient funds to pay it.
  5. to award a certificate to (a person) attesting to the completion of a
  course of study or the passing of a qualifying examination.
  Source: http://dictionary.reference.com/browse/certify
 
  The issue I have with the word certify is that it requires someone or a
  group of someones to attest to something. The thing attested to is only
  as credible as the someone or the group of someones doing the attesting.
  We have no process, nor do I feel we want to have a process for
  evaluating the reliability of the someones or groups of someones doing
  the attesting.
 
  I think that having testing in place in line with other programs testing
  of patches (third party ci) in cinder should be sufficient to address
  the underlying concern, namely reliability of opensource hooks to
  proprietary code and/or hardware. I would like the use of the word
  certificate and all its roots to no longer be used in OpenStack
  programs with regard to testing. This won't happen until we get some
  discussion and agreement on this, which I would like to have.
 
  Thank you for your participation,
  Anita.
  
  Hi Anita,
  
  Just a note on cross-posting to both the os-dev and os-tc lists.
  
  Anyone not on the TC who hits reply-all is likely to see their
  post be rejected by the TC list moderator, but go through to the
  more open dev list.
  
  As a result, the thread diverges (as we saw with the recent election
  stats/turnout thread).
  
  Also, moderation rejects are an unpleasant user experience.
  
  So if a post is intended to reach out for input from the wider dev
  community, it's better to post *only* to the -dev list, or vice versa
  if you want to interact with a narrower audience.
 My post was intended to include the tc list in the discussion
 
 I have no say in what posts the tc email list moderator accepts or does
 not, or how those posts not accepted are informed of their status.

Well the TC list moderation policy isn't so much the issue here, as the
practice of cross-posting between open- and closed-moderation lists.

Even absent strict moderation being applied, as hasn't been the case for
this thread, cross-posting still tends to cause divergence of threads due
to moderator-lag and individuals choosing not to cross-post their replies.

The os-dev subscriber list should be a strict super-set of the os-tc list,
so anything posted just to the former will naturally be visible to the TC
membership also.

Thanks,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-09 Thread Anita Kuno
On 06/09/2014 03:17 PM, Eoghan Glynn wrote:
 
 
 So there are certain words that mean certain things, most don't, some do.

 If words that mean certain things are used then some folks start using
 the word and have expectations around the word and the OpenStack
 Technical Committee and other OpenStack programs find themselves on the
 hook for behaviours that they didn't agree to.

 Currently the word under discussion is certified and its derivatives:
 certification, certifying, and others with root word certificate.

 This came to my attention at the summit with a cinder summit session
 with one of the certificate words in the title. I had thought my
 point had been made but it appears that there needs to be more
 discussion on this. So let's discuss.

 Let's start with the definition of certify:
 cer·ti·fy
 verb (used with object), cer·ti·fied, cer·ti·fy·ing.
 1. to attest as certain; give reliable information of; confirm: He
 certified the truth of his claim.
 2. to testify to or vouch for in writing: The medical examiner will
 certify his findings to the court.
 3. to guarantee; endorse reliably: to certify a document with an
 official seal.
 4. to guarantee (a check) by writing on its face that the account
 against which it is drawn has sufficient funds to pay it.
 5. to award a certificate to (a person) attesting to the completion of a
 course of study or the passing of a qualifying examination.
 Source: http://dictionary.reference.com/browse/certify

 The issue I have with the word certify is that it requires someone or a
 group of someones to attest to something. The thing attested to is only
 as credible as the someone or the group of someones doing the attesting.
 We have no process, nor do I feel we want to have a process for
 evaluating the reliability of the someones or groups of someones doing
 the attesting.

 I think that having testing in place in line with other programs testing
 of patches (third party ci) in cinder should be sufficient to address
 the underlying concern, namely reliability of opensource hooks to
 proprietary code and/or hardware. I would like the use of the word
 certificate and all its roots to no longer be used in OpenStack
 programs with regard to testing. This won't happen until we get some
 discussion and agreement on this, which I would like to have.

 Thank you for your participation,
 Anita.

 Hi Anita,

 Just a note on cross-posting to both the os-dev and os-tc lists.

 Anyone not on the TC who hits reply-all is likely to see their
 post be rejected by the TC list moderator, but go through to the
 more open dev list.

 As a result, the thread diverges (as we saw with the recent election
 stats/turnout thread).

 Also, moderation rejects are an unpleasant user experience.

 So if a post is intended to reach out for input from the wider dev
 community, it's better to post *only* to the -dev list, or vice versa
 if you want to interact with a narrower audience.
 My post was intended to include the tc list in the discussion

 I have no say in what posts the tc email list moderator accepts or does
 not, or how those posts not accepted are informed of their status.
 
 Well the TC list moderation policy isn't so much the issue here, as the
 practice of cross-posting between open- and closed-moderation lists.
 
 Even absent strict moderation being applied, as hasn't been the case for
 this thread, cross-posting still tends to cause divergence of threads due
 to moderator-lag and individuals choosing not to cross-post their replies.
 
 The os-dev subscriber list should be a strict super-set of the os-tc list,
 so anything posted just to the former will naturally be visible to the TC
 membership also.
 
 Thanks,
 Eoghan
 
I think you need to start a new topic with your thoughts on how the
email lists should be organized. This particular conversation doesn't
have much to do with the topic at hand anymore.

Thanks Eoghan,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Kurt Griffiths
Folks, this may be a bit of a bombshell, but I think we have been dancing 
around the issue for a while now and we need to address it head on. Let me 
start with some background.

Back when we started designing the Marconi API, we knew that we wanted to 
support several messaging patterns. We could do that using a unified queue 
resource, combining both task distribution and feed semantics. Or we could 
create disjoint resources in the API, or even create two separate services 
altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

  *   It would afford hybrid patterns, such as auditing or diagnosing a task 
distribution queue
  *   Once you implement guaranteed delivery for a message feed over HTTP, 
implementing task distribution is a relatively straightforward addition. If you 
want both types of semantics, you don’t necessarily gain anything by 
implementing them separately.

Lately we have been talking about writing drivers for traditional message 
brokers that will not be able to support the message feeds part of the API. 
I’ve started to think that having a huge part of the API that may or may not 
“work”, depending on how Marconi is deployed, is not a good story for users, 
esp. in light of the push to make different clouds more interoperable.

Therefore, I think we have a very big decision to make here as a team and a 
community. I see three options right now. I’ve listed several—but by no means 
conclusive—pros and cons for each, as well as some counterpoints, based on past 
discussions.

Option A. Allow drivers to only implement part of the API

For:

  *   Allows for a wider variety of backends. (counter: may create subtle 
differences in behavior between deployments)
  *   May provide opportunities for tuning deployments for specific workloads

Against:

  *   Makes it hard for users to create applications that work across multiple 
clouds, since critical functionality may or may not be available in a given 
deployment. (counter: how many users need cross-cloud compatibility? Can they 
degrade gracefully?)

Option B. Split the service in two. Different APIs, different services. One 
would be message feeds, while the other would be something akin to Amazon’s SQS.

For:

  *   Same as Option A, plus creates a clean line of functionality for 
deployment (deploy one service or the other, or both, with clear expectations 
of what messaging patterns are supported in any case).

Against:

  *   Removes support for hybrid messaging patterns (counter: how useful are 
such patterns in the first place?)
  *   Operators now have two services to deploy and support, rather than just 
one (counter: can scale them independently, perhaps leading to gains in 
efficiency)

Option C. Require every backend to support the entirety of the API as it now 
stands.

For:

  *   Least disruptive in terms of the current API design and implementation
  *   Affords a wider variety of messaging patterns (counter: YAGNI?)
  *   Reuses code in drivers and API between feed and task distribution 
operations (counter: there may be ways to continue sharing some code if the API 
is split)

Against:

  *   Requires operators to deploy a NoSQL cluster (counter: many operators are 
comfortable with NoSQL today)
  *   Currently requires MongoDB, which is AGPL (counter: a Redis driver is 
under development)
  *   A unified API is hard to tune for performance (counter: Redis driver 
should be able to handle high-throughput use cases, TBD)

I’d love to get everyone’s thoughts on these options; let's brainstorm for a 
bit, then we can home in on the option that makes the most sense. We may need 
to do some POCs or experiments to get enough information to make a good 
decision.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Samuel Bercovici
As far as I understand, the current Barbican implementation is immutable.
Can anyone from Barbican comment on this?

-Original Message-
From: Jain, Vivek [mailto:vivekj...@ebay.com] 
Sent: Monday, June 09, 2014 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

+1 for the idea of making certificate immutable.
However, if Barbican allows updating certs/containers then versioning is a must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:

Hi,

I think that option 2 should be preferred at this stage.
I also think that certificate should be immutable, if you want a new 
one, create a new one and update the listener to use it.
This removes any chance of mistakes, need for versioning etc.

-Sam.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 10:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on 
how Barbican and Neutron LBaaS will interact. There are currently two 
ideas in play and both will work. If you have another idea please free 
to add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets 
from Barbican. For those that aren't up to date with the Neutron LBaaS 
API Revision, the project/tenant/user provides a secret (container?) id 
when enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican 
then Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will 
be supported.
 - Decisions are made on behalf of the user which lessens the amount of 
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to 
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2... I 
think.

2. Push orchestration decisions to API users. This idea comes with two 
assumptions. The first assumption is that most providers' customers use 
the cloud via a GUI, which in turn can handle any orchestration 
decisions that need to be made. The second assumption is that power API 
users are savvy and can handle their decisions as well. Using this 
method requires services, such as LBaaS, to register in the form of 
metadata to a barbican container.

* Example: If a user makes a change to a secret the GUI can see which 
services are registered and opt to warn the user of consequences. Power 
users can look at the registered services and make decisions how they 
see fit.

PROS:
 - Very simple to implement. The only code needed to make this a 
reality is at the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in 
this case?
 - Pushes complexity of decision making on to GUI engineers and power 
API users.


I would like to get a consensus on which option to move forward with 
ASAP since the hackathon is coming up and delivering Barbican to 
Neutron LBaaS integration is essential to exposing SSL/TLS 
functionality, which almost everyone has stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My 
reason for choosing option #2 has to deal mostly with the simplicity of 
implementing such a mechanism. Simplicity also means we can implement 
the necessary code and get it approved much faster which seems to be a 
concern for everyone. What option does everyone else want to move 
forward with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [hacking] Hacking 0.9.1 released

2014-06-09 Thread Joe Gordon
Hi folks,

Hacking 0.9.1 has just been released (hacking 0.9.0 had a minor bug).
Unlike with other dependencies, the 'OpenStack Proposal Bot' does not
automatically push out a patch moving projects to the new version.

The recommended way to upgrade to hacking 0.9.1 is to add any new failing
tests to the exclude list in tox.ini and fix those in subsequent patches
(example: https://review.openstack.org/#/c/98864/).

pep8 1.5.x changed a whole bunch of internals, so when upgrading to the new
hacking please make sure your local checks still work.


best,
Joe

Release Notes:


   - New dependency versions, all with new features
   - pep8==1.5.6 (https://github.com/jcrocholl/pep8/blob/master/CHANGES.txt)
 - Report E129 instead of E125 for visually indented line with same
 indent as next logical line.
 - Report E265 for space before block comment.
 - Report E713 and E714 when operators ``not in`` and ``is not``
 are  recommended (taken from hacking).
  - Report E131 instead of E121 / E126 if the hanging indent is not
  consistent within the same continuation block.  It helps when error
  E121 or E126 is in the ``ignore`` list.
  - Report E126 instead of E121 when the continuation line is hanging
  with extra indentation, even if indentation is not a multiple of 4.
  - pyflakes==0.8.1
  - flake8==2.1.0
   - More rules support noqa
  - Added to: H701, H702, H232, H234, H235, H237
   - Gate on Python3 compatibility
   - Dropped H901,H902 as those are now in pep8 and enforced by E713 and
   E714
   - Support for separate localization catalogs
   - Rule numbers added to http://docs.openstack.org/developer/hacking/
   - Improved performance
   - New Rules:
  - H104  File contains nothing but comments
  - H305  imports not grouped correctly
  - H307  like imports should be grouped together
  - H405  multi line docstring summary not separated with an empty line
  - H904  Wrap long lines in parentheses instead of a backslash


Thank you to everyone who contributed to hacking 0.9.1:
* Joe Gordon
* Ivan A. Melnikov
* Ben Nemec
* Chang Bo Guo
* Nikola Dipanov
* Clay Gerrard
* Cyril Roelandt
* Dirk Mueller
* James E. Blair
* Jeremy Stanley
* Julien Danjou
* Lei Zhang
* Marc Abramowitz
* Mike Perez
* Radomir Dopieralski
* Samuel Merritt
* YAMAMOTO Takashi
* ZhiQiang Fan
* fujioka yuuichi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [UX] [Heat] [Mistral] [Murano] [Neutron] [Solum] Cross-project UI library: gathering the requirements

2014-06-09 Thread Timur Sufiev
Hi All,

At the Solum-Murano-Heat cross-project session [1] during the
OpenStack Juno Summit it was decided that it would be beneficial for
the Solum, Murano and Heat projects to implement common UX patterns in
a separate library. During an early discussion several more projects
were added (Mistral and Neutron), and an initial UI draft was proposed
[2]. That initial concept is just a first step in finding the common
ground between the needs of aforementioned projects and is much likely
to be reworked in future. So I’d like to initiate a discussion to
gather specific use cases from Solum, Heat and Neutron projects
(Murano and Mistral are already covered to some extent) as well as gather in
this thread all people who are interested in the project.

[1] https://etherpad.openstack.org/p/9XQ7Q2NQdv
[2] 
https://docs.google.com/a/mirantis.com/document/d/19Q9JwoO77724RyOp7XkpYmALwmdb7JjoQHcDv4ffZ-I/edit#

-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Tiwari, Arvind
As per the current implementation, containers are immutable. 
Do we have any use case for making them mutable? Can we live with creating a new 
container instead of updating an existing one?

Arvind 
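
To make the option #2 registration idea a little more concrete alongside
immutable containers, here is a minimal sketch of what the flow might look like
over HTTP; the /consumers sub-resource, field names and URLs are hypothetical
illustrations of the proposal, not an existing Barbican API:

# Hypothetical illustration of option #2: a service registers itself against
# an (immutable) Barbican container so that GUIs and power users can see which
# services depend on it before replacing it. Endpoint shape, field names and
# URLs below are made up for illustration only.
import json
import requests

BARBICAN = 'http://barbican.example.com:9311/v1'      # placeholder endpoint
CONTAINER = BARBICAN + '/containers/CONTAINER-UUID'   # placeholder container
HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

# Neutron LBaaS registers as a consumer when a listener starts using the
# contained certificate.
requests.post(CONTAINER + '/consumers', headers=HEADERS,
              data=json.dumps({
                  'name': 'neutron-lbaas',
                  'URL': 'http://neutron.example.com:9696/v2.0/lbaas/listeners/LISTENER-ID',
              }))

# A GUI or careful API user lists registered consumers before replacing or
# deleting the container, and can warn about anything still attached.
print(requests.get(CONTAINER + '/consumers', headers=HEADERS).json())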

-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com] 
Sent: Monday, June 09, 2014 1:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

As far as I understand, the current Barbican implementation is immutable.
Can anyone from Barbican comment on this?

-Original Message-
From: Jain, Vivek [mailto:vivekj...@ebay.com]
Sent: Monday, June 09, 2014 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

+1 for the idea of making certificate immutable.
However, if Barbican allows updating certs/containers then versioning is a must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:

Hi,

I think that option 2 should be preferred at this stage.
I also think that certificate should be immutable, if you want a new 
one, create a new one and update the listener to use it.
This removes any chance of mistakes, need for versioning etc.

-Sam.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 10:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on 
how Barbican and Neutron LBaaS will interact. There are currently two 
ideas in play and both will work. If you have another idea please free 
to add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets 
from Barbican. For those that aren't up to date with the Neutron LBaaS 
API Revision, the project/tenant/user provides a secret (container?) id 
when enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican 
then Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will 
be supported.
 - Decisions are made on behalf of the user which lessens the amount of 
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to 
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2... I 
think.

2. Push orchestration decisions to API users. This idea comes with two 
assumptions. The first assumption is that most providers' customers use 
the cloud via a GUI, which in turn can handle any orchestration 
decisions that need to be made. The second assumption is that power API 
users are savvy and can handle their decisions as well. Using this 
method requires services, such as LBaaS, to register in the form of 
metadata to a barbican container.

* Example: If a user makes a change to a secret the GUI can see which 
services are registered and opt to warn the user of consequences. Power 
users can look at the registered services and make decisions how they 
see fit.

PROS:
 - Very simple to implement. The only code needed to make this a 
reality is at the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in 
this case?
 - Pushes complexity of decision making on to GUI engineers and power 
API users.


I would like to get a consensus on which option to move forward with 
ASAP since the hackathon is coming up and delivering Barbican to 
Neutron LBaaS integration is essential to exposing SSL/TLS 
functionality, which almost everyone has stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My 
reason for choosing option #2 has to deal mostly with the simplicity of 
implementing such a mechanism. Simplicity also means we can implement 
the necessary code and get it approved much faster which seems to be a 
concern for everyone. What option does everyone else want to move 
forward with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Mike Bayer

On Jun 9, 2014, at 1:08 PM, Mike Bayer mba...@redhat.com wrote:

 
 On Jun 9, 2014, at 12:50 PM, Devananda van der Veen devananda@gmail.com 
 wrote:
 
 There may be some problems with MySQL when testing parallel writes in
 different non-committing transactions, even in READ COMMITTED mode,
 due to InnoDB locking, if the queries use non-unique secondary indexes
 for UPDATE or SELECT..FOR UPDATE queries. This is done by the
 with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
 in Nova. So I would not recommend this approach, even though, in
 principle, I agree it would be a much more efficient way of testing
 database reads/writes.
 
 More details here:
 http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
 http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html
 
 OK, but just to clarify my understanding, what is the approach to testing 
 writes in parallel right now, are we doing CREATE DATABASE for two entirely 
 distinct databases with some kind of generated name for each one?  Otherwise, 
 if the parallel tests are against the same database, this issue exists 
 regardless (unless autocommit mode is used, is FOR UPDATE accepted under 
 those conditions?)

Took a look and this seems to be the case, from oslo.db:

def create_database(engine):
    """Provide temporary user and database for each particular test."""
    driver = engine.name

    auth = {
        'database': ''.join(random.choice(string.ascii_lowercase)
                            for i in moves.range(10)),
        # ...
    }

    sqls = [
        "drop database if exists %(database)s;",
        "create database %(database)s;"
    ]

Just thinking out loud here, I’ll move these ideas to a new wiki page after 
this post. My idea now is that OK, we provide ad-hoc databases for tests, 
but look into the idea that we create N ad-hoc databases, corresponding to 
parallel test runs - e.g. if we are running five tests concurrently, we make 
five databases.   Tests that use a database will be dished out among this pool 
of available schemas.   In the *typical* case (which means not the case that 
we’re testing actual migrations, that’s a special case) we build up the schema 
on each database via migrations or even create_all() just once, run tests 
within rolled-back transactions one-per-database, then the DBs are torn down 
when the suite is finished.
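
To make the "run each test inside a rolled-back transaction" recipe concrete,
here is a minimal, self-contained sketch; it is simplified for illustration
(SQLite so it runs anywhere), and the real oslo/Nova fixtures would differ:

# Sketch of the recipe: build the schema once, then run each test inside an
# outer connection-level transaction that is rolled back in tearDown, so the
# tables never need to be recreated between tests.
import unittest

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)  # schema built once, not per test
Session = sessionmaker()


class TransactionalTestCase(unittest.TestCase):
    def setUp(self):
        self.conn = engine.connect()
        self.trans = self.conn.begin()          # outer transaction per test
        self.session = Session(bind=self.conn)  # session rides on it

    def tearDown(self):
        self.session.close()
        self.trans.rollback()                   # discard everything the test wrote
        self.conn.close()

    def test_insert_is_rolled_back(self):
        self.session.add(Widget(name='demo'))
        self.session.flush()
        self.assertEqual(1, self.session.query(Widget).count())


if __name__ == '__main__':
    unittest.main()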

Sorry for the thread hijack.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Designate Incubation Request

2014-06-09 Thread Mac Innes, Kiall
On Mon, 2014-06-09 at 07:25 -0400, Sean Dague wrote:
 On 06/06/2014 12:06 PM, Mac Innes, Kiall wrote:
  Several of the TC requested we have an openstack-infra managed DevStack
  gate enabled before they would cast their vote - I'm happy to say, we've
  got it :)
  
  With the merge of [1], Designate now has voting devstack /
  requirements / docs jobs. An example of the DevStack run is at [2].
  
  Vote Designate @ [3] :)
  
  Thanks,
  Kiall
  
  [1]: https://review.openstack.org/#/c/98439/
  [2]: https://review.openstack.org/#/c/98442/
  [3]: https://review.openstack.org/#/c/97609/
 
 I'm seeing in [2] api logs that something was run (at least 1 API
 request was processed), but it's hard to see where that is in the
 console logs. Pointers?
 
   -Sean
 

Hey Sean,

Yes - on Saturday, after sending this email on Friday, I noticed the
exercises were not running - devstack-gate has them disabled by default.

We landed a patch to the job this morning to allow us to run them, and
have a series of patches in the check/gate queues to enable the
exercises for all patches. An example of the output is at [1] - this
will be enabled for all patches once [2] lands.

Thanks,
Kiall

[1]:
http://logs.openstack.org/88/98788/6/check/gate-designate-devstack-dsvm/98b5704/console.html
[2]: https://review.openstack.org/#/c/98788/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic]Adding support for ManagedFRU in the IPMI driver

2014-06-09 Thread Lemieux, Luc
Hi I work for Kontron a hardware company that is a member of the foundation 
since this year.

One of our blade products holds 2 complete servers (i7 Haswell chip, 16 GB RAM, 
120 GB SSD each) that are managed by a single IPMI BMC (Baseboard Management 
Controller) using the IPMI ManagedFRU concept. This concept allows both 
servers to be individually managed through the one management IPMI address.

However, this concept was not accounted for in the Nova Baremetal driver and 
probably is not in Ironic either.

This ManagedFRU concept is common within ATCA hardware and uTCA expansion 
cards, and we think that this abstraction might become more and more present in 
future hardware that wants to provide as much processing as possible in as 
small a form factor as possible. Our SYMkloud box offers up to 9 nodes like the 
one I described earlier (so 18 i7 Haswell servers) in a 2U rack form factor.

Where should I start looking to determine whether this would be a useful 
long-term feature for Ironic: adding the ability to detect through the bootstrap 
whether a server is of the ManagedFRU type (that is, more than one server behind 
the same IPMI address), and then using Redirect-type IPMI commands (so a special 
driver, I guess) to manage those servers individually?

We as a company want to get involved in the community and see this as a 
possible contribution that we could make.

Thank you!

Luc Lemieux | Software Designer, Application Ready Platforms | Kontron Canada | 
T 450 437 4661 | E luc.lemi...@ca.kontron.com
Kontron Canada Inc
4555 Rue Ambroise-Lafortune
Boisbriand (Québec) J7H 0A4



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Arbitrary extra specs for compute nodes?

2014-06-09 Thread Joe Cropper
On Mon, Jun 9, 2014 at 12:56 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 06/09/2014 01:38 PM, Joe Cropper wrote:

 On Mon, Jun 9, 2014 at 5:17 AM, Day, Phil philip@hp.com wrote:

 Hi Joe,



 Can you give some examples of what that data would be used for ?


 Sure!  For example, in the PowerKVM world, hosts can be dynamically
 configured to run in split-core processor mode.  This setting can be
 dynamically changed and it'd be nice to allow the driver to track this
 somehow -- and it probably doesn't warrant its own explicit field in
 compute_node.  Likewise, PowerKVM also has a concept of the maximum
 SMT level in which its guests can run (which can also vary dynamically
 based on the split-core setting) and it would also be nice to tie such
 settings to the compute node.


 That information is typically stored in the compute_node.cpu_info field.


 Overall, this would give folks writing compute drivers the ability to
 attach the extra spec style data to a compute node for a variety of
 purposes -- two simple examples provided above, but there are many
 more.  :-)


 If it's something that the driver can discover on its own and that the
 driver can/should use in determining the capabilities that the hypervisor
 offers, then at this point, I believe compute_node.cpu_info is the place to
 put that information. It's probably worth renaming the cpu_info field to
 just capabilities instead, to be more generic and indicate that it's a
 place the driver stores discoverable capability information about the
 node...

Thanks, that's a great point!  While that's fair for those items that
are self-discoverable for the driver that also are cpu_info'ish in
nature, there are also some additional use cases I should mention.
Imagine some higher level projects [above nova] want to associate
arbitrary bits of information with the compute host for
project-specific uses.  For example, suppose I have an orchestration
project that does coordinated live migrations and I want to put some
specific restrictions on the # of concurrent migrations that should
occur for the respective compute node (and let the end-user adjust
these values).  Having it directly associated with the compute node in
nova gives us some nice ways to maintain data consistency.  I think this
would be a great way to gain some additional parity with some of the
other nova structures such as flavors' extra_specs and instances'
metadata/system_metadata.

Thanks,
Joe


 Now, for *user-defined* taxonomies, I'm a big fan of simple string tagging,
 as is proposed for the server instance model in this spec:

 https://review.openstack.org/#/c/91444/

 Best,
 jay





 It sounds on the face of it that what you’re looking for is pretty
 similar
 to what Extensible Resource Tracker sets out to do
 (https://review.openstack.org/#/c/86050
 https://review.openstack.org/#/c/71557)


 Thanks for pointing this out.  I actually ran across these while I was
 searching the code to see what might already exist in this space.
 Actually, the compute node 'stats' was always a first guess, but these
 are clearly heavily reserved for the resource tracker and wind up
 getting purged/deleted over time since the 'extra specs' I reference
 above aren't necessarily tied to the spawning/deleting of instances.
 In other words, they're not really consumable resources, per-se.
 Unless I'm overlooking a way (perhaps I am) to use this
 extensible-resource-tracker blueprint for arbitrary key-value pairs
 **not** related to instances, I think we need something additional?

 I'd happily create a new blueprint for this as well.




 Phil



 From: Joe Cropper [mailto:cropper@gmail.com]
 Sent: 07 June 2014 07:30
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] Arbitrary extra specs for compute nodes?



 Hi Folks,

 I was wondering if there was any such mechanism in the compute node
 structure to hold arbitrary key-value pairs, similar to flavors'
 extra_specs concept?

 It appears there are entries for things like pci_stats, stats and
 recently
 added extra_resources -- but these all tend to have more specific usages
 vs.
 just arbitrary data that may want to be maintained about the compute node
 over the course of its lifetime.

 Unless I'm overlooking an existing construct for this, would this be
 something that folks would welcome a Juno blueprint for--i.e., adding
 extra_specs style column with a JSON-formatted string that could be
 loaded
 as a dict of key-value pairs?

 Thoughts?

 Thanks,

 Joe


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-09 Thread Amir Sadoughi
Salvatore,

The 80% distinction came from a discussion I had at summit, representing that 
the majority of features described by the current security groups could be 
implemented today with OVS without connection tracking. It’s not based on any 
mathematical calculation… more of a pseudo-application of Pareto’s principle. :)

Correct, the OVS tcp_flags feature will be used to implement an emulated 
statefulness for TCP flows whereas non-TCP flows would use the 
source-port-range-min, source-port-range-max extended API to implement 
stateless flows.

Performance measurements would have to come after implementations are made for 
the proposed blueprint. Although, benchmarks of the two existing FirewallDriver 
implementations can be done today. We can measure number of concurrent 
connections until failure, overall bandwidth as percentage of line rate, etc. 
Are there any other specific metrics you would like to see in the benchmark?

Amir
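
As an illustration of the tcp_flags-based emulation mentioned above, here is a
rough sketch; the flow strings use ovs-ofctl add-flow match syntax (tcp_flags
matching requires OVS 2.1+), and the priorities, translation and overall layout
are assumptions for this sketch rather than the blueprint's actual design:

# Simplified illustration of how one ingress security group rule
# ("allow TCP/80 from 10.0.0.0/24") might translate into OVS flows without
# connection tracking. Only the tcp_flags=+ack trick reflects the blueprint
# text; everything else here is an assumption for illustration.
def ingress_tcp_rule_flows(vm_ofport, cidr, dst_port):
    """Return illustrative ovs-ofctl add-flow match/action strings."""
    return [
        # New inbound TCP connections explicitly allowed by the rule.
        'priority=100,tcp,nw_src=%s,tp_dst=%d,actions=output:%d'
        % (cidr, dst_port, vm_ofport),
        # Traffic belonging to connections the VM itself initiated is
        # approximated by matching the ACK bit (the emulated statefulness
        # described above), since there is no conntrack to mark ESTABLISHED.
        'priority=90,tcp,tcp_flags=+ack,actions=output:%d' % vm_ofport,
    ]


for flow in ingress_tcp_rule_flows(5, '10.0.0.0/24', 80):
    print(flow)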

On Jun 3, 2014, at 2:51 AM, Salvatore Orlando 
sorla...@nicira.com wrote:

I would like to understand how did we get to this 80%/20% distinction.
In other terms, it seems conntrack's RELATED features won't be supported for 
non-tcp traffic. What about the ESTABLISHED feature? The blueprint specs refers 
to tcp_flags=ack.
Or will that be supported through the source port matching extension which is 
being promoted?

More comments inline.

On 3 June 2014 01:22, Amir Sadoughi 
amir.sadou...@rackspace.com wrote:
Hi all,

In the Neutron weekly meeting today[0], we discussed the ovs-firewall-driver 
blueprint[1]. Moving forward, OVS features today will give us 80% of the 
iptables security groups behavior. Specifically, OVS lacks connection tracking 
so it won’t have a RELATED feature or stateful rules for non-TCP flows. (OVS 
connection tracking is currently under development, to be released by 2015[2]). 
To make the “20%” difference more explicit to the operator and end user, we 
have proposed feature configuration to provide security group rules API 
validation that would validate based on connection tracking ability, for 
example.

I am stilly generally skeptic of API changes which surface backend details on 
user-facing APIs. I understand why you are proposing this however, and I think 
it would be good to get first an assessment of the benefits brought by such a 
change before making a call on changing API behaviour to reflect security group 
implementation on the backend.


Several ideas floated up during the chat today, I wanted to expand the 
discussion to the mailing list for further debate. Some ideas include:
- marking ovs-firewall-driver as experimental in Juno
- What does it mean to be marked as “experimental”?

In this case experimental would be a way to say not 100% functional.  You 
would not expect a public service provider to expose neutron APIs backed by this 
driver, but maybe in some private deployments where the missing features are 
not a concern it could be used.

- performance improvements under a new OVS firewall driver untested so far 
(vthapar is working on this)

From the last comment in your post it seems you already have proof of the 
performance improvement, perhaps you can add those to the Performance Impact 
section on the spec.

- incomplete implementation will cause confusion, educational burden

It's more about technical debt in my opinion, but this is not necessarily the 
case.

- debugging OVS is new to users compared to debugging old iptables

This won't be a concern as long as we have good documentation to back the 
implementation.
As Neutron is usually sloppy with documentation, it's a concern.

- waiting for upstream OVS to implement (OpenStack K- or even L- cycle)

In my humble opinion, merging the blueprint for Juno will provide us a viable, 
more performant security groups implementation than what we have available 
today.

Amir


[0] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-02-21.01.log.html
[1] https://review.openstack.org/#/c/89712/
[2] http://openvswitch.org/pipermail/dev/2014-May/040567.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-09 Thread Amir Sadoughi
Paul,

Beyond explicit configuration for the cloud operator, documentation and API 
validation for the end user, is there anything specific you would like to see 
as a “warning label”? Does iptables do TCP sequence number validation? Where we 
can, we should strive to match iptables behavior.
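
To make the API-validation part of that concrete, here is a rough sketch of
what rejecting rules the configured backend cannot enforce statefully could
look like. All of the names below are placeholders for illustration, not the
actual Neutron interfaces:

    # Rough sketch only: reject (or warn about) security group rules that
    # the configured firewall driver cannot enforce statefully because it
    # lacks connection tracking. Every name here is a placeholder.

    class UnsupportedRuleError(Exception):
        pass

    def validate_rule(rule, capabilities):
        """Validate one security group rule against driver capabilities.

        `rule` is a dict such as {'protocol': 'udp', 'direction': 'ingress'};
        `capabilities` is a dict such as {'connection_tracking': False}.
        """
        protocol = rule.get('protocol')
        needs_conntrack = protocol is not None and protocol != 'tcp'
        if needs_conntrack and not capabilities.get('connection_tracking'):
            raise UnsupportedRuleError(
                "The configured firewall driver lacks connection tracking; "
                "return traffic for %s rules will be handled statelessly."
                % protocol)
        return rule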

Regarding OVS flows and security groups, we can provide a tool to explain how 
security group rules are mapped to the integration bridge. In the proposed 
solution contained in the blueprint, security group rule flows would be 
distinguished from other agents’ flows via a cookie.
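
As a rough illustration of the kind of tool I mean, listing only the flows on
the integration bridge that carry the security-group cookie is already a good
start; the cookie value and bridge name below are assumptions for the sake of
the example, not the proposed implementation:

    # Sketch of a debugging helper: show only the br-int flows tagged with
    # the (hypothetical) cookie used for security group rule flows, so they
    # can be related back to the Neutron rules that generated them.
    import subprocess

    SG_COOKIE = '0x5ec00001'   # hypothetical cookie value
    BRIDGE = 'br-int'

    def security_group_flows():
        output = subprocess.check_output(['ovs-ofctl', 'dump-flows', BRIDGE])
        for line in output.decode('utf-8').splitlines():
            if 'cookie=%s' % SG_COOKIE in line:
                yield line.strip()

    if __name__ == '__main__':
        for flow in security_group_flows():
            print(flow)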

Regarding packet logging, I don’t know if OVS is capable of it. If iptables in 
Neutron does not currently support that feature, I don’t think Neutron should 
explicitly support out-of-tree features.

Amir

On Jun 3, 2014, at 6:59 AM, CARVER, PAUL pc2...@att.com wrote:


Amir Sadoughi wrote:

Specifically, OVS lacks connection tracking so it won’t have a RELATED feature 
or stateful rules
for non-TCP flows. (OVS connection tracking is currently under development, to 
be released by 2015

This definitely needs a big, obvious warning label. A stateless firewall 
hasn’t been acceptable in serious
security environments for at least a decade. “Real” firewalls do things like 
TCP sequence number validation
to ensure that someone isn’t hi-jacking an existing connection and TCP flag 
validation to make sure that someone
isn’t “fuzzing” by sending invalid combinations of flags in order to uncover 
bugs in servers behind the firewall.


- debugging OVS is new to users compared to debugging old iptables

This one is very important in my opinion. There absolutely needs to be a 
section in the documentation
on displaying and interpreting the rules generated by Neutron. I’m pretty sure 
that if you tell anyone
with Linux admin experience that Neutron security groups are iptables based, 
they should be able to
figure their way around iptables -L or iptables -S without much help.

If they haven’t touched iptables in a while, five minutes reading “man 
iptables” should be enough
for them to figure out the important options and they can readily see the 
relationship between
what they put in a security group and what shows up in the iptables chain. I 
don’t think there’s
anywhere near that ease of use on how to list the OvS ruleset for a VM and see 
how it corresponds
to the Neutron security group.


Finally, logging of packets (including both dropped and permitted connections) 
is mandatory in many
environments. Does OvS have the ability to do the necessary logging? Although 
Neutron
security groups don’t currently enable logging, the capabilities are present in 
the underlying
iptables and can be enabled with some work. If OvS doesn’t support logging of 
connections then
this feature definitely needs to be clearly marked as “not a firewall 
substitute” so that admins
are clearly informed that they still need a “real” firewall for audit 
compliance and may only
consider OvS based Neutron security groups as an additional layer of protection 
behind the
“real” firewall.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-09 Thread Amir Sadoughi
Carl,

You are correct in both distinctions. Like I mentioned to Paul, beyond explicit 
configuration for the cloud operator, documentation and API validation for the 
end user, is there anything specific you would like to see as a “warning label”?

Amir

On Jun 3, 2014, at 9:01 AM, Carl Baldwin c...@ecbaldwin.net wrote:


How does ovs handle tcp flows?  Does it include stateful tracking of tcp -- as 
your wording below implies -- or does it do stateless inspection of returning 
tcp packets?  It appears it is the latter.  This isn't the same as providing a 
stateful ESTABLISHED feature.  Many users may not fully understand the 
differences.

One of the most basic use cases, pinging an outside IP address from inside a 
nova instance, would not work without connection tracking with the 
default security groups which don't allow ingress except related and 
established.  This may surprise many.

Carl

Hi all,

In the Neutron weekly meeting today[0], we discussed the ovs-firewall-driver 
blueprint[1]. Moving forward, OVS features today will give us 80% of the 
iptables security groups behavior. Specifically, OVS lacks connection tracking 
so it won’t have a RELATED feature or stateful rules for non-TCP flows. (OVS 
connection tracking is currently under development, to be released by 2015[2]). 
To make the “20%” difference more explicit to the operator and end user, we 
have proposed feature configuration to provide security group rules API 
validation that would validate based on connection tracking ability, for 
example.

Several ideas floated up during the chat today, I wanted to expand the 
discussion to the mailing list for further debate. Some ideas include:
- marking ovs-firewall-driver as experimental in Juno
- What does it mean to be marked as “experimental”?
- performance improvements under a new OVS firewall driver untested so far 
(vthapar is working on this)
- incomplete implementation will cause confusion, educational burden
- debugging OVS is new to users compared to debugging old iptables
- waiting for upstream OVS to implement (OpenStack K- or even L- cycle)

In my humble opinion, merging the blueprint for Juno will provide us a viable, 
more performant security groups implementation than what we have available 
today.

Amir


[0] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-02-21.01.log.html
[1] https://review.openstack.org/#/c/89712/
[2] http://openvswitch.org/pipermail/dev/2014-May/040567.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Sam Harwell
Option A can be made usable provided you do the following:


1.   Add an endpoint for determining whether or not the current service 
supports optional feature X.

2.   For each optional feature of the API, clearly document that the 
feature is optional, and name the feature it is part of.

3.   If the optional feature is defined within the core Marconi 
specification, require implementations to return a 501 for affected URIs if the 
feature is not supported (this is in addition to, not in place of, item #1 
above). A rough sketch of such a guard follows below.
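
To illustrate item 3 (and, indirectly, the discovery endpoint from item 1), a
guard at the API layer could look roughly like the sketch below. The driver
attribute, response object, and feature names are assumptions for the sake of
the example, not a concrete Marconi proposal:

    # Rough sketch only: return 501 Not Implemented when the configured
    # storage driver does not support the optional feature behind an
    # endpoint. The driver and response interfaces here are hypothetical.
    import functools
    import json

    def requires_feature(name):
        def decorator(responder):
            @functools.wraps(responder)
            def wrapper(self, req, resp, **kwargs):
                if name not in self.driver.supported_features:
                    resp.status = '501 Not Implemented'
                    resp.body = json.dumps({'feature': name,
                                            'supported': False})
                    return
                return responder(self, req, resp, **kwargs)
            return wrapper
        return decorator

    class ClaimsResource(object):
        """Endpoint that only works when task-distribution (claim)
        semantics are available in the backend."""

        def __init__(self, driver):
            self.driver = driver

        @requires_feature('claims')
        def on_post(self, req, resp):
            resp.status = '201 Created'

The same supported_features set could back the discovery endpoint from item 1,
so clients can check capabilities up front instead of probing for 501s.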

A description of some key documentation elements I am looking for when a 
service includes optional functionality is listed under the heading “Conceptual 
Grouping” in the following document:
https://github.com/sharwell/openstack.net/wiki/The-JSON-Checklist

Thank you,
Sam Harwell

From: Kurt Griffiths [mailto:kurt.griffi...@rackspace.com]
Sent: Monday, June 09, 2014 2:31 PM
To: OpenStack Dev
Subject: [openstack-dev] [marconi] Reconsidering the unified API model

Folks, this may be a bit of a bombshell, but I think we have been dancing 
around the issue for a while now and we need to address it head on. Let me 
start with some background.

Back when we started designing the Marconi API, we knew that we wanted to 
support several messaging patterns. We could do that using a unified queue 
resource, combining both task distribution and feed semantics. Or we could 
create disjoint resources in the API, or even create two separate services 
altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

  *   It would afford hybrid patterns, such as auditing or diagnosing a task 
distribution queue
  *   Once you implement guaranteed delivery for a message feed over HTTP, 
implementing task distribution is a relatively straightforward addition. If you 
want both types of semantics, you don’t necessarily gain anything by 
implementing them separately.
Lately we have been talking about writing drivers for traditional message 
brokers that will not be able to support the message feeds part of the API. 
I’ve started to think that having a huge part of the API that may or may not 
“work”, depending on how Marconi is deployed, is not a good story for users, 
esp. in light of the push to make different clouds more interoperable.

Therefore, I think we have a very big decision to make here as a team and a 
community. I see three options right now. I’ve listed several—but by no means 
conclusive—pros and cons for each, as well as some counterpoints, based on past 
discussions.

Option A. Allow drivers to only implement part of the API

For:

  *   Allows for a wider variety of backends. (counter: may create subtle 
differences in behavior between deployments)
  *   May provide opportunities for tuning deployments for specific workloads
Against:

  *   Makes it hard for users to create applications that work across multiple 
clouds, since critical functionality may or may not be available in a given 
deployment. (counter: how many users need cross-cloud compatibility? Can they 
degrade gracefully?)

Option B. Split the service in two. Different APIs, different services. One 
would be message feeds, while the other would be something akin to Amazon’s SQS.

For:

  *   Same as Option A, plus creates a clean line of functionality for 
deployment (deploy one service or the other, or both, with clear expectations 
of what messaging patterns are supported in any case).
Against:

  *   Removes support for hybrid messaging patterns (counter: how useful are 
such patterns in the first place?)
  *   Operators now have two services to deploy and support, rather than just 
one (counter: can scale them independently, perhaps leading to gains in 
efficiency)

Option C. Require every backend to support the entirety of the API as it now 
stands.

For:

  *   Least disruptive in terms of the current API design and implementation
  *   Affords a wider variety of messaging patterns (counter: YAGNI?)
  *   Reuses code in drivers and API between feed and task distribution 
operations (counter: there may be ways to continue sharing some code if the API 
is split)
Against:

  *   Requires operators to deploy a NoSQL cluster (counter: many operators are 
comfortable with NoSQL today)
  *   Currently requires MongoDB, which is AGPL (counter: a Redis driver is 
under development)
  *   A unified API is hard to tune for performance (counter: Redis driver 
should be able to handle high-throughput use cases, TBD)
I’d love to get everyone’s thoughts on these options; let's brainstorm for a 
bit, then we can home in on the option that makes the most sense. We may need 
to do some POCs or experiments to get enough information to make a good 
decision.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Official bug tags

2014-06-09 Thread Devananda van der Veen
Hi all!

Dmitry called it to my attention last week that we lacked any official
guidelines on bug tags, and I've just gotten around to following up on
it. I've created an official list in launchpad and added that to the
OpenStack bug tags list wiki page here:
  https://wiki.openstack.org/wiki/Bug_Tags#Ironic

I've also updated the tags on a few bugs that were close-but-not-quite
(e.g., s/docs/documentation/).

Regards,
-Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [driverlog] Tail-f CI and its lack of running and its DriverLog status

2014-06-09 Thread Kyle Mestery
Hi Luke:

After talking with various infra folks, we've noticed the Tail-f CI
system is not voting anymore. According to some informal research, the
last run for this CI setup was in April [1]. Can you verify this
system is still running? We will need this to be working by the middle
of Juno-2, with a history of voting or we may remove the Tail-f driver
from the tree.

Also, along these lines, I'm curious why DriverLog reports this driver
as Green and tested [2]. What are the criteria for this? I'd like to
propose a patch changing this driver from Green to something else,
since it has not been running for the past few months.

Thanks,
Kyle

[1] https://review.openstack.org/#/c/76002/
[2] http://stackalytics.com/report/driverlog?project_id=openstack%2Fneutron

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Doug Hellmann
On Mon, Jun 9, 2014 at 3:31 PM, Kurt Griffiths
kurt.griffi...@rackspace.com wrote:
 Folks, this may be a bit of a bombshell, but I think we have been dancing
 around the issue for a while now and we need to address it head on. Let me
 start with some background.

 Back when we started designing the Marconi API, we knew that we wanted to
 support several messaging patterns. We could do that using a unified queue
 resource, combining both task distribution and feed semantics. Or we could
 create disjoint resources in the API, or even create two separate services
 altogether, one each for the two semantic groups.

 The decision was made to go with a unified API for these reasons:

 It would afford hybrid patterns, such as auditing or diagnosing a task
 distribution queue
 Once you implement guaranteed delivery for a message feed over HTTP,
 implementing task distribution is a relatively straightforward addition. If
 you want both types of semantics, you don’t necessarily gain anything by
 implementing them separately.

 Lately we have been talking about writing drivers for traditional message
 brokers that will not be able to support the message feeds part of the API.
 I’ve started to think that having a huge part of the API that may or may not
 “work”, depending on how Marconi is deployed, is not a good story for users,
 esp. in light of the push to make different clouds more interoperable.

 Therefore, I think we have a very big decision to make here as a team and a
 community. I see three options right now. I’ve listed several—but by no
 means conclusive—pros and cons for each, as well as some counterpoints,
 based on past discussions.

 Option A. Allow drivers to only implement part of the API

 For:

 Allows for a wider variety of backends. (counter: may create subtle
 differences in behavior between deployments)
 May provide opportunities for tuning deployments for specific workloads

 Against:

 Makes it hard for users to create applications that work across multiple
 clouds, since critical functionality may or may not be available in a given
 deployment. (counter: how many users need cross-cloud compatibility? Can
 they degrade gracefully?)


 Option B. Split the service in two. Different APIs, different services. One
 would be message feeds, while the other would be something akin to Amazon’s
 SQS.

 For:

 Same as Option A, plus creates a clean line of functionality for deployment
 (deploy one service or the other, or both, with clear expectations of what
 messaging patterns are supported in any case).

 Against:

 Removes support for hybrid messaging patterns (counter: how useful are such
 patterns in the first place?)
 Operators now have two services to deploy and support, rather than just one
 (counter: can scale them independently, perhaps leading to gains in
 efficiency)


 Option C. Require every backend to support the entirety of the API as it now
 stands.

 For:

 Least disruptive in terms of the current API design and implementation
 Affords a wider variety of messaging patterns (counter: YAGNI?)
 Reuses code in drivers and API between feed and task distribution operations
 (counter: there may be ways to continue sharing some code if the API is
 split)

 Against:

 Requires operators to deploy a NoSQL cluster (counter: many operators are
 comfortable with NoSQL today)
 Currently requires MongoDB, which is AGPL (counter: a Redis driver is under
 development)
 A unified API is hard to tune for performance (counter: Redis driver should
 be able to handle high-throughput use cases, TBD)

We went with a single large storage API in ceilometer initially, but
we had some discussions at the Juno summit about it being a bad
decision because it resulted in storing some data like alarm
definitions in database formats that just didn't make sense for that.
Julien and Eoghan may want to fill in more details.

Keystone has separate backends for tenants, tokens, the catalog, etc.,
so you have precedent there for splitting up the features in a way
that makes it easier for driver authors and for building features on
appropriate backends.

Doug


 I’d love to get everyone’s thoughts on these options; let's brainstorm for a
 bit, then we can home in on the option that makes the most sense. We may
 need to do some POCs or experiments to get enough information to make a good
 decision.

 @kgriffs

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Janczuk, Tomasz
I could not agree more with the need to re-think Marconi’s current approach to 
scenario breadth and implementation extensibility/flexibility. The broader the 
HTTP API surface area, the more limited the implementation choices, and the 
harder the performance trade-offs. Marconi’s current HTTP APIs have a large 
surface area that aspires to serve too many purposes, which seriously limits 
implementation choices. For example, one cannot fully map Marconi’s HTTP APIs 
onto an AMQP messaging model (I tried last week to write a RabbitMQ plug-in for 
Marconi with miserable results).

I strongly believe Marconi would benefit from a very small HTTP API surface 
that targets queue-based messaging semantics. Queue-based messaging is a well 
understood and accepted messaging model with a lot of proven prior art and 
customer demand from SQS, to Azure Storage Queues, to IronMQ, etc. While other 
messaging patterns certainly exist, they are niche compared to the basic, queue 
based, publish/consume pattern. If Marconi aspires to support non-queue 
messaging patterns, it should be done in an optional way (with a “MAY” in the 
HTTP API spec, which corresponds to option A below), or as a separate project 
(option B). Regardless the choice, the key to success is in in keeping the 
“MUST” HTTP API endpoints of Marconi limited in scope to the strict queue based 
messaging semantics.

I would be very interested in helping to flesh out such a minimalistic HTTP 
surface area.

Thanks,
Tomasz Janczuk
@tjanczuk
HP

From: Kurt Griffiths kurt.griffi...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Mon, 9 Jun 2014 19:31:03 +
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: [openstack-dev] [marconi] Reconsidering the unified API model

Folks, this may be a bit of a bombshell, but I think we have been dancing 
around the issue for a while now and we need to address it head on. Let me 
start with some background.

Back when we started designing the Marconi API, we knew that we wanted to 
support several messaging patterns. We could do that using a unified queue 
resource, combining both task distribution and feed semantics. Or we could 
create disjoint resources in the API, or even create two separate services 
altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

  *   It would afford hybrid patterns, such as auditing or diagnosing a task 
distribution queue
  *   Once you implement guaranteed delivery for a message feed over HTTP, 
implementing task distribution is a relatively straightforward addition. If you 
want both types of semantics, you don’t necessarily gain anything by 
implementing them separately.

Lately we have been talking about writing drivers for traditional message 
brokers that will not be able to support the message feeds part of the API. 
I’ve started to think that having a huge part of the API that may or may not 
“work”, depending on how Marconi is deployed, is not a good story for users, 
esp. in light of the push to make different clouds more interoperable.

Therefore, I think we have a very big decision to make here as a team and a 
community. I see three options right now. I’ve listed several—but by no means 
conclusive—pros and cons for each, as well as some counterpoints, based on past 
discussions.

Option A. Allow drivers to only implement part of the API

For:

  *   Allows for a wider variety of backends. (counter: may create subtle 
differences in behavior between deployments)
  *   May provide opportunities for tuning deployments for specific workloads

Against:

  *   Makes it hard for users to create applications that work across multiple 
clouds, since critical functionality may or may not be available in a given 
deployment. (counter: how many users need cross-cloud compatibility? Can they 
degrade gracefully?)

Option B. Split the service in two. Different APIs, different services. One 
would be message feeds, while the other would be something akin to Amazon’s SQS.

For:

  *   Same as Option A, plus creates a clean line of functionality for 
deployment (deploy one service or the other, or both, with clear expectations 
of what messaging patterns are supported in any case).

Against:

  *   Removes support for hybrid messaging patterns (counter: how useful are 
such patterns in the first place?)
  *   Operators now have two services to deploy and support, rather than just 
one (counter: can scale them independently, perhaps leading to gains in 
efficiency)

Option C. Require every backend to support the entirety of the API as it now 
stands.

For:

  *   Least disruptive in terms of the current API design and implementation
  *   Affords a wider variety of messaging patterns (counter: YAGNI?)
  *   Reuses code in 

[openstack-dev] [oslo] oslo-specs approval process

2014-06-09 Thread Ben Nemec
Hi all,

While the oslo-specs repository has been available for a while and a
number of specs proposed, we hadn't agreed on a process for actually
approving them (i.e. the normal 2 +2's or something else).  This was
discussed at the Oslo meeting last Friday and the method decided upon by
the people present was that only the PTL (Doug Hellmann, dhellmann on
IRC) would approve specs.

However, he noted that he would still like to see at _least_ 2 +2's on a
spec, and +1's from interested users are always appreciated as well.
Basically he's looking for a consensus from the reviewers.

This e-mail is intended to notify anyone interested in the oslo-specs
process of how it will work going forward, and to provide an opportunity
for anyone not at the meeting to object if they so desire.  Barring a
significant concern being raised, the method outlined above will be
followed from now on.

Meeting discussion log:
http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-06-16.00.log.html#l-66

Thanks.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-09 Thread Eoghan Glynn


 Based on the discussion I'd like to propose these options:
 1. Cinder-certified driver - This is an attempt to move the certification
 to the project level.
 2. CI-tested driver - This is probably the most accurate, at least for what
 we're trying to achieve for Juno: Continuous Integration of Vendor-specific
 Drivers.

Hi Ramy,

Thanks for these constructive suggestions.

The second option is certainly a very direct and specific reflection of
what is actually involved in getting the Cinder project's imprimatur.

The first option is also a bit clearer, in the sense of the scope of the
certification.

Cheers,
Eoghan

 Ramy
 
 -Original Message-
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Monday, June 09, 2014 4:50 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] use of the word certified
 
 On 6 June 2014 18:29, Anita Kuno ante...@anteaya.info wrote:
  So there are certain words that mean certain things, most don't, some do.
 
  If words that mean certain things are used then some folks start using
  the word and have expectations around the word and the OpenStack
  Technical Committee and other OpenStack programs find themselves on
  the hook for behaviours that they didn't agree to.
 
  Currently the word under discussion is certified and its derivatives:
  certification, certifying, and others with root word certificate.
 
  This came to my attention at the summit with a cinder summit session
  with one of the certificate words in the title. I had thought my
  point had been made but it appears that there needs to be more
  discussion on this. So let's discuss.
 
  Let's start with the definition of certify:
  cer·ti·fy
  verb (used with object), cer·ti·fied, cer·ti·fy·ing.
  1. to attest as certain; give reliable information of; confirm: He
  certified the truth of his claim.
 
 So the cinder team are attesting that a set of tests have been run against a
 driver: a certified driver.
 
  3. to guarantee; endorse reliably: to certify a document with an
  official seal.
 
 We (the cinder team) are guaranteeing that the driver has been tested, in at
 least one configuration, and found to pass all of the tempest tests. This is
 a far better state than we were at 6 months ago, where many drivers didn't
 even pass a smoke test.
 
  5. to award a certificate to (a person) attesting to the completion of
  a course of study or the passing of a qualifying examination.
 
 The cinder cert process is pretty much an exam.
 
 
 I think the word certification covers exactly what we are doing. Given that
 cinder-core are the people on the hook for any cinder problems (including
 vendor-specific ones), and the cinder core are the people who get
 bad-mouthed when there are problems (including vendor-specific ones), I
 think this level of certification gives us value.
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Carlos Garza
   The Barbican team was considering making the container mutable, but I don't 
think it matters now since everyone has chimed in and wants the container to be 
immutable. The current discussion is that the TLS container will be immutable 
but the metadata will not be.

I'm not sure what is meant by versioning. If Vivek cares to elaborate, that 
would be helpful.


On Jun 9, 2014, at 2:30 PM, Samuel Bercovici samu...@radware.com wrote:

 As far as I understand the Current Barbican implementation is immutable.
 Can anyone from Barbican comment on this?
 
 -Original Message-
 From: Jain, Vivek [mailto:vivekj...@ebay.com] 
 Sent: Monday, June 09, 2014 8:34 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 +1 for the idea of making certificate immutable.
 However, if Barbican allows updating certs/containers then versioning is a 
 must.
 
 Thanks,
 Vivek
 
 
 On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:
 
 Hi,
 
 I think that option 2 should be preferred at this stage.
 I also think that certificate should be immutable, if you want a new 
 one, create a new one and update the listener to use it.
 This removes any chance of mistakes, need for versioning etc.
 
 -Sam.
 
 -Original Message-
 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
 Sent: Friday, June 06, 2014 10:16 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Hey everyone,
 
 Per our IRC discussion yesterday I'd like to continue the discussion on 
 how Barbican and Neutron LBaaS will interact. There are currently two 
 ideas in play and both will work. If you have another idea please free 
 to add it so that we may evaluate all the options relative to each other.
 Here are the two current ideas:
 
 1. Create an eventing system for Barbican that Neutron LBaaS (and other
 services) consumes to identify when to update/delete updated secrets 
 from Barbican. For those that aren't up to date with the Neutron LBaaS 
 API Revision, the project/tenant/user provides a secret (container?) id 
 when enabling SSL/TLS functionality.
 
 * Example: If a user makes a change to a secret/container in Barbican 
 then Neutron LBaaS will see an event and take the appropriate action.
 
 PROS:
 - Barbican is going to create an eventing system regardless so it will 
 be supported.
 - Decisions are made on behalf of the user which lessens the amount of 
 calls the user has to make.
 
 CONS:
 - An eventing framework can become complex especially since we need to 
 ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2… I 
 think.
 
 2. Push orchestration decisions to API users. This idea comes with two 
 assumptions. The first assumption is that most providers' customers use 
 the cloud via a GUI, which in turn can handle any orchestration 
 decisions that need to be made. The second assumption is that power API 
 users are savvy and can handle their decisions as well. Using this 
 method requires services, such as LBaaS, to register in the form of 
 metadata to a barbican container.
 
 * Example: If a user makes a change to a secret the GUI can see which 
 services are registered and opt to warn the user of consequences. Power 
 users can look at the registered services and make decisions how they 
 see fit.
 
 PROS:
 - Very simple to implement. The only code needed to make this a 
 reality is at the control plane (API) level.
 - This option is more loosely coupled than option #1.
 
 CONS:
 - Potential for services to not register/unregister. What happens in 
 this case?
 - Pushes complexity of decision making on to GUI engineers and power 
 API users.
 
 
 I would like to get a consensus on which option to move forward with 
 ASAP since the hackathon is coming up and delivering Barbican to 
 Neutron LBaaS integration is essential to exposing SSL/TLS 
 functionality, which almost everyone has stated is a #1/#2 priority.
 
 I'll start the decision making process by advocating for option #2. My 
 reason for choosing option #2 has to deal mostly with the simplicity of 
 implementing such a mechanism. Simplicity also means we can implement 
 the necessary code and get it approved much faster which seems to be a 
 concern for everyone. What option does everyone else want to move 
 forward with?
 
 
 
 Cheers,
 --Jorge
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Douglas Mendizabal
Hi all,

I’m strongly in favor of having immutable TLS-typed containers, and very
much opposed to storing every revision of changes done to a container.  I
think that storing versioned containers would add too much complexity to
Barbican, where immutable containers would work well.


I’m still not sold on the idea of registering services with Barbican, even
though (or maybe especially because) Barbican would not be using this data
for anything.  I understand the problem that we’re trying to solve by
associating different resources across projects, but I don’t feel like
Barbican is the right place to do this.

It seems we’re leaning towards option #2, but I would argue that
orchestration of services is outside the scope of Barbican’s role as a
secret-store.  I think this is a problem that may need to be solved at a
higher level.  Maybe an openstack-wide registry of dependent entities
across services?

-Doug

On 6/9/14, 2:54 PM, Tiwari, Arvind arvind.tiw...@hp.com wrote:

As per current implementation, containers are immutable.
Do we have any use case to make it mutable? Can we live with new
container instead of updating an existing container?

Arvind 

-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Monday, June 09, 2014 1:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

As far as I understand the Current Barbican implementation is immutable.
Can anyone from Barbican comment on this?

-Original Message-
From: Jain, Vivek [mailto:vivekj...@ebay.com]
Sent: Monday, June 09, 2014 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

+1 for the idea of making certificate immutable.
However, if Barbican allows updating certs/containers then versioning is
a must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:

Hi,

I think that option 2 should be preferred at this stage.
I also think that certificate should be immutable, if you want a new
one, create a new one and update the listener to use it.
This removes any chance of mistakes, need for versioning etc.

-Sam.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 10:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on
how Barbican and Neutron LBaaS will interact. There are currently two
ideas in play and both will work. If you have another idea please free
to add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets
from Barbican. For those that aren't up to date with the Neutron LBaaS
API Revision, the project/tenant/user provides a secret (container?) id
when enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican
then Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will
be supported.
 - Decisions are made on behalf of the user which lessens the amount of
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2… I
think.

2. Push orchestration decisions to API users. This idea comes with two
assumptions. The first assumption is that most providers' customers use
the cloud via a GUI, which in turn can handle any orchestration
decisions that need to be made. The second assumption is that power API
users are savvy and can handle their decisions as well. Using this
method requires services, such as LBaaS, to register in the form of
metadata to a barbican container.

* Example: If a user makes a change to a secret the GUI can see which
services are registered and opt to warn the user of consequences. Power
users can look at the registered services and make decisions how they
see fit.

PROS:
 - Very simple to implement. The only code needed to make this a
reality is at the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in
this case?
 - Pushes complexity of decision making on to GUI engineers and power
API users.


I would like to get a consensus on which option to move forward with
ASAP since the hackathon is coming up and delivering Barbican to
Neutron LBaaS integration is essential to exposing SSL/TLS
functionality, which almost everyone has stated is 

[openstack-dev] [Nova] [Ironic] [TripleO] Fixing HostManager, take two

2014-06-09 Thread Devananda van der Veen
Last week, we tried to fix a bug in the way that Nova's baremetal and
ironic drivers are using the HostManager / HostState classes --
they're incorrectly reporting capabilities in an older fashion, which
is not in use any more, and thus not exposing the node's stats to
the scheduler. The fix actually broke both drivers but went unnoticed
in reviews on the original patch. Reverting that took about a week,
and Ironic patches have been blocked since then, but that's not what
I'm writing about.

I'd like to present my view of all the related patches and propose a
way forward for this fix. I'd also like to thank Hans for looking into
this and proposing a fix in the first place, and thank Hans and many
others for helping to address the resulting issues very quickly.


This is the original bug:
  https://bugs.launchpad.net/nova/+bug/1260265
  BaremetalHostManager cannot distinguish baremetal hosts from other hosts

The original attempted fix (now reverted):
  https://review.openstack.org/#/c/94043

This broke Ironic because it changed the signature of
HostState.__init__(), and it broke Nova-baremetal because it didn't
save stats in update_from_compute_node(). A fix was proposed for
each project...

for Nova:
  https://review.openstack.org/#/c/97806/2

for Ironic:
  https://review.openstack.org/#/c/97447/5

If 97806 had been part of the original 94043, this change would
probably not have negatively affected nova's baremetal driver.
However, it still would have broken Ironic until 97447 could have been
landed. I should have noticed this when the
check-tempest-dsvm-virtual-ironic-nv job on that patch failed (I, like
others, have apparently fallen into the bad habit of ignoring test
results which say non-voting).

So, until such time as the necessary driver and other changes are able
to land in Nova, and at Sean's suggestion, we've proposed a change to
the nova unit tests to watch those internal APIs that Ironic depends
on:
  https://review.openstack.org/#/c/98201

This will at least make it very explicit to any Nova reviewer that a
change to these APIs will affect Ironic. We can also set up a watch on
changes to this file, alerting us if there is a patch changing an API
that we depend on.

As for how to proceed, I would like to suggest the following:
- 97447 be reworked to support both the current and proposed HostState
parameter lists (a rough compatibility sketch is shown below)
- 94043 and 97806 be squashed and reproposed, but held until after
97447 and 98201 land
- a new patch be proposed to ironic to remove support for the now-old
HostState parameter list
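
For the first item, a transitional shim along these lines would let the Ironic
host manager tolerate both calling conventions while the Nova-side changes
land. This is a hypothetical sketch only; the real Nova/Ironic class and
parameter names differ:

    # Hypothetical sketch: accept both the current and the proposed
    # HostState parameter lists during the transition.

    class BaseHostState(object):            # stand-in for nova's HostState
        def __init__(self, host, node):
            self.host = host
            self.nodename = node
            self.stats = {}

    class IronicHostState(BaseHostState):
        def __init__(self, host, node, compute=None, **kwargs):
            # Older callers pass 'compute' (and possibly more); newer
            # callers do not. Accept both and ignore what we don't know.
            super(IronicHostState, self).__init__(host, node)
            if compute is not None:
                self.update_from_compute_node(compute)

        def update_from_compute_node(self, compute):
            # Keep the node's stats visible to the scheduler.
            self.stats = dict(compute.get('stats') or {})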


Thoughts? Suggestions?

Cheers,
Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread John Wood
The impression I have from this thread is that Containers should remain 
immutable, but it would be helpful to allow services like LBaaS to register as 
interested in a given Container. This could be the full URI to the load 
balancer instance for example. This information would allow clients to see what 
services (and load balancer instances in this example) are using a Container, 
so they can update them if a new Container replaces the old one. They could 
also see what services depend on a Container before trying to remove the 
Container.
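
To make that a bit more concrete, the registration could be as small as a
service name plus the URI of the dependent resource. The field names below are
purely illustrative; the actual schema is for the blueprint to propose:

    # Illustrative only: one possible shape for the "interested services"
    # registered against a container.
    container_consumers = {
        'container_ref': 'https://barbican.example.com/v1/containers/<uuid>',
        'consumers': [
            {
                'service': 'lbaas',
                'resource': 'https://neutron.example.com/v2.0/lbaas/'
                            'loadbalancers/<uuid>',
                'registered': '2014-06-09T18:10:00Z',
            },
        ],
    }

    # A GUI or a power user could then warn before replacing or deleting
    # the container if the consumers list is non-empty.
    if container_consumers['consumers']:
        print('Warning: %d service(s) still reference this container'
              % len(container_consumers['consumers']))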

A blueprint submission to Barbican tomorrow should provide more details on 
this, and let the Barbican and LBaaS communities weigh in on this feature.

Thanks,
John



From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: Monday, June 09, 2014 2:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

As per current implementation, containers are immutable.
Do we have any use case to make it mutable? Can we live with new container 
instead of updating an existing container?

Arvind

-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Monday, June 09, 2014 1:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

As far as I understand the Current Barbican implementation is immutable.
Can anyone from Barbican comment on this?

-Original Message-
From: Jain, Vivek [mailto:vivekj...@ebay.com]
Sent: Monday, June 09, 2014 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

+1 for the idea of making certificate immutable.
However, if Barbican allows updating certs/containers then versioning is a must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:

Hi,

I think that option 2 should be preferred at this stage.
I also think that certificate should be immutable, if you want a new
one, create a new one and update the listener to use it.
This removes any chance of mistakes, need for versioning etc.

-Sam.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 10:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on
how Barbican and Neutron LBaaS will interact. There are currently two
ideas in play and both will work. If you have another idea please free
to add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets
from Barbican. For those that aren't up to date with the Neutron LBaaS
API Revision, the project/tenant/user provides a secret (container?) id
when enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican
then Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will
be supported.
 - Decisions are made on behalf of the user which lessens the amount of
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2… I
think.

2. Push orchestration decisions to API users. This idea comes with two
assumptions. The first assumption is that most providers' customers use
the cloud via a GUI, which in turn can handle any orchestration
decisions that need to be made. The second assumption is that power API
users are savvy and can handle their decisions as well. Using this
method requires services, such as LBaaS, to register in the form of
metadata to a barbican container.

* Example: If a user makes a change to a secret the GUI can see which
services are registered and opt to warn the user of consequences. Power
users can look at the registered services and make decisions how they
see fit.

PROS:
 - Very simple to implement. The only code needed to make this a
reality is at the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in
this case?
 - Pushes complexity of decision making on to GUI engineers and power
API users.


I would like to get a consensus on which option to move forward with
ASAP since the hackathon is coming up and delivering Barbican to
Neutron LBaaS integration is essential to exposing SSL/TLS
functionality, which almost everyone has 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Carlos Garza
   The use case was that a cert inside the container could be updated while the 
private key stays the same, i.e. a new cert would be a re-signing of the same 
old key. By immutable we mean that the same UUID would be used on the LBaaS 
side. This is a heavy-handed way of expecting the user to manually update their 
LBaaS instances when they update a cert.

Yes, we can live with an immutable container, which seems to be the direction 
we are going now.

On Jun 9, 2014, at 2:54 PM, Tiwari, Arvind arvind.tiw...@hp.com wrote:

 As per current implementation, containers are immutable. 
 Do we have any use case to make it mutable? Can we live with new container 
 instead of updating an existing container?
 
 Arvind 
 
 -Original Message-
 From: Samuel Bercovici [mailto:samu...@radware.com] 
 Sent: Monday, June 09, 2014 1:31 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 As far as I understand the Current Barbican implementation is immutable.
 Can anyone from Barbican comment on this?
 
 -Original Message-
 From: Jain, Vivek [mailto:vivekj...@ebay.com]
 Sent: Monday, June 09, 2014 8:34 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 +1 for the idea of making certificate immutable.
 However, if Barbican allows updating certs/containers then versioning is a 
 must.
 
 Thanks,
 Vivek
 
 
 On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:
 
 Hi,
 
 I think that option 2 should be preferred at this stage.
 I also think that certificate should be immutable, if you want a new 
 one, create a new one and update the listener to use it.
 This removes any chance of mistakes, need for versioning etc.
 
 -Sam.
 
 -Original Message-
 From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
 Sent: Friday, June 06, 2014 10:16 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
 Integration Ideas
 
 Hey everyone,
 
 Per our IRC discussion yesterday I'd like to continue the discussion on 
 how Barbican and Neutron LBaaS will interact. There are currently two 
 ideas in play and both will work. If you have another idea please free 
 to add it so that we may evaluate all the options relative to each other.
 Here are the two current ideas:
 
 1. Create an eventing system for Barbican that Neutron LBaaS (and other
 services) consumes to identify when to update/delete updated secrets 
 from Barbican. For those that aren't up to date with the Neutron LBaaS 
 API Revision, the project/tenant/user provides a secret (container?) id 
 when enabling SSL/TLS functionality.
 
 * Example: If a user makes a change to a secret/container in Barbican 
 then Neutron LBaaS will see an event and take the appropriate action.
 
 PROS:
 - Barbican is going to create an eventing system regardless so it will 
 be supported.
 - Decisions are made on behalf of the user which lessens the amount of 
 calls the user has to make.
 
 CONS:
 - An eventing framework can become complex especially since we need to 
 ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2… I 
 think.
 
 2. Push orchestration decisions to API users. This idea comes with two 
 assumptions. The first assumption is that most providers' customers use 
 the cloud via a GUI, which in turn can handle any orchestration 
 decisions that need to be made. The second assumption is that power API 
 users are savvy and can handle their decisions as well. Using this 
 method requires services, such as LBaaS, to register in the form of 
 metadata to a barbican container.
 
 * Example: If a user makes a change to a secret the GUI can see which 
 services are registered and opt to warn the user of consequences. Power 
 users can look at the registered services and make decisions how they 
 see fit.
 
 PROS:
 - Very simple to implement. The only code needed to make this a 
 reality is at the control plane (API) level.
 - This option is more loosely coupled than option #1.
 
 CONS:
 - Potential for services to not register/unregister. What happens in 
 this case?
 - Pushes complexity of decision making on to GUI engineers and power 
 API users.
 
 
 I would like to get a consensus on which option to move forward with 
 ASAP since the hackathon is coming up and delivering Barbican to 
 Neutron LBaaS integration is essential to exposing SSL/TLS 
 functionality, which almost everyone has stated is a #1/#2 priority.
 
 I'll start the decision making process by advocating for option #2. My 
 reason for choosing option #2 has to deal mostly with the simplicity of 
 implementing such a mechanism. Simplicity also means we can implement 
 the necessary code and get it approved much faster which seems to be a 
 

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Devananda van der Veen
Mike,

For the typical case, your proposal sounds reasonable to me. That
should protect against cross-session locking while still getting the
benefits of testing DML without committing to disk.

The issue I was originally raising is, of course, the special case
-- testing of migrations -- which, I think, could be solved in much
the same way. Given N test runners, create N empty schemata, hand each
migration-test-runner a schema from that pool. When that test runner
is done, drop and recreate that schema.
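
A minimal sketch of that schema-pool idea, assuming SQLAlchemy and MySQL-style
CREATE/DROP DATABASE; the names are illustrative and none of this is taken
from oslo.db or Nodepool:

    # N empty schemata are created up front, each parallel test runner
    # checks one out, and the schema is dropped and recreated when the
    # runner finishes so the next migration test starts from empty.
    import random
    import string

    import sqlalchemy

    def _random_name():
        return 'test_' + ''.join(random.choice(string.ascii_lowercase)
                                 for _ in range(10))

    class SchemaPool(object):
        def __init__(self, admin_url, size):
            self.engine = sqlalchemy.create_engine(admin_url)
            self.free = []
            for _ in range(size):
                name = _random_name()
                self._create(name)
                self.free.append(name)

        def _create(self, name):
            with self.engine.connect() as conn:
                conn.execute(sqlalchemy.text('CREATE DATABASE %s' % name))

        def checkout(self):
            return self.free.pop()

        def checkin(self, name):
            with self.engine.connect() as conn:
                conn.execute(sqlalchemy.text('DROP DATABASE %s' % name))
            self._create(name)
            self.free.append(name)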

AIUI, Nodepool is already doing something similar here:
  
https://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/tests/__init__.py#n71

Regards,
Devananda



On Mon, Jun 9, 2014 at 12:58 PM, Mike Bayer mba...@redhat.com wrote:

 On Jun 9, 2014, at 1:08 PM, Mike Bayer mba...@redhat.com wrote:


 On Jun 9, 2014, at 12:50 PM, Devananda van der Veen 
 devananda@gmail.com wrote:

 There may be some problems with MySQL when testing parallel writes in
 different non-committing transactions, even in READ COMMITTED mode,
 due to InnoDB locking, if the queries use non-unique secondary indexes
 for UPDATE or SELECT..FOR UPDATE queries. This is done by the
 with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
 in Nova. So I would not recommend this approach, even though, in
 principle, I agree it would be a much more efficient way of testing
 database reads/writes.

 More details here:
 http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
 http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html

 OK, but just to clarify my understanding, what is the approach to testing 
 writes in parallel right now, are we doing CREATE DATABASE for two entirely 
 distinct databases with some kind of generated name for each one?  
 Otherwise, if the parallel tests are against the same database, this issue 
 exists regardless (unless autocommit mode is used, is FOR UPDATE accepted 
 under those conditions?)

 Took a look and this seems to be the case, from oslo.db:

 def create_database(engine):
     """Provide temporary user and database for each particular test."""
     driver = engine.name

     auth = {
         'database': ''.join(random.choice(string.ascii_lowercase)
                             for i in moves.range(10)),
         # ...

     sqls = [
         "drop database if exists %(database)s;",
         "create database %(database)s;"
     ]

 Just thinking out loud here, I’ll move these ideas to a new wiki page after 
 this post. My idea now is that OK, we provide ad-hoc databases for tests, 
 but look into the idea that we create N ad-hoc databases, corresponding to 
 parallel test runs - e.g. if we are running five tests concurrently, we make 
 five databases.   Tests that use a database will be dished out among this 
 pool of available schemas.   In the *typical* case (which means not the case 
 that we’re testing actual migrations, that’s a special case) we build up the 
 schema on each database via migrations or even create_all() just once, run 
 tests within rolled-back transactions one-per-database, then the DBs are torn 
 down when the suite is finished.
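
A minimal sketch of the rolled-back-transaction-per-test part of that idea,
assuming SQLAlchemy; the mixin below is illustrative only and would be combined
with a real test case class:

    # Each test runs on a dedicated connection inside an external
    # transaction; rolling that transaction back discards everything the
    # test wrote, so the schema can be reused by the next test.
    import sqlalchemy
    from sqlalchemy import orm

    class TransactionalTestMixin(object):
        engine = None  # set once per worker, pointing at its pooled schema

        def setUp(self):
            self.connection = self.engine.connect()
            self.transaction = self.connection.begin()
            self.session = orm.Session(bind=self.connection)

        def tearDown(self):
            self.session.close()
            self.transaction.rollback()
            self.connection.close()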

 Sorry for the thread hijack.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Douglas Mendizabal
I understand how this could be helpful, but I still don’t understand why
this is Barbican’s problem to solve.

From Jorge’s original email:

 Using this method requires services, such as LBaaS, to register in
the form of metadata to a barbican container.

If our assumptions are that the GUI can handle this information, and that power
users are savvy, then how does that require Barbican to store the
metadata?  I would argue that the GUI can store its own metadata, and that
Power Users should be savvy enough to update their LBs (via PUT or
whatever) after uploading a new certificate.


-Doug

On 6/9/14, 6:10 PM, John Wood john.w...@rackspace.com wrote:

The impression I have from this thread is that Containers should remain
immutable, but it would be helpful to allow services like LBaaS to
register as interested in a given Container. This could be the full URI
to the load balancer instance for example. This information would allow
clients to see what services (and load balancer instances in this
example) are using a Container, so they can update them if a new
Container replaces the old one. They could also see what services depend
on a Container before trying to remove the Container.

A blueprint submission to Barbican tomorrow should provide more details
on this, and let the Barbican and LBaaS communities weigh in on this
feature.

Thanks,
John



From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: Monday, June 09, 2014 2:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

As per current implementation, containers are immutable.
Do we have any use case to make it mutable? Can we live with new
container instead of updating an existing container?

Arvind

-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Monday, June 09, 2014 1:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

As far as I understand the Current Barbican implementation is immutable.
Can anyone from Barbican comment on this?

-Original Message-
From: Jain, Vivek [mailto:vivekj...@ebay.com]
Sent: Monday, June 09, 2014 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

+1 for the idea of making certificate immutable.
However, if Barbican allows updating certs/containers then versioning is
a must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, Samuel Bercovici samu...@radware.com wrote:

Hi,

I think that option 2 should be preferred at this stage.
I also think that certificate should be immutable, if you want a new
one, create a new one and update the listener to use it.
This removes any chance of mistakes, need for versioning etc.

-Sam.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
Sent: Friday, June 06, 2014 10:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
Integration Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on
how Barbican and Neutron LBaaS will interact. There are currently two
ideas in play and both will work. If you have another idea please free
to add it so that we may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets
from Barbican. For those that aren't up to date with the Neutron LBaaS
API Revision, the project/tenant/user provides a secret (container?) id
when enabling SSL/TLS functionality.

* Example: If a user makes a change to a secret/container in Barbican
then Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will
be supported.
 - Decisions are made on behalf of the user which lessens the amount of
calls the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to
ensure delivery of an event.
 - Implementing an eventing system will take more time than option #2, I
think.

2. Push orchestration decisions to API users. This idea comes with two
assumptions. The first assumption is that most providers' customers use
the cloud via a GUI, which in turn can handle any orchestration
decisions that need to be made. The second assumption is that power API
users are savvy and can handle their decisions as well. Using this
method requires services, such as LBaaS, to register in the form of
metadata to a barbican container.

* Example: If a user makes a change to a secret the GUI can see which
services are registered and opt to warn the user of consequences. Power
users can look 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Clint Byrum
Excerpts from Douglas Mendizabal's message of 2014-06-09 16:08:02 -0700:
 Hi all,
 
 I’m strongly in favor of having immutable TLS-typed containers, and very
 much opposed to storing every revision of changes done to a container.  I
 think that storing versioned containers would add too much complexity to
 Barbican, where immutable containers would work well.
 

Agree completely. Create a new one for new values. Keep the old ones
while they're still active.

 
 I’m still not sold on the idea of registering services with Barbican, even
 though (or maybe especially because) Barbican would not be using this data
 for anything.  I understand the problem that we’re trying to solve by
 associating different resources across projects, but I don’t feel like
 Barbican is the right place to do this.
 

Agreed also, this is simply not Barbican or Neutron's role. Be a REST
API for secrets and networking, not all-singing, all-dancing nannies that
prevent any possibly dangerous behavior with said APIs.

 It seems we’re leaning towards option #2, but I would argue that
 orchestration of services is outside the scope of Barbican’s role as a
 secret-store.  I think this is a problem that may need to be solved at a
  higher level.  Maybe an openstack-wide registry of dependent entities
 across services?

An optional openstack-wide registry of dependent entities is called Heat.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] Hacking 0.9.1 released

2014-06-09 Thread Joe Gordon
On Mon, Jun 9, 2014 at 12:24 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi folks,

 Hacking 0.9.1 has just been released (hacking 0.9.1 had a minor bug).
 Unlike other dependencies 'OpenStack Proposal Bot' does not automatically
 push out a patch to the new version.


Edit: hacking 0.9.0 had a minor bug


 The recommended way to upgrade to hacking 0.9.1 is to add any new failing
 tests to the exclude list in tox.ini and fix those in subsequent patches
 (example: https://review.openstack.org/#/c/98864/).
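
 For example, a project's tox.ini might temporarily carry something along
 these lines (exactly which checks to skip depends on what newly fails in
 your tree; remove them again as the follow-up patches land):

 [flake8]
 # H305/H307 (import grouping) and H904 (backslash continuations) are new in
 # hacking 0.9; they are skipped here only until follow-up patches fix them.
 ignore = E123,E125,H305,H307,H904
 exclude = .venv,.git,.tox,dist,doc,*egg,build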

 pep8 1.5.x changed a whole bunch of internals, so when upgrading to the
 new hacking please make sure your local checks still work.


 best,
 Joe

 Release Notes:


- New dependency versions, all with new features
  - pep8==1.5.6 (https://github.com/jcrocholl/pep8/blob/master/CHANGES.txt)
    - Report E129 instead of E125 for a visually indented line with the
      same indent as the next logical line.
    - Report E265 for space before block comment.
    - Report E713 and E714 when operators ``not in`` and ``is not`` are
      recommended (taken from hacking).
    - Report E131 instead of E121 / E126 if the hanging indent is not
      consistent within the same continuation block. This helps when error
      E121 or E126 is in the ``ignore`` list.
    - Report E126 instead of E121 when the continuation line is hanging
      with extra indentation, even if indentation is not a multiple of 4.
  - pyflakes==0.8.1
  - flake8==2.1.0
- More rules support noqa
  - Added to: H701, H702, H232, H234, H235, H237
- Gate on Python3 compatibility
- Dropped H901, H902 as those are now in pep8 and enforced by E713 and E714
- Support for separate localization catalogs
- Rule numbers added to http://docs.openstack.org/developer/hacking/
- Improved performance
- New Rules:
  - H104  File contains nothing but comments
  - H305  imports not grouped correctly
  - H307  like imports should be grouped together
  - H405  multi line docstring summary not separated with an empty line
  - H904  Wrap long lines in parentheses instead of a backslash


 Thank you to everyone who contributed to hacking 0.9.1:
 * Joe Gordon
 * Ivan A. Melnikov
 * Ben Nemec
 * Chang Bo Guo
 * Nikola Dipanov
 * Clay Gerrard
 * Cyril Roelandt
 * Dirk Mueller
 * James E. Blair
 * Jeremy Stanley
 * Julien Danjou
 * Lei Zhang
 * Marc Abramowitz
 * Mike Perez
 * Radomir Dopieralski
 * Samuel Merritt
 * YAMAMOTO Takashi
 * ZhiQiang Fan
 * fujioka yuuichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][gate] ceilometer unit test frequently failing in gate

2014-06-09 Thread Joe Gordon
Over the last 7 days, ceilometer unit test jobs have had an 18% failure rate in
the gate queue [0]. While we expect to see some failures in integration
testing, unit tests should not be failing in the gate with such a high
frequency (and for so long).

It looks like these failures are due to two bugs [1] [2]. I would like to
propose that, until these bugs are resolved, ceilometer refrain from
approving patches so as not to negatively impact the gate queue, which is
already in a tenuous state.


best,
Joe

[0]
http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiRmluaXNoZWQ6XCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIHByb2plY3Q6XCJvcGVuc3RhY2svY2VpbG9tZXRlclwiIEFORCBidWlsZF9xdWV1ZTpcImdhdGVcIiBBTkQgKGJ1aWxkX25hbWU6XCJnYXRlLWNlaWxvbWV0ZXItcHl0aG9uMjdcIiBPUiAgYnVpbGRfbmFtZTpcImdhdGUtY2VpbG9tZXRlci1weXRob24yNlwiKSIsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50Iiwib2Zmc2V0IjowLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwMjM1NjkyMDE2MCwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6ImJ1aWxkX3N0YXR1cyJ9
[1] https://bugs.launchpad.net/ceilometer/+bug/1323524
[2] https://bugs.launchpad.net/ceilometer/+bug/1327344
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

