Re: [openstack-dev] [nova][vitrage] host evacuate

2016-08-15 Thread Afek, Ifat (Nokia - IL)





On 15/08/2016, 19:29, "Matt Riedemann"  wrote:

>On 8/15/2016 4:24 AM, Afek, Ifat (Nokia - IL) wrote:
>> Hi,
>>
>> In Vitrage project[1], I would like to add an option to call nova 
>> host-evacuate for a failed host. I noticed that there is an api for ‘nova 
>> evacuate’, and a cli (but no api) for ‘nova host-evacuate’. Is there a way 
>> that I can call 'nova host-evacuate’ from Vitrage code? Did you consider 
>> adding host-evacuate to nova api? Obviously it would improve the performance 
>> for Vitrage use case.
>>
>>
>> Some more details: The Vitrage project is used for analysing the root cause 
>> of OpenStack alarms, and deducing their existence before they are directly 
>> observed. It receives information from various data sources (Nova, Neutron, 
>> Zabbix, etc.), performs configurable actions, and notifies other projects like 
>> Nova.
>>
>> For example, in case Zabbix detects a host NIC failure, Vitrage calls the nova 
>> force-down API to indicate that the host is unavailable. We would like to 
>> add an option to evacuate the failed host.
>>
>> Thanks,
>>
>> Ifat.
>>
>> [1] https://wiki.openstack.org/wiki/Vitrage
>>
>>
>
>This blog post will probably be helpful:
>
>http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/
>
>The 'nova host-evacuate' CLI is a convenience command; it doesn't map to a 
>specific API. It just takes a host, finds the instances running on that 
>host, and evacuates each of them. You could do something similar in vitrage.
>
>-- 
>
>Thanks,
>
>Matt Riedemann
>


Thanks Matt, 
I already read this blog :-)
I asked about moving the CLI code into the API because, performance-wise, it would 
help us. 
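
For reference, a minimal sketch of what the host-evacuate CLI does today (and
of what Vitrage would have to replicate), assuming python-novaclient; the
exact evacuate() arguments vary by client version and compute API microversion:

    from novaclient import client as nova_client

    def evacuate_host(session, failed_host):
        nova = nova_client.Client('2', session=session)
        # Admin credentials are needed for the host/all_tenants filters.
        servers = nova.servers.list(
            search_opts={'host': failed_host, 'all_tenants': 1})
        for server in servers:
            # Each instance is evacuated individually; the scheduler picks
            # the target host when none is given.
            nova.servers.evacuate(server)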

Thanks,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][SR-IOV] PCI/SR-IOV meeting today is canceled 08/16/16

2016-08-15 Thread Moshe Levi
Sorry for the late mail, but I won't be able to chair the meeting today

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Overlay MTU setup in docker remote driver

2016-08-15 Thread Liping Mao (limao)
Hi Kuryr team,

I just notice, this can be fixed in kuryr bind code.
I submit a bug to track this:
https://bugs.launchpad.net/kuryr-libnetwork/+bug/1613528

And patch sets are here:
https://review.openstack.org/#/c/355712/
https://review.openstack.org/#/c/355714/

Thanks.

Regards,
Liping Mao

On 16/8/15 11:20 PM, "Liping Mao (limao)"  wrote:

>Hi Kuryr team,
>
>I open an issue in docker-libnetwork:
>https://github.com/docker/libnetwork/issues/1390
>
>Appreciate for any idea or comments. Thanks.
>
>Regards,
>Liping Mao
>
>
>On 16/8/12 4:08 PM, "Liping Mao (limao)"  wrote:
>
>>Hi Kuryr team,
>>
>>When a neutron network uses an overlay for VMs,
>>a DHCP option is used to control the VM interface MTU,
>>but a docker container does not get its IP address from DHCP,
>>so the proper MTU is not set up in the container.
>>
>>Two work-arounds come to mind now:
>>1. Set the default MTU in docker to 1450 or less.
>>2. Manually configure the MTU after the container starts up (see the sketch below).
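>>
>>A rough sketch of work-around 2, clamping the MTU inside the container after
>>it starts; the container name, interface name and the 1450-byte value here
>>are illustrative assumptions for a VXLAN overlay:
>>
>>    import subprocess
>>
>>    def clamp_container_mtu(container, mtu=1450, iface='eth0'):
>>        # 'docker exec' runs 'ip link' inside the container's netns.
>>        subprocess.check_call(
>>            ['docker', 'exec', container, 'ip', 'link', 'set',
>>             'dev', iface, 'mtu', str(mtu)])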
>>
>>But neither of these is good. The ideal way, in my mind,
>>is that when libnetwork calls the remote driver to create a network,
>>kuryr creates the neutron network and then returns the proper MTU to libnetwork,
>>and docker uses this MTU for the network. But the docker remote driver
>>does not support this.
>>
>>Or maybe let the user configure the MTU in the remote driver,
>>a little like the overlay driver:
>>https://github.com/docker/libnetwork/pull/1349
>>
>>But for now, it seems the remote driver will not do anything similar.
>>
>>Any idea to solve this problem? Thanks.
>>
>>
>>Regards,
>>Liping Mao
>>
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-15 Thread John Dickinson


On 15 Aug 2016, at 1:37, Thierry Carrez wrote:

> Doug Hellmann wrote:
>> [...]
>> Choosing to be a part of a community comes with obligations as well
>> as benefits.  If, after a lengthy discussion of a community-wide
>> goal, involving everyone in the community, a project team is
>> resolutely opposed to the goal, does that not indicate that the
>> needs of the project team and the needs of the broader community
>> are at odds in some way? And if the project team's needs and the
>> community needs are consistently at odds, over the course of a
>> series of such goals, why would the project team want to constrain
>> itself to stay in the community?  Aren't they clearly going in
>> different directions?
>>
>> Understand, it is not my desire to emphasize any differences of
>> this nature. Rather, I want to reduce them. To do that, I am proposing
>> a process through which common goals can be identified, described,
>> and put into action. I do hope, though, that through the course of
>> the discussion of each individual proposal everyone involved will
>> come to understand the idea and by the time a proposal becomes a
>> "goal" to be implemented I "expect" everyone to, at the very least,
>> understand why a goal is important to others, even if they do not
>> agree with it. That understanding should then lead, on the basis
>> of agreeing to be part of a collaborative community, to supporting
>> the goal.
>>
>> I also expect us to discuss a lot of proposals that we do not agree
>> on, and that either need more time to develop or that end up finding
>> another path to resolution. No one seems all that concerned with
>> the concept that they might propose a goal that everyone else doesn't
>> agree with.  :-)
>>
>> So, yes, by the time we pick a goal I expect teams to do the work,
>> because at that point in the process they will see it as the
>> reasonable course of action.  There is still an "escape valve" in
>> place for teams that, after all of the discussion and shaping of
>> the goals is over, still take issue with a goal. By explaining their
>> position in their response, we will have reference documentation
>> to point to when the inevitable "why doesn't X do Y" questions
>> arise. I will be interested to see how often we actually have to
>> use that.
>
> +1

I agree, too. This is a great process that covers nearly everything.

The reason the prioritization language is so important isn't so that
project teams can "get around" the TC or intentionally be different or
otherwise not be good community participants. I want to make sure we
are not setting up a process that tells projects to "toe the line" or
get out.

In a community as large and diverse in scope as OpenStack, it's
impossible for one person or one small group to be familiar enough
with all of the OpenStack projects to understand the design decisions,
trade-offs, and priorities for each one. The TC certainly doesn't want
to micromanage every project.

Supporting a common goal and making progress on it is much different
than "prioritize these goals above all other work". Like you, I expect
that all projects in OpenStack will work together for the common good.
I don't think any open source project can mandate prioritization on
its contributors and expect to maintain long-term growth.

>
>> Excerpts from John Dickinson's message of 2016-08-12 16:04:42 -0700:
>>> [...]
>>> The proposed plan has a lot of good in it, and I'm really happy to see the 
>>> TC
>>> working to bring common goals and vision to the entirety of the OpenStack
>>> community. Drop the "project teams are expected to prioritize these goals 
>>> above
>>> all other work", and my concerns evaporate. I'd be happy to agree to that 
>>> proposal.
>>
>> Saying that the community has goals but that no one is expected to
>> act to bring them about would be a meaningless waste of time and
>> energy.
>
> I think we can find wording that doesn't use the word "priority" (which
> is, I think, what John objects to the most) while still conveying that
> project teams are expected to act to bring them about (once they said
> they agreed with the goal).
>
> How about "project teams are expected to do everything they can to
> complete those goals within the boundaries of the target development
> cycle" ? Would that sound better ?

Any chance you'd go for something like "project teams are expected to
make progress on these goals and report that progress to the TC every
cycle"?

Yes, I like Thierry's proposed wording better than the originally-
proposed language.

--John




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Last store and client releases for Newton

2016-08-15 Thread Nikhil Komawar
Hi all,


It's that time in the release cycle when we need to wrap things up. The
non-client libraries are lined up to be released for final version in
R-6 and the client one along with n-3 in R-5.


However, these libraries need some time to be tested in the gate and for us
to determine a stable version for a release. So, I request that any
outstanding patches against them be completed as soon as
possible. We will (most likely) be releasing a final release for both
store and client this week (i.e. in R-7). This gives us some time to
identify associated bugs and address them promptly without having to rush
at the last minute.


Thanks for your co-operation.


Release schedule: http://releases.openstack.org/newton/schedule.html


-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [daisycloud-core] Daisy installer demo is out

2016-08-15 Thread Will Zhou
Hello Zhijiang,

I hit the following issue when trying to run 'installdaisy_el7_noarch.bin'

No package httpd available.
Error: Nothing to do
Failed to issue method call: No such file or directory
Failed to issue method call: Unit httpd.service failed to load: No such
file or directory.
httpd start failed.


So we need to run 'yum install httpd' before running it. Thanks.

BTW, can daisy work in two VMs? Thanks.



On Mon, Aug 15, 2016 at 10:44 AM  wrote:

> Hi All,
>
> The Daisycloud-core team is pleased to announce the first release of the Daisy
> OpenStack Installer. You can download the demo from
> http://www.daisycloud.org/static/files/installdaisy_el7_noarch.bin. and
> the corresponding document is here:
> http://www.daisycloud.org/static/files/demo_how_to.docx
>
> In this phase, Daisy OpenStack Installer is just a friendly web UI for
> deploying OpenStack Mitaka by using kolla. It will support baremetal
> deployment by using ironic (and maybe also bifrost). To sum up, the Daisy
> installer is trying to make the most use of the upstream projects for
> deploying OpenStack and make them easy to use.
>
> Thanks!
>
>
> B.R.,
> Zhijiang
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

-
周正喜
Mobile: 13701280947
WeChat: 472174291
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-15 Thread Alex Xu
2016-08-16 7:31 GMT+08:00 Jay Pipes :

> On 08/15/2016 03:57 AM, Alex Xu wrote:
>
>> 2016-08-15 12:56 GMT+08:00 Yingxin Cheng:
>>
>>
>> Hi,
>>
>> I'm concerned with the dependencies between "os-capabilities"
>> library and all the other OpenStack services such as Nova,
>> Placement, Ironic, etc.
>>
>> Rather than embedding the universal "os-capabilities" in Nova,
>> Cinder, Glance, Ironic services that will introduce complexities if
>> the library versions are different, I'd prefer to hide this library
>> behind the placement service and expose consistent interfaces as
>> well as caps to all the other services. But the drawback here is
>> also obvious: for example, when nova wants to support a new
>> capability, the development will require os-capabilities updates,
>> and related lib version bumps, which is inconvenient and seems
>> unnecessary.
>>
>>
>> +1 for the concern, this is a good point. +1 for hiding os-capabilities
>> behind the placement service and exposing consistent interfaces.
>>
>>
>>
>> So IMHO, the possible solution is:
>> * Let each services (Nova, Ironic ...) themselves manage their
>> capabilities under proper namespaces such as "compute", "ironic" or
>> "storage";
>>
>>
>> Actually I think we won't catalog the capabilities by service; we will
>> catalog them by resource type. For example, nova and ironic may
>> share some capabilities, because they both provide compute resources.
>>
>>
>> * Let os-capabilities define as few as caps possible that are only
>> cross-project;
>> * And also use os-capabilities to convert service-defined or
>> user-defined caps to a standardized and distinct form that can be
>> understood by the placement engine.
>>
>>
>> This solution sounds complex to me: you need to define the capabilities in
>> many different places (in os-capabilities for cross-project caps, in each
>> service for project-only caps), and this may lead to different services
>> defining the same caps with their own naming. And because the dependency is a
>> library, when you upgrade os-capabilities you have to restart all
>> the services.
>>
>> I'm thinking of another solution:
>>
>> 1. os-capabilities just defines the capabilities in JSON/YAML files.
>> 2. We expose the capabilities through the REST API of the placement engine;
>> this REST API will read the capabilities from the JSON/YAML files
>> defined by os-capabilities.
>>
>> This resolves some problems, as below:
>> 1. your concern: hide os-capabilities behind one single API
>> 2. when os-capabilities is upgraded, you needn't update the
>> os-capabilities library on all control nodes. You only need to update
>> os-capabilities on the node running the placement engine. Because the
>> capabilities are in JSON/YAML files, you needn't even restart the
>> placement engine service.
>>
>> But yes, this also can be done by glance metadata API.
>>
>
> Yeah, I think hiding the catalog of constant trait strings behind the
> placement API is probably a good thing to do.
>
> However, I envision that os-traits (I'm now calling it this...) will
> contain some modules for discovery of various traits on a resource provider
> as well as modules that can detect conflicts in a request for a list of
> traits. Those modules would clearly be something that various services
> would want to import, irrespective of the import of the constant trait
> string catalog.
>

If we hide the trait strings behind the placement API, there will be an API
endpoint for validating the requested caps. In nova, we can access this API
directly without os-traits.
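
A hedged sketch of that idea, purely for illustration: a service asks the
placement service for the trait catalog instead of importing a library. The
'/traits' endpoint name is an assumption here; no such API exists yet.

    from keystoneauth1 import session as ks_session

    def get_traits(auth, placement_endpoint):
        # keystoneauth handles token acquisition for the request.
        sess = ks_session.Session(auth=auth)
        resp = sess.get(placement_endpoint + '/traits',
                        headers={'Accept': 'application/json'})
        return resp.json().get('traits', [])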

One thing I'm not sure about yet: we have different virt drivers for different
hypervisors, so do all the virt drivers have the same dependencies between
traits? For example, Hyper-V has gen1 and gen2, which support different
caps, and some caps conflict between gen1 and gen2. But maybe in
another hypervisor those caps don't conflict with each other. I haven't
found a real use case yet. I need to dig in more.

There are two solutions in my mind:
1. There are no differing dependencies between traits; that is easy. The
validation API in the placement engine is enough.
2. There are differing dependencies between traits. We need an API for the
compute node to report those dependencies. This isn't easy; we need to model
the dependencies in the placement engine. It sounds like Chris will hate this way :)

Anyway, I need to dig in more.


>
> Thoughts?
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)

[openstack-dev] [puppet] weekly meeting #90

2016-08-15 Thread Iury Gregory
Hi Puppeteers!

We'll have our weekly meeting tomorrow at 3pm UTC on #openstack-meeting-4

Here's a first agenda:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160816


Feel free to add topics, as well as any outstanding bugs and patches.

See you tomorrow!
Thanks,


-- 
~
Att[]'s
Iury Gregory Melo Ferreira
Master student in Computer Science at UFCG
E-mail: iurygreg...@gmail.com
~
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci-cd] Jenkinsfile support (or repo/s for them?)

2016-08-15 Thread Joshua Harlow

Monty Taylor wrote:

On 08/15/2016 02:47 PM, Joshua Harlow wrote:


With nodepool and Zuul, there's simply no purpose to Jenkins any more
for me.

Fair point, and no disagreement on that effort being good and all;
though I can't quite say what companies that do use jenkins (the
majority?) are supposed to do here. Running two systems that appear to
be (starting to?) do the same things seems like a 'not' worthy effort
(even though yes I know there are some fundamental differences in how
they work and what they do).


I don't have any personal need for anything Jenkins pipeline or
Jenkinsfile related. Infra also has neither need nor desire.


That's cool, no haters here, ha.

To each their own :-P



However, neither I nor Infra have any need for chef cookbooks, yet
OpenStack hosts a bunch of lovely repositories that are worked on by a
set of lovely humans who do have a need for them and a desire to work on
them. You might even come to the conclusion that my personal preferences
or needs are not the most important thing. I mean, I obviously disagree
with that conclusion, but I could see how one might make it.

So while I do not anticipate participating in any way (other than
occasionally making snarky comments in IRC) - I'm 100% in support of
fostering collaboration - so if there are humans who would like to
collaborate on a collection of Jenkinsfiles that have groovy code in
them and they'd like to do that in the context of OpenStack - neat.

That said - I will echo a small smidge of what Jay said - it might also
be neat (and maybe this goes into the same repo/set of repos) to foster
a place where people can collaborate on Jenkins Job Builder snippets
too? We already host the development of that tool, which has a vibrant
developer community, and which I know a ton of folks out there use. I
have _no_idea_ what the intersection/overlap between Jenkinsfile and jjb is
... but if there are humans out there who are doing things with
Jenkinses - maybe they want to do both things?


I'd be OK with either, since my current understanding of jjb is that it 
is somewhat targeted at the 'old-style' jenkins (where jobs trigger 
other jobs and jobs are highly configured via XML/yaml translators), 
while the jenkinsfile (workflow/pipeline) stuff is more targeted around 
'you write all that workflow and those triggers in groovy and it 
becomes more like one file per repo', so the use of jjb, job templates 
and job templating may be diminished if you had well-written jenkinsfile 
groovy files.




Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci-cd] Jenkinsfile support (or repo/s for them?)

2016-08-15 Thread Joshua Harlow




Sure, I get your point. Disagree that it's not a worthy effort to want
to rid the world of impossible-to-reason-about CI configurations, but I
get your point.


I'm more in disagreement that running two systems that are sorta 
similar is worthwhile (two things to maintain, support, operate...), not that 
either (if you pick one or the other) is bad. I'm definitely not saying it's bad 
to 'rid the world of impossible-to-reason-about CI configurations', but 
both jenkins and zuul v3 seem to be doing that now also (taking slightly 
different paths to get there).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-15 Thread Jay Pipes

On 08/15/2016 03:57 AM, Alex Xu wrote:

2016-08-15 12:56 GMT+08:00 Yingxin Cheng:

Hi,

I'm concerned with the dependencies between "os-capabilities"
library and all the other OpenStack services such as Nova,
Placement, Ironic, etc.

Rather than embedding the universal "os-capabilities" in Nova,
Cinder, Glance, Ironic services that will introduce complexities if
the library versions are different, I'd prefer to hide this library
behind the placement service and expose consistent interfaces as
well as caps to all the other services. But the drawback here is
also obvious: for example, when nova wants to support a new
capability, the development will require os-capabilities updates,
and related lib version bumps, which is inconvenient and seems
unnecessary.


+1 for the concern, this is a good point. +1 for hiding os-capabilities
behind the placement service and exposing consistent interfaces.



So IMHO, the possible solution is:
* Let each services (Nova, Ironic ...) themselves manage their
capabilities under proper namespaces such as "compute", "ironic" or
"storage";


Actually I think we won't catalog the capabilities by service; we will
catalog them by resource type. For example, nova and ironic may
share some capabilities, because they both provide compute resources.


* Let os-capabilities define as few as caps possible that are only
cross-project;
* And also use os-capabilities to convert service-defined or
user-defined caps to a standardized and distinct form that can be
understood by the placement engine.


This solution sounds complex to me: you need to define the capabilities in
many different places (in os-capabilities for cross-project caps, in each
service for project-only caps), and this may lead to different services
defining the same caps with their own naming. And because the dependency is a
library, when you upgrade os-capabilities you have to restart all
the services.

I'm thinking of another solution:

1. os-capabilities just defines the capabilities in JSON/YAML files.
2. We expose the capabilities through the REST API of the placement engine;
this REST API will read the capabilities from the JSON/YAML files
defined by os-capabilities.

This resolves some problems, as below:
1. your concern: hide os-capabilities behind one single API
2. when os-capabilities is upgraded, you needn't update the
os-capabilities library on all control nodes. You only need to update
os-capabilities on the node running the placement engine. Because the
capabilities are in JSON/YAML files, you needn't even restart the
placement engine service.

But yes, this also can be done by glance metadata API.


Yeah, I think hiding the catalog of constant trait strings behind the 
placement API is probably a good thing to do.


However, I envision that os-traits (I'm now calling it this...) will 
contain some modules for discovery of various traits on a resource 
provider as well as modules that can detect conflicts in a request for a 
list of traits. Those modules would clearly be something that various 
services would want to import, irrespective of the import of the 
constant trait string catalog.
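
As a purely hypothetical illustration of the conflict-detection piece (the
trait names and the conflict table below are made up, not part of os-traits):

    # Pairs of traits assumed to be mutually exclusive, for illustration only.
    CONFLICTING_TRAITS = [
        frozenset(['HW_FIRMWARE_UEFI', 'HW_FIRMWARE_BIOS']),
    ]

    def find_conflicts(requested_traits):
        # Return every conflicting group fully contained in the request.
        requested = set(requested_traits)
        return [group for group in CONFLICTING_TRAITS
                if group.issubset(requested)]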


Thoughts?
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-15 Thread Jay Pipes

On 08/15/2016 12:56 AM, Yingxin Cheng wrote:

Hi,

I'm concerned with the dependencies between "os-capabilities" library
and all the other OpenStack services such as Nova, Placement, Ironic, etc.

Rather than embedding the universal "os-capabilities" in Nova, Cinder,
Glance, Ironic services that will introduce complexities if the library
versions are different, I'd prefer to hide this library behind the
placement service and expose consistent interfaces as well as caps to
all the other services. But the drawback here is also obvious: for
example, when nova wants to support a new capability, the development
will require os-capabilities updates, and related lib version bumps,
which is inconvenient and seems unnecessary.


It may seem unnecessary and inconvenient, but I honestly don't 
anticipate it being particularly onerous for developers to keep track of.



So IMHO, the possible solution is:
* Let each services (Nova, Ironic ...) themselves manage their
capabilities under proper namespaces such as "compute", "ironic" or
"storage";


-1. What if Ironic wants to utilize a capability that is "compute" -- 
say, an x86 CPU instruction set extension feature? Clearly, we don't 
want a situation where Ironic needs to import Nova in order to get a 
list of these constants?
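
A hedged sketch of the kind of shared constant catalog being discussed, so
that neither Ironic nor Nova has to import the other; the names follow the
examples in this thread but are illustrative:

    # os_capabilities/const.py (illustrative)
    HW_CPU_X86_AVX2 = 'HW_CPU_X86_AVX2'
    HW_CPU_X86_SSE42 = 'HW_CPU_X86_SSE42'
    STORAGE_DISK_SSD = 'STORAGE_DISK_SSD'

    ALL_CAPS = (HW_CPU_X86_AVX2, HW_CPU_X86_SSE42, STORAGE_DISK_SSD)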



* Let os-capabilities define as few as caps possible that are only
cross-project;
* And also use os-capabilities to convert service-defined or
user-defined caps to a standardized and distinct form that can be
understood by the placement engine.


User-defined capabilities should be in the placement API only I think. 
There simply isn't anything to do for these in a shared library because, 
well, they aren't shared things. They are deployment-specific and belong 
as just data that is returned from the deployment's particular placement 
API after being added as a custom trait/capability string by an 
administrator.


Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci-cd] Jenkinsfile support (or repo/s for them?)

2016-08-15 Thread Jay Pipes

On 08/15/2016 03:47 PM, Joshua Harlow wrote:





I've been experimenting/investigating/playing around with the 'new'
jenkins pipeline support (see https://jenkins.io/doc/pipeline/ for those
who don't know what this is) and it got me thinking that there are
probably X other people/groups/companies that are doing the same thing
and that to me raises the question of 'why don't we work together'.


Why not work together on the thing that people are working together on:
Zuul v3 :)


Fair question,

So in part, because most/some companies (afaik) aren't dropping their
CI/CD solution that they've been working on for years (via say jenkins)
for zuul v3; for better or worse that's the reality that I see things in.

Getting people to adopt and/or replace things for zuul v3 (which doesn't
yet exist?) at such a fundamental level (without such a system most
companies CI/CD does not exist) may take years (if ever).

So nothing against the zuul v3 folks (their effort is worthy IMHO), I
just don't see a way to get there at the current time; thus being more
pragmatic makes me want to contribute to something that has a little
less risk and is a little more 'already known'.


Sure, understood :)


Example of a ci-cd workflow for this (in visual form, for those who are
visually tuned):

https://jenkins.io/images/pipeline/realworld-pipeline-flow.png

Is anyone else looking into how to build jenkinsfiles (or their
equivalents) for the openstack project repos? Perhaps we can work on
them together or perhaps even we can include those same jenkinsfiles in
the project repos themselves (thus making it that much easier to point
jenkins at the external repos and run them through tests, functional
tests, integration tests and so-on).


Honestly, I find Jenkins clunky, Java-centric, GUI-centric, and at this
point, mostly pointless for anyone who doesn't need a GUI for "building"
CI jobs.


No disagreement, the jenkins pipeline and jenkinsfile effort (that seems
to go back for a few years now?) seems to be going in a way that isn't
GUI centric and ... (so ya progress!)


Indeed, groovy is miles better than shoving the world into an XML file 
(JJB FTW!).



With nodepool and Zuul, there's simply no purpose to Jenkins any more
for me.


Fair point, and no disagreement on that effort being good and all;
though I can't quite say what companies that do use jenkins (the
majority?) are supposed to do here. Running two systems that appear to
be (starting to?) do the same things seems like a 'not' worthy effort
(even though yes I know there are some fundamental differences in how
they work and what they do).


Sure, I get your point. Disagree that it's not a worthy effort to want 
to rid the world of impossible-to-reason-about CI configurations, but I 
get your point.



Then again, I also think Slack is slow and pointless compared to IRC.
But, hey, emoji.


Those emojis are the best!

Overall though I don't disagree (I don't like being split over IRC and
slack), but tis the world we (at least some of us) live in.


Yeah, I know... I live in it too.

/me goes to find a decent emoji to slack @cdent...

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci-cd] Jenkinsfile support (or repo/s for them?)

2016-08-15 Thread Monty Taylor
On 08/15/2016 02:47 PM, Joshua Harlow wrote:

>> With nodepool and Zuul, there's simply no purpose to Jenkins any more
>> for me.
> 
> Fair point, and no disagreement on that effort being good and all;
> though I can't quite say what companies that do use jenkins (the
> majority?) are supposed to do here. Running two systems that appear to
> be (starting to?) do the same things seems like a 'not' worthy effort
> (even though yes I know there are some fundamental differences in how
> they work and what they do).

I don't have any personal need for anything Jenkins pipeline or
Jenkinsfile related. Infra also has neither need nor desire.

However, neither I nor Infra have any need for chef cookbooks, yet
OpenStack hosts a bunch of lovely repositories that are worked on by a
set of lovely humans who do have a need for them and a desire to work on
them. You might even come to the conclusion that my personal preferences
or needs are not the most important thing. I mean, I obviously disagree
with that conclusion, but I could see how one might make it.

So while I do not anticipate participating in any way (other than
occasionally making snarky comments in IRC) - I'm 100% in support of
fostering collaboration - so if there are humans who would like to
collaborate on a collection of Jenkinsfiles that have groovy code in
them and they'd like to do that in the context of OpenStack - neat.

That said - I will echo a small smidge of what Jay said - it might also
be neat (and maybe this goes into the same repo/set of repos) to foster
a place where people can collaborate on Jenkins Job Builder snippets
too? We already host the development of that tool, which has a vibrant
developer community, and which I know a ton of folks out there use. I
have _no_idea_ what the intersection/overlap between Jenkinsfile and jjb is
... but if there are humans out there who are doing things with
Jenkinses - maybe they want to do both things?

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-15 Thread Rob Crittenden
Review https://review.openstack.org/#/c/317739/ added a new dynamic 
metadata handler to nova. The basic gist is that rather than serving 
metadata statically, it can be done dynamically, so that certain values 
aren't provided until they are needed, mostly for security purposes 
(like credentials to enroll in an AD domain). The metadata is configured 
as URLs to a REST service.


Very little is passed into the REST call, mostly UUIDs of the instance, 
image, etc. to ensure a stable API. What this means though is that the 
REST service may need to make calls into nova or glance to get 
information, like looking up the image metadata in glance.


Currently the dynamic metadata handler _can_ generate auth headers if an 
authenticated request is made to it, but consider that a common use case 
is fetching metadata from within an instance using something like:


% curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

This will come into the nova metadata service unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a relative newbie) so that both 
authenticated and unauthenticated requests are accepted, such that IF an 
authenticated request comes in, those credentials can be used, and otherwise 
we fall back to something else?


2. If an unauthenticated request comes in, how best to obtain a token to 
use? Is it best to create a service user for the REST services (perhaps 
several), use a shared user, or something else?
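
A minimal sketch of the service-user option, assuming keystoneauth; all of
the names and option values here are illustrative:

    from keystoneauth1 import loading, session

    def get_service_session(auth_url, username, password, project_name):
        loader = loading.get_plugin_loader('password')
        auth = loader.load_from_options(
            auth_url=auth_url,
            username=username,
            password=password,
            project_name=project_name,
            user_domain_name='Default',
            project_domain_name='Default')
        # The session transparently fetches and refreshes the token.
        return session.Session(auth=auth)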


I guess if config_drive is True then this isn't really a problem as the 
metadata will be there in the instance already.


thanks

rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Andrew Laski
On Mon, Aug 15, 2016, at 10:33 AM, Jay Pipes wrote:
> On 08/15/2016 09:27 AM, Andrew Laski wrote:
> > Currently in Nova we're discussing adding a "capabilities" API to expose
> > to users what actions they're allowed to take, and having compute hosts
> > expose "capabilities" for use by the scheduler. As much fun as it would
> > be to have the same term mean two very different things in Nova, to
> > retain some semblance of sanity let's rename one or both of these
> > concepts.
> >
> > An API "capability" is going to be an action, or URL, that a user is
> > allowed to use. So "boot an instance" or "resize this instance" are
> > capabilities from the API point of view. Whether or not a user has this
> > capability will be determined by looking at policy rules in place and
> > the capabilities of the host the instance is on. For instance an
> > upcoming volume multiattach feature may or may not be allowed for an
> > instance depending on host support and the version of nova-compute code
> > running on that host.
> >
> > A host "capability" is a description of the hardware or software on the
> > host that determines whether or not that host can fulfill the needs of
> > an instance looking for a home. So SSD or x86 could be host
> > capabilities.
> > https://github.com/jaypipes/os-capabilities/blob/master/os_capabilities/const.py
> > has a list of some examples.
> >
> > Some possible replacement terms that have been thrown out in discussions
> > are features, policies(already used), grants, faculties. But none of
> > those seemed to clearly fit one concept or the other, except policies.
> >
> > Any thoughts on this hard problem?
> 
> I know, naming is damn hard, right? :)
> 
> After some thought, I think I've changed my mind on referring to the 
> adjectives as "capabilities" and actually think that the term 
> "capabilities" is better left for the policy-like things.
> 
> My vote is the following:
> 
> GET /capabilities <-- returns a set of *actions* or *abilities* that the 
> user is capable of performing
> 
> GET /traits <-- returns a set of *adjectives* or *attributes* that may 
> describe a provider of some resource

Traits sounds good to me.
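
To illustrate the split with made-up payloads (verbs the caller may perform
vs. adjectives describing a resource provider; none of these values are taken
from a real API response):

    API_CAPABILITIES = {'capabilities': ['create_server',
                                         'resize_server',
                                         'attach_multiattach_volume']}
    PROVIDER_TRAITS = {'traits': ['HW_CPU_X86_AVX2', 'STORAGE_DISK_SSD']}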

> 
> I can rename os-capabilities to os-traits, which would make Sean Mooney 
> happy I think and also clear up the terminology mismatch.
> 
> Thoughts?
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness

2016-08-15 Thread Anil Rao
Hi,

My (original) question regarding the uniqueness of Neutron port MAC addresses 
didn't concern SR-IOV support or vendor NICs. It was simply about the behavior 
w.r.t. virtual networks. I believe Armando has confirmed what I had been 
suspecting.

Thanks,
Anil

-Original Message-
From: Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com] 
Sent: Friday, August 12, 2016 1:46 AM
To: Moshe Levi
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness

That was my feeling Moshe, thanks for checking.

Anil, which card and drivers are you using exactly?

You should probably contact your card vendor and check if they have a fix for 
the issue, which seems more like a bug on their implementation of the embedded 
switch, the card or the driver.

Best regards,
Miguel  Ángel.

On Thu, Aug 11, 2016 at 12:49 PM, Moshe Levi  wrote:
> Hi Anil,
>
>
> I tested it with a Mellanox NIC and it is working
>
> 16: enp6s0d1:  mtu 1500 qdisc mq state UP 
> mode DEFAULT group default qlen 1000
> link/ether 00:02:c9:e9:c2:12 brd ff:ff:ff:ff:ff:ff
> vf 0 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 1 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 2 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 3 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 4 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 5 MAC fa:16:3e:0d:8c:a2, vlan 192, spoof checking on, link-state enable
> vf 6 MAC fa:16:3e:0d:8c:a2, vlan 190, spoof checking on, link-state enable
> vf 7 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, 
> link-state auto
>
> I guess the problem is with the SR-IOV NIC/driver you are using; maybe 
> you should contact them.
>
>
> -Original Message-
> From: Moshe Levi
> Sent: Wednesday, August 10, 2016 5:59 PM
> To: 'Miguel Angel Ajo Pelayo' ; OpenStack 
> Development Mailing List (not for usage questions) 
> 
> Cc: Armando M. 
> Subject: RE: [openstack-dev] [neutron] Neutron Port MAC Address 
> Uniqueness
>
> Miguel,
>
> I talked to our driver architect, and according to him this is vendor-specific 
> implementation (it should work with a Mellanox NIC). I need 
> to verify that this is indeed working.
> I will update after I prepare an SR-IOV setup and try it myself.
>
>
> -Original Message-
> From: Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com]
> Sent: Wednesday, August 10, 2016 12:04 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Cc: Armando M. ; Moshe Levi 
> Subject: Re: [openstack-dev] [neutron] Neutron Port MAC Address 
> Uniqueness
>
> @moshe, any insight on this?
>
> I guess that'd depend on the nic internal switch implementation and how the 
> switch ARP tables are handled there (per network, or global per switch).
>
> If that's the case for some sr-iov vendors (or all), would it make sense to 
> have a global switch to create globally unique mac addresses (for the same 
> neutron deployment, of course).
>
> On Wed, Aug 10, 2016 at 7:38 AM, huangdenghui  wrote:
>> hi Armando
>> I think this feature causes a problem in the SR-IOV scenario, since an
>> SR-IOV NIC doesn't support VFs having the same MAC, even when the ports belong
>> to different networks.
>>
>>
>> Sent from NetEase Mail mobile edition
>>
>>
>> On 2016-08-10 04:55 , Armando M. Wrote:
>>
>>
>>
>> On 9 August 2016 at 13:53, Anil Rao  wrote:
>>>
>>> Is the MAC address of a Neutron port on a tenant virtual network 
>>> globally unique or unique just within that particular tenant network?
>>
>>
>> The latter:
>>
>> https://github.com/openstack/neutron/blob/master/neutron/db/models_v2.
>> py#L139
>>
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Anil
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [new][oslo] oslo.db 4.7.1 release (mitaka)

2016-08-15 Thread no-reply
We are psyched to announce the release of:

oslo.db 4.7.1: Oslo Database library

This release is part of the mitaka stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.db

With package available at:

https://pypi.python.org/pypi/oslo.db

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

For more details, please see below.

Changes in oslo.db 4.7.0..4.7.1
---

7b63380 Capture DatabaseError for deadlock check
da8fe0c Catch empty value DBDuplicate errors
7f61fc5 Updated from global requirements
01fbc65 Updated from global requirements


Diffstat (except docs and test files)
-

oslo_db/sqlalchemy/exc_filters.py| 6 +++---
requirements.txt | 2 +-
setup.cfg| 2 +-
4 files changed, 11 insertions(+), 6 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 5f8237d..0a52fc5 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ alembic>=0.8.0 # MIT
-Babel>=1.3 # BSD
+Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci-cd] Jenkinsfile support (or repo/s for them?)

2016-08-15 Thread Joshua Harlow





I've been experimenting/investigating/playing around with the 'new'
jenkins pipeline support (see https://jenkins.io/doc/pipeline/ for those
who don't know what this is) and it got me thinking that there are
probably X other people/groups/companies that are doing the same thing
and that to me raises the question of 'why don't we work together'.


Why not work together on the thing that people are working together on:
Zuul v3 :)


Fair question,

So in part, because most/some companies (afaik) aren't dropping their 
CI/CD solution that they've been working on for years (via say jenkins) 
for zuul v3; for better or worse that's the reality that I see things in.


Getting people to adopt and/or replace things for zuul v3 (which doesn't 
yet exist?) at such a fundamental level (without such a system most 
companies CI/CD does not exist) may take years (if ever).


So nothing against the zuul v3 folks (their effort is worthy IMHO), I 
just don't see a way to get there at the current time; thus being more 
pragmatic makes me want to contribute to something that has a little 
less risk and is a little more 'already known'.





Example of a ci-cd workflow for this (in visual form, for those who are
visually tuned):

https://jenkins.io/images/pipeline/realworld-pipeline-flow.png

Is anyone else looking into how to build jenkinsfiles (or their
equivalents) for the openstack project repos? Perhaps we can work on
them together or perhaps even we can include those same jenkinsfiles in
the project repos themselves (thus making it that much easier to point
jenkins at the external repos and run them through tests, functional
tests, integration tests and so-on).


Honestly, I find Jenkins clunky, Java-centric, GUI-centric, and at this
point, mostly pointless for anyone who doesn't need a GUI for "building"
CI jobs.


No disagreement, the jenkins pipeline and jenkinsfile effort (that seems 
to go back for a few years now?) seems to be going in a way that isn't 
GUI centric and ... (so ya progress!)




With nodepool and Zuul, there's simply no purpose to Jenkins any more
for me.


Fair point, and no disagreement on that effort being good and all; 
though I can't quite say what companies that do use jenkins (the 
majority?) are supposed to do here. Running two systems that appear to 
be (starting to?) do the same things seems like a 'not' worthy effort 
(even though yes I know there are some fundamental differences in how 
they work and what they do).




Then again, I also think Slack is slow and pointless compared to IRC.
But, hey, emoji.


Those emojis are the best!

Overall though I don't disagree (I don't like being split over IRC and 
slack), but tis the world we (at least some of us) live in.




Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly subteam status report

2016-08-15 Thread Loo, Ruby
Hi,

Here is this week's subteam report for Ironic. As usual, this is pulled 
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff between  1 Aug 2016 and 15 Aug 2016)
- Ironic: 232 bugs (+16) + 199 wishlist items (-5). 36 new (+15), 160 in 
progress, 0 critical, 33 high (+2) and 19 incomplete
- Inspector: 9 bugs (-2) + 20 wishlist items. 0 new, 9 in progress (-2), 0 
critical, 2 high and 2 incomplete
- Nova bugs with Ironic tag: 11. 0 new, 0 critical, 0 high
- I suspect this time we might need an earlier feature freeze to win some time 
to sort all these bugs :/

Network isolation (Neutron/Ironic work) (jroll, TheJulia, devananda)

* trello: 
https://trello.com/c/HWVHhxOj/1-multi-tenant-networking-network-isolation
- multitenant networking is done \o/
- portgroups still todo

Gate improvements (jlvillal, lucasagomes, dtantsur)
===
- We should consider using bindep for installing binary dependencies instead of 
letting devstack-gate install a bunch of thing automagically. Details: 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101590.html
- ++ +1
- though I believe Andreas is planning to post the patch, maybe? 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101599.html good 
for us then :)

Multiple compute hosts (jroll, devananda)
=
* trello: https://trello.com/c/OXYBHStp/7-multiple-compute-hosts
- done! \o/

Generic boot-from-volume (TheJulia, dtantsur, lucasagomes)
==
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- No change since last week
- Volume connection information revisions still in review. 
https://review.openstack.org/#/q/topic:bug/1526231+status:open+project:openstack/ironic
- Boot from Volume spec still in review. 
https://review.openstack.org/#/c/294995/
- Code for Boot from Volume still under active development. 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691

Agent top-level API promotion (dtantsur)

* trello: 
https://trello.com/c/37YuKIB8/28-promote-agent-vendor-passthru-to-core-api
- MERGED

Driver composition (dtantsur)
=
* trello: https://trello.com/c/fTya14y6/14-driver-composition
- We seem to have a consensus now on the defaults, see 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101477.html
- TheJulia will submit an update to the spec
- dtantsur will continue hacking soon :)

OpenStackClient plugin for ironic (thrash, dtantsur, rloo)
==
* trello: https://trello.com/c/ckqtq3kG/16-openstackclient-plugin-for-ironic
- Still under review: chassis https://review.openstack.org/345815
- Port commands needs addressing review comments

Notifications (mariojv)
===
* trello: https://trello.com/c/MD8HNcwJ/17-notifications
- still have to respond to comments on first patch in series 
https://review.openstack.org/#/c/298461/

Keystone policy support (JayF, devananda)
=
* trello: https://trello.com/c/P5q3Es2z/15-keystone-policy-support
- mostly done! a couple follow-up / docs patches left

Software metrics (JayF, alineb)
===
* trello: https://trello.com/c/XtPGyHcP/18-software-metrics
- basically done, there's still a few patches adding metrics but I expect those 
to keep coming in

Active node creation (TheJulia)
===
* trello: https://trello.com/c/BwA91osf/22-active-node-creation
- Tempest test still pending review: 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1526315

Serial console (yossy, hshiina, yuikotakadamori)

* trello: https://trello.com/c/nm3I8djr/20-serial-console
- follow-up patches:
- https://review.openstack.org/#/c/335378/: merged
- https://review.openstack.org/#/c/293872/: needs review
- nova patch:
- https://review.openstack.org/#/c/328157/: needs review(-2 has been 
removed by Nova PTL)

Enhanced root device hints (lucasagomes)

* trello: https://trello.com/c/f9DTEvDB/21-enhanced-root-device-hints
- nothing last week, got stuck on other priorities

Rescue mode (JayF)
==
* trello: https://trello.com/c/PwH1pexJ/23-rescue-mode
- draft patches up
- spec still needs reviewing

Inspector (dtantsur)
===
- ironic-inspector 4.1.0 released with local_link_connection discovery
- 
http://docs.openstack.org/releasenotes/ironic-inspector/current-series.html#id1

Bifrost (TheJulia)
==
- Some test timeouts are occurring, largely due to the 

[openstack-dev] [new][openstack] osc-lib 1.0.1 release (newton)

2016-08-15 Thread no-reply
We are tickled pink to announce the release of:

osc-lib 1.0.1: OpenStackClient Library

This release is part of the newton release series.

With source available at:

https://git.openstack.org/cgit/openstack/osc-lib

With package available at:

https://pypi.python.org/pypi/osc-lib

Please report issues through launchpad:

https://bugs.launchpad.net/python-openstackclient

For more details, please see below.

1.0.1
^

Bug Fixes

* Add additional precedence fixes to the argument precedence
  problems in os-client-config 1.18.0 and earlier.  This all will be
  removed when os-client-config 1.19.x is the minimum allwed version
  in OpenStack's global requirements.txt.

Changes in osc-lib 1.0.0..1.0.1
---

ea65b90 More hacks to fix broken o-c-c precedence


Diffstat (except docs and test files)
-

osc_lib/cli/client_config.py | 20 
.../notes/arg-precedence-1ba9fd6929650830.yaml   |  7 +++
2 files changed, 27 insertions(+)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci-cd] Jenkinsfile support (or repo/s for them?)

2016-08-15 Thread Jay Pipes

On 08/15/2016 01:19 PM, Joshua Harlow wrote:

Hi folks,

I've been experimenting/investigating/playing around with the 'new'
jenkins pipeline support (see https://jenkins.io/doc/pipeline/ for those
who don't know what this is) and it got me thinking that there are
probably X other people/groups/companies that are doing the same thing
and that to me raises the question of 'why don't we work together'.


Why not work together on the thing that people are working together on: 
Zuul v3 :)



Example of a ci-cd workflow for this (in visual form, for those who are
visually tuned):

https://jenkins.io/images/pipeline/realworld-pipeline-flow.png

Is anyone else looking into how to build jenkinsfiles (or their
equivalents) for the openstack project repos? Perhaps we can work on
them together or perhaps even we can include those same jenkinsfiles in
the project repos themselves (thus making it that much easier to point
jenkins at the external repos and run them through tests, functional
tests, integration tests and so-on).


Honestly, I find Jenkins clunky, Java-centric, GUI-centric, and at this 
point, mostly pointless for anyone who doesn't need a GUI for "building" 
CI jobs.


With nodepool and Zuul, there's simply no purpose to Jenkins any more 
for me.


Then again, I also think Slack is slow and pointless compared to IRC. 
But, hey, emoji.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Newbie question regarding Ceilometer notification plugin

2016-08-15 Thread Wanjing Xu (waxu)
We are trying to develop a program which can draw the VM topology.  For
this we need to listen for VM creation/deletion events and act upon them.  I
would think Ceilometer is the right place for this, so I downloaded and
read Ceilometer a little bit.  If somebody can give me a pointer to where
I can start hooking my program into Ceilometer's notification
mechanism, that would be great.
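
One common approach, independent of Ceilometer itself, is to listen for nova's
own notifications on the message bus with oslo.messaging. A minimal sketch,
where the transport URL, topic and event types are illustrative assumptions:

    from oslo_config import cfg
    import oslo_messaging as messaging

    class InstanceEventEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type in ('compute.instance.create.end',
                              'compute.instance.delete.end'):
                # React to VM creation/deletion here.
                print(event_type, payload.get('instance_id'))

    def main():
        transport = messaging.get_notification_transport(
            cfg.CONF, url='rabbit://guest:guest@controller:5672/')
        targets = [messaging.Target(topic='notifications')]
        listener = messaging.get_notification_listener(
            transport, targets, [InstanceEventEndpoint()],
            executor='blocking')
        listener.start()
        listener.wait()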

Thanks and any suggestions are welcome!

Wanjing Xu



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday August 16th at 19:00 UTC

2016-08-15 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday August 16th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-08-09-19.04.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-08-09-19.04.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-08-09-19.04.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] reference a type of alarm in template

2016-08-15 Thread Rosensweig, Elisha (Nokia - IL)
Hi,

The "type" means where it was generated - aodh, vitrage, nagios...

I think you are looking for "name", a field that describes the actual problem. 
We should add that to our documentation to clarify.
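
For illustration, an (assumed) entity definition showing the distinction:
'type' identifies the datasource that generated the alarm, while 'name'
identifies the specific problem. The values below are made up:

    ENTITY = {
        'category': 'ALARM',
        'type': 'aodh',            # which datasource raised it
        'name': 'volume.corrupt',  # which specific alarm it is
        'template_id': 'corrupt_volume_alarm',
    }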

Sent from Nine

From: Yujun Zhang 
Sent: Aug 15, 2016 16:10
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [vitrage] reference a type of alarm in template

I have a question on how to reference a type of alarm in template so that we 
can build scenarios.

In the template sample [1], an alarm entity has three keys: `category`, `type` 
and `template_id`. It seems `type` is the only information to distinguish 
different alarms. However, when an alarm is raised by aodh, it seems all alarms
are assigned the entity type `aodh` [2], and that is how they appear in the dashboard.

Suppose we have two different types of alarms from `aodh`, e.g. 
`volume.corrupt` and `volume.deleted`. How should I reference them separately 
in a template?

[Attached image: Screen Shot 2016-08-15 at 8.44.56 PM.png]

[1] 
https://github.com/openstack/vitrage/blob/master/etc/vitrage/templates.sample/deduced_host_disk_space_to_instance_at_risk.yaml#L8
[2] 
https://github.com/openstack/vitrage/blob/master/vitrage/datasources/aodh/transformer.py#L75

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Driver composition defaults call

2016-08-15 Thread Devananda van der Veen


On 08/11/2016 10:37 AM, Julia Kreger wrote:
> Yesterday as a group (jroll, rloo, dtantsur, matt128, devananda,
> vdrok, and myself) discussed defaults for driver composition.
> 
> The options we discussed were:
> 
> * The existing specification[0] - Global and hardware_type
> default_FOO_interface configuration, global enabled_FOO_interfaces in
> configs, supported_FOO_interface in the hardware_type.
> 
> * Sambetts proposal[1] - To base any defaults on the intersection of
> enabled_FOO_interfaces and the supported_FOO_interface lists taking
> the first common option.
> 
> During the discussion the group came to the conclusion that if we were
> to enable the ability to set defaults solely by list ordering, as put
> forth in sambetts proposal, the question would then shift to who knows
> best. The operator, or the vendor via the hardware_type. This further
> evolved into substantial amounts of potential configuration which we
> seemed to agree was confusing and unrealistic. We eventually circled
> back to the original intent of the global configuration
> default_FOO_interface which was to make an operator’s life easier by
> allowing the definition of what would by and large be an explicitly
> chosen environmental or operating default.
> 
> Circling back to the intent allowed us to focus the discussion further
> and decide the following:
> 
> 1. If the client creating the node does not set an explicit
> FOO_interface, we must save whatever is determined as the default, in
> node.FOO_interface.
> 
> 2. To determine a default if one is not explicitly set via a
> default_FOO_interface, the intersection between the hardware_type
> definition supported_FOO_interfaces and the enabled_FOO_interfaces
> global configuration would be used to determine the default.
> 
> Having determined the two preceding items, we reached a consensus that
> the resulting default that is determined, must be present in
> enabled_FOO_interfaces list. The result of this is that there should
> be no implicit enablement of drivers, and the operator should be
> selecting the interfaces possible for their environment in the
> enabled_FOO_interfaces global configuration setting.
> 
> In following up with sambetts this morning and discussing the concerns
> that drove his proposal initially, Sam and I determined that any
> implicit enablement of drivers was not ideal, that an operator
> explicit default override for its intended purpose seemed okay, and
> that the determination of any default should come from the
> intersection of what is supported versus what is enabled, as the
> larger group reached a consensus on.  That this would ultimately
> result in default_FOO_interface dropping from the hardware_type and
> only being present as global configuration option for an operator to
> use if they so choose to do so.  This seems in-line with what the
> group reached while on the call yesterday.
> 
> Conceptually, this leaves us with something that looks like the
> following when nodes are created without a valid FOO_interface in the
> initial API post.
> 
> 1. The hardware_type's supported_FOO_interfaces is in order of
> preference by the vendor, for example: supported_FOO_interface = [BAR,
> CAR, DAR] this represents: if BAR enabled then use BAR else if CAR
> enabled then use CAR else if DAR enabled then use DAR.
> 
> 2. possible_FOO_interfaces to use for a hardware_type are calculated
> by intersecting enabled_FOO_interfaces and the hardware_type's
> supported_FOO_interfaces, order as in supported_FOO_interface is
> maintained.
> 
> 3. If configuration option default_FOO_interface is set AND
> default_FOO_interface is in possible_FOO_interfaces THEN
> node.FOO_interface is set to default_FOO_interface
> 
> 4. If configuration option default_FOO_interface is set AND
> default_FOO_interface is NOT in possible_FOO_interfaces THEN node
> create fails
> 
> 5. If configuration option default_FOO_interface is NOT set THEN
> node.FOO_interface is set to the first interface in
> possible_FOO_interface
> 

Thanks, Julia, for this excellent summary of the discussion.

This logic appears sound and, I believe, is as simple as possible, while not
unduly limiting operator or vendor choice.

+1

-Deva


> Thank you Sam for typing out the above logic.  I think this means we
> seem to have some consensus on the direction to move forward, at least
> I hope. :)
> 
> -Julia
> 
> [0] 
> http://specs.openstack.org/openstack/ironic-specs/specs/approved/driver-composition-reform.html
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-July/099257.html
> 
> On Mon, Aug 8, 2016 at 8:51 AM, Julia Kreger
>  wrote:
>> Thank you for sending the corrected link Mathieu!  I thought I fixed it
>> before I sent the email, but... *shrug*
>>
>> Anyway!  Looking at the doodle, the mutually available time is 4 PM UTC on
>> this Wednesday (8/10/16).  If there are no objections, I guess we will hear
>> those seeking to discuss defaults on 

[openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-15 Thread Farhad Sunavala
Please use the following configuration
devstack: stable/mitaka
networking-sfc: master
Linux kernel: 3.19.8 (if you have a multi-node setup)
OVS: 2.4+
Weekly IRC meeting information is at Meetings/ServiceFunctionChainingMeeting - OpenStack

  
#openstack-meeting-4
Thanks,
Farhad.


Date: Mon, 15 Aug 2016 13:39:05 +0200
From: Alioune 
To: "OpenStack Development Mailing List (not for usage questions)"
    
Subject: [openstack-dev] [neutron][networking-sfc] Unable to create
    openstack SFC
Message-ID:
    
Content-Type: text/plain; charset="utf-8"

Hi all,
I'm trying to launch Openstack SFC as explained in[1] by creating 2 SFs, 1
Web Server (DST) and the DHCP namespace as the SRC.
I've installed OVS (Open vSwitch) 2.3.90 with Linux kernel 3.13.0-62 and
the neutron L2-agent runs correctly.
I followed the process by creating classifier, port pairs and port_group
but I got an error message "delete_port_chain failed." when creating
port_chain [2]
I tried to create the neutron ports with and without the option
"--no-security-groups" then tcpdpump on SFs tap interfaces but the ICMP
packets don't go through the SFs.

Can anyone advise how to fix that?
What's your channel on IRC?

Regards,

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Driver composition defaults call

2016-08-15 Thread Loo, Ruby
Hi Julia,

Thanks for discussing with Sam and sending out this email. I like the 5 steps 
described below!

--ruby

On 2016-08-11, 1:37 PM, "Julia Kreger" 
> wrote:

Yesterday as a group (jroll, rloo, dtantsur, matt128, devananda,
vdrok, and myself) discussed defaults for driver composition.

The options we discussed were:

* The existing specification[0] - Global and hardware_type
default_FOO_interface configuration, global enabled_FOO_interfaces in
configs, supported_FOO_interface in the hardware_type.

* Sambetts proposal[1] - To base any defaults on the intersection of
enabled_FOO_interfaces and the supported_FOO_interface lists taking
the first common option.

During the discussion the group came to the conclusion that if we were
to enable the ability to set defaults solely by list ordering, as put
forth in sambetts proposal, the question would then shift to who knows
best. The operator, or the vendor via the hardware_type. This further
evolved into substantial amounts of potential configuration which we
seemed to agree was confusing and unrealistic. We eventually circled
back to the original intent of the global configuration
default_FOO_interface which was to make an operator’s life easier by
allowing the definition of what would by and large be an explicitly
chosen environmental or operating default.

Circling back to the intent allowed us to focus the discussion further
and decide the following:

1. If the client creating the node does not set an explicit
FOO_interface, we must save whatever is determined as the default, in
node.FOO_interface.

2. To determine a default if one is not explicitly set via a
default_FOO_interface, the intersection between the hardware_type
definition supported_FOO_interfaces and the enabled_FOO_interfaces
global configuration would be used to determine the default.

Having determined the two preceding items, we reached a consensus that
the resulting default that is determined, must be present in
enabled_FOO_interfaces list. The result of this is that there should
be no implicit enablement of drivers, and the operator should be
selecting the interfaces possible for their environment in the
enabled_FOO_interfaces global configuration setting.

In following up with sambetts this morning and discussing the concerns
that drove his proposal initially, Sam and I determined that any
implicit enablement of drivers was not ideal, that an operator
explicit default override for its intended purpose seemed okay, and
that the determination of any default should come from the
intersection of what is supported versus what is enabled, as the
larger group reached a consensus on.  That this would ultimately
result in default_FOO_interface dropping from the hardware_type and
only being present as global configuration option for an operator to
use if they so choose to do so.  This seems in-line with what the
group reached while on the call yesterday.

Conceptually, this leaves us with something that looks like the
following when nodes are created without a valid FOO_interface in the
initial API post.

1. The hardware_type's supported_FOO_interfaces is in order of
preference by the vendor, for example: supported_FOO_interface = [BAR,
CAR, DAR] this represents: if BAR enabled then use BAR else if CAR
enabled then use CAR else if DAR enabled then use DAR.

2. possible_FOO_interfaces to use for a hardware_type are calculated
by intersecting enabled_FOO_interfaces and the hardware_type's
supported_FOO_interfaces, order as in supported_FOO_interface is
maintained.

3. If configuration option default_FOO_interface is set AND
default_FOO_interface is in possible_FOO_interfaces THEN
node.FOO_interface is set to default_FOO_interface

4. If configuration option default_FOO_interface is set AND
default_FOO_interface is NOT in possible_FOO_interfaces THEN node
create fails

5. If configuration option default_FOO_interface is NOT set THEN
node.FOO_interface is set to the first interface in
possible_FOO_interface
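
Put another way, a minimal sketch of the selection part of that logic
(steps 2-5 above) as a hypothetical helper, purely for illustration and not
actual ironic code:

def pick_interface(supported, enabled, default=None):
    # Step 2: intersect, preserving the vendor's preference order.
    possible = [iface for iface in supported if iface in enabled]
    if default is not None:
        # Steps 3 and 4: an explicit default must be among the possible ones.
        if default in possible:
            return default
        raise ValueError('default %s is not supported and enabled' % default)
    if possible:
        # Step 5: fall back to the most preferred possible interface.
        return possible[0]
    raise ValueError('no enabled interface is supported by this hardware type')

# e.g. pick_interface(['BAR', 'CAR', 'DAR'], enabled=['DAR', 'CAR']) -> 'CAR'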

Thank you Sam for typing out the above logic.  I think this means we
seem to have some consensus on the direction to move forward, at least
I hope. :)

-Julia

[0] 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/driver-composition-reform.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-July/099257.html

On Mon, Aug 8, 2016 at 8:51 AM, Julia Kreger
> wrote:
Thank you for sending the corrected link Mathieu!  I thought I fixed it
before I sent the email, but... *shrug*

Anyway!  Looking at the doodle, the mutually available time is 4 PM UTC on
this Wednesday (8/10/16).  If there are no objections, I guess we will hear
those seeking to discuss defaults on conferencing[0] bridge number  at
that time.

-Julia

[0] https://wiki.openstack.org/wiki/Infrastructure/Conferencing


Re: [openstack-dev] [infra] Jenkinsfile support (or repo/s for them?)

2016-08-15 Thread Joshua Harlow

Much appreciated!

-Josh

Anita Kuno wrote:


I suggest using the [infra] tag in the subject line if you are looking
for input from the infra team. I have changed it for my reply.

Thanks,
Anita.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Jenkinsfile support (or repo/s for them?)

2016-08-15 Thread Anita Kuno

On 16-08-15 01:19 PM, Joshua Harlow wrote:

Hi folks,

I've been experimenting/investigating/playing around with the 'new' 
jenkins pipeline support (see https://jenkins.io/doc/pipeline/ for 
those who don't know what this is) and it got me thinking that there 
are probably X other people/groups/companies that are doing the same 
thing and that to me raises the question of 'why don't we work together'.


Example of a ci-cd workflow for this (in visual form, for those who 
are visually tuned):


https://jenkins.io/images/pipeline/realworld-pipeline-flow.png

Is anyone else looking into how to build jenkinsfiles (or their 
equivalents) for the openstack project repos? Perhaps we can work on 
them together or perhaps even we can include those same jenkinsfiles 
in the project repos themselves (thus making it that much easier to 
point jenkins at the external repos and run them through tests, 
functional tests, integration tests and so-on).


This kind of pipeline *slightly* competes with the zuul and job 
templates and such in the infra repos (but maybe compete is not the 
best wording, and perhaps complement is a better wording); but seeing 
how certain companies (at least mine) use jenkins it would be very 
nice to be able to collaborate on these (for the greater good).


Thoughts?

-Josh

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I suggest using the [infra] tag in the subject line if you are looking 
for input from the infra team. I have changed it for my reply.


Thanks,
Anita.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ci-cd] Jenkinsfile support (or repo/s for them?)

2016-08-15 Thread Joshua Harlow

Hi folks,

I've been experimenting/investigating/playing around with the 'new' 
jenkins pipeline support (see https://jenkins.io/doc/pipeline/ for those 
who don't know what this is) and it got me thinking that there are 
probably X other people/groups/companies that are doing the same thing 
and that to me raises the question of 'why don't we work together'.


Example of a ci-cd workflow for this (in visual form, for those who are 
visually tuned):


https://jenkins.io/images/pipeline/realworld-pipeline-flow.png

Is anyone else looking into how to build jenkinsfiles (or their 
equivalents) for the openstack project repos? Perhaps we can work on 
them together or perhaps even we can include those same jenkinsfiles in 
the project repos themselves (thus making it that much easier to point 
jenkins at the external repos and run them through tests, functional 
tests, integration tests and so-on).


This kind of pipeline *slightly* competes with the zuul and job 
templates and such in the infra repos (but maybe compete is not the best 
wording, and perhaps complement is a better wording); but seeing how 
certain companies (at least mine) use jenkins it would be very nice to 
be able to collaborate on these (for the greater good).


Thoughts?

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Doug Hellmann
Excerpts from Jim Meyer's message of 2016-08-15 09:37:36 -0700:
> A fast reply where others will expand further (I hope):
> 
> > On Aug 15, 2016, at 9:01 AM, Doug Hellmann  wrote:
> > 
> >> My vote is the following:
> >> 
> >> GET /capabilities <-- returns a set of *actions* or *abilities* that the 
> >> user is capable of performing
> > 
> > Does this relate in any way to how DefCore already uses "capabilities”?
> 
> Only a bit, and not in a way I’d be deeply concerned about.
> 
> The Interoperability Working Group (née DefCore) points to a specific Tempest 
> test which asserts that a service has a “capability” and determines if that 
> capability is “core,” thus required to be provided in this way in order to 
> claim that the underlying cloud is an OpenStack cloud.

Do you think that the meanings of "capability of a cloud" and
"capability of a user of a cloud" are far enough apart to avoid
confusion?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Jim Meyer
A fast reply where others will expand further (I hope):

> On Aug 15, 2016, at 9:01 AM, Doug Hellmann  wrote:
> 
>> My vote is the following:
>> 
>> GET /capabilities <-- returns a set of *actions* or *abilities* that the 
>> user is capable of performing
> 
> Does this relate in any way to how DefCore already uses "capabilities”?

Only a bit, and not in a way I’d be deeply concerned about.

The Interoperability Working Group (née DefCore) points to a specific Tempest 
test which asserts that a service has a “capability” and determines if that 
capability is “core,” thus required to be provided in this way in order to 
claim that the underlying cloud is an OpenStack cloud.

> 
>> 
>> GET /traits <-- returns a set of *adjectives* or *attributes* that may 
>> describe a provider of some resource

+1 for where this is going.

—j

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vitrage] host evacuate

2016-08-15 Thread Matt Riedemann

On 8/15/2016 4:24 AM, Afek, Ifat (Nokia - IL) wrote:

Hi,

In Vitrage project[1], I would like to add an option to call nova host-evacuate 
for a failed host. I noticed that there is an api for ‘nova evacuate’, and a 
cli (but no api) for ‘nova host-evacuate’. Is there a way that I can call 'nova 
host-evacuate’ from Vitrage code? Did you consider adding host-evacuate to nova 
api? Obviously it would improve the performance for Vitrage use case.


Some more details: The Vitrage project is used for analysing the root cause of 
OpenStack alarms, and deducing their existence before they are directly 
observed. It receives information from various data sources (Nova, Neutron, 
Zabbix, etc.) performs configurable actions and notifies other projects like 
Nova.

For example, in case zabbix detect a host NIC failure, Vitrage calls nova 
force-down api to indicate that the host is unavailable. We would like to add 
an option to evacuate the failed host.

Thanks,

Ifat.

[1] https://wiki.openstack.org/wiki/Vitrage






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



This blog post will probably be helpful:

http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/

The 'nova host-evacuate' CLI is a convenience CLI, it's not for a 
specific API, it just takes a host, finds the instances running on that 
host and evacuates each of them. You could do something similar in vitrage.
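
For what it's worth, a rough sketch of that loop using python-novaclient
(the credentials, search options and shared-storage flag here are assumptions,
so treat it as illustrative rather than exact):

# Illustrative only: roughly what the 'nova host-evacuate <host>' CLI does,
# driven from code instead of the shell.
from novaclient import client as nova_client

# Hypothetical admin credentials and endpoint.
nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                          auth_url='http://controller:5000/v2.0')

failed_host = 'compute-1'  # hypothetical hostname
servers = nova.servers.list(search_opts={'host': failed_host,
                                         'all_tenants': 1})
for server in servers:
    # Let the scheduler pick a target host; adjust on_shared_storage to
    # match the deployment.
    nova.servers.evacuate(server, host=None, on_shared_storage=True)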


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Jay Pipes

On 08/15/2016 12:01 PM, Doug Hellmann wrote:

Excerpts from Jay Pipes's message of 2016-08-15 10:33:49 -0400:

On 08/15/2016 09:27 AM, Andrew Laski wrote:

Currently in Nova we're discussing adding a "capabilities" API to expose
to users what actions they're allowed to take, and having compute hosts
expose "capabilities" for use by the scheduler. As much fun as it would
be to have the same term mean two very different things in Nova to
retain some semblance of sanity let's rename one or both of these
concepts.

An API "capability" is going to be an action, or URL, that a user is
allowed to use. So "boot an instance" or "resize this instance" are
capabilities from the API point of view. Whether or not a user has this
capability will be determined by looking at policy rules in place and
the capabilities of the host the instance is on. For instance an
upcoming volume multiattach feature may or may not be allowed for an
instance depending on host support and the version of nova-compute code
running on that host.

A host "capability" is a description of the hardware or software on the
host that determines whether or not that host can fulfill the needs of
an instance looking for a home. So SSD or x86 could be host
capabilities.
https://github.com/jaypipes/os-capabilities/blob/master/os_capabilities/const.py
has a list of some examples.

Some possible replacement terms that have been thrown out in discussions
are features, policies (already used), grants, faculties. But none of
those seemed to clearly fit one concept or the other, except policies.

Any thoughts on this hard problem?


I know, naming is damn hard, right? :)

After some thought, I think I've changed my mind on referring to the
adjectives as "capabilities" and actually think that the term
"capabilities" is better left for the policy-like things.

My vote is the following:

GET /capabilities <-- returns a set of *actions* or *abilities* that the
user is capable of performing


Does this relate in any way to how DefCore already uses "capabilities"?


Sorry, I have no idea, Doug :( I don't really pay attention to defcore. 
Could you explain how defcore uses the term capabilities, please?


Thanks!
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Puppet] Puppet OpenStack Application Module

2016-08-15 Thread Andrew Woodward
I'd like to propose the creation of a puppet module to make use of Puppet
Application orchestrator. This would consist of a Puppet-4 compatible
module that would define applications that would wrap the existing modules.

This will allow for the establishment of a shared module that is capable of
expressing OpenStack applications using the new language semantics in
Puppet 4 [1] for multi-node application orchestration.

I'd expect that initial testing env would consist of deploying a PE stack,
and using docker containers as node primitives. This is necessary until a
FOSS deployer component like [2] becomes stable, at which point we can
switch to it and use the FOSS PM as well. Once the env is up, I plan to
wrap p-o-i profiles to deploy the cloud and launch tempest for functional
testing.

[1] https://docs.puppet.com/pe/latest/app_orchestration_workflow.html

[2] https://github.com/ripienaar/mcollective-choria

-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2016-08-15 10:33:49 -0400:
> On 08/15/2016 09:27 AM, Andrew Laski wrote:
> > Currently in Nova we're discussing adding a "capabilities" API to expose
> > to users what actions they're allowed to take, and having compute hosts
> > expose "capabilities" for use by the scheduler. As much fun as it would
> > be to have the same term mean two very different things in Nova to
> > retain some semblance of sanity let's rename one or both of these
> > concepts.
> >
> > An API "capability" is going to be an action, or URL, that a user is
> > allowed to use. So "boot an instance" or "resize this instance" are
> > capabilities from the API point of view. Whether or not a user has this
> > capability will be determined by looking at policy rules in place and
> > the capabilities of the host the instance is on. For instance an
> > upcoming volume multiattach feature may or may not be allowed for an
> > instance depending on host support and the version of nova-compute code
> > running on that host.
> >
> > A host "capability" is a description of the hardware or software on the
> > host that determines whether or not that host can fulfill the needs of
> > an instance looking for a home. So SSD or x86 could be host
> > capabilities.
> > https://github.com/jaypipes/os-capabilities/blob/master/os_capabilities/const.py
> > has a list of some examples.
> >
> > Some possible replacement terms that have been thrown out in discussions
> > are features, policies (already used), grants, faculties. But none of
> > those seemed to clearly fit one concept or the other, except policies.
> >
> > Any thoughts on this hard problem?
> 
> I know, naming is damn hard, right? :)
> 
> After some thought, I think I've changed my mind on referring to the 
> adjectives as "capabilities" and actually think that the term 
> "capabilities" is better left for the policy-like things.
> 
> My vote is the following:
> 
> GET /capabilities <-- returns a set of *actions* or *abilities* that the 
> user is capable of performing

Does this relate in any way to how DefCore already uses "capabilities"?

Doug

> 
> GET /traits <-- returns a set of *adjectives* or *attributes* that may 
> describe a provider of some resource
> 
> I can rename os-capabilities to os-traits, which would make Sean Mooney 
> happy I think and also clear up the terminology mismatch.
> 
> Thoughts?
> -jay
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-15 Thread Joshua Harlow

Sean Dague wrote:

On 08/14/2016 06:23 PM, Patrick East wrote:


We were talking through some of the implications of this change in
#openstack-nova, and the following further concerns came out.

1) Unix permissions for services in distros

Both Ubuntu and RHEL have a dedicated service user per service. Nova
services run under nova user, cinder services under cinder. For those
services to share a lock path you need to do more than share the path.

You must also put both services in a group. Make the lockpath group
writable, and ensure all lockfiles get written with g+w permissions
(potentially overriding default system umask to get there).

2) Services in containers

For people pushing towards putting services in containers, you'd need to
do all sorts of additional work to make this lock path actually a shared
construct between 2 containers.


These are both pretty problematic changes for the entire deploy space
without good answers.

-Sean



Very good points, both really push me toward a long-term solution that 
involves an actual lock-management-service (that isn't a single 
directory); but I know this is a larger change (thankfully all the 
supporting primitives, services, and libraries should be existing/ready 
for this kind of change). I'd even go as far as to say that the 3 services 
I would *currently* recommend (etcd, zookeeper, redis) are more than 
mature enough for this usage by now.
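
As a hedged sketch of what that could look like with tooz (backend URL and
lock name are made up), both nova-compute and cinder-volume would take the
same named lock from a shared coordination service instead of a shared
lock_path directory:

# Illustrative only: inter-service locking via a coordination service
# (zookeeper here) using tooz.
from tooz import coordination

coordinator = coordination.get_coordinator(
    'zookeeper://127.0.0.1:2181', b'nova-compute-host1')
coordinator.start()

lock = coordinator.get_lock(b'os-brick-connect_volume')  # made-up lock name
with lock:
    pass  # do the connect/disconnect work here

coordinator.stop()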


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Monday, August 15, 2016 3:34 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova][API] Need naming suggestions for
> "capabilities"
> 
> On 08/15/2016 09:27 AM, Andrew Laski wrote:
> > Currently in Nova we're discussing adding a "capabilities" API to
> > expose to users what actions they're allowed to take, and having
> > compute hosts expose "capabilities" for use by the scheduler. As much
> > fun as it would be to have the same term mean two very different
> > things in Nova to retain some semblance of sanity let's rename one or
> > both of these concepts.
> >
> > An API "capability" is going to be an action, or URL, that a user is
> > allowed to use. So "boot an instance" or "resize this instance" are
> > capabilities from the API point of view. Whether or not a user has
> > this capability will be determined by looking at policy rules in
> place
> > and the capabilities of the host the instance is on. For instance an
> > upcoming volume multiattach feature may or may not be allowed for an
> > instance depending on host support and the version of nova-compute
> > code running on that host.
> >
> > A host "capability" is a description of the hardware or software on
> > the host that determines whether or not that host can fulfill the
> > needs of an instance looking for a home. So SSD or x86 could be host
> > capabilities.
> > https://github.com/jaypipes/os-
> capabilities/blob/master/os_capabilitie
> > s/const.py
> > has a list of some examples.
> >
> > Some possible replacement terms that have been thrown out in
> > discussions are features, policies (already used), grants, faculties.
> > But none of those seemed to clearly fit one concept or the other,
> except policies.
> >
> > Any thoughts on this hard problem?
> 
> I know, naming is damn hard, right? :)
> 
> After some thought, I think I've changed my mind on referring to the
> adjectives as "capabilities" and actually think that the term
> "capabilities" is better left for the policy-like things.
> 
> My vote is the following:
> 
> GET /capabilities <-- returns a set of *actions* or *abilities* that
> the user is capable of performing
> 
> GET /traits <-- returns a set of *adjectives* or *attributes* that may
> describe a provider of some resource
> 
> I can rename os-capabilities to os-traits, which would make Sean Mooney
> happy I think and also clear up the terminology mismatch.
[Mooney, Sean K] yep I like that suggestion though I'm fine with either.
os-traits is nice and short and I like the delineation between attributes and 
abilities.
> 
> Thoughts?
> -jay
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][l2gw]Idea of redundant L2GW

2016-08-15 Thread Eigo Kunimoto
Hi,

I want to connect VXLAN segments to VLAN segments using an L2GW, and
I want the L2GW to be redundant, achieved by two or more bare-metal
gateway nodes.

Are there any ideas for that? Or should I use STP/RSTP on the Open vSwitches?

Regard,
Eigo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Possible REST API design change for get-me-a-network (2.37)

2016-08-15 Thread Matt Riedemann

On 8/15/2016 10:37 AM, Matt Riedemann wrote:

On 8/11/2016 6:26 PM, Andrew Laski wrote:



On Thu, Aug 11, 2016, at 06:54 PM, Chris Friesen wrote:

On 08/11/2016 03:53 PM, Matt Riedemann wrote:

I wanted to bring this up for awareness since we're getting close to
feature
freeze and want consensus before it gets too late.

Ken'ichi brought up a good question on my REST API change for the 2.37
microversion:

https://review.openstack.org/#/c/316398/

The way I had written this was to just add special auto/none values
for the
networks 'uuid' field in the server create request schema.

The catch with using auto/none is that they can't be specified with
any other
values, like other networks, or a port, or really anything else.
It's just a
list with a single entry and that's either uuid=auto or uuid=none.

Ken'ichi's point was, why not just make "networks" in this case map
to 'auto' or
'none' or the list that exists today.

I like the idea, it's cleaner and it probably allows moving some of the
validation from the REST API code into the json schema (I think, not
totally
sure about that yet).

It is a change for how networks are requested today so there would
be some
conditional logic change pushed on the client - my tempest test
change and
novaclient changes would have to be updated for sure.

So I'm looking for input on that idea before we get too late, is
that difference
worth the work and syntax change in how the client requests a
network when
creating a server?


I like the idea...having magic values for 'uuid' that aren't actually
uuids and
magically don't work with other parameters is sort of gross.


+1. It's cleaner and better represents what's being requested IMO.



Chris


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



OK, sdague agreed with the suggested new approach to make
server.networks a list or enum, so I'm working on that change this week.

I have a question still. Today you can specify a network uuid of
br- and the REST API will strip the br- prefix. This is a
carryover from super old quantum behavior which is no longer valid with
neutron. With my 2.37 change I was killing that behavior, now I won't
technically need that anymore since I won't be going down that code path
for validation.

So should I continue to try and fix this as part of the 2.37
microversion or just leave it as-is to avoid additional complexity in
this change?



Alternatively, we could consider passing non-UUIDs for network IDs as a bug and 
fix the schema and drop that code. Then it's not blocked on a 
microversion or feature freeze schedule.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Possible REST API design change for get-me-a-network (2.37)

2016-08-15 Thread Matt Riedemann

On 8/11/2016 6:26 PM, Andrew Laski wrote:



On Thu, Aug 11, 2016, at 06:54 PM, Chris Friesen wrote:

On 08/11/2016 03:53 PM, Matt Riedemann wrote:

I wanted to bring this up for awareness since we're getting close to feature
freeze and want consensus before it gets too late.

Ken'ichi brought up a good question on my REST API change for the 2.37
microversion:

https://review.openstack.org/#/c/316398/

The way I had written this was to just add special auto/none values for the
networks 'uuid' field in the server create request schema.

The catch with using auto/none is that they can't be specified with any other
values, like other networks, or a port, or really anything else. It's just a
list with a single entry and that's either uuid=auto or uuid=none.

Ken'ichi's point was, why not just make "networks" in this case map to 'auto' or
'none' or the list that exists today.

I like the idea, it's cleaner and it probably allows moving some of the
validation from the REST API code into the json schema (I think, not totally
sure about that yet).

It is a change for how networks are requested today so there would be some
conditional logic change pushed on the client - my tempest test change and
novaclient changes would have to be updated for sure.

So I'm looking for input on that idea before we get too late, is that difference
worth the work and syntax change in how the client requests a network when
creating a server?


I like the idea...having magic values for 'uuid' that aren't actually
uuids and
magically don't work with other parameters is sort of gross.


+1. It's cleaner and better represents what's being requested IMO.



Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



OK, sdague agreed with the suggested new approach to make 
server.networks a list or enum, so I'm working on that change this week.
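
For anyone following along, a rough illustration of the two request shapes
that implies (values are made up and the schema is still under review):

# Illustrative request bodies only; not the final 2.37 schema.
get_me_a_network = {
    'server': {
        'name': 'vm-auto-net',
        'imageRef': '70a599e0-31e7-49b7-b260-868f441e862b',
        'flavorRef': '1',
        'networks': 'auto',   # or 'none' to boot with no network
    }
}

explicit_network = {
    'server': {
        'name': 'vm-explicit-net',
        'imageRef': '70a599e0-31e7-49b7-b260-868f441e862b',
        'flavorRef': '1',
        'networks': [{'uuid': 'e0c8bd15-9ad9-449e-8fd8-93cfde40b56b'}],
    }
}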


I have a question still. Today you can specify a network uuid of 
br- and the REST API will strip the br- prefix. This is a 
carryover from super old quantum behavior which is no longer valid with 
neutron. With my 2.37 change I was killing that behavior, now I won't 
technically need that anymore since I won't be going down that code path 
for validation.


So should I continue to try and fix this as part of the 2.37 
microversion or just leave it as-is to avoid additional complexity in 
this change?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Jay Pipes

On 08/15/2016 10:50 AM, Dean Troyer wrote:

On Mon, Aug 15, 2016 at 9:33 AM, Jay Pipes > wrote:

On 08/15/2016 09:27 AM, Andrew Laski wrote:

After some thought, I think I've changed my mind on referring to
the adjectives as "capabilities" and actually think that the
term "capabilities" is better left for the policy-like things.


My vote is the following:

GET /capabilities <-- returns a set of *actions* or *abilities* that
the user is capable of performing

GET /traits <-- returns a set of *adjectives* or *attributes* that
may describe a provider of some resource

I can rename os-capabilities to os-traits, which would make Sean
Mooney happy I think and also clear up the terminology mismatch.


/me didn't stop writing previous email to read this first...

I think traits may be preferable to what I wrote a minute ago (using
qualifying words) as this definition maintains separation for the
semantics of 'what can I do' vs 'what am I like'.

Plus 'trait' is a word that if/when surfaced into the UI will not
collide with anything else yet (that I know of).  It is a lot like how
OSC uses 'property', but may not be totally incompatible.


Right, the difference being a property has a key/value structure whereas 
a trait in this context is a simple string tag structure.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OVO Status Dashboard

2016-08-15 Thread John Davidge
Thanks Victor,

This is a nice way to track the work. If this is going to replace the
etherpad[1] then can I suggest including links to reviews in place of the
In Progress/Done text for each entry? The color coding will preserve the
completion status.

John

[1] https://etherpad.openstack.org/p/newton-ovo-progress

On 8/12/16, 4:15 PM, "Morales, Victor"  wrote:

>Hey neutrinos,
>
>First of all, the high priority for OVO implementation in newton release
>are
>the implementation and integration of port, subnet and network objects,
>but
>given that more people is joining to this initiative and also many
>patches are
>related directly and indirectly to this, results in something hard to
>track.
>So, I decided to create this document[1] to visualize and coordinate
>efforts.
>Feel free to include, modify or add missing things but even more try to
>review
>existing patches and help us to achieve our goal.
>
>[1]
>https://docs.google.com/spreadsheets/d/1FeeQlQITsZSj_wpOXiLbS36dirb_arX0XE
>WBdFVPMB8/edit?usp=sharing
>
>Regards/Saludos
>Victor Morales
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Overlay MTU setup in docker remote driver

2016-08-15 Thread Liping Mao (limao)
Hi Kuryr team,

I open an issue in docker-libnetwork:
https://github.com/docker/libnetwork/issues/1390

Appreciate for any idea or comments. Thanks.

Regards,
Liping Mao


On 16/8/12 下午4:08, "Liping Mao (limao)"  wrote:

>Hi Kuryr team,
>
>When the network in neutron using overlay for vm,
>it will use dhcp option to control the VM interface MTU,
>but for docker, the ip address does not get from dhcp.
>So it will not set up proper MTU in container.
>
>Two work-around in my mind now:
>1. Set the default MTU in docker to 1450 or less.
>2. Manually configure MTU after container start up.
>
>But both of these are not good, the idea way in my mind
>is when libnetwork Call remote driver create network,
>kuryr create neutron network, then return Proper MTU to libnetwork,
>docker use this MTU for this network. But docker remote driver
>does not support this.
>
>Or maybe let user config MTU in remote driver,
>a little similar with overlay driver:
>https://github.com/docker/libnetwork/pull/1349
>
>But now, seems like remote driver will not do similar things.
>
>Any idea to solve this problem? Thanks.
>
>
>Regards,
>Liping Mao
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-15 Thread Sean Dague
On 08/14/2016 06:23 PM, Patrick East wrote:

> I like the sound of a more unified way to interact with compute node
> services. Having a standardized approach for inter-service
> synchronization for controlling system resources would be sweet (even if
> it is just a more sane way of using local file locks). Anyone know if
> there is existing work in this area we can build off of? Or is the path
> forward a new cross-project spec to try and lock down some requirements,
> use-cases, etc.?
> 
> As far as spending time to hack together solutions via the config
> settings for this.. we'll its pretty minimal wrt size of effort compared
> to solving the large issue. Don't get me wrong though, I'm a fan of
> doing both in parallel. Even if we have resources jump on board
> immediately I'm not convinced we have a great chance to "fix" this for N
> in a more elegant fashion, much less any of the older releases affected
> by this. That leads me to believe we still need the shared config
> setting for at least a little while in Devstack, and documentation for
> existing deployments or ones going up with N.

We were talking through some of the implications of this change in
#openstack-nova, and the following further concerns came out.

1) Unix permissions for services in distros

Both Ubuntu and RHEL have a dedicated service user per service. Nova
services run under nova user, cinder services under cinder. For those
services to share a lock path you need to do more than share the path.

You must also put both services in a group. Make the lockpath group
writable, and ensure all lockfiles get written with g+w permissions
(potentially overriding default system umask to get there).

2) Services in containers

For people pushing towards putting services in containers, you'd need to
do all sorts of additional work to make this lock path actually a shared
construct between 2 containers.


These are both pretty problematic changes for the entire deploy space
without good answers.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Dean Troyer
On Mon, Aug 15, 2016 at 9:33 AM, Jay Pipes  wrote:

> On 08/15/2016 09:27 AM, Andrew Laski wrote:
>
>> After some thought, I think I've changed my mind on referring to the
>> adjectives as "capabilities" and actually think that the term
>> "capabilities" is better left for the policy-like things.
>>
>
> My vote is the following:
>
> GET /capabilities <-- returns a set of *actions* or *abilities* that the
> user is capable of performing
>
> GET /traits <-- returns a set of *adjectives* or *attributes* that may
> describe a provider of some resource
>
> I can rename os-capabilities to os-traits, which would make Sean Mooney
> happy I think and also clear up the terminology mismatch.
>

/me didn't stop writing previous email to read this first...

I think traits may be preferable to what I wrote a minute ago (using
qualifying words) as this definition maintains separation for the
semantics of 'what can I do' vs 'what am I like'.

Plus 'trait' is a word that if/when surfaced into the UI will not collide
with anything else yet (that I know of).  It is a lot like how OSC uses
'property', but may not be totally incompatible.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Dean Troyer
On Mon, Aug 15, 2016 at 8:27 AM, Andrew Laski  wrote:

> An API "capability" is going to be an action, or URL, that a user is
> allowed to use. So "boot an instance" or "resize this instance" are
> capabilities from the API point of view. Whether or not a user has this
> capability will be determined by looking at policy rules in place and
> the capabilities of the host the instance is on. For instance an
> upcoming volume multiattach feature may or may not be allowed for an
> instance depending on host support and the version of nova-compute code
> running on that host.
>
> A host "capability" is a description of the hardware or software on the
> host that determines whether or not that host can fulfill the needs of
> an instance looking for a home. So SSD or x86 could be host
> capabilities.
> https://github.com/jaypipes/os-capabilities/blob/master/
> os_capabilities/const.py
> has a list of some examples.
>

We have spent a good amount of time thinking about naming resources in
OpenStackClient and I think you have just stated what I would see as the
best compromise here, just qualifying 'capability' to get specific.  There
are far too many things in OpenStack to be able to use bare words to name
anymore; we passed that point a while back.

Even though this hinges on using capability slightly differently (actions
vs properties), 'api capability' and 'host capability' are themselves a
good solution, if not ideal.

If the difference between action and property is too strong to overcome, I
would suggest just using 'host property' or 'host attribute'.  It looks
like that list will include a variety of specific things, all of which
though are intended to convey information about the host.

Also, if these terms get surfaced to the user-visible level, I would like
to see them fit existing usage as much as possible so we don't get another
version of the 'metadata' vs 'properties' debate.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Jay Pipes

On 08/15/2016 09:27 AM, Andrew Laski wrote:

Currently in Nova we're discussing adding a "capabilities" API to expose
to users what actions they're allowed to take, and having compute hosts
expose "capabilities" for use by the scheduler. As much fun as it would
be to have the same term mean two very different things in Nova to
retain some semblance of sanity let's rename one or both of these
concepts.

An API "capability" is going to be an action, or URL, that a user is
allowed to use. So "boot an instance" or "resize this instance" are
capabilities from the API point of view. Whether or not a user has this
capability will be determined by looking at policy rules in place and
the capabilities of the host the instance is on. For instance an
upcoming volume multiattach feature may or may not be allowed for an
instance depending on host support and the version of nova-compute code
running on that host.

A host "capability" is a description of the hardware or software on the
host that determines whether or not that host can fulfill the needs of
an instance looking for a home. So SSD or x86 could be host
capabilities.
https://github.com/jaypipes/os-capabilities/blob/master/os_capabilities/const.py
has a list of some examples.

Some possible replacement terms that have been thrown out in discussions
are features, policies (already used), grants, faculties. But none of
those seemed to clearly fit one concept or the other, except policies.

Any thoughts on this hard problem?


I know, naming is damn hard, right? :)

After some thought, I think I've changed my mind on referring to the 
adjectives as "capabilities" and actually think that the term 
"capabilities" is better left for the policy-like things.


My vote is the following:

GET /capabilities <-- returns a set of *actions* or *abilities* that the 
user is capable of performing


GET /traits <-- returns a set of *adjectives* or *attributes* that may 
describe a provider of some resource
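
To make the distinction concrete, a purely hypothetical sketch of what each
response could contain (neither endpoint exists today; the values are made up):

# Hypothetical example payloads only.
capabilities_response = {
    # verbs: what the current user is allowed to do
    'capabilities': ['create_server', 'resize_server', 'attach_multiple_volumes'],
}

traits_response = {
    # adjectives: what a resource provider is like
    'traits': ['HW_CPU_X86_AVX2', 'STORAGE_DISK_SSD'],
}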


I can rename os-capabilities to os-traits, which would make Sean Mooney 
happy I think and also clear up the terminology mismatch.


Thoughts?
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-15 Thread Assaf Muller
+ Jakub.

On Wed, Aug 10, 2016 at 9:54 AM,   wrote:
> Hi,
>> [Mooney, Sean K]
>> In ovs 2.5 only linux kernel conntrack was supported assuming you had a 4.x
>> kernel that supported it. that means that the feature was not available on
>> bsd,windows or with dpdk.
> Yup, I also thought about something like that.
> I think I was at-least-slightly misguided by
> http://docs.openstack.org/draft/networking-guide/adv-config-ovsfwdriver.html
> and there is currently a statement
> "The native OVS firewall implementation requires kernel and user space 
> support for conntrack, thus requiring minimum versions of the Linux kernel 
> and Open vSwitch. All cases require Open vSwitch version 2.5 or newer."

I agree, that statement is misleading.

>
> Do you agree that this is something to change? I think it is not OK to state 
> OVS 2.6 without that being released, but in case I am not confusing then:
> -OVS firewall driver with OVS that uses kernel datapath requires OVS 2.5 and 
> Linux kernel 4.3
> -OVS firewall driver with OVS that uses userspace datapath with DPDK (aka 
> ovs-dpdk  aka DPDK vhost-user aka netdev datapath) doesn't have a Linux 
> kernel prerequisite
> That is documented in table in " ### Q: Are all features available with all 
> datapaths?":
> http://openvswitch.org/support/dist-docs/FAQ.md.txt
> where currently 'Connection tracking' row says 'NO' for 'Userspace' - but 
> that's exactly what has been merged recently /to become feature of OVS 2.6
>
> Also when it comes to performance I came across
> http://openvswitch.org/pipermail/dev/2016-June/071982.html, but I would guess 
> that devil could be the exact flows/ct actions that will be present in 
> real-life scenario.
>
>
> BR,
> Konstantin
>
>
>> -Original Message-
>> From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
>> Sent: Tuesday, August 09, 2016 2:29 PM
>> To: Volenbovskyi Kostiantyn, INI-ON-FIT-CXD-ELC
>> ; openstack-
>> d...@lists.openstack.org
>> Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack 
>> security
>> group driver with ovs-dpdk
>>
>>
>> > -Original Message-
>> > From: kostiantyn.volenbovs...@swisscom.com
>> > [mailto:kostiantyn.volenbovs...@swisscom.com]
>> > Sent: Tuesday, August 9, 2016 12:58 PM
>> > To: openstack-dev@lists.openstack.org; Mooney, Sean K
>> > 
>> > Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack
>> > security group driver with ovs-dpdk
>> >
>> > Hi,
>> > (sorry for using incorrect threading)
>> >
>> > > > About 2 weeks ago I did some light testing with the conntrack
>> > > > security group driver and the newly
>> > > >
>> > > > Merged upserspace conntrack support in ovs.
>> > > >
>> > By 'recently' - whether you mean patch v4
>> > http://openvswitch.org/pipermail/dev/2016-June/072700.html
>> > or you used OVS 2.5 itself (which I think includes v2 of the same
>> > patch series)?
>> [Mooney, Sean K] I used http://openvswitch.org/pipermail/dev/2016-
>> June/072700.html or specifically i used the following commit
>> https://github.com/openvswitch/ovs/commit/0c87efe4b5017de4c5ae99e7b9c3
>> 6e8a6e846669
>> which is just after userspace conntrack was merged,
>> >
>> > So in general - I am a bit confused about conntrack support in OVS.
>> >
>> > OVS 2.5 release notes http://openvswitch.org/pipermail/announce/2016-
>> > February/81.html state:
>> > "This release includes the highly anticipated support for connection
>> > tracking in the Linux kernel.  This feature makes it possible to
>> > implement stateful firewalls and will be the basis for future stateful
>> > features such as NAT and load-balancing.  Work is underway to bring
>> > connection tracking to the userspace datapath (used by DPDK) and the
>> > port to Hyper-V."  - in the sense that 'work is underway' (=work is
>> > ongoing) means that at the time of the OVS 2.5 release the feature was not
>> > 'classified' as ready?
>> [Mooney, Sean K]
>> In ovs 2.5 only linux kernel conntrack was supported, assuming you had a 4.x
>> kernel that supported it. That means the feature was not available on
>> bsd, windows or with dpdk.
>>
>> In the upcoming ovs 2.6 release conntrack support has been added to the
>> netdev datapath, which is used with dpdk and on bsd. As far as I am aware,
>> windows conntrack support is still missing, but I may be wrong.
>>
>> If you are interested, the devstack local.conf I used to test that it 
>> functioned is
>> available here http://paste.openstack.org/show/552434/
>>
>> I used an OpenStack vm running Ubuntu 16.04 with 2 e1000 interfaces to do 
>> the testing.
>>
>>
>> >
>> >
>> > BR,
>> > Konstantin
>> >
>> >
>> >
>> > > On Sat, Aug 6, 2016 at 8:16 PM, Mooney, Sean K
>> > 
>> > > wrote:
>> > > > Hi just a quick fyi,
>> > > >
>> > > > About 2 weeks ago I did some light testing with the conntrack
>> > security
>> > > > group driver and the newly
>> > > >
>> 

[openstack-dev] [new][bandit] Release 1.1.0 (httpoxy, important fixes)

2016-08-15 Thread Kelsey, Timothy John

Hi folks,
A new bandit release, 1.1.0, has been tagged. Importantly, it includes a security 
fix for a bug [1] in HTML-formatted reports that could permit XSS.

[New Features]
- New test for HTTPoxy bug (CVE-2016-5386)
- Man page added

[Bug Fixes]
- XSS bug fixed in HTML output (Security fix)
- Various typos and spelling errors fixed

[Behind the Scenes]
- Catch general exceptions per-file
- Many docs improvements
- Py3.5 bits

[1] https://bugs.launchpad.net/ossn/+bug/1612988
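
For anyone who hasn't used bandit before, here is a minimal illustration of the
kind of Python it reports on (the file and findings below are only an example;
exact check names and severities vary between bandit versions):

    # insecure_example.py - deliberately bad code for bandit to flag
    import subprocess

    password = "s3cret"  # typically reported as a hardcoded password

    def list_dir(path):
        # shell=True with interpolated input is typically reported as a
        # shell-injection risk
        return subprocess.check_output("ls " + path, shell=True)

Running "bandit -r <your-project>" over a tree containing code like this prints
the findings; the HTML formatter is the output path that received the XSS fix in
this release.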



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Andrew Laski
Currently in Nova we're discussing adding a "capabilities" API to expose
to users what actions they're allowed to take, and having compute hosts
expose "capabilities" for use by the scheduler. As much fun as it would
be to have the same term mean two very different things in Nova, to
retain some semblance of sanity let's rename one or both of these
concepts.

An API "capability" is going to be an action, or URL, that a user is
allowed to use. So "boot an instance" or "resize this instance" are
capabilities from the API point of view. Whether or not a user has this
capability will be determined by looking at policy rules in place and
the capabilities of the host the instance is on. For instance an
upcoming volume multiattach feature may or may not be allowed for an
instance depending on host support and the version of nova-compute code
running on that host.

A host "capability" is a description of the hardware or software on the
host that determines whether or not that host can fulfill the needs of
an instance looking for a home. So SSD or x86 could be host
capabilities.
https://github.com/jaypipes/os-capabilities/blob/master/os_capabilities/const.py
has a list of some examples.

Some possible replacement terms that have been thrown out in discussions
are features, policies (already used), grants, and faculties. But none of
those seemed to clearly fit one concept or the other, except policies.

Any thoughts on this hard problem?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] reference a type of alarm in template

2016-08-15 Thread Yujun Zhang
I have a question on how to reference a type of alarm in a template so that
we can build scenarios.

In the template sample [1], an alarm entity has three keys: `category`,
`type` and `template_id`. It seems `type` is the only information that
distinguishes different alarms. However, when an alarm is raised by aodh, it
seems all alarms are assigned the entity type `aodh` [2], and that is how they
are shown in the dashboard.

Suppose we have two different types of alarms from `aodh`, e.g.
`volume.corrupt` and `volume.deleted`. How should I reference them
separately in a template?

[image omitted: Screen Shot 2016-08-15 at 8.44.56 PM.png - dashboard screenshot]

[1]
https://github.com/openstack/vitrage/blob/master/etc/vitrage/templates.sample/deduced_host_disk_space_to_instance_at_risk.yaml#L8
[2]
https://github.com/openstack/vitrage/blob/master/vitrage/datasources/aodh/transformer.py#L75
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-ovn]NAT and external network in OVN

2016-08-15 Thread Wilence Yao
Hi Chandra,

thanks for your help.

How can I configure NAT via the OVN API (ovn-nbctl/ovn-sbctl), as in
https://github.com/openvswitch/ovs/blob/master/tutorial/OVN-Tutorial.md ?

Or through the OpenStack API (networking-ovn)?

Regards,
Wilence

On Wed, Aug 10, 2016 at 2:34 PM, Chandra Vejendla <
chandra.vejen...@gmail.com> wrote:

> On Tue, Aug 9, 2016 at 11:03 PM, Wilence Yao 
> wrote:
> >
> > Hi all,
> >
> > What's the current situation about NAT(floatingip or snat) and external
> network?
> > How can I get any material of NAT developing, design or testing ?
> >
> > Thanks for any help
> >
> >
> > Wilence Yao
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> Hi Wilence,
>
> Here are the links to the WIP patch [1] and high level design [2] for
> NAT support
> in networking-ovn.
>
> [1] https://review.openstack.org/#/c/346646/
> [2] https://etherpad.openstack.org/p/Integration_with_OVN_L3_Gateway
>
> Regards,
> Chandra
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [cinder] [nova] locking concern with os-brick

2016-08-15 Thread Sean Dague
On 08/13/2016 06:07 PM, Matt Riedemann wrote:

> 
> I checked a tempest-dsvm CI run upstream and we don't follow this
> recommendation for our own CI on all changes in OpenStack, so before we
> make this note in the release notes, I'd like to see us use the same
> lock_path for c-vol and n-cpu in devstack for our CI runs.
> 
> Also, it should really be a note in the help text of the actual
> lock_path option IMO since it's a latent and persistent thing that
> people are going to need to remember after newton has long been released
> and people deploying OpenStack for the first time AFTER newton shouldn't
> have to know there was a release note telling them not to shoot
> themselves in the foot, it should be in the config option help text.

That patch to do this is where this all started, because I was not
comfortable landing that as a default change in master until all the
affected projects had landed release notes / docs around this. Otherwise
this could very well have snuck in to devstack, done the right thing
there, and never been noticed by others.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-15 Thread Sean Dague
On 08/14/2016 06:23 PM, Patrick East wrote:

> I like the sound of a more unified way to interact with compute node
> services. Having a standardized approach for inter-service
> synchronization for controlling system resources would be sweet (even if
> it is just a more sane way of using local file locks). Anyone know if
> there is existing work in this area we can build off of? Or is the path
> forward a new cross-project spec to try and lock down some requirements,
> use-cases, etc.?
> 
> As far as spending time to hack together solutions via the config
> settings for this... well, it's pretty minimal wrt size of effort compared
> to solving the larger issue. Don't get me wrong though, I'm a fan of
> doing both in parallel. Even if we have resources jump on board
> immediately I'm not convinced we have a great chance to "fix" this for N
> in a more elegant fashion, much less any of the older releases affected
> by this. That leads me to believe we still need the shared config
> setting for at least a little while in Devstack, and documentation for
> existing deployments or ones going up with N.

So I think this breaks down into:

1) What are the exactly calls in os-brick that need this? What goes
wrong if they don't have it?

2) How do we communicate the need in a way that won't be missed by folks?

3) What is the least worst solution to this for Newton?

4) How do we make sure we don't do this again in future releases?

5) What is the more ideal long term solution here?

-Sean

-- 
Sean Dague
http://dague.net
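
For reference, a minimal sketch of why a shared lock_path matters here:
oslo.concurrency external file locks only exclude each other if both processes
resolve the same lock file. The lock name and directory below are illustrative,
not the actual os-brick code:

    from oslo_concurrency import lockutils

    SHARED_LOCK_PATH = '/var/lib/openstack/lock'  # hypothetical shared directory

    def connect_volume(connection_info):
        # external=True takes a file lock under lock_path; if nova-compute and
        # cinder-volume point at different lock_path directories they lock
        # different files and the critical section is no longer protected.
        with lockutils.lock('connect_volume', external=True,
                            lock_path=SHARED_LOCK_PATH):
            pass  # the actual connect/disconnect work would happen here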

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-15 Thread Alioune
Hi all,
I'm trying to launch OpenStack SFC as explained in [1] by creating 2 SFs, 1
web server (DST) and the DHCP namespace as the SRC.
I've installed OVS (Open vSwitch) 2.3.90 with Linux kernel 3.13.0-62 and
the neutron L2-agent runs correctly.
I followed the process by creating the classifier, port pairs and port_group,
but I got the error message "delete_port_chain failed." when creating the
port_chain [2].
I tried to create the neutron ports with and without the option
"--no-security-groups", then ran tcpdump on the SFs' tap interfaces, but the ICMP
packets don't go through the SFs.

Can anyone advise how to fix that?
What's your channel on IRC?

Regards,


[1] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
[2]
vagrant@ubuntu:~/openstack_sfc$ ./08-os_create_port_chain.sh
delete_port_chain failed.
vagrant@ubuntu:~/openstack_sfc$ cat 08-os_create_port_chain.sh
#!/bin/bash

neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2
--flow-classifier FC1 PC1

[3] Output OVS Flows

vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-tun -O OpenFlow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0xbc2e9105125301dc, duration=9615.385s, table=0, n_packets=146,
n_bytes=11534, priority=1,in_port=1 actions=resubmit(,2)
 cookie=0xbc2e9105125301dc, duration=9615.382s, table=0, n_packets=0,
n_bytes=0, priority=0 actions=drop
 cookie=0xbc2e9105125301dc, duration=9615.382s, table=2, n_packets=5,
n_bytes=490, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00
actions=resubmit(,20)
 cookie=0xbc2e9105125301dc, duration=9615.381s, table=2, n_packets=141,
n_bytes=11044, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=resubmit(,22)
 cookie=0xbc2e9105125301dc, duration=9615.380s, table=3, n_packets=0,
n_bytes=0, priority=0 actions=drop
 cookie=0xbc2e9105125301dc, duration=9615.380s, table=4, n_packets=0,
n_bytes=0, priority=0 actions=drop
 cookie=0xbc2e9105125301dc, duration=8617.106s, table=4, n_packets=0,
n_bytes=0, priority=1,tun_id=0x40e
actions=push_vlan:0x8100,set_field:4097->vlan_vid,resubmit(,10)
 cookie=0xbc2e9105125301dc, duration=9615.379s, table=6, n_packets=0,
n_bytes=0, priority=0 actions=drop
 cookie=0xbc2e9105125301dc, duration=9615.379s, table=10, n_packets=0,
n_bytes=0, priority=1
actions=learn(table=20,hard_timeout=300,priority=1,cookie=0xbc2e9105125301dc,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0xbc2e9105125301dc, duration=9615.378s, table=20, n_packets=5,
n_bytes=490, priority=0 actions=resubmit(,22)
 cookie=0xbc2e9105125301dc, duration=9615.342s, table=22, n_packets=146,
n_bytes=11534, priority=0 actions=drop
vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-int -O OpenFlow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0xbc2e9105125301dc, duration=6712.090s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=7,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6709.623s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=8,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6555.755s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=10,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6559.596s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=9,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6461.028s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=11,icmp_type=136 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6712.071s, table=0, n_packets=13,
n_bytes=546, priority=10,arp,in_port=7 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6709.602s, table=0, n_packets=0,
n_bytes=0, priority=10,arp,in_port=8 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6555.727s, table=0, n_packets=0,
n_bytes=0, priority=10,arp,in_port=10 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6559.574s, table=0, n_packets=12,
n_bytes=504, priority=10,arp,in_port=9 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=6461.005s, table=0, n_packets=15,
n_bytes=630, priority=10,arp,in_port=11 actions=resubmit(,24)
 cookie=0xbc2e9105125301dc, duration=9620.388s, table=0, n_packets=514,
n_bytes=49656, priority=0 actions=NORMAL
 cookie=0xbc2e9105125301dc, duration=9619.277s, table=0, n_packets=0,
n_bytes=0, priority=20,mpls actions=resubmit(,10)
 cookie=0xbc2e9105125301dc, duration=6712.111s, table=0, n_packets=25,
n_bytes=2674, priority=9,in_port=7 actions=resubmit(,25)
 cookie=0xbc2e9105125301dc, duration=6559.621s, table=0, n_packets=24,
n_bytes=2576, priority=9,in_port=9 actions=resubmit(,25)
 cookie=0xbc2e9105125301dc, duration=6555.777s, table=0, n_packets=2,
n_bytes=140, priority=9,in_port=10 actions=resubmit(,25)
 cookie=0xbc2e9105125301dc, duration=6461.082s, table=0, n_packets=47,
n_bytes=4830, priority=9,in_port=11 actions=resubmit(,25)
 cookie=0xbc2e9105125301dc, duration=6709.646s, table=0, n_packets=3,

[openstack-dev] [nova][vitrage] host evacuate

2016-08-15 Thread Afek, Ifat (Nokia - IL)
Hi,

In Vitrage project[1], I would like to add an option to call nova host-evacuate 
for a failed host. I noticed that there is an api for ‘nova evacuate’, and a 
cli (but no api) for ‘nova host-evacuate’. Is there a way that I can call 'nova 
host-evacuate’ from Vitrage code? Did you consider adding host-evacuate to nova 
api? Obviously it would improve the performance for Vitrage use case.


Some more details: The Vitrage project is used for analysing the root cause of 
OpenStack alarms, and deducing their existence before they are directly 
observed. It receives information from various data sources (Nova, Neutron, 
Zabbix, etc.) performs configurable actions and notifies other projects like 
Nova. 

For example, in case zabbix detect a host NIC failure, Vitrage calls nova 
force-down api to indicate that the host is unavailable. We would like to add 
an option to evacuate the failed host.

Thanks,

Ifat.

[1] https://wiki.openstack.org/wiki/Vitrage
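
For reference, the per-instance loop such code would need looks roughly like the
sketch below. Client construction and the exact evacuate signature differ between
novaclient releases, so treat this purely as an illustration:

    from novaclient import client

    def evacuate_host(nova, failed_host):
        # There is no single host-evacuate REST API today, so list the servers
        # on the failed host and evacuate each one individually.
        servers = nova.servers.list(
            search_opts={'host': failed_host, 'all_tenants': 1})
        for server in servers:
            nova.servers.evacuate(server)  # let the scheduler pick a new host

    # nova = client.Client('2', session=keystone_session)  # illustrative setup
    # evacuate_host(nova, 'compute-3')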






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting reminder - 08/15/2016

2016-08-15 Thread Deja, Dawid
Hi,

This is a reminder that we’ll have a team meeting today at #openstack-meeting 
at 16.00 UTC.

Agenda:

  *   Review action items
  *   Current status (progress, issues, roadblocks, further plans)
  *   Open discussion

Dawid Deja
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][oslo] Lets move forward on making oslo.privsep logging more sane

2016-08-15 Thread Thierry Carrez
Matt Riedemann wrote:
> [...]
> I think we could approve Angus' and maybe iterate/improve on it later
> for what Walter wants, which is making oslo.privsep logging more
> configurable like processutils so the caller can get the captured
> stdout/stderr and decide what to do with it.

+2ed (the solution, the rationale and the patch)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-15 Thread Dmitry Tantsur

Hi everyone, happy Monday :)

I'd like to start a discussion about CI-testing the optional 
composable services (I'm primarily interested in Ironic, but I 
know there are a lot more).


Currently every time we change something in an optional service, we have 
to create a DO-NOT-MERGE patch making the service in question not 
optional. This approach has several problems:


1. It's not usually done for global refactorings.

2. The current CI does not have any specific tests to check that the 
services in question actually work at all (e.g. in my experience the CI 
was green even though nova-compute could not reach ironic).


3. If something breaks, it's hard to track the problem down to a 
specific patch, as there is no history of gate runs.


4. It does not test the environment files we provide for enabling the 
service.


So, are there any plans to start covering optional services? Maybe at 
least a non-HA job with all environment files included? It would be cool 
to also somehow provide additional checks, though. Or, in the case of ironic, 
to disable the regular nova-compute, so that the ping test runs on an 
ironic instance.


WDYT?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-15 Thread Thierry Carrez
Sean Dague wrote:
> On 08/12/2016 01:10 PM, Walter A. Boring IV wrote:
>> I believe there is a compromise that we could implement in Cinder that
>> enables us to have a deprecation
>> of unsupported drivers that aren't meeting the Cinder driver
>> requirements and allow upgrades to work
>> without outright immediately removing a driver.
>>
>>  1. Add a 'supported = True' attribute to every driver.
>>  2. When a driver no longer meets Cinder community requirements, put a
>> patch up against the driver
>>  3. When c-vol service starts, check the supported flag.  If the flag is
>> False, then log an exception, and disable the driver.
>>  4. Allow the admin to put an entry in cinder.conf for the driver in
>> question "enable_unsupported_driver = True".  This will allow the
>> c-vol service to start the driver and allow it to work.  Log a
>> warning on every driver call.
>>  5. This is a positive acknowledgement by the operator that they are
>> enabling a potentially broken driver. Use at your own risk.
>>  6. If the vendor doesn't get the CI working in the next release, then
>> remove the driver. 
>>  7. If the vendor gets the CI working again, then set the supported flag
>> back to True and all is good. 
>>
>> This allows a deprecation period for a driver, and keeps operators who
>> upgrade their deployment from losing access to the volumes they have
>> on those back-ends.  It will give them time to contact the community
>> and/or do some research, and find out what happened to the driver.  
>> This also potentially gives the operator time to find a new supported
>> backend and start migrating volumes.  I say potentially, because the
>> driver may be broken, or it may work enough to migrate volumes off of it
>> to a new backend.
>>
>> Having unsupported drivers in tree is terrible for the Cinder community,
>> and in the long run terrible for operators.
>> Instantly removing drivers because CI is unstable is terrible for
>> operators in the short term, because as soon as they upgrade OpenStack,
>> they lose all access to managing their existing volumes.   Just because
>> we leave a driver in tree in this state, doesn't mean that the operator
>> will be able to migrate if the drive is broken, but they'll have a
>> chance depending on the state of the driver in question.  It could be
>> horribly broken, but the breakage might be something fixable by someone
>> that just knows Python.   If the driver is gone from tree entirely, then
>> that's a lot more to overcome.
>>
>> I don't think there is a way to make everyone happy all the time, but I
>> think this buys operators a small window of opportunity to still manage
>> their existing volumes before the driver is removed.  It also still
>> allows the Cinder community to deal with unsupported drivers in a way
>> that will motivate vendors to keep their stuff working.
> 
> This seems very reasonable. It allows the cinder team to mark stuff
> unsupported at any point that vendors do not meet their upstream
> commitments, but still provides some path forward for operators that
> didn't realize their chosen vendor abandoned them and the community
> until after they are in the midst of upgrade. It's very important that
> the cinder team is able to keep a very visible hammer for vendors not
> living up to their commitments.
> 
> Keeping some visible data around drivers that are flapping (going
> unsupported, showing up with CI to get back out of the state,
> disappearing again) would be great as well, to further give operators
> data on what vendors are working in good faith and which aren't.

I like this a lot, and it certainly would address the deprecation policy
part.

Sean: I was wondering if that would not still be considered breaking
upgrades, though... Since you end up upgrading and your c-vol would not
restart until you set enable_unsupported_driver = True ?

-- 
Thierry Carrez (ttx)
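
To make the proposed flow concrete, here is a rough sketch of the startup check
described in the numbered list above; the attribute and option names come from
the proposal in this thread, not from actual Cinder code:

    import logging

    LOG = logging.getLogger(__name__)

    class FooDriver(object):
        # Steps 1-2: every driver carries a supported flag, flipped to False
        # by a patch when its CI stops meeting community requirements.
        supported = False

    def init_driver(driver, enable_unsupported_driver=False):
        if driver.supported:
            return driver
        if not enable_unsupported_driver:
            # Step 3: log the problem and disable the driver when c-vol starts.
            LOG.error("Driver %s is unsupported and has been disabled.",
                      type(driver).__name__)
            return None
        # Steps 4-5: the operator explicitly opted in via cinder.conf, so load
        # the driver anyway and warn loudly (per-call warnings omitted here).
        LOG.warning("Running unsupported driver %s at your own risk.",
                    type(driver).__name__)
        return driver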

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-15 Thread Thierry Carrez
Doug Hellmann wrote:
> [...]
> Choosing to be a part of a community comes with obligations as well
> as benefits.  If, after a lengthy discussion of a community-wide
> goal, involving everyone in the community, a project team is
> resolutely opposed to the goal, does that not indicate that the
> needs of the project team and the needs of the broader community
> are at odds in some way? And if the project team's needs and the
> community needs are consistently at odds, over the course of a
> series of such goals, why would the project team want to constrain
> itself to stay in the community?  Aren't they clearly going in
> different directions?
> 
> Understand, it is not my desire to emphasize any differences of
> this nature. Rather, I want to reduce them. To do that, I am proposing
> a process through which common goals can be identified, described,
> and put into action. I do hope, though, that through the course of
> the discussion of each individual proposal everyone involved will
> come to understand the idea and by the time a proposal becomes a
> "goal" to be implemented I "expect" everyone to, at the very least,
> understand why a goal is important to others, even if they do not
> agree with it. That understanding should then lead, on the basis
> of agreeing to be part of a collaborative community, to supporting
> the goal.
> 
> I also expect us to discuss a lot of proposals that we do not agree
> on, and that either need more time to develop or that end up finding
> another path to resolution. No one seems all that concerned with
> the concept that they might propose a goal that everyone else doesn't
> agree with.  :-)
> 
> So, yes, by the time we pick a goal I expect teams to do the work,
> because at that point in the process they will see it as the
> reasonable course of action.  There is still an "escape valve" in
> place for teams that, after all of the discussion and shaping of
> the goals is over, still take issue with a goal. By explaining their
> position in their response, we will have reference documentation
> to point to when the inevitable "why doesn't X do Y" questions
> arise. I will be interested to see how often we actually have to
> use that.

+1

> Excerpts from John Dickinson's message of 2016-08-12 16:04:42 -0700:
>> [...]
>> The proposed plan has a lot of good in it, and I'm really happy to see the TC
>> working to bring common goals and vision to the entirety of the OpenStack
>> community. Drop the "project teams are expected to prioritize these goals 
>> above
>> all other work", and my concerns evaporate. I'd be happy to agree to that 
>> proposal.
> 
> Saying that the community has goals but that no one is expected to
> act to bring them about would be a meaningless waste of time and
> energy.

I think we can find wording that doesn't use the word "priority" (which
is, I think, what John objects to the most) while still conveying that
project teams are expected to act to bring them about (once they said
they agreed with the goal).

How about "project teams are expected to do everything they can to
complete those goals within the boundaries of the target development
cycle" ? Would that sound better ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][drivers] Backend and volume health reporting

2016-08-15 Thread Avishay Traeger
On Sun, Aug 14, 2016 at 5:53 PM, John Griffith 
wrote:
>
> ​I'd like to get a more detailed use case and example of a problem you
> want to solve with this.  I have a number of concerns including those I
> raised in your "list manageable volumes" proposal.​  Most importantly
> there's really no clear definition of what these fields mean and how they
> should be interpreted.
>

I didn't specify what anything means yet on purpose - the idea was to first
gather information here about what various backends can report, then we
make an educated decision about what health states make sense to expose.

I see Cinder's potential as a single pane of glass management for all of my
cloud's storage.  Once I do some initial configuration, I hope to look at
the backend's UI as little as possible.  Today a user can create a volume,
but can't know anything about its resiliency or availability.  The user
has a volume that's "available" and is happy.  But what does the user
really care about?  In my opinion not Cinder's internal state machine, but
things like "Is my data safe?" and "Is my data accessible?"  That's the
problem that I want to solve here.


> For backends, I'm not sure what you want to solve that can't be handled
> already by the scheduler and report-capabilities periodic job?  You can
> already report back from your backend to the scheduler that you shouldn't
> be used for any scheduling activities going forward.  More detailed info
> than that might be useful, but I'm not sure it wouldn't fall into an
> already existing OpenStack monitoring project like Monasca?
>

My storage requires maintenance and now all volumes are inaccessible.  I
have management access and can create as many volumes as I want, but cannot
attach them.  Or the storage is down entirely.  Or it is up but
performance/reliability is degraded due to rebuilds in progress.  Or
multiple disks failed, and I lost data from 100 volumes.

In all these cases, all I see is that my volumes are available/in-use.  To
have any real insight into what is going on, the admin has to go to the
storage backend and use vendor-specific APIs to find out.  Why not abstract
these APIs as well, to allow the admin to monitor the storage?  It can be
as simple as "Hey, there's a problem, your volumes aren't accessible - go
look at the backend's UI" - without going into details.

Do you propose every vendor write a Monasca plugin?  It doesn't seem to be
in line with their goal...

As far as volumes, I personally don't think volumes should have more than a
> few states.  They're either "ok" and available for an operation or they're
> not.
>

I agree.  In my opinion volumes have way too many states today.  But that's
another topic.  What I am proposing is not new states, or a new state
machine, but rather a simple health property: volume['health'] = "healthy",
volume['health'] = "error".  Whatever the backend reports.


> The list you have seems ok to me, but I don't see a ton of value in fault
> prediction or going to great lengths to avoid something failing. The
> current model we have of a volume being "ok" until it's "not" seems
> perfectly reasonable to me.  Typically my experience is that trying to be
> clever and polling/monitoring to try and preemptively change the status of
> a volume does little more than result in complexity, confusion and false
> status changes of resources.  I'm pretty strongly opposed to having a level
> of granularity of the volume here.  At least for now, I'd rather see what
> you have in mind for the backend and nail that down to something that's
> solid and basically bullet proof before trying to tackle thousands of
> volumes which have transient states.  And of course the biggest question I
> have still "what problem" you hope to solve here?
>

This is not about fault prediction, or preemptive changes, or anything
fancy like that.  It's simply reporting on the current health.  "You have
lost the data in this volume, sorry".  "Don't bother trying to attach this
volume right now, it's not accessible."  "The storage is currently doing
something with your volume and performance will suck."

I don't know exactly what we want to expose - I'd rather answer that after
getting feedback from vendors about what information is available.  But
providing some real, up-to-date health status on storage resources is of
great value to customers.
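
To make this concrete, the existing get_volume_stats() reporting hook is one
natural place a backend could surface this; the health-related keys below are
hypothetical, just to show the shape of what I mean:

    class ExampleDriver(object):
        """Sketch only: a driver surfacing backend health in its stats."""

        def get_volume_stats(self, refresh=False):
            # get_volume_stats() is the existing capability-report mechanism;
            # the *_health keys are hypothetical additions, not part of
            # today's Cinder driver contract.
            return {
                'volume_backend_name': 'example',
                'total_capacity_gb': 1024,
                'free_capacity_gb': 512,
                'backend_health': 'degraded',
                'backend_health_reason': 'RAID rebuild in progress',
            }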

Thanks,
Avishay


-- 
*Avishay Traeger, PhD*
*System Architect*

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [nova] os-capabilities library created

2016-08-15 Thread Alex Xu
2016-08-15 12:56 GMT+08:00 Yingxin Cheng :

> Hi,
>
> I'm concerned with the dependencies between "os-capabilities" library and
> all the other OpenStack services such as Nova, Placement, Ironic, etc.
>
> Rather than embedding the universal "os-capabilities" in Nova, Cinder,
> Glance, Ironic services that will introduce complexities if the library
> versions are different, I'd prefer to hide this library behind the
> placement service and expose consistent interfaces as well as caps to all
> the other services. But the drawback here is also obvious: for example,
> when nova wants to support a new capability, the development will require
> os-capabilities updates, and related lib version bumps, which is
> inconvenient and seems unnecessary.
>

+1 for the concern, this is a good point. +1 for hiding os-capabilities
behind the placement service and exposing consistent interfaces.


>
> So IMHO, the possible solution is:
> * Let each services (Nova, Ironic ...) themselves manage their
> capabilities under proper namespaces such as "compute", "ironic" or
> "storage";
>

Actually I think we won't catalog the capabilities by service; we will
catalog the capabilities by resource type. For example, nova and ironic may share
some capabilities, since they both provide compute resources.


> * Let os-capabilities define as few as caps possible that are only
> cross-project;
> * And also use os-capabilities to convert service-defined or user-defined
> caps to a standardized and distinct form that can be understood by the
> placement engine.
>
>
This solution sounds complex to me: you need to define the capabilities in many
different places (in os-capabilities for cross-project caps, in each service
for project-only caps), and this may lead to different services defining the same
caps with their own naming. And because the dependency is a library, when you
upgrade os-capabilities you have to restart all the services.

I'm thinking of another solution:

1. os-capabilities just defines the capabilities in JSON/YAML files.
2. We expose the capabilities through the REST API of the placement engine;
this REST API will read the capabilities from the JSON/YAML files defined
by os-capabilities.

This resolves several problems:
1. your concern: os-capabilities is hidden behind one single API
2. when os-capabilities is upgraded, you needn't update the os-capabilities
library on all control nodes. You only need to update os-capabilities on
the node running the placement engine. Because the capabilities live in the
JSON/YAML files, you don't even need to restart the placement engine service.

But yes, this could also be done via the glance metadata API.
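
To illustrate the file-backed idea, a tiny sketch (the file name, schema and
capability strings are all made up for illustration):

    import yaml  # PyYAML

    def load_capabilities(path='capabilities.yaml', prefix=None):
        # The placement API endpoint would read the os-capabilities-provided
        # file at request time, so updating the file needs no service restart.
        with open(path) as f:
            caps = yaml.safe_load(f) or []
        if prefix:
            caps = [c for c in caps if c.startswith(prefix)]
        return caps

    # capabilities.yaml might be a flat list, e.g.:
    #   - hw:cpu:x86:sse
    #   - hw:cpu:x86:avx2
    # load_capabilities(prefix='hw:cpu') then returns the CPU-related subset.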


> My two cents,
> Yingxin
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Newton Midcycle - Tuesday dinner

2016-08-15 Thread Martin Hickey

Hi John,

We will be able to accommodate 15+, as need be. See you all then.

Regards,
Martin



From:   John Schwarz 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   14/08/2016 13:50
Subject:[openstack-dev]  [neutron] Newton Midcycle - Tuesday dinner



Hi guys,

For those of us who'll arrive on Tuesday to Cork, Martin Hickey has
arranged a dinner at "Gourmet Burger Bistro" [1], 8 Bridge Street, at
19:30. Last I heard the reservation was for 15 people so this should
accommodate all who filled out in [2] that they will arrive on
Tuesday.

[1]: http://www.gourmetburgerbistro.ie/
[2]: https://etherpad.openstack.org/p/newton-neutron-midcycle

See you then,

--
John Schwarz,
Red Hat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev