Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-12 Thread John Dickinson


On 12 Aug 2016, at 13:31, Doug Hellmann wrote:

> Excerpts from John Dickinson's message of 2016-08-12 13:02:59 -0700:
>>
>> On 12 Aug 2016, at 7:28, Doug Hellmann wrote:
>>
>>> Excerpts from John Dickinson's message of 2016-08-11 15:00:56 -0700:

 On 10 Aug 2016, at 8:29, Doug Hellmann wrote:

> Excerpts from Doug Hellmann's message of 2016-07-29 16:55:22 -0400:
>> One of the outcomes of the discussion at the leadership training
>> session earlier this year was the idea that the TC should set some
>> community-wide goals for accomplishing specific technical tasks to
>> get the projects synced up and moving in the same direction.
>>
>> After several drafts via etherpad and input from other TC and SWG
>> members, I've prepared the change for the governance repo [1] and
>> am ready to open this discussion up to the broader community. Please
>> read through the patch carefully, especially the "goals/index.rst"
>> document which tries to lay out the expectations for what makes a
>> good goal for this purpose and for how teams are meant to approach
>> working on these goals.
>>
>> I've also prepared two patches proposing specific goals for Ocata
>> [2][3].  I've tried to keep these suggested goals for the first
>> iteration limited to "finish what we've started" type items, so
>> they are small and straightforward enough to be able to be completed.
>> That will let us experiment with the process of managing goals this
>> time around, and set us up for discussions that may need to happen
>> at the Ocata summit about implementation.
>>
>> For future cycles, we can iterate on making the goals "harder", and
>> collecting suggestions for goals from the community during the forum
>> discussions that will happen at summits starting in Boston.
>>
>> Doug
>>
>> [1] https://review.openstack.org/349068 describe a process for managing 
>> community-wide goals
>> [2] https://review.openstack.org/349069 add ocata goal "support python 
>> 3.5"
>> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
>> libraries"
>>
>
> The proposal was discussed at the TC meeting yesterday [4], and
> left open to give more time to comment. I've added all of the PTLs
> for big tent projects as reviewers on the process patch [1] to
> encourage comments from them.
>
> Please also look at the associated patches with the specific goals
> for this cycle (python 3.5 support and cleaning up Oslo incubated
> code).  So far most of the discussion has focused on the process,
> but we need folks to think about the specific things they're going
> to be asked to do during Ocata as well.
>
> Doug
>
> [4] 
> http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-09-20.01.log.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Commonality in goals and vision is what unites any community. I
 definitely support the TC's effort to define these goals for OpenStack
 and to champion them. However, I have a few concerns about the process
 that has been proposed.

 I'm concerned with the mandate that all projects must prioritize these
 goals above all other work. Thinking about this from the perspective of
 the employers of OpenStack contributors, I'm finding it difficult
 to imagine them (particularly smaller ones) getting behind this
 prioritization mandate. For example, if I've got a user or deployer
 issue that requires an upstream change, am I to prioritize Py35
 compatibility over "broken in production"? Am I now to schedule my own
 work on known bugs or missing features only after these goals have
 been met? Is that what I should ask other community members to do too?
>>>
>>> There is a difference between priority and urgency. Clearly "broken
>>> in production" is more urgent than other planned work. It's less
>>> clear that, over the span of an entire 6 month release cycle, one
>>> production outage is the most important thing the team would have
>>> worked on.
>>>
>>> The point of the current wording is to make it clear that because these
>>> are goals coming from the entire community, teams are expected to place
>>> a high priority on completing them. In some cases that may mean
>>> working on community goals instead of working on internal team goals. We
>>> all face this tension all the time, so that's nothing new.
>>
>> It's not an issue of choosing to work on community goals. It's an
>> issue of prioritizing these over things that affect current production
>> deployments and things that are needed to increase adoption.

Re: [openstack-dev] [OpenStack-docs] [neutron] [api] [doc] API reference for neutron stadium projects (re: API status report)

2016-08-12 Thread Anne Gentle
On Fri, Aug 12, 2016 at 3:29 AM, Akihiro Motoki  wrote:

> this mail focuses on neutron-specific topics. I dropped cinder and ironic
> tags.
>
> 2016-08-11 23:52 GMT+09:00 Anne Gentle :
> >
> >
> > On Wed, Aug 10, 2016 at 2:49 PM, Anne Gentle <
> annegen...@justwriteclick.com>
> > wrote:
> >>
> >> Hi all,
> >> I wanted to report on status and answer any questions you all have about
> >> the API reference and guide publishing process.
> >>
> >> The expectation is that we provide all OpenStack API information on
> >> developer.openstack.org. In order to meet that goal, it's simplest for
> now
> >> to have all projects use the RST+YAML+openstackdocstheme+os-api-ref
> >> extension tooling so that users see available OpenStack APIs in a
> sidebar
> >> navigation drop-down list.
> >>
> >> --Migration--
> >> The current status for migration is that all WADL content is migrated
> >> except for trove. There is a patch in progress and I'm in contact with
> the
> >> team to assist in any way. https://review.openstack.org/#/c/316381/
> >>
> >> --Theme, extension, release requirements--
> >> The current status for the theme, navigation, and Sphinx extension
> tooling
> >> is contained in the latest post from Graham proposing a solution for the
> >> release number switchover and offers to help teams as needed:
> >> http://lists.openstack.org/pipermail/openstack-dev/2016-
> August/101112.html I
> >> hope to meet the requirements deadline to get those changes landed.
> >> Requirements freeze is Aug 29.
> >>
> >> --Project coverage--
> >> The current status for project coverage is that these projects are now
> >> using the RST+YAML in-tree workflow and tools and publishing to
> >> http://developer.openstack.org/api-ref/ so they will be
> >> included in the upcoming API navigation sidebar intended to span all
> >> OpenStack APIs:
> >>
> >> designate http://developer.openstack.org/api-ref/dns/
> >> glance http://developer.openstack.org/api-ref/image/
> >> heat http://developer.openstack.org/api-ref/orchestration/
> >> ironic http://developer.openstack.org/api-ref/baremetal/
> >> keystone http://developer.openstack.org/api-ref/identity/
> >> manila http://developer.openstack.org/api-ref/shared-file-systems/
> >> neutron-lib http://developer.openstack.org/api-ref/networking/
> >> nova http://developer.openstack.org/api-ref/compute/
> >> sahara http://developer.openstack.org/api-ref/data-processing/
> >> senlin http://developer.openstack.org/api-ref/clustering/
> >> swift http://developer.openstack.org/api-ref/object-storage/
> >> zaqar http://developer.openstack.org/api-ref/messaging/
> >>
> >> These projects are using the in-tree workflow and common tools, but do
> not
> >> have a publish job in project-config in the jenkins/jobs/projects.yaml
> file.
> >>
> >> ceilometer
> >
> >
> > Sorry, in reviewing further today I found another project that does not
> have
> > a publish job but has in-tree source files:
> >
> > cinder
> >
> > Team cinder: can you let me know where you are in your publishing comfort
> > level? Please add an api-ref-jobs: line with a target of block-storage to
> > jenkins/jobs/projects.yaml in the project-config repo to ensure
> publishing
> > is correct.
> >
> > Another issue is the name of the target directory for the final URL. Team
> > ironic can I change your api-ref-jobs: line to bare-metal instead of
> > baremetal? It'll be better for search engines and for alignment with the
> > other projects URLs: https://review.openstack.org/354135
> >
> > I've also uncovered a problem where a neutron project's API does not
> have an
> > official service name, and am working on a solution but need help from
> the
> > neutron team: https://review.openstack.org/#/c/351407
>
> I followed the discussion in https://review.openstack.org/#/c/351407
> and my understanding of the conclusion is to add the API reference
> sources of the neutron stadium projects to neutron-lib and publish
> them under http://developer.openstack.org/api-ref/networking/ .
> It sounds reasonable to me.
>
> We can have a dedicated page for each stadium project, for example
> api-ref/networking/service-function-chaining for networking-sfc.
> Right now all APIs are placed under the v2/ directory, but that is
> not good from either a user or a maintenance perspective.
>
>
> So, the next thing we need to clarify is what names and directory
> structure are appropriate
> from the documentation perspective.
> My proposal is to prepare a dedicated directory per networking project
> repository.
> The directory name should be a function name rather than a project
> name. For example,
> - neutron => ???
> - neutron-lbaas => load-balancer
> - neutron-vpnaas => vpn
> - neutron-fwaas => firewall
> - neutron-dynamic-routing => dynamic-routing
> - networking-sfc => service-function-chaining
> - networking-l2gw => layer2-gateway
> - (networking-bgpvpn) => bgp-vpn
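Putting the proposed mapping together, the published tree would look roughly
like this (a sketch; the name for core neutron is still the open "???" above):

```
api-ref/networking/
    <core neutron, name TBD>/
    load-balancer/               (neutron-lbaas)
    vpn/                         (neutron-vpnaas)
    firewall/                    (neutron-fwaas)
    dynamic-routing/             (neutron-dynamic-routing)
    service-function-chaining/   (networking-sfc)
    layer2-gateway/              (networking-l2gw)
    bgp-vpn/                     (networking-bgpvpn)
```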
>
> My remaining open questions are:
>
> - Is 'v2' directory needed?
> 

[openstack-dev] [Magnum] Using common tooling for API docs

2016-08-12 Thread Hongbin Lu
Hi team,

As mentioned in the email below, Magnum is not using the common tooling for 
generating API docs, so we are excluded from the common navigation of the OpenStack 
APIs. I think we need to prioritize the work to fix that. BTW, I notice there is a 
WIP patch [1] for generating API docs using Swagger. However, I am not sure 
whether Swagger counts as “common tooling” (docs team, please confirm).

[1] https://review.openstack.org/#/c/317368/

Best regards,
Hongbin

From: Anne Gentle [mailto:annegen...@justwriteclick.com]
Sent: August-10-16 3:50 PM
To: OpenStack Development Mailing List; openstack-d...@lists.openstack.org
Subject: [openstack-dev] [api] [doc] API status report

Hi all,
I wanted to report on status and answer any questions you all have about the 
API reference and guide publishing process.

The expectation is that we provide all OpenStack API information on 
developer.openstack.org. In order to meet that 
goal, it's simplest for now to have all projects use the 
RST+YAML+openstackdocstheme+os-api-ref extension tooling so that users see 
available OpenStack APIs in a sidebar navigation drop-down list.
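For teams getting started, an api-ref source page built with the os-api-ref 
extension looks roughly like this (a sketch; the method, path, and parameter 
names are placeholders):

```rst
List widgets
============

.. rest_method:: GET /v2/widgets

Lists all widgets visible to the caller.

Normal response codes: 200

Request
-------

.. rest_parameters:: parameters.yaml

   - limit: limit
   - marker: marker
```

Each "- name: key" line points at an entry in the repo's parameters.yaml file, 
which holds the shared parameter definitions.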

--Migration--
The current status for migration is that all WADL content is migrated except 
for trove. There is a patch in progress and I'm in contact with the team to 
assist in any way. https://review.openstack.org/#/c/316381/

--Theme, extension, release requirements--
The current status for the theme, navigation, and Sphinx extension tooling is 
contained in the latest post from Graham proposing a solution for the release 
number switchover and offers to help teams as needed: 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101112.html I 
hope to meet the requirements deadline to get those changes landed. 
Requirements freeze is Aug 29.

--Project coverage--
The current status for project coverage is that these projects are now using 
the RST+YAML in-tree workflow and tools and publishing to 
http://developer.openstack.org/api-ref/ so they will be included 
in the upcoming API navigation sidebar intended to span all OpenStack APIs:

designate http://developer.openstack.org/api-ref/dns/
glance http://developer.openstack.org/api-ref/image/
heat http://developer.openstack.org/api-ref/orchestration/
ironic http://developer.openstack.org/api-ref/baremetal/
keystone http://developer.openstack.org/api-ref/identity/
manila http://developer.openstack.org/api-ref/shared-file-systems/
neutron-lib http://developer.openstack.org/api-ref/networking/
nova http://developer.openstack.org/api-ref/compute/
sahara http://developer.openstack.org/api-ref/data-processing/
senlin http://developer.openstack.org/api-ref/clustering/
swift http://developer.openstack.org/api-ref/object-storage/
zaqar http://developer.openstack.org/api-ref/messaging/

These projects are using the in-tree workflow and common tools, but do not have 
a publish job in project-config in the jenkins/jobs/projects.yaml file.

ceilometer

--Projects not using common tooling--
These projects have API docs but are not yet using the common tooling, as far 
as I can tell. Because of the user experience, I'm making a judgement call that 
these cannot be included in the common navigation. I have patched the 
projects.yaml file in the governance repo with the URLs I could screen-scrape, 
but if I'm incorrect please do patch the projects.yaml in the governance repo.

astara
cloudkitty
congress
magnum
mistral
monasca
solum
tacker
trove

Please reach out if you have questions or need assistance getting started with 
the new common tooling, documented here: 
http://docs.openstack.org/contributor-guide/api-guides.html.

For searchlight, looking at http://developer.openstack.org/api-ref/search/ they 
have the build job, but the info is not complete yet.

One additional project I'm not sure what to do with is networking-nfc, since 
I'm not sure whether it is considered a neutron API. Can I get help sorting that 
question out?

--Redirects from old pages--
We have been adding .htaccess redirects from the old api-ref-servicename.html 
on developer.openstack.org as teams are 
comfortable with the accuracy of information and build stability. Please help 
out by patching the api-site repository's .htaccess file when you are ready to 
redirect. These projects could be ready for redirects but do not have them:

designate
glance
heat
sahara
senlin
swift
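For anyone ready to add one, a redirect of the kind described is a single 
mod_alias line in the api-site .htaccess (the old and new paths below are 
illustrative; substitute your service's):

```
# Send the retired api-ref-<servicename>.html page to the new api-ref location
Redirect 301 /api-ref-dns-v2.html /api-ref/dns/
```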

I'm available for questions so please reach out as needed. I hope this covers 
our current status.

A million thank yous to everyone who got us this far! Great teamwork, great 
docs work, great UI work, and great API work everyone.
Anne

--
Anne Gentle
www.justwriteclick.com

Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-12 Thread Doug Hellmann
Excerpts from John Dickinson's message of 2016-08-12 13:02:59 -0700:
> 
> On 12 Aug 2016, at 7:28, Doug Hellmann wrote:
> 
> > Excerpts from John Dickinson's message of 2016-08-11 15:00:56 -0700:
> >>
> >> On 10 Aug 2016, at 8:29, Doug Hellmann wrote:
> >>
> >>> Excerpts from Doug Hellmann's message of 2016-07-29 16:55:22 -0400:
>  One of the outcomes of the discussion at the leadership training
>  session earlier this year was the idea that the TC should set some
>  community-wide goals for accomplishing specific technical tasks to
>  get the projects synced up and moving in the same direction.
> 
>  After several drafts via etherpad and input from other TC and SWG
>  members, I've prepared the change for the governance repo [1] and
>  am ready to open this discussion up to the broader community. Please
>  read through the patch carefully, especially the "goals/index.rst"
>  document which tries to lay out the expectations for what makes a
>  good goal for this purpose and for how teams are meant to approach
>  working on these goals.
> 
>  I've also prepared two patches proposing specific goals for Ocata
>  [2][3].  I've tried to keep these suggested goals for the first
>  iteration limited to "finish what we've started" type items, so
>  they are small and straightforward enough to be able to be completed.
>  That will let us experiment with the process of managing goals this
>  time around, and set us up for discussions that may need to happen
>  at the Ocata summit about implementation.
> 
>  For future cycles, we can iterate on making the goals "harder", and
>  collecting suggestions for goals from the community during the forum
>  discussions that will happen at summits starting in Boston.
> 
>  Doug
> 
>  [1] https://review.openstack.org/349068 describe a process for managing 
>  community-wide goals
>  [2] https://review.openstack.org/349069 add ocata goal "support python 
>  3.5"
>  [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
>  libraries"
> 
> >>>
> >>> The proposal was discussed at the TC meeting yesterday [4], and
> >>> left open to give more time to comment. I've added all of the PTLs
> >>> for big tent projects as reviewers on the process patch [1] to
> >>> encourage comments from them.
> >>>
> >>> Please also look at the associated patches with the specific goals
> >>> for this cycle (python 3.5 support and cleaning up Oslo incubated
> >>> code).  So far most of the discussion has focused on the process,
> >>> but we need folks to think about the specific things they're going
> >>> to be asked to do during Ocata as well.
> >>>
> >>> Doug
> >>>
> >>> [4] 
> >>> http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-09-20.01.log.html
> >>>
> >>
> >>
> >> Commonality in goals and vision is what unites any community. I
> >> definitely support the TC's effort to define these goals for OpenStack
> >> and to champion them. However, I have a few concerns about the process
> >> that has been proposed.
> >>
> >> I'm concerned with the mandate that all projects must prioritize these
> >> goals above all other work. Thinking about this from the perspective of
> >> the employers of OpenStack contributors, I'm finding it difficult
> >> to imagine them (particularly smaller ones) getting behind this
> >> prioritization mandate. For example, if I've got a user or deployer
> >> issue that requires an upstream change, am I to prioritize Py35
> >> compatibility over "broken in production"? Am I now to schedule my own
> >> work on known bugs or missing features only after these goals have
> >> been met? Is that what I should ask other community members to do too?
> >
> > There is a difference between priority and urgency. Clearly "broken
> > in production" is more urgent than other planned work. It's less
> > clear that, over the span of an entire 6 month release cycle, one
> > production outage is the most important thing the team would have
> > worked on.
> >
> > The point of the current wording is to make it clear that because these
> > are goals coming from the entire community, teams are expected to place
> > a high priority on completing them. In some cases that may mean
> > working on community goals instead of working on internal team goals. We
> > all face this tension all the time, so that's nothing new.
> 
> It's not an issue of choosing to work on community goals. It's an
> issue of prioritizing these over things that affect current production
> deployments and things that are needed to increase adoption.
> 
> >
> >> I agree with 

Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-12 Thread John Dickinson


On 12 Aug 2016, at 7:28, Doug Hellmann wrote:

> Excerpts from John Dickinson's message of 2016-08-11 15:00:56 -0700:
>>
>> On 10 Aug 2016, at 8:29, Doug Hellmann wrote:
>>
>>> Excerpts from Doug Hellmann's message of 2016-07-29 16:55:22 -0400:
 One of the outcomes of the discussion at the leadership training
 session earlier this year was the idea that the TC should set some
 community-wide goals for accomplishing specific technical tasks to
 get the projects synced up and moving in the same direction.

 After several drafts via etherpad and input from other TC and SWG
 members, I've prepared the change for the governance repo [1] and
 am ready to open this discussion up to the broader community. Please
 read through the patch carefully, especially the "goals/index.rst"
 document which tries to lay out the expectations for what makes a
 good goal for this purpose and for how teams are meant to approach
 working on these goals.

 I've also prepared two patches proposing specific goals for Ocata
 [2][3].  I've tried to keep these suggested goals for the first
 iteration limited to "finish what we've started" type items, so
 they are small and straightforward enough to be able to be completed.
 That will let us experiment with the process of managing goals this
 time around, and set us up for discussions that may need to happen
 at the Ocata summit about implementation.

 For future cycles, we can iterate on making the goals "harder", and
 collecting suggestions for goals from the community during the forum
 discussions that will happen at summits starting in Boston.

 Doug

 [1] https://review.openstack.org/349068 describe a process for managing 
 community-wide goals
 [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
 [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
 libraries"

>>>
>>> The proposal was discussed at the TC meeting yesterday [4], and
>>> left open to give more time to comment. I've added all of the PTLs
>>> for big tent projects as reviewers on the process patch [1] to
>>> encourage comments from them.
>>>
>>> Please also look at the associated patches with the specific goals
>>> for this cycle (python 3.5 support and cleaning up Oslo incubated
>>> code).  So far most of the discussion has focused on the process,
>>> but we need folks to think about the specific things they're going
>>> to be asked to do during Ocata as well.
>>>
>>> Doug
>>>
>>> [4] 
>>> http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-09-20.01.log.html
>>>
>>
>>
>> Commonality in goals and vision is what unites any community. I
>> definitely support the TC's effort to define these goals for OpenStack
>> and to champion them. However, I have a few concerns about the process
>> that has been proposed.
>>
>> I'm concerned with the mandate that all projects must prioritize these
>> goals above all other work. Thinking about this from the perspective of
>> the employers of OpenStack contributors, I'm finding it difficult
>> to imagine them (particularly smaller ones) getting behind this
>> prioritization mandate. For example, if I've got a user or deployer
>> issue that requires an upstream change, am I to prioritize Py35
>> compatibility over "broken in production"? Am I now to schedule my own
>> work on known bugs or missing features only after these goals have
>> been met? Is that what I should ask other community members to do too?
>
> There is a difference between priority and urgency. Clearly "broken
> in production" is more urgent than other planned work. It's less
> clear that, over the span of an entire 6 month release cycle, one
> production outage is the most important thing the team would have
> worked on.
>
> The point of the current wording is to make it clear that because these
> are goals coming from the entire community, teams are expected to place
> a high priority on completing them. In some cases that may mean
> working on community goals instead of working on internal team goals. We
> all face this tension all the time, so that's nothing new.

It's not an issue of choosing to work on community goals. It's an
issue of prioritizing these over things that affect current production
deployments and things that are needed to increase adoption.

>
>> I agree with Hongbin Lu's comments that the resulting goals might fit
>> into the interests of the majority but fundamentally violate the
>> interests of a minority of project teams. As an example, should the TC
>> decide that a future goal is for projects to implement a particular
>> 

Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Jeremy Stanley
On 2016-08-12 21:20:34 +0200 (+0200), Julien Danjou wrote:
[...]
> If bindep.txt is present, are the "standard" packages still installed?
> If yes, this is going to be more challenging to get bindep.txt right, as
> a missing entry will go unnoticed.

As Andreas mentioned, we have a fallback list[*] which gets
installed in most (non-devstack) jobs when you don't have a
bindep.txt or other-requirements.txt in your repo. That said, a
change that adds, modifies, or removes that file is tested with the
new version of the file in place, so you can see whether it will work
on our infrastructure simply by proposing the change to your project
and seeing whether any of your jobs fail due to missing packages.

[*] http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/data/bindep-fallback.txt
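For projects creating their own file, a small bindep.txt looks like this 
(package names are illustrative; the bracketed selectors restrict a line to a 
platform family or profile):

```
# bindep.txt: one binary dependency per line, with optional selectors
gettext
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
libmysqlclient-dev [platform:dpkg test]
```

Running `bindep test` locally prints any packages from the file that are 
missing for the "test" profile, which is a quick way to validate the list 
before pushing a change.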
-- 
Jeremy Stanley



Re: [openstack-dev] [nova] os-capabilities library created

2016-08-12 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Friday, August 12, 2016 2:20 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] os-capabilities library created
> 
> On 08/12/2016 04:05 AM, Daniel P. Berrange wrote:
> > On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:
> >> Hi Novas and anyone interested in how to represent capabilities in a
> >> consistent fashion.
> >>
> >> I spent an hour creating a new os-capabilities Python library this
> evening:
> >>
> >> http://github.com/jaypipes/os-capabilities
> >>
> >> Please see the README for examples of how the library works and how
> >> I'm thinking of structuring these capability strings and symbols. I
> >> intend os-capabilities to be the place where the OpenStack community
> >> catalogs and collates standardized features for hardware, devices,
> >> networks, storage, hypervisors, etc.
> >>
> >> Let me know what you think about the structure of the library and
> >> whether you would be interested in owning additions to the library
> of
> >> constants in your area of expertise.
> >
> > How are you expecting that these constants are used ? It seems
> > unlikely the, say nova code, code is going to be explicitly accessing
> > any of the individual CPU flag constants.
> 
> These capability strings are what deployers will associate with a
> flavor in Nova and they will be passed in the request to the placement
> API in either a "requirements" or a "preferences" list. In order to
> ensure that two OpenStack clouds refer to the various capabilities
> (not just CPU flags, see below) in the same way, we need a curated
> list of standardized constants.
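The flow described here (curated constants associated with a flavor, then 
passed in a placement request's "requirements" or "preferences" list) can be 
sketched as follows; the constant names and request shape are illustrative 
assumptions, not the actual os-capabilities or placement API:

```python
# Illustrative capability constants of the curated, standardized kind
# discussed above; these exact names are assumptions, not os-capabilities'.
HW_CPU_X86_AVX2 = "hw:cpu:x86:avx2"
STORAGE_DISK_SSD = "storage:disk:ssd"

def build_placement_request(requirements, preferences=()):
    """Assemble a placement-style request body from capability constants."""
    return {
        "requirements": sorted(requirements),
        "preferences": sorted(preferences),
    }

request = build_placement_request(
    requirements={HW_CPU_X86_AVX2, STORAGE_DISK_SSD},
    preferences={"hw:cpu:x86:sse4"},
)
assert request["requirements"] == ["hw:cpu:x86:avx2", "storage:disk:ssd"]
```

Because every cloud resolves its discovered flags to the same curated 
constants, the same request means the same thing against disparate clouds.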
> 
>  > It should surely just be entirely metadata
> > driven - eg libvirt driver would just parse libvirt capabilities XML
> > and extract all the CPU flag strings & simply export them.
> 
> You are just thinking in terms of (lib)virt/compute capabilities.
> os-capabilities intends to provide a standard set of capability
> constants for more than virt/compute, including storage, network
> devices and more.
> 
> But, yes, I imagine discovery code running on a compute node with the
> *libvirt* virt driver could indeed simply query the libvirt
> capabilities XML snippet and translate those capability codes into os-
> capabilities constants. Remember, VMWare and Hyper-V also need to do
> this discovery and translation to a standardized set of constants. So
> does ironic-inspector when it queries an IPMI interface of course.
> 
>  > It would be very
> > undesirable to have to add new code to os-capabilities every time
> that
> > Intel/AMD create new CPU flags for new features, and force users to
> > upgrade openstack to be able to express requirements on those CPU
> flags.
> 
> I don't see how we would be able to expose a particular new CPU flag
> *across disparate OpenStack clouds* unless we have some standardized
> set of constants that has been curated. Not all OpenStack clouds run
> libvirt. And again, think bigger than just virt/compute.
[Mooney, Sean K] Just as an aside, I think libvirt actually gets its capability
information from udev. Again, that won't help you on Windows, but it at least
does not require libvirt. os-capabilities could potentially retrieve info via
udev as well.

IPMI will allow you to discover some capabilities of the system, but
it might be worth considering whether Redfish is a fit for capability discovery:
http://www.dmtf.org/standards/redfish
https://www.brighttalk.com/webcast/9077/163783

On a personal note, could we call os-capabilities "os-caps"?
It's shorter, and I misspelled "capabilities" four different ways while typing
this response (now fixed).
> 
> Best,
> -jay
> 
> >> Next steps for the library include:
> >>
> >> * Bringing in other top-level namespaces like disk: or net: and
> >> working with contributors to fill in the capability strings and
> symbols.
> >> * Adding constraints functionality to the library. For instance,
> >> building in information to the os-capabilities interface that would
> >> allow a set of capabilities to be cross-checked for set violations.
> >> As an example, a resource provider having DISK_GB inventory cannot
> >> have *both* the disk:ssd
> >> *and* the disk:hdd capability strings associated with it -- clearly
> >> the disk storage is either SSD or spinning disk.
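The constraint functionality in the last bullet (a resource provider cannot 
carry both disk:ssd and disk:hdd) can be sketched as a mutual-exclusion check; 
this is an illustration of the idea, not os-capabilities code:

```python
# Sets of capability strings that cannot coexist on one resource provider;
# disk:ssd vs disk:hdd is the example given above.
MUTUALLY_EXCLUSIVE = [
    {"disk:ssd", "disk:hdd"},
]

def find_violations(capabilities):
    """Return every exclusive set from which more than one member is present."""
    caps = set(capabilities)
    return [group for group in MUTUALLY_EXCLUSIVE if len(group & caps) > 1]

assert find_violations(["disk:ssd", "disk:hdd"]) == [{"disk:ssd", "disk:hdd"}]
assert not find_violations(["disk:ssd", "hw:cpu:x86:avx2"])
```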
> >
> > Regards,
> > Daniel
> >
> 



Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Andreas Jaeger
On 08/12/2016 09:20 PM, Julien Danjou wrote:
> On Fri, Aug 12 2016, Andreas Jaeger wrote:
> 
>> Projects are encouraged to create their own bindep files. Besides
>> documenting what is required, it also gives a speedup in running tests
>> since you install only what you need and not all packages that some
>> other project might need and are installed  by default. Each test system
>> comes with a basic installation and then we either add the repo defined
>> package list or the large default list.
> 
> This is awesome, I never heard of this, so I'm glad you sent it.
> 
> I'd love to move telemetry projects to use this (we have a lot of
> bindeps for our tests), and I've just one question (for now).
> 
> If bindep.txt is present, are the "standard" packages still installed?
> If yes, this is going to be more challenging to get bindep.txt right, as
> a missing entry will go unnoticed.
> 

We have a baked-in, minimal-ish set of packages on each node - and a
long list of default packages to install on top of it. If there's a
bindep.txt file, then the long list is not used.

So, you might still miss packages that are in the minimal-ish set - but
those can be fixed once you get bugs ;)

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-08-12 14:43:57 -0400:
> On 08/12/2016 01:10 PM, Walter A. Boring IV wrote:
> > 
> >> I was leaning towards a separate repo until I started thinking about all
> >> the overhead and complications this would cause. It's another repo for
> >> cores to watch. It would cause everyone extra complication in setting up
> >> their CI, which is already one of the biggest roadblocks. It would make
> >> it a little harder to do things like https://review.openstack.org/297140
> >> and https://review.openstack.org/346470 to be able to generate this:
> >> http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
> >> setup, more moving parts to break, and just generally more
> >> complications.
> >>
> >> All things that can be solved for sure. I just question whether it would
> >> be worth having that overhead. Frankly, there are better things I'd like
> >> to spend my time on.
> >>
> >> I think at this point my first preference would actually be to define a
> >> new tag. This addresses both the driver removal issue and the
> >> backporting of driver bug fixes. I would like to see third party drivers
> >> recognized and treated as being different, because in reality they are
> >> very different than the rest of the code. Having something like
> >> follows_deprecation_but_has_third_party_drivers_that_dont would make a
> >> clear statement that there is a vendor component to this project that
> >> really has to be treated differently and has different concerns
> >> deployers need to be aware of.
> >>
> >> Barring that, I think my next choice would be to remove the tag. That
> >> would really be unfortunate as we do want to make it clear to users that
> >> Cinder will not arbitrarily break APIs or do anything between releases
> >> without warning when it comes to non-third party drivers. But if that is
> >> what we need to do to effectively communicate what to expect from
> >> Cinder, then I'm OK with that.
> >>
> >> My last choice (of the ones I'm favorable towards) would be marking a
> >> driver as untested/unstable/abandoned/etc rather than removing it. We
> >> could flag these a certain way and have them spam the logs like crazy
> >> after upgrade to make it very, painfully clear that they are not
> >> being maintained. But as Duncan pointed out, this doesn't have as much
> >> impact for getting vendor attention. It's amazing the level of executive
> >> involvement that can happen after a patch is put up for driver removal
> >> due to non-compliance.
> >>
> >> Sean
> >>
> >> __
> > I believe there is a compromise that we could implement in Cinder that
> > enables us to have a deprecation
> > of unsupported drivers that aren't meeting the Cinder driver
> > requirements and allow upgrades to work
> > without outright immediately removing a driver.
> > 
> >  1. Add a 'supported = True' attribute to every driver.
> >  2. When a driver no longer meets Cinder community requirements, put a
> > patch up against the driver
> >  3. When c-vol service starts, check the supported flag.  If the flag is
> > False, then log an exception, and disable the driver.
> >  4. Allow the admin to put an entry in cinder.conf for the driver in
> > question "enable_unsupported_driver = True".  This will allow the
> > c-vol service to start the driver and allow it to work.  Log a
> > warning on every driver call.
> >  5. This is a positive acknowledgement by the operator that they are
> > enabling a potentially broken driver. Use at your own risk.
> >  6. If the vendor doesn't get the CI working in the next release, then
> > remove the driver. 
> >  7. If the vendor gets the CI working again, then set the supported flag
> > back to True and all is good. 
> > 
> > 
> > This allows a deprecation period for a driver, and keeps operators who
> > upgrade their deployment from losing access to the volumes they have
> > on those back-ends.  It will give them time to contact the community
> > and/or do some research, and find out what happened to the driver.  
> > This also potentially gives the operator time to find a new supported
> > backend and start migrating volumes.  I say potentially, because the
> > driver may be broken, or it may work enough to migrate volumes off of it
> > to a new backend.
> > 
> > Having unsupported drivers in tree is terrible for the Cinder community,
> > and in the long run terrible for operators.
> > Instantly removing drivers because CI is unstable is terrible for
> > operators in the short term, because as soon as they upgrade OpenStack,
> > they lose all access to managing their existing volumes.   Just because
> > we leave a driver in tree in this state, doesn't mean that the operator
> > will be able to migrate if the driver is broken, but they'll have a
> > chance depending on the state of the driver in question.  It could be
> > horribly broken, 

Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Robert Collins
On 12 August 2016 at 12:09, Clay Gerrard  wrote:
>
>
> On Fri, Aug 12, 2016 at 11:52 AM, Andreas Jaeger  wrote:
>>
>> On 08/12/2016 08:37 PM, Clay Gerrard wrote:
>> >
>> > ... but ... it doesn't have a --install option?  Do you know if that is
>> > strictly out-of-scope or roadmap or ... ?
>>
>>
>> Right now we don't need it - we take the output and pipe that to yum/apt
>> etc...
>>
>> See
>>
>> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/install-distro-packages.sh
>>
>
> the -b option is great - thanks for the pointer!
>
>   -b, --brief   List only missing packages one per line.
>
> It should have been more obvious to me that it meant "you should totally use
> this as input into your package manager"!
>
> But, to be clear, when you say "we don't need it" - you *mean* - "yeah, we
> totally need it and added it as bash in a different project"?  ;)
>
> but also *not* strictly out-of-scope?  Or not sure?  Or patches welcome and
> we'll see!?  Or .. we can *both* continue to use our existing tools to solve
> this problem in the same way we always have?  :P

When I designed it it seemed more important to get a clear list than
to be able to drive package managers - it just limited the degree of
integration needed; that said I see no reason it can't be added if
folk find it useful - as long as we still keep the clean query UI :)

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Julien Danjou
On Fri, Aug 12 2016, Andreas Jaeger wrote:

> Projects are encouraged to create their own bindep files. Besides
> documenting what is required, it also gives a speedup in running tests
> since you install only what you need and not all packages that some
> other project might need and are installed  by default. Each test system
> comes with a basic installation and then we either add the repo defined
> package list or the large default list.

This is awesome, I never heard of this, so I'm glad you sent it.

I'd love to move telemetry projects to use this (we have a lot of
bindeps for our tests), and I've just one question (for now).

If bindep.txt is present, are the "standard" packages still installed?
If yes, this is going to be more challenging to get bindep.txt right, as
a missing entry will go unnoticed.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info




Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Clay Gerrard
On Fri, Aug 12, 2016 at 11:52 AM, Andreas Jaeger  wrote:
>
> On 08/12/2016 08:37 PM, Clay Gerrard wrote:
> >
> > ... but ... it doesn't have a --install option?  Do you know if that is
> > strictly out-of-scope or roadmap or ... ?
>
>
> Right now we don't need it - we take the output and pipe that to yum/apt
> etc...
>
> See
>
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/install-distro-packages.sh
>

the -b option is great - thanks for the pointer!

  -b, --brief   List only missing packages one per line.

It should have been more obvious to me that it meant "you should totally
use this as input into your package manager"!

But, to be clear, when you say "we don't need it" - you *mean* - "yeah, we
totally need it and added it as bash in a different project"?  ;)

but also *not* strictly out-of-scope?  Or not sure?  Or patches welcome and
we'll see!?  Or .. we can *both* continue to use our existing tools to
solve this problem in the same way we always have?  :P

Thanks again,

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Why module has been deleted even if it is consumed by the project?

2016-08-12 Thread Matthew Thode
On 08/12/2016 01:47 PM, Andrey Pavlov wrote:
> ec2-api (and gce-api) already has this job, so I just add them to
> projects.txt, right?
> 
> https://review.openstack.org/#/c/354899/

yep, looks good, +2

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Andreas Jaeger
On 08/12/2016 08:37 PM, Clay Gerrard wrote:
> I'd noticed other-requirements.txt around, but figured it needed a bunch
> of custom tooling to actually make it useful.
> 
> And... it's a subprocess wrapper to a handful of package management
> tools (surprised to see emerge and pacman - Kudos!) and a custom format
> for describing package requirements...
> 
> ... but ... it doesn't have a --install option?  Do you know if that is
> strictly out-of-scope or roadmap or ... ?


Right now we don't need it - we take the output and pipe that to yum/apt
etc...

See
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/install-distro-packages.sh
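
In spirit, that script does something like the following - a simplified
Python sketch, assuming an apt-based node and omitting the multi-distro
and error handling the real script has:

```python
import subprocess


def parse_brief(output):
    """Parse `bindep -b` output: one missing package name per line."""
    return [line.strip() for line in output.splitlines() if line.strip()]


def install_missing(profile="test"):
    """List packages missing for a profile, then hand them to apt."""
    result = subprocess.run(["bindep", "-b", profile],
                            capture_output=True, text=True)
    missing = parse_brief(result.stdout)
    if missing:
        subprocess.run(["sudo", "apt-get", "install", "-y"] + missing,
                       check=True)
    return missing
```

So bindep stays a pure query tool and the package-manager integration
lives in a few lines of glue outside it.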

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Andreas Jaeger
On 08/12/2016 08:07 PM, John Dickinson wrote:
> bindep is great, and we've been using it in Swift for a while now. I'd 
> definitely recommend it to other projects.
> 
> Andreas, I didn't see a patch proposed to Swift to move the file. I don't 
> want to get in the way of your tool, though. Is there a patch that will be 
> proposed, or should I do that myself?

It's coming now ;) I didn't want to patch all 130+ repos at once, so I
split it into two steps and am working on the second now.

Thanks for double checking,
Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Why module has been deleted even if it is consumed by the project?

2016-08-12 Thread Andrey Pavlov
ec2-api (and gce-api) already has this job, so I just add them to
projects.txt, right?

https://review.openstack.org/#/c/354899/

and after this is merged I can revert the botocore requirement in
global-requirements, right?

On Fri, Aug 12, 2016 at 6:21 PM, Matthew Thode 
wrote:

> On 08/12/2016 09:53 AM, Andrey Pavlov wrote:
> > Thanks for the answer,
> >
> > I don't know why, but the requirements job is not in the list of jobs. It
> > is present only in the comment from Jenkins.
> > Logs are here
> > - http://logs.openstack.org/67/354667/1/check/gate-ec2-api-
> requirements/4e9e1da/
> >
> > And I would like to include ec2api project
> > to openstack/requirements/projects.txt
> > Is it possible?
>
> Ya, you can. You'll first need to add the check-requirements job to your
> list of tests in project-config.  Once it's added there, we can add your
> project to projects.txt.
>
> The relevant section of the readme is here.
>
> https://github.com/openstack/requirements/#enforcement-in-projects
>
> --
> -- Matthew Thode (prometheanfire)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kind regards,
Andrey Pavlov.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Sean Dague
On 08/12/2016 01:10 PM, Walter A. Boring IV wrote:
> 
>> I was leaning towards a separate repo until I started thinking about all
>> the overhead and complications this would cause. It's another repo for
>> cores to watch. It would cause everyone extra complication in setting up
>> their CI, which is already one of the biggest roadblocks. It would make
>> it a little harder to do things like https://review.openstack.org/297140
>> and https://review.openstack.org/346470 to be able to generate this:
>> http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
>> setup, more moving parts to break, and just generally more
>> complications.
>>
>> All things that can be solved for sure. I just question whether it would
>> be worth having that overhead. Frankly, there are better things I'd like
>> to spend my time on.
>>
>> I think at this point my first preference would actually be to define a
>> new tag. This addresses both the driver removal issue and the
>> backporting of driver bug fixes. I would like to see third party drivers
>> recognized and treated as being different, because in reality they are
>> very different than the rest of the code. Having something like
>> follows_deprecation_but_has_third_party_drivers_that_dont would make a
>> clear statement that there is a vendor component to this project that
>> really has to be treated differently and has different concerns
>> deployers need to be aware of.
>>
>> Barring that, I think my next choice would be to remove the tag. That
>> would really be unfortunate as we do want to make it clear to users that
>> Cinder will not arbitrarily break APIs or do anything between releases
>> without warning when it comes to non-third party drivers. But if that is
>> what we need to do to effectively communicate what to expect from
>> Cinder, then I'm OK with that.
>>
>> My last choice (of the ones I'm favorable towards) would be marking a
>> driver as untested/unstable/abandoned/etc rather than removing it. We
>> could flag these a certain way and have them spam the logs like crazy
>> after upgrade to make it very, painfully clear that they are not
>> being maintained. But as Duncan pointed out, this doesn't have as much
>> impact for getting vendor attention. It's amazing the level of executive
>> involvement that can happen after a patch is put up for driver removal
>> due to non-compliance.
>>
>> Sean
>>
>> __
> I believe there is a compromise that we could implement in Cinder that
> enables us to have a deprecation
> of unsupported drivers that aren't meeting the Cinder driver
> requirements and allow upgrades to work
> without outright immediately removing a driver.
> 
>  1. Add a 'supported = True' attribute to every driver.
>  2. When a driver no longer meets Cinder community requirements, put a
> patch up against the driver
>  3. When c-vol service starts, check the supported flag.  If the flag is
> False, then log an exception, and disable the driver.
>  4. Allow the admin to put an entry in cinder.conf for the driver in
> question "enable_unsupported_driver = True".  This will allow the
> c-vol service to start the driver and allow it to work.  Log a
> warning on every driver call.
>  5. This is a positive acknowledgement by the operator that they are
> enabling a potentially broken driver. Use at your own risk.
>  6. If the vendor doesn't get the CI working in the next release, then
> remove the driver. 
>  7. If the vendor gets the CI working again, then set the supported flag
> back to True and all is good. 
> 
> 
> This allows a deprecation period for a driver, and keeps operators who
> upgrade their deployment from losing access to the volumes they have
> on those back-ends.  It will give them time to contact the community
> and/or do some research, and find out what happened to the driver.  
> This also potentially gives the operator time to find a new supported
> backend and start migrating volumes.  I say potentially, because the
> driver may be broken, or it may work enough to migrate volumes off of it
> to a new backend.
> 
> Having unsupported drivers in tree is terrible for the Cinder community,
> and in the long run terrible for operators.
> Instantly removing drivers because CI is unstable is terrible for
> operators in the short term, because as soon as they upgrade OpenStack,
> they lose all access to managing their existing volumes.   Just because
> we leave a driver in tree in this state, doesn't mean that the operator
> will be able to migrate if the driver is broken, but they'll have a
> chance depending on the state of the driver in question.  It could be
> horribly broken, but the breakage might be something fixable by someone
> that just knows Python.   If the driver is gone from tree entirely, then
> that's a lot more to overcome.
> 
> I don't think there is a way to make everyone happy all the time, but 
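
Walter's numbered proposal above can be sketched in a few lines - the names
are hypothetical except for `supported` and `enable_unsupported_driver`,
which come from the proposal itself:

```python
import logging

LOG = logging.getLogger(__name__)


class BaseDriver(object):
    # Step 1: every driver carries a supported flag; a community patch
    # flips it to False when the driver falls out of compliance (step 2).
    supported = True


class NonCompliantDriver(BaseDriver):
    supported = False  # e.g. vendor CI has been broken for a cycle


def load_driver(driver_cls, conf):
    """Steps 3-5: at c-vol start, refuse to load unsupported drivers
    unless the operator positively opts in via cinder.conf."""
    if not driver_cls.supported:
        if not conf.get("enable_unsupported_driver", False):
            LOG.error("Driver %s is unsupported; disabling it.",
                      driver_cls.__name__)
            return None
        LOG.warning("Driver %s is unsupported; use at your own risk.",
                    driver_cls.__name__)
    return driver_cls()
```

The opt-in path keeps volumes reachable across the upgrade while still
making the unsupported state impossible to miss in the logs.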

Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Clay Gerrard
I'd noticed other-requirements.txt around, but figured it needed a bunch of
custom tooling to actually make it useful.

And... it's a subprocess wrapper to a handful of package management tools
(surprised to see emerge and pacman - Kudos!) and a custom format for
describing package requirements...

... but ... it doesn't have a --install option?  Do you know if that is
strictly out-of-scope or roadmap or ... ?

-Clay

On Fri, Aug 12, 2016 at 10:31 AM, Andreas Jaeger  wrote:

> TL;DR: Projects can use bindep.txt to document in a programmatic way
> their binary dependencies
>
> Python developers record their dependencies on other Python packages in
> requirements.txt and test-requirements.txt. But some packages
> have dependencies outside of Python and we should document
> these dependencies as well so that operators, developers, and CI systems
> know what needs to be available for their programs.
>
> Bindep is a solution to this: it allows a repo to document
> binary dependencies in a single file. It even enables specification of
> which distribution the package belongs to - Debian, Fedora, Gentoo,
> openSUSE, RHEL, SLES and Ubuntu have different package names - and
> allows profiles, like a test profile.
>
> Bindep is one of the tools the OpenStack Infrastructure team has written
> and maintains. It is already in use by over 130 repositories.
>
> For better bindep adoption, in the just released bindep 2.1.0 we have
> changed the name of the default file used by bindep from
> other-requirements.txt to bindep.txt and have pushed changes [3] to
> master branches of repositories for this.
>
> Projects are encouraged to create their own bindep files. Besides
> documenting what is required, it also gives a speedup in running tests
> since you install only what you need and not all packages that some
> other project might need and are installed  by default. Each test system
> comes with a basic installation and then we either add the repo defined
> package list or the large default list.
>
> In the OpenStack CI infrastructure, we use the "test" profile for
> installation of packages. This allows projects to document their run
> time dependencies - the default packages - and the additional packages
> needed for testing.
>
> Be aware that bindep is not used by devstack based tests, those have
> their own way to document dependencies.
>
> A side effect is that your tests run faster: they have fewer packages to
> install. An Ubuntu Xenial test node installs 140 packages and that can
> take between 2 and 5 minutes. With a smaller bindep file, this can change.
>
> Let's look at the log file for a normal installation using the
> default dependencies:
> 2 upgraded, 139 newly installed, 0 to remove and 41 not upgraded
> Need to get 148 MB of archives.
> After this operation, 665 MB of additional disk space will be used.
>
> Compare this with the openstack-manuals repository that uses bindep -
> this example was 20 seconds and not minutes:
> 0 upgraded, 17 newly installed, 0 to remove and 43 not upgraded.
> Need to get 35.8 MB of archives.
> After this operation, 128 MB of additional disk space will be used.
>
> If you want to learn more about bindep, read the Infra Manual on package
> requirements [1] or the bindep manual [2].
>
> If you have further questions about bindep, feel free to ask the Infra
> team on #openstack-infra.
>
> Thanks to Anita for reviewing and improving this blog post and to the
> OpenStack Infra team that maintains bindep, especially to Jeremy Stanley
> and Robert Collins.
>
> Note I'm sending this out while not all our test clouds have images that
> know about bindep.txt (they only handle other-requirements.txt). The
> infra team is in the process of ensuring updated images in all our test
> clouds for later today. Thanks, Paul!
>
> Andreas
>
>
> References:
> [1] http://docs.openstack.org/infra/manual/drivers.html#
> package-requirements
> [2] http://docs.openstack.org/infra/bindep/
> [3] https://review.openstack.org/#/q/branch:master+topic:bindep-mv
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread John Dickinson
bindep is great, and we've been using it in Swift for a while now. I'd 
definitely recommend it to other projects.

Andreas, I didn't see a patch proposed to Swift to move the file. I don't want 
to get in the way of your tool, though. Is there a patch that will be proposed, 
or should I do that myself?

--John




On 12 Aug 2016, at 10:31, Andreas Jaeger wrote:

> TL;DR: Projects can use bindep.txt to document in a programmatic way
> their binary dependencies
>
> Python developers record their dependencies on other Python packages in
> requirements.txt and test-requirements.txt. But some packages
> have dependencies outside of Python and we should document
> these dependencies as well so that operators, developers, and CI systems
> know what needs to be available for their programs.
>
> Bindep is a solution to this: it allows a repo to document
> binary dependencies in a single file. It even enables specification of
> which distribution the package belongs to - Debian, Fedora, Gentoo,
> openSUSE, RHEL, SLES and Ubuntu have different package names - and
> allows profiles, like a test profile.
>
> Bindep is one of the tools the OpenStack Infrastructure team has written
> and maintains. It is already in use by over 130 repositories.
>
> For better bindep adoption, in the just released bindep 2.1.0 we have
> changed the name of the default file used by bindep from
> other-requirements.txt to bindep.txt and have pushed changes [3] to
> master branches of repositories for this.
>
> Projects are encouraged to create their own bindep files. Besides
> documenting what is required, it also gives a speedup in running tests
> since you install only what you need and not all packages that some
> other project might need and are installed  by default. Each test system
> comes with a basic installation and then we either add the repo defined
> package list or the large default list.
>
> In the OpenStack CI infrastructure, we use the "test" profile for
> installation of packages. This allows projects to document their run
> time dependencies - the default packages - and the additional packages
> needed for testing.
>
> Be aware that bindep is not used by devstack based tests, those have
> their own way to document dependencies.
>
> A side effect is that your tests run faster: they have fewer packages to
> install. An Ubuntu Xenial test node installs 140 packages and that can
> take between 2 and 5 minutes. With a smaller bindep file, this can change.
>
> Let's look at the log file for a normal installation using the
> default dependencies:
> 2 upgraded, 139 newly installed, 0 to remove and 41 not upgraded
> Need to get 148 MB of archives.
> After this operation, 665 MB of additional disk space will be used.
>
> Compare this with the openstack-manuals repository that uses bindep -
> this example was 20 seconds and not minutes:
> 0 upgraded, 17 newly installed, 0 to remove and 43 not upgraded.
> Need to get 35.8 MB of archives.
> After this operation, 128 MB of additional disk space will be used.
>
> If you want to learn more about bindep, read the Infra Manual on package
> requirements [1] or the bindep manual [2].
>
> If you have further questions about bindep, feel free to ask the Infra
> team on #openstack-infra.
>
> Thanks to Anita for reviewing and improving this blog post and to the
> OpenStack Infra team that maintains bindep, especially to Jeremy Stanley
> and Robert Collins.
>
> Note I'm sending this out while not all our test clouds have images that
> know about bindep.txt (they only handle other-requirements.txt). The
> infra team is in the process of ensuring updated images in all our test
> clouds for later today. Thanks, Paul!
>
> Andreas
>
>
> References:
> [1] http://docs.openstack.org/infra/manual/drivers.html#package-requirements
> [2] http://docs.openstack.org/infra/bindep/
> [3] https://review.openstack.org/#/q/branch:master+topic:bindep-mv
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova] Thoughts on testing novaclient functional with neutron

2016-08-12 Thread Dean Troyer
On Fri, Aug 12, 2016 at 10:13 AM, Matt Riedemann  wrote:

> Another idea is the base functional test that sets up the client just
> checks the keystone service catalog for a 'network' service entry,
> somewhere in here:
>

This is exactly the route OSC takes for those CLI commands that work
against both nova-network and neutron.  It's only been released since
earlier this year but appears to be working well in the field.  It boils
down to:

  if 'network' in service_catalog.get_endpoints():
  # neutron
  else:
  # nova-net

(service_catalog is from KSA's AccessInfo class)
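
Fleshed out slightly - a sketch in which the catalog is any mapping of
service type to endpoints, standing in for the real keystoneauth
AccessInfo object:

```python
def pick_network_backend(service_catalog):
    """Choose the network backend purely from the service catalog:
    if Keystone advertises a 'network' endpoint, talk to neutron;
    otherwise fall back to nova-network."""
    if "network" in service_catalog:
        return "neutron"
    return "nova-network"
```

The nice property of this check is that it needs no extra configuration:
the deployment's own catalog is the single source of truth.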

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Clay Gerrard
The 
use_untested_probably_broken_deprecated_manager_so_maybe_i_can_migrate_cross_fingers
option sounds good!  The experiment would be then if it's still enough of a
stick to keep 3rd party drivers pony'd up on their commitment to the Cinder
team to consistently ship quality releases?

What about maybe the operator just not upgrading till post migration?  It's
the migration that sucks right?  You either get to punt a release and hope
it gets "back in good faith" or do it now and that 3rd party driver has
lost your business/trust.

-Clay

On Friday, August 12, 2016, Walter A. Boring IV > wrote:

>
> I was leaning towards a separate repo until I started thinking about all
> the overhead and complications this would cause. It's another repo for
> cores to watch. It would cause everyone extra complication in setting up
> their CI, which is already one of the biggest roadblocks. It would make
> it a little harder to do things like https://review.openstack.org/297140
> and https://review.openstack.org/346470 to be able to generate 
> this:http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
> setup, more moving parts to break, and just generally more
> complications.
>
> All things that can be solved for sure. I just question whether it would
> be worth having that overhead. Frankly, there are better things I'd like
> to spend my time on.
>
> I think at this point my first preference would actually be to define a
> new tag. This addresses both the driver removal issue and the
> backporting of driver bug fixes. I would like to see third party drivers
> recognized and treated as being different, because in reality they are
> very different than the rest of the code. Having something like
> follows_deprecation_but_has_third_party_drivers_that_dont would make a
> clear statement that there is a vendor component to this project that
> really has to be treated differently and has different concerns
> deployers need to be aware of.
>
> Barring that, I think my next choice would be to remove the tag. That
> would really be unfortunate as we do want to make it clear to users that
> Cinder will not arbitrarily break APIs or do anything between releases
> without warning when it comes to non-third party drivers. But if that is
> what we need to do to effectively communicate what to expect from
> Cinder, then I'm OK with that.
>
> My last choice (of the ones I'm favorable towards) would be marking a
> driver as untested/unstable/abandoned/etc rather than removing it. We
> could flag these a certain way and have them spam the logs like crazy
> after upgrade to make it very painfully clear that they are not
> being maintained. But as Duncan pointed out, this doesn't have as much
> impact for getting vendor attention. It's amazing the level of executive
> involvement that can happen after a patch is put up for driver removal
> due to non-compliance.
>
> Sean
>
> __
>
> I believe there is a compromise that we could implement in Cinder that
> enables us to have a deprecation
> of unsupported drivers that aren't meeting the Cinder driver requirements
> and allow upgrades to work
> without outright immediately removing a driver.
>
>
>1. Add a 'supported = True' attribute to every driver.
>2. When a driver no longer meets Cinder community requirements, put a
>patch up against the driver
>3. When c-vol service starts, check the supported flag.  If the flag
>is False, then log an exception, and disable the driver.
>4. Allow the admin to put an entry in cinder.conf for the driver in
>question "enable_unsupported_driver = True".  This will allow the c-vol
>service to start the driver and allow it to work.  Log a warning on every
>driver call.
>5. This is a positive acknowledgement by the operator that they are
>enabling a potentially broken driver. Use at your own risk.
>6. If the vendor doesn't get the CI working in the next release, then
>remove the driver.
>7. If the vendor gets the CI working again, then set the supported
>flag back to True and all is good.
>
>
> This allows a deprecation period for a driver, and keeps operators who
> upgrade their deployment from losing access to their volumes they have on
> those back-ends.  It will give them time to contact the community and/or do
> some research, and find out what happened to the driver.   This also
> potentially gives the operator time to find a new supported backend and
> start migrating volumes.  I say potentially, because the driver may be
> broken, or it may work enough to migrate volumes off of it to a new backend.
>
> Having unsupported drivers in tree is terrible for the Cinder community,
> and in the long run terrible for operators.
> Instantly removing drivers because CI is unstable is terrible for
> operators in the short term, 

Re: [openstack-dev] [nova] Thoughts on testing novaclient functional with neutron

2016-08-12 Thread Armando M.
On 12 August 2016 at 08:13, Matt Riedemann 
wrote:

> I opened a bug yesterday against novaclient for running the functional
> tests against a neutron-backed devstack:
>
> https://bugs.launchpad.net/python-novaclient/+bug/1612410
>
> With neutron being the default in devstack now, people hacking on
> novaclient and running functional tests locally are going to have a hard
> time since the tests are unconditionally written with the assumption that
> the backing devstack is using nova-network.
>
> So we need to make the tests conditional, the question is what's the best
> way?
>
> We could use a config like how Tempest does it, but where does that
> happen? In the clouds.yaml, or the post_test_hook.sh, other?
>
> Another idea is the base functional test that sets up the client just
> checks the keystone service catalog for a 'network' service entry,
> somewhere in here:
>
> https://github.com/openstack/python-novaclient/blob/232711c0ef98baf79bcf4c8bdbae4b84003c9ab9/novaclient/tests/functional/base.py#L116
>
> Thoughts on either approach or something completely different?


Doesn't it make sense to configure the functional tests for novaclient to
make devstack work on a nova-net backend, and introduce a new non-voting
job used to flush out the new backend option, and transition over the old
job to the new one in due course?



>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread John Griffith
On Fri, Aug 12, 2016 at 12:10 PM, Walter A. Boring IV  wrote:

>
> I was leaning towards a separate repo until I started thinking about all
> the overhead and complications this would cause. It's another repo for
> cores to watch. It would cause everyone extra complication in setting up
> their CI, which is already one of the biggest roadblocks. It would make
> it a little harder to do things like https://review.openstack.org/297140
> and https://review.openstack.org/346470 to be able to generate
> this: http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
> setup, more moving parts to break, and just generally more
> complications.
>
> All things that can be solved for sure. I just question whether it would
> be worth having that overhead. Frankly, there are better things I'd like
> to spend my time on.
>
> I think at this point my first preference would actually be to define a
> new tag. This addresses both the driver removal issue as well as the
> backporting of driver bug fixes. I would like to see third party drivers
> recognized and treated as being different, because in reality they are
> very different than the rest of the code. Having something like
> follows_deprecation_but_has_third_party_drivers_that_dont would make a
> clear statement that there is a vendor component to this project that
> really has to be treated differently and has different concerns
> deployers need to be aware of.
>
> Barring that, I think my next choice would be to remove the tag. That
> would really be unfortunate as we do want to make it clear to users that
> Cinder will not arbitrarily break APIs or do anything between releases
> without warning when it comes to non-third party drivers. But if that is
> what we need to do to effectively communicate what to expect from
> Cinder, then I'm OK with that.
>
> My last choice (of the ones I'm favorable towards) would be marking a
> driver as untested/unstable/abandoned/etc rather than removing it. We
> could flag these a certain way and have them spam the logs like crazy
> after upgrade to make it very painfully clear that they are not
> being maintained. But as Duncan pointed out, this doesn't have as much
> impact for getting vendor attention. It's amazing the level of executive
> involvement that can happen after a patch is put up for driver removal
> due to non-compliance.
>
> Sean
>
> __
>
> I believe there is a compromise that we could implement in Cinder that
> enables us to have a deprecation
> of unsupported drivers that aren't meeting the Cinder driver requirements
> and allow upgrades to work
> without outright immediately removing a driver.
>
>
>1. Add a 'supported = True' attribute to every driver.
>2. When a driver no longer meets Cinder community requirements, put a
>patch up against the driver
>3. When c-vol service starts, check the supported flag.  If the flag
>is False, then log an exception, and disable the driver.
>4. Allow the admin to put an entry in cinder.conf for the driver in
>question "enable_unsupported_driver = True".  This will allow the c-vol
>service to start the driver and allow it to work.  Log a warning on every
>driver call.
>5. This is a positive acknowledgement by the operator that they are
>enabling a potentially broken driver. Use at your own risk.
>6. If the vendor doesn't get the CI working in the next release, then
>remove the driver.
>7. If the vendor gets the CI working again, then set the supported
>flag back to True and all is good.
>
>
> This allows a deprecation period for a driver, and keeps operators who
> upgrade their deployment from losing access to their volumes they have on
> those back-ends.  It will give them time to contact the community and/or do
> some research, and find out what happened to the driver.   This also
> potentially gives the operator time to find a new supported backend and
> start migrating volumes.  I say potentially, because the driver may be
> broken, or it may work enough to migrate volumes off of it to a new backend.
>
> Having unsupported drivers in tree is terrible for the Cinder community,
> and in the long run terrible for operators.
> Instantly removing drivers because CI is unstable is terrible for
> operators in the short term, because as soon as they upgrade OpenStack,
> they lose all access to managing their existing volumes.   Just because we
> leave a driver in tree in this state, doesn't mean that the operator will
> be able to migrate if the driver is broken, but they'll have a chance
> depending on the state of the driver in question.  It could be horribly
> broken, but the breakage might be something fixable by someone that just
> knows Python.   If the driver is gone from tree entirely, then that's a lot
> more to overcome.
>
> I don't think there is a way to make everyone happy all the time, but I
> 

[openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Andreas Jaeger
TL;DR: Projects can use bindep.txt to document in a programmatic way
their binary dependencies

Python developers record their dependencies on other Python packages in
requirements.txt and test-requirements.txt. But some packages have
dependencies outside of Python, and we should document these
dependencies as well so that operators, developers, and CI systems
know what needs to be available for their programs.

Bindep is a solution to this: it allows a repo to document binary
dependencies in a single file. It even enables specification of
which distribution the package belongs to - Debian, Fedora, Gentoo,
openSUSE, RHEL, SLES and Ubuntu have different package names - and
allows profiles, like a test profile.

Bindep is one of the tools the OpenStack Infrastructure team has written
and maintains. It is already in use by over 130 repositories.

For better bindep adoption, in the just released bindep 2.1.0 we have
changed the name of the default file used by bindep from
other-requirements.txt to bindep.txt and have pushed changes [3] to
master branches of repositories for this.

Projects are encouraged to create their own bindep files. Besides
documenting what is required, it also gives a speedup in running tests,
since you install only what you need and not all packages that some
other project might need and that are installed by default. Each test system
comes with a basic installation, and then we either add the repo-defined
package list or the large default list.

In the OpenStack CI infrastructure, we use the "test" profile for
installation of packages. This allows projects to document their run
time dependencies - the default packages - and the additional packages
needed for testing.
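To make this concrete: a bindep file lists one package per line, with optional platform selectors and profiles in brackets. The entries below are purely illustrative examples, not taken from any particular repository:

```
# Run-time dependency, with per-distro package names
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]

# Only installed when the "test" profile is selected (as the
# OpenStack CI infrastructure does)
graphviz [test]
libmysqlclient-dev [platform:dpkg test]
```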

Be aware that bindep is not used by devstack based tests, those have
their own way to document dependencies.

A side effect is that your tests run faster, since they have fewer packages
to install. A Ubuntu Xenial test node installs 140 packages, and that can
take between 2 and 5 minutes. With a smaller bindep file, this can change.

Let's look at the log file for a normal installation with using the
default dependencies:
2 upgraded, 139 newly installed, 0 to remove and 41 not upgraded
Need to get 148 MB of archives.
After this operation, 665 MB of additional disk space will be used.

Compare this with the openstack-manuals repository that uses bindep -
this example took 20 seconds rather than minutes:
0 upgraded, 17 newly installed, 0 to remove and 43 not upgraded.
Need to get 35.8 MB of archives.
After this operation, 128 MB of additional disk space will be used.

If you want to learn more about bindep, read the Infra Manual on package
requirements [1] or the bindep manual [2].

If you have further questions about bindep, feel free to ask the Infra
team on #openstack-infra.

Thanks to Anita for reviewing and improving this blog post and to the
OpenStack Infra team that maintains bindep, especially to Jeremy Stanley
and Robert Collins.

Note I'm sending this out while not all our test clouds have images that
know about bindep.txt (they only handle other-requirements.txt). The
infra team is in the process of ensuring updated images in all our test
clouds for later today. Thanks, Paul!

Andreas


References:
[1] http://docs.openstack.org/infra/manual/drivers.html#package-requirements
[2] http://docs.openstack.org/infra/bindep/
[3] https://review.openstack.org/#/q/branch:master+topic:bindep-mv
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Walter A. Boring IV



I was leaning towards a separate repo until I started thinking about all
the overhead and complications this would cause. It's another repo for
cores to watch. It would cause everyone extra complication in setting up
their CI, which is already one of the biggest roadblocks. It would make
it a little harder to do things like https://review.openstack.org/297140
and https://review.openstack.org/346470 to be able to generate this:
http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
setup, more moving parts to break, and just generally more
complications.

All things that can be solved for sure. I just question whether it would
be worth having that overhead. Frankly, there are better things I'd like
to spend my time on.

I think at this point my first preference would actually be to define a
new tag. This addresses both the driver removal issue as well as the
backporting of driver bug fixes. I would like to see third party drivers
recognized and treated as being different, because in reality they are
very different than the rest of the code. Having something like
follows_deprecation_but_has_third_party_drivers_that_dont would make a
clear statement that there is a vendor component to this project that
really has to be treated differently and has different concerns
deployers need to be aware of.

Barring that, I think my next choice would be to remove the tag. That
would really be unfortunate as we do want to make it clear to users that
Cinder will not arbitrarily break APIs or do anything between releases
without warning when it comes to non-third party drivers. But if that is
what we need to do to effectively communicate what to expect from
Cinder, then I'm OK with that.

My last choice (of the ones I'm favorable towards) would be marking a
driver as untested/unstable/abandoned/etc rather than removing it. We
could flag these a certain way and have them spam the logs like crazy
after upgrade to make it very painfully clear that they are not
being maintained. But as Duncan pointed out, this doesn't have as much
impact for getting vendor attention. It's amazing the level of executive
involvement that can happen after a patch is put up for driver removal
due to non-compliance.

Sean

__
I believe there is a compromise that we could implement in Cinder that 
enables us to have a deprecation
of unsupported drivers that aren't meeting the Cinder driver 
requirements and allow upgrades to work

without outright immediately removing a driver.

1. Add a 'supported = True' attribute to every driver.
2. When a driver no longer meets Cinder community requirements, put a
   patch up against the driver
3. When c-vol service starts, check the supported flag.  If the flag is
   False, then log an exception, and disable the driver.
4. Allow the admin to put an entry in cinder.conf for the driver in
   question "enable_unsupported_driver = True".  This will allow the
   c-vol service to start the driver and allow it to work.  Log a
   warning on every driver call.
5. This is a positive acknowledgement by the operator that they are
   enabling a potentially broken driver. Use at your own risk.
6. If the vendor doesn't get the CI working in the next release, then
   remove the driver.
7. If the vendor gets the CI working again, then set the supported flag
   back to True and all is good.


This allows a deprecation period for a driver, and keeps operators who 
upgrade their deployment from losing access to their volumes they have 
on those back-ends.  It will give them time to contact the community 
and/or do some research, and find out what happened to the driver.   
This also potentially gives the operator time to find a new supported 
backend and start migrating volumes.  I say potentially, because the 
driver may be broken, or it may work enough to migrate volumes off of it 
to a new backend.


Having unsupported drivers in tree is terrible for the Cinder community, 
and in the long run terrible for operators.
Instantly removing drivers because CI is unstable is terrible for 
operators in the short term, because as soon as they upgrade OpenStack, 
they lose all access to managing their existing volumes.   Just because 
we leave a driver in tree in this state, doesn't mean that the operator 
will be able to migrate if the driver is broken, but they'll have a 
chance depending on the state of the driver in question.  It could be 
horribly broken, but the breakage might be something fixable by someone 
that just knows Python.   If the driver is gone from tree entirely, then 
that's a lot more to overcome.


I don't think there is a way to make everyone happy all the time, but I 
think this buys operators a small window of opportunity to still manage 
their existing volumes before the driver is removed. It also still 
allows the Cinder community to deal with unsupported drivers in a way 
that will motivate vendors to keep their 

Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Clay Gerrard
On Thu, Aug 11, 2016 at 7:14 AM, Erno Kuvaja  wrote:

>
> Lets say I was ops evaluating different options as storage vendor for
> my cloud and I get told that "Here is the list of supported drivers
> for different OpenStack Cinder back ends delivered by Cinder team", I
> start looking what the support level of those drivers are and see that
> Cinder follows standard deprecation which is fairly user/ops friendly
> with decent warning etc. I'm happy with that, not knowing OpenStack I
> would not even look if different subcomponents of Cinder happens to
> follow different policy. Now I buy storage vendor X HW and at Oct I
> realize that the vendor's driver is not shipped, nor any remains of it
> is visible anymore, I'd be reasonably pissed off. If I knew that the
> risk is there I would select my HW based on the negotiations that my
> HW is contractually tied to maintain that driver and it's CI, and that
> would be fine as well or if not possible I'd select some other
> solution I could get reasonably guarantee that it will be
> supported/valid at it's expected life time. As said I don't think
> there is anything wrong with the 3rd party driver policy, but
> maintaining that and the tag about standard-deprecation project wide
> is sending wrong message to those who do not know better to safeguard
> their rear ends.
>

Can we clarify if anyone is aware of this *actually* happening? Because
this description of events sounds *terrible*?  If we have a case-in-point I
think it'd be down right negligent to not give the situation a proper RCA,
but I'd be *real* curious to hear the previous "4 whys" that lead to
"ultimately; the problems was the tags..."

I'm much more inclined to think that we should trust the Cinder team to do
what they think is best based on their experience.  If their experience is
that it's better for their operators that they *not* ship "deprecated (but
probably broken)" drivers - GOOD FOR THEM!  I think it'd be great if the
"standard deprecation policy" can be informed and updated based on the
experience of a successful project like Cinder - if not, yeah I really hope
they continue to do the *right* thing over the *standard* thing.

OTOH, if what they think is right is causing *real* problems, let's surface
those - if they got to this policy based on experience, new information
will spur new ideas.  But that's different than some pontification based on
hypotheticals.  Speaking of which, why is this even coming up in the
*development* ML?

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican: Secure Setup & HSM-plugin

2016-08-12 Thread Douglas Mendizábal

Hi Manuel,

I'm happy to hear about your interest in Barbican.  I assume your HSM
has a PKCS#11 interface since the admin commands to generate the MKEK
and HMAC keys worked for you.

The labels for the generated keys should be specified in the config
file for the API process. [1]  The API process uses the MKEK and HMAC
keys to encrypt and sign the secrets (keys) that are stored in
Barbican by clients.

The PKCS#11 plugin was designed to store client keys (secrets) in the
SQL database, so your API process must be
configured to use "store_crypto" as the enabled_secretstore_plugins
[2] in addition to specifying "p11_crypto" as the
enabled_crypto_plugins [3].

When configured this way, Barbican uses the HSM to encrypt the client
data (keys/secrets) before storing it in the DB.
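Put together, the relevant sections of barbican.conf would look roughly like the following. The labels, library path, and login shown here are placeholders for your own values; check the sample config in [1]-[3] for the authoritative option names:

```
[secretstore]
enabled_secretstore_plugins = store_crypto

[crypto]
enabled_crypto_plugins = p11_crypto

[p11_crypto_plugin]
# Path to the vendor's PKCS#11 library and the HSM credentials
library_path = /usr/lib/libpkcs11.so
login = <partition password>
# Labels of the keys generated via pkcs11-key-generation /
# 'barbican-manage hsm'
mkek_label = my_mkek
hmac_label = my_hmac
```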

The API itself does not currently support using keys stored by clients
to do server-side encryption, but it's a feature that has been
discussed in previous summits with some interest.  We've also had some
discussions with the Designate team to add server-side signing that
they could use to implement DNSSEC, but we don't yet have a blueprint
for it.

Let me know if you have any more questions.

- Douglas Mendizábal

[1] http://git.openstack.org/cgit/openstack/barbican/tree/etc/barbican/barbican.conf#n278
[2] http://git.openstack.org/cgit/openstack/barbican/tree/etc/barbican/barbican.conf#n255
[3] http://git.openstack.org/cgit/openstack/barbican/tree/etc/barbican/barbican.conf#n260


On 8/12/16 7:51 AM, Praktikant HSM wrote:
> Hi all,
> 
> As a member of Utimaco's pre-sales team I am currently testing an 
> integration of Barbican with one of our HSMs.
> 
> 
> 
> We were able to generate MKEKs and HMAC keys on the HSM with the 
> 'pkcs11-key-generation' as well as 'barbican-manage hsm' commands. 
> However, it is not fully clear to us how to use these keys to
> encrypt or sign data.
> 
> 
> 
> Additionally, we would appreciate further information concerning
> the secure setup of Barbican with an HSM-plugin.
> 
> 
> 
> Thank you in advance for your support.
> 
> 
> 
> Best regards,
> 
> 
> 
> 
> 
> Manuel Roth
> 
> 
> 
> ---
> 
> System Engineering HSM
> 
> 
> 
> Utimaco IS GmbH
> 
> Germanusstr. 4
> 
> 52080 Aachen
> 
> Germany
> 
> 
> 
> www.utimaco.com 
> 
> 
> --
- --
>
>  Utimaco IS GmbH Germanusstr. 4, D.52080 Aachen, Germany, Tel:
> +49-241-1696-0, www.utimaco.com Seat: Aachen – Registergericht
> Aachen HRB 18922 VAT ID No.: DE 815 496 496 Managementboard: Malte
> Pollmann (Chairman) CEO, Dr. Frank J. Nellissen CFO
> 
> This communication is confidential. We only send and receive email
> on the basis of the terms set out at 
> https://www.utimaco.com/en/e-mail-disclaimer/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] Next Scheduler Subteam meeting

2016-08-12 Thread Ed Leafe
The next meeting of the Nova Scheduler subteam will be on Monday, August 15 at 
1400 UTC in #openstack-meeting-alt

http://www.timeanddate.com/worldclock/fixedtime.html?iso=20160815T14

The agenda is here: https://wiki.openstack.org/wiki/Meetings/NovaScheduler

If you have any items you wish to discuss, please add them to the agenda before 
the meeting.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Branching Gnocchi stable/2.2

2016-08-12 Thread Julien Danjou
On Fri, Aug 12 2016, Doug Hellmann wrote:

> I've created the branch, which includes a patch to update the .gitreview
> file [1] and another to update the release notes [2]. You may want to
> modify that release note patch, so feel free to take it over and fix it
> up how you like.

Perfect, thanks Doug!

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Anita Kuno

On 16-08-12 09:21 AM, Thierry Carrez wrote:

Sean Dague wrote:

I 100% understand the cinder policy of kicking drivers out without CI.
And I think there is a lot of value in ensuring what's in tree is tested.

However, from a user perspective basically it means that if you deploy
Newton cinder and build a storage infrastructure around anything other
than ceph, lvm, or NFS, you have a very real chance of never being able
to upgrade to Ocata, because your driver was fully deleted, unless you
are willing to completely change up your storage architecture during the
upgrade.

That is the kind of reality that should be front and center to the
users. Because it's not just a drop of standard deprecation, it's also a
removal of 'supports upgrade', as Netwon cinder config won't work with
Ocata.

Could there be more of an off ramp / on ramp here to the drivers? If a
driver CI fails to meet the reporting window mark it deprecated for the
next delete window. If a driver is in a deprecated state they need some
long window of continuous reporting to get out of that state (like 120
days or something). Bring in all new drivers in a
deprecated/experimental/untested state, which they only get to shrug off
after the onramp window?

It's definitely important that the project has the ability to clean out
the cruft, but it would be nice to not be overly brutal to our operators
at the same time.

And if not, I think that tags (or lack thereof) aren't fully
communicating the situation here. Cinder docs should basically say "only
use ceph / lvm / nfs, as those are the only drivers that we can
guarantee will be in the next release".

+1

Both of the options (keeping cruft in tree vs. having no assurance at
all that your choice of driver will be around in 6 months) are horrible
from an operator's standpoint. But I think that's a false dichotomy and
we need a more subtle solution: communicate about sane drivers where we
trust the ability of core team or the vendor to still provide a workable
solution in the next release (following standard deprecation policy)
while still being able to remove cruft if a driver goes stale /
untested. That means defining multiple tiers of trust, and having each
driver build that trust over time.


I think building trust over time is the crucial point here. I think we 
as a community have learned that in certain areas, a vast amount of 
cleanup results from allotting trust before it has been earned.


Giving folks an opportunity to earn trust, actual trust, not just gaming 
a process, enables everyone to be able to work together optimally. We 
have learned some folks are unwilling to do the work to earn it.


But some folks are worthy of trust and have demonstrated it. Thanks to 
those who have wanted that for themselves.


Thank you,
Anita.



In that other thread I proposed two tiers (in openstack/cinder following
deprecation and stable policies and in a separate Cinder repository if
you don't trust it to follow the policies) since the Cinder team sees
value in keeping them cinder-core-reviewed and in a limited number of
repositories.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Adding RHEL 7 STIG to openstack-ansible-security

2016-08-12 Thread Major Hayden

On 08/04/2016 12:45 PM, Major Hayden wrote:
> The existing openstack-ansible-security role uses security configurations 
> from the Security Technical Implementation Guide (STIG) and the new Red Hat 
> Enterprise Linux 7 STIG is due out soon.  The role is currently based on the 
> RHEL 6 STIG, and although this works quite well for Ubuntu 14.04, the RHEL 7 
> STIG has plenty of improvements that work better with Ubuntu 16.04, CentOS 7 
> and RHEL 7.

I've gone ahead and proposed a spec for these changes here:

  https://review.openstack.org/#/c/354389/

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Fix evaluation of host disk usage by volume-backed instances

2016-08-12 Thread melanie witt

> I am working on a POC with this approach and will test all possible
> scenarios (boot, resize, reboot, compute service stop/start,
> shelved-unshelved etc).
>
> Please let me know your opinion about the same or you have any other
> solution in mind.




Hi Abhishek,


FWIW, I'm working on a patch to propose soon for the bug [1] as we had 
discussed on the review [2]. I'm currently testing it out locally.



Best,
melanie


[1] https://bugs.launchpad.net/nova/+bug/1469179
[2] https://review.openstack.org/#/c/200870




[openstack-dev] [nova] Thoughts on testing novaclient functional with neutron

2016-08-12 Thread Matt Riedemann
I opened a bug yesterday against novaclient for running the functional 
tests against a neutron-backed devstack:


https://bugs.launchpad.net/python-novaclient/+bug/1612410

With neutron being the default in devstack now, people hacking on 
novaclient and running functional tests locally are going to have a hard 
time since the tests are unconditionally written with the assumption 
that the backing devstack is using nova-network.


So we need to make the tests conditional; the question is, what's the best 
way?


We could use a config file like Tempest does, but where would that happen? 
In clouds.yaml, in post_test_hook.sh, or somewhere else?


Another idea is to have the base functional test that sets up the client 
check the keystone service catalog for a 'network' service entry, 
somewhere in here:


https://github.com/openstack/python-novaclient/blob/232711c0ef98baf79bcf4c8bdbae4b84003c9ab9/novaclient/tests/functional/base.py#L116
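A minimal sketch of that catalog check (helper name and data shape are hypothetical; the real base class would get the catalog via its keystone session rather than a plain list):

```python
# Hypothetical sketch of the catalog-based detection described above.
# The helper only needs the list of service entries returned by the
# identity API; in the real test base class this would come from the
# authenticated keystone client.
def has_network_service(service_catalog):
    """Return True if any catalog entry is a 'network' (neutron) service."""
    return any(entry.get('type') == 'network' for entry in service_catalog)

# A base functional test could then branch or skip accordingly, e.g.:
#     if has_network_service(catalog):
#         self.skipTest('devstack is neutron-backed; nova-network test skipped')
```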

Thoughts on either approach or something completely different?

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [requirements] Why module has been deleted even if it is consumed by the project?

2016-08-12 Thread Matthew Thode
On 08/12/2016 09:53 AM, Andrey Pavlov wrote:
> Thanks for the answer,
> 
> I don't know why - but requirements job is not in the list of jobs. It
> present only in comment from jenkins.
> Logs are here
> - 
> http://logs.openstack.org/67/354667/1/check/gate-ec2-api-requirements/4e9e1da/
> 
> And I would like to include ec2api project
> to openstack/requirements/projects.txt
> Is it possible?

Yes, you can. You'll first need to add the check-requirements job to your
project's list of tests in project-config. Once it's added there, we can add
your project to projects.txt.

The relevant section of the readme is here.

https://github.com/openstack/requirements/#enforcement-in-projects

-- 
-- Matthew Thode (prometheanfire)





[openstack-dev] [neutron] OVO Status Dashboard

2016-08-12 Thread Morales, Victor
Hey neutrinos,

First of all, the high priority for the OVO implementation in the Newton 
release is the implementation and integration of the port, subnet, and network 
objects. But given that more people are joining this initiative, and that many 
patches relate directly or indirectly to it, the work has become hard to track. 
So I decided to create this document [1] to visualize and coordinate our efforts.
Feel free to include, modify, or add missing things, but above all, please review 
the existing patches and help us achieve our goal.

[1] 
https://docs.google.com/spreadsheets/d/1FeeQlQITsZSj_wpOXiLbS36dirb_arX0XEWBdFVPMB8/edit?usp=sharing

Regards/Saludos
Victor Morales


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread John Griffith
On Fri, Aug 12, 2016 at 7:37 AM, Sean McGinnis 
wrote:

> On Fri, Aug 12, 2016 at 03:40:47PM +0300, Duncan Thomas wrote:
> > On 12 Aug 2016 15:28, "Thierry Carrez"  wrote:
> > >
> > > Duncan Thomas wrote:
> >
> > > I agree that leaving broken drivers in tree is not significantly better
> > > from an operational perspective. But I think the best operational
> > > experience would be to have an idea of how much risk you expose
> yourself
> > > when you pick a driver, and have a number of them that are actually
> > > /covered/ by the standard deprecation policy.
> > >
> > > So ideally there would be a number of in-tree drivers (on which the
> > > Cinder team would apply the standard deprecation policy), and a
> separate
> > > repository for 3rd-party drivers that can be removed at any time (and
> > > which would /not/ have the follows-standard-deprecation-policy tag).
> >
> > So we'd certainly have to move out all of the backends requiring
> > proprietary hardware, since we couldn't commit to keeping them working if
> > their vendors turn of their CI. That leaves ceph, lvm, NFS, drdb, and
> > sheepdog, I think. There is not enough broad knowledge in the core team
> > currently to support sheepdog or drdb without 'vendor' help. That would
> > leave us with three drivers in the tree, and not actually provide much
> > useful risk information to deployers at all.
> >
> > > I understand that this kind of reorganization is a bit painful for
> > > little (developer-side) gain, but I think it would provide the most
> > > useful information to our users and therefore the best operational
> > > experience...
> >
> > In theory this might be true, but see above - in practice it doesn't work
> > that way.
>
> I was leaning towards a separate repo until I started thinking about all
> the overhead and complications this would cause. It's another repo for
> cores to watch. It would cause everyone extra complication in setting up
> their CI, which is already one of the biggest roadblocks. It would make
> it a little harder to do things like https://review.openstack.org/297140
> and https://review.openstack.org/346470 to be able to generate this:
> http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
> setup, more moving parts to break, and just generally more
> complications.
>
> All things that can be solved for sure. I just question whether it would
> be worth having that overhead. Frankly, there are better things I'd like
> to spend my time on.
>
> I think at this point my first preference would actually be to define a
> new tag. This addresses both the driver removal issue as well as the
> backporting of driver bug fixes. I would like to see third party drivers
> recognized and treated as being different, because in reality they are
> very different than the rest of the code. Having something like
> follows_deprecation_but_has_third_party_drivers_that_dont would make a
> clear statement that there is a vendor component to this project that
> really has to be treated differently and has different concerns that
> deployers need to be aware of.
>
> Barring that, I think my next choice would be to remove the tag. That
> would really be unfortunate as we do want to make it clear to users that
> Cinder will not arbitrarily break APIs or do anything between releases
> without warning when it comes to non-third party drivers. But if that is
> what we need to do to effectively communicate what to expect from
> Cinder, then I'm OK with that.
>
> My last choice (of the ones I'm favorable towards) would be marking a
> driver as untested/unstable/abandoned/etc rather than removing it. We
> could flag these a certain way and have then spam the logs like crazy
> after upgrade to make it very and painfully clear that they are not
> being maintained. But as Duncan pointed out, this doesn't have as much
> impact for getting vendor attention. It's amazing the level of executive
> involvement that can happen after a patch is put up for driver removal
> due to non-compliance.
>
> Sean
>
Yeah, I think something like a "passes-upstream-integration" tag per driver
would be a better option.  Whether that's collected via automation looking
at the Gerrit info from 3rd-party CI, or we bring back the old manual cert
scripts (or some form of them), is another conversation worth having next
time we're all together.  Now, trying to agree on the criteria might be a
bit of work.

By going with a tag we don't remove anything, but we also don't pretend that
we know it works or anything either.

The statement suggesting that if it's not in the infra gate then it must
be considered as maybe not there in the future 

Re: [openstack-dev] [requirements] Why module has been deleted even if it is consumed by the project?

2016-08-12 Thread Andrey Pavlov
Thanks for the answer.

I don't know why, but the requirements job is not in the list of jobs; it
is present only in a comment from Jenkins.
Logs are here -
http://logs.openstack.org/67/354667/1/check/gate-ec2-api-requirements/4e9e1da/

And I would like to include the ec2api project in
openstack/requirements/projects.txt.
Is that possible?

On Fri, Aug 12, 2016 at 5:42 PM, Doug Hellmann 
wrote:

> Excerpts from Andrey Pavlov's message of 2016-08-12 16:50:21 +0300:
> > Hi,
> >
> > When I've tried to bump version in ec2api project for some library - I've
> > got an error from requirements-gate job [1].
> > I've investigated this issue and found that this module was deleted from
> > global requirements with comment 'only ec2api is using it'.
> > It was done in the review [2].
> >
> > What happens?
> > Why consumed module was deleted?
> > How I can update this requirement now?
> > What means phrase "This is not consumed by any projects managed by
> > requirements."?
> >
> >
> > [1] - https://review.openstack.org/#/c/354667/
> > [2] - https://review.openstack.org/#/c/321955/
> >
>
> It looks like you're trying to change the version of botocore used by
> the ec2-api project. Is that right?
>
> I don't see the requirements gate job in the list of tests that ran
> against patch [1] at all, so I'm not sure where the error message
> you reported is coming from. Can you link to a log from the job?
>
> I don't see ec2-api listed in openstack/requirements/projects.txt
> as a project that receives automated updates from the global
> requirements list. Looking through all of the requirements files
> for all projects, I get:
>
> ec2-api   requirements.txt   botocore>=1.0.0
> rpm-packaging global-requirements.txt   botocore>=1.0.0
>
> Because only projects that aren't syncing requirements were using
> botocore, it didn't need to be included in the list of global
> requirements that is synced with projects. You should be able to
> modify your requirements list directly to update to the newer
> version.
>
> Doug
>
>



-- 
Kind regards,
Andrey Pavlov.


Re: [openstack-dev] [release] Branching Gnocchi stable/2.2

2016-08-12 Thread Doug Hellmann
Excerpts from Julien Danjou's message of 2016-08-12 09:10:34 +0200:
> Hi release team,
> 
> I've asked on IRC, but I guess it's safer on the ml.
> 
> We tagged Gnocchi 2.2.0 using the release repo, but as discussed earlier
> on this list, the branching system is not ready yet. So we'd need
> someone on your side to cut a stable/2.2 branch starting at the 2.2.0
> tag.
> 
> Thanks!
> 
> Cheers,

I've created the branch, which includes a patch to update the .gitreview
file [1] and another to update the release notes [2]. You may want to
modify that release note patch, so feel free to take it over and fix it
up how you like.

Doug

[1] https://review.openstack.org/354757
[2] https://review.openstack.org/354758



Re: [openstack-dev] [OpenStack-docs] [neutron] [api] [doc] API reference for neutron stadium projects (re: API status report)

2016-08-12 Thread Henry Gessau
Akihiro Motoki  wrote:
> this mail focuses on neutron-specific topics. I dropped cinder and ironic 
> tags.
> 
> 2016-08-11 23:52 GMT+09:00 Anne Gentle :
>>
>>
>> On Wed, Aug 10, 2016 at 2:49 PM, Anne Gentle 
>> wrote:
>>>
>>> Hi all,
>>> I wanted to report on status and answer any questions you all have about
>>> the API reference and guide publishing process.
>>>
>>> The expectation is that we provide all OpenStack API information on
>>> developer.openstack.org. In order to meet that goal, it's simplest for now
>>> to have all projects use the RST+YAML+openstackdocstheme+os-api-ref
>>> extension tooling so that users see available OpenStack APIs in a sidebar
>>> navigation drop-down list.
>>>
>>> --Migration--
>>> The current status for migration is that all WADL content is migrated
>>> except for trove. There is a patch in progress and I'm in contact with the
>>> team to assist in any way. https://review.openstack.org/#/c/316381/
>>>
>>> --Theme, extension, release requirements--
>>> The current status for the theme, navigation, and Sphinx extension tooling
>>> is contained in the latest post from Graham proposing a solution for the
>>> release number switchover and offers to help teams as needed:
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101112.html I
>>> hope to meet the requirements deadline to get those changes landed.
>>> Requirements freeze is Aug 29.
>>>
>>> --Project coverage--
>>> The current status for project coverage is that these projects are now
>>> using the RST+YAML in-tree workflow and tools and publishing to
>>> http://developer.openstack.org/api-ref/ so they will be
>>> included in the upcoming API navigation sidebar intended to span all
>>> OpenStack APIs:
>>>
>>> designate http://developer.openstack.org/api-ref/dns/
>>> glance http://developer.openstack.org/api-ref/image/
>>> heat http://developer.openstack.org/api-ref/orchestration/
>>> ironic http://developer.openstack.org/api-ref/baremetal/
>>> keystone http://developer.openstack.org/api-ref/identity/
>>> manila http://developer.openstack.org/api-ref/shared-file-systems/
>>> neutron-lib http://developer.openstack.org/api-ref/networking/
>>> nova http://developer.openstack.org/api-ref/compute/
>>> sahara http://developer.openstack.org/api-ref/data-processing/
>>> senlin http://developer.openstack.org/api-ref/clustering/
>>> swift http://developer.openstack.org/api-ref/object-storage/
>>> zaqar http://developer.openstack.org/api-ref/messaging/
>>>
>>> These projects are using the in-tree workflow and common tools, but do not
>>> have a publish job in project-config in the jenkins/jobs/projects.yaml file.
>>>
>>> ceilometer
>>
>>
>> Sorry, in reviewing further today I found another project that does not have
>> a publish job but has in-tree source files:
>>
>> cinder
>>
>> Team cinder: can you let me know where you are in your publishing comfort
>> level? Please add an api-ref-jobs: line with a target of block-storage to
>> jenkins/jobs/projects.yaml in the project-config repo to ensure publishing
>> is correct.
>>
>> Another issue is the name of the target directory for the final URL. Team
>> ironic can I change your api-ref-jobs: line to bare-metal instead of
>> baremetal? It'll be better for search engines and for alignment with the
>> other projects URLs: https://review.openstack.org/354135
>>
>> I've also uncovered a problem where a neutron project's API does not have an
>> official service name, and am working on a solution but need help from the
>> neutron team: https://review.openstack.org/#/c/351407
> 
> I followed the discussion in https://review.openstack.org/#/c/351407
> and my understanding of the conclusion is to add API reference source
> of neutron stadium projects
> to neutron-lib and publish them under
> http://developer.openstack.org/api-ref/networking/ .
> It sounds reasonable to me.
> 
> We can have dedicated pages for each stadium project, e.g.
> api-ref/networking/service-function-chaining for networking-sfc.
> At the moment all APIs are placed under the v2/ directory, but that is not
> good from either the user or the maintenance perspective.
> 
> 
> So, the next thing we need to clarify is what names and directory
> structure are appropriate from the documentation perspective.
> My proposal is to prepare a dedicated directory per networking project
> repository.
> The directory name should be a function name rather than a project
> name. For example,
> - neutron => ???
> - neutron-lbaas => load-balancer
> - neutron-vpnaas => vpn
> - neutron-fwaas => firewall
> - neutron-dynamic-routing => dynamic-routing
> - networking-sfc => service-function-chaining
> - networking-l2gw => layer2-gateway
> - (networking-bgpvpn) => bgp-vpn
> 
> My remaining open questions are:
> 
> - Is 'v2' directory needed?
>   All networking API provided by stadium projects are extensions to
> Networking v2 API and "v2" is the only API we have 

Re: [openstack-dev] [requirements] Why module has been deleted even if it is consumed by the project?

2016-08-12 Thread Doug Hellmann
Excerpts from Andrey Pavlov's message of 2016-08-12 16:50:21 +0300:
> Hi,
> 
> When I've tried to bump version in ec2api project for some library - I've
> got an error from requirements-gate job [1].
> I've investigated this issue and found that this module was deleted from
> global requirements with comment 'only ec2api is using it'.
> It was done in the review [2].
> 
> What happens?
> Why consumed module was deleted?
> How I can update this requirement now?
> What means phrase "This is not consumed by any projects managed by
> requirements."?
> 
> 
> [1] - https://review.openstack.org/#/c/354667/
> [2] - https://review.openstack.org/#/c/321955/
> 

It looks like you're trying to change the version of botocore used by
the ec2-api project. Is that right?

I don't see the requirements gate job in the list of tests that ran
against patch [1] at all, so I'm not sure where the error message
you reported is coming from. Can you link to a log from the job?

I don't see ec2-api listed in openstack/requirements/projects.txt
as a project that receives automated updates from the global
requirements list. Looking through all of the requirements files
for all projects, I get:

ec2-api   requirements.txt   botocore>=1.0.0
rpm-packaging global-requirements.txt   botocore>=1.0.0

Because only projects that aren't syncing requirements were using
botocore, it didn't need to be included in the list of global
requirements that is synced with projects. You should be able to
modify your requirements list directly to update to the newer
version.
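For example (the target version here is purely illustrative), the bump can be
made directly in the project's own requirements file:

```diff
 # ec2-api: requirements.txt
-botocore>=1.0.0
+botocore>=1.4.0
```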

Doug



Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-12 Thread Doug Hellmann
Excerpts from John Dickinson's message of 2016-08-11 15:00:56 -0700:
> 
> On 10 Aug 2016, at 8:29, Doug Hellmann wrote:
> 
> > Excerpts from Doug Hellmann's message of 2016-07-29 16:55:22 -0400:
> >> One of the outcomes of the discussion at the leadership training
> >> session earlier this year was the idea that the TC should set some
> >> community-wide goals for accomplishing specific technical tasks to
> >> get the projects synced up and moving in the same direction.
> >>
> >> After several drafts via etherpad and input from other TC and SWG
> >> members, I've prepared the change for the governance repo [1] and
> >> am ready to open this discussion up to the broader community. Please
> >> read through the patch carefully, especially the "goals/index.rst"
> >> document which tries to lay out the expectations for what makes a
> >> good goal for this purpose and for how teams are meant to approach
> >> working on these goals.
> >>
> >> I've also prepared two patches proposing specific goals for Ocata
> >> [2][3].  I've tried to keep these suggested goals for the first
> >> iteration limited to "finish what we've started" type items, so
> >> they are small and straightforward enough to be able to be completed.
> >> That will let us experiment with the process of managing goals this
> >> time around, and set us up for discussions that may need to happen
> >> at the Ocata summit about implementation.
> >>
> >> For future cycles, we can iterate on making the goals "harder", and
> >> collecting suggestions for goals from the community during the forum
> >> discussions that will happen at summits starting in Boston.
> >>
> >> Doug
> >>
> >> [1] https://review.openstack.org/349068 describe a process for managing 
> >> community-wide goals
> >> [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
> >> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> >> libraries"
> >>
> >
> > The proposal was discussed at the TC meeting yesterday [4], and
> > left open to give more time to comment. I've added all of the PTLs
> > for big tent projects as reviewers on the process patch [1] to
> > encourage comments from them.
> >
> > Please also look at the associated patches with the specific goals
> > for this cycle (python 3.5 support and cleaning up Oslo incubated
> > code).  So far most of the discussion has focused on the process,
> > but we need folks to think about the specific things they're going
> > to be asked to do during Ocata as well.
> >
> > Doug
> >
> > [4] 
> > http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-09-20.01.log.html
> >
> 
> 
> Commonality in goals and vision is what unites any community. I
> definitely support the TC's effort to define these goals for OpenStack
> and to champion them. However, I have a few concerns about the process
> that has been proposed.
> 
> I'm concerned with the mandate that all projects must prioritize these
> goals above all other work. Thinking about this from the perspective of
> the employers of OpenStack contributors, and I'm finding it difficult
> to imagine them (particularly smaller ones) getting behind this
> prioritization mandate. For example, if I've got a user or deployer
> issue that requires an upstream change, am I to prioritize Py35
> compatibility over "broken in production"? Am I now to schedule my own
> work on known bugs or missing features only after these goals have
> been met? Is that what I should ask other community members to do too?

There is a difference between priority and urgency. Clearly "broken
in production" is more urgent than other planned work. It's less
clear that, over the span of an entire 6 month release cycle, one
production outage is the most important thing the team would have
worked on.

The point of the current wording is to make it clear that because these
are goals coming from the entire community, teams are expected to place
a high priority on completing them. In some cases that may mean
working on community goals instead of working on internal team goals. We
all face this tension all the time, so that's nothing new.

> I agree with Hongbin Lu's comments that the resulting goals might fit
> into the interests of the majority but fundamentally violate the
> interests of a minority of project teams. As an example, should the TC
> decide that a future goal is for projects to implement a particular
> API-WG document, that may be good for several projects, but it might
> not be possible or advisable for others.

Again, the goals are not coming from the TC, they are coming from the
entire community. There will be discussion sessions, mailing list
threads, experimentation, etc. before any 

Re: [openstack-dev] [nova] Fix evaluation of host disk usage by volume-backed instances

2016-08-12 Thread Feodor Tersin
Hi Abhishek,


There have been a number of attempts to detect whether an instance is booted 
from a volume in DiskFilter itself. All of those reviews were stopped by core 
team members.


As for your assumption that image_ref is None for such instances, that is not 
right for an instance booted from a volume-backed image (a snapshot of another 
volume-backed instance). In that case image_ref refers to the snapshot: a 
Glance image which does not contain data, but whose metadata has links to the 
volume snapshot(s).


You also need to keep in mind that many other parts of the code use root_gb 
directly. E.g. the nova-manage script calculates the size of used host space 
from it (at least it did that a year ago).


Another thing (and why I stopped working on it) is that this change (setting 
root_gb to 0) does not fix the resize case. There, the scheduler and other 
components must not check root_gb, but must instead check the corresponding 
attribute of the new flavor. And I did not find an easy way to signal there 
that root_gb needs to be ignored.


I hope this info was useful for you.


Thanks,

Feodor Tersin


From: Kekane, Abhishek 
Sent: Friday, August 12, 2016 4:29:11 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [nova] Fix evaluation of host disk usage by 
volume-backed instances

Hi Nova developers,

This is about the patch: https://review.openstack.org/#/c/200870/19

We would like to fix this issue in Newton and back port it to Mitaka.

Reason:
Ubuntu 16.04 LTS supports the Mitaka release. If we wait for this fix until 
the Ocata release (~April 2017), then the Ubuntu team might need some more 
time to release Ocata in 16.04 (~Oct 2017). I think that would be too late to 
fix such an important and critical issue. On the other hand, if we fix this 
issue in Newton and back-port it to Mitaka, the chances of getting this fix 
into Ubuntu 16.04 increase, and it would be available to Ubuntu users anytime 
between Oct and Dec of this year.

We admit that this patch is a hack, but considering the severity of the issue, 
it's important to get it fixed as early as possible. Moreover, this code has 
been reviewed by many eyes so far, and I don't see it breaking current 
functionality. After this issue is fixed in the Ocata release during the 
resource-providers implementation, we can delete these changes.

This issue was discussed in the Nova meeting of Thu Aug 11 14:00:18 2016 UTC 
[1], and the community came to the conclusion that:

We need to fix this issue in Newton, but:

1. We are not willing to modify the instance root_gb that is stored in the 
instances db table.
2. It was suggested to fix this issue in the RT, but that won't solve the 
scheduler DiskFilter issue completely.


We have following approach in mind:

1. The scheduler DiskFilter should ignore root_gb from the RequestSpec if the 
instance is booted from volume.

IMO the boot-server API doesn't accept both image_id and volume_id when 
launching a new server. That means that if the instance is booted from volume, 
image_ref will always be None in the instances db table, i.e. 
instance.image_ref should be None. So, in the RequestSpec class, we should add 
an attribute "is_volume_backed" and set it to True when the image is None. 
DiskFilter has access to the spec_obj, so it can simply check whether 
is_volume_backed is True; if yes, ignore root_gb, otherwise count root_gb and 
take further action. This will solve the scheduler DiskFilter issue.

2. The resource tracker should also ignore root_gb while updating the compute 
disk metrics.
Again, in the "_get_usage_dict" method of resource_tracker.py, check whether 
the image is None; if yes, simply set root_gb to 0. This way each compute node 
will report its disk metrics to the scheduler correctly.

So the entire logic is based on the instance's image_ref: it should be None if 
the instance is booted from volume.
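As a rough illustration of the proposal (a sketch only, not the actual nova patch; the field names are simplified stand-ins for the real RequestSpec attributes):

```python
# Sketch of the proposed DiskFilter behavior: treat the request as
# volume-backed when no image_ref is set, and ignore the flavor's
# root_gb in that case.
def requested_disk_gb(spec):
    """spec: simplified dict standing in for a RequestSpec object."""
    is_volume_backed = not spec.get('image_ref')  # assumption from this mail
    root_gb = 0 if is_volume_backed else spec['root_gb']
    return root_gb + spec.get('ephemeral_gb', 0) + spec.get('swap_gb', 0)
```

The resource tracker side would apply the same "image is None, so count root_gb as 0" rule when building its usage dict.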

I am working on a POC with this approach and will test all possible scenarios 
(boot, resize, reboot, compute service stop/start, shelved-unshelved etc).

Please let me know your opinion about the same or you have any other solution 
in mind.

[1] 
http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-08-11-14.00.log.html

Thank you,

Abhishek Kekane



Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-12 Thread Matt Riedemann

On 8/12/2016 8:52 AM, Matt Riedemann wrote:

On 8/12/2016 8:24 AM, Sean McGinnis wrote:

On Fri, Aug 12, 2016 at 05:55:47AM -0400, Sean Dague wrote:

A devstack patch was pushed earlier this cycle around os-brick -
https://review.openstack.org/341744

Apparently there are some os-brick operations that are only safe if the
nova and cinder lock paths are set to be the same thing. Though that
hasn't yet hit release notes or other documentation yet that I can see.


Patrick East submitted a patch to add a release note on the Cinder side
last night: https://review.openstack.org/#/c/354501/


Is this a thing that everyone is aware of at this point? Are project
teams ok with this new requirement? Given that lock_path has no default,
this means we're potentially shipping corruption by default to users.
The other way forward would be to revisit that lock_path by default
concern, and have a global default. Or have some way that users are
warned if we think they aren't in a compliant state.
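To make the failure mode concrete, here is a small self-contained sketch (plain fcntl.flock, not os-brick's actual oslo.concurrency code) showing that external file locks only exclude each other when both sides agree on the lock path:

```python
# Sketch: external file locks serialize processes only if they use the
# SAME lock path. With different lock_path settings, both "services"
# acquire their lock at once -- no mutual exclusion at all.
import fcntl
import os
import tempfile

def acquire(lock_dir, name):
    """Take an exclusive, non-blocking lock; raises OSError if already held."""
    f = open(os.path.join(lock_dir, name), 'w')
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    return f

nova_dir = tempfile.mkdtemp()    # pretend this is nova's lock_path
cinder_dir = tempfile.mkdtemp()  # ...and this is cinder's

held = acquire(nova_dir, 'connect_volume')   # "nova" holds the lock

# Same lock_path: a second acquire is correctly refused.
try:
    acquire(nova_dir, 'connect_volume')
    excluded = False
except OSError:
    excluded = True

# Different lock_path: "cinder" sails right past the lock.
not_excluded = acquire(cinder_dir, 'connect_volume')
```

Setting both services' lock_path to the same directory is what makes the first (safe) case apply.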


This is a very good point that we are shipping corruption by default. I
would actually be in favor of having a global default. Other than
requiring tooz for default global locking (with a lot of extra overhead
for small deployments), I don't see a better way of making sure the
defaults are safe for those not aware of the issue.

And IMO, having the release note is just a CYA step. We can hope someone
reads it - and understands it's implications - but it likely will be
missed.

Anyway, that's my 2 cents.

Sean



I've put the devstack patch on a -2 hold until we get ACK from both Nova
and Cinder teams that everyone's cool with this.

-Sean

--
Sean Dague
http://dague.net






I saw the nova one last night:

https://review.openstack.org/#/c/354502/

But I don't know the details, like what are the actual specific things
that fail w/o this? Vague "trust me, you need to do this or else"
release notes that impact how people deploy are not fun, so I'd like more
details before we just put this out there.



This is also probably something that should be advertised on the 
openstack-operators ML. I would at least feel more comfortable if this 
is a known thing that operators have already been dealing with and we 
just didn't realize.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-12 Thread Matt Riedemann

On 8/12/2016 8:24 AM, Sean McGinnis wrote:

On Fri, Aug 12, 2016 at 05:55:47AM -0400, Sean Dague wrote:

A devstack patch was pushed earlier this cycle around os-brick -
https://review.openstack.org/341744

Apparently there are some os-brick operations that are only safe if the
nova and cinder lock paths are set to be the same thing. Though that
hasn't hit release notes or other documentation yet that I can see.


Patrick East submitted a patch to add a release note on the Cinder side
last night: https://review.openstack.org/#/c/354501/


Is this a thing that everyone is aware of at this point? Are project
teams ok with this new requirement? Given that lock_path has no default,
this means we're potentially shipping corruption by default to users.
The other way forward would be to revisit that lock_path by default
concern, and have a global default. Or have some way that users are
warned if we think they aren't in a compliant state.


This is a very good point that we are shipping corruption by default. I
would actually be in favor of having a global default. Other than
requiring tooz for default global locking (with a lot of extra overhead
for small deployments), I don't see a better way of making sure the
defaults are safe for those not aware of the issue.

And IMO, having the release note is just a CYA step. We can hope someone
reads it - and understands its implications - but it likely will be
missed.

Anyway, that's my 2 cents.

Sean



I've put the devstack patch on a -2 hold until we get ACK from both Nova
and Cinder teams that everyone's cool with this.

-Sean

--
Sean Dague
http://dague.net






I saw the nova one last night:

https://review.openstack.org/#/c/354502/

But I don't know the details, like what are the actual specific things 
that fail w/o this? Vague "trust me, you need to do this or else" 
release notes that impact how people deploy are not fun, so I'd like more 
details before we just put this out there.


--

Thanks,

Matt Riedemann




[openstack-dev] [requirements] Why module has been deleted even if it is consumed by the project?

2016-08-12 Thread Andrey Pavlov
Hi,

When I've tried to bump version in ec2api project for some library - I've
got an error from requirements-gate job [1].
I've investigated this issue and found that this module was deleted from
global requirements with comment 'only ec2api is using it'.
It was done in the review [2].

What happened?
Why was a module that is still consumed deleted?
How can I update this requirement now?
What does the phrase "This is not consumed by any projects managed by
requirements." mean?


[1] - https://review.openstack.org/#/c/354667/
[2] - https://review.openstack.org/#/c/321955/

-- 
Kind regards,
Andrey Pavlov.


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Sean McGinnis
On Fri, Aug 12, 2016 at 03:40:47PM +0300, Duncan Thomas wrote:
> On 12 Aug 2016 15:28, "Thierry Carrez"  wrote:
> >
> > Duncan Thomas wrote:
> 
> > I agree that leaving broken drivers in tree is not significantly better
> > from an operational perspective. But I think the best operational
> > experience would be to have an idea of how much risk you expose yourself
> > when you pick a driver, and have a number of them that are actually
> > /covered/ by the standard deprecation policy.
> >
> > So ideally there would be a number of in-tree drivers (on which the
> > Cinder team would apply the standard deprecation policy), and a separate
> > repository for 3rd-party drivers that can be removed at any time (and
> > which would /not/ have the follows-standard-deprecation-policy tag).
> 
> So we'd certainly have to move out all of the backends requiring
> proprietary hardware, since we couldn't commit to keeping them working if
> their vendors turn off their CI. That leaves ceph, lvm, NFS, drbd, and
> sheepdog, I think. There is not enough broad knowledge in the core team
> currently to support sheepdog or drbd without 'vendor' help. That would
> leave us with three drivers in the tree, and not actually provide much
> useful risk information to deployers at all.
> 
> > I understand that this kind of reorganization is a bit painful for
> > little (developer-side) gain, but I think it would provide the most
> > useful information to our users and therefore the best operational
> > experience...
> 
> In theory this might be true, but see above - in practice it doesn't work
> that way.

I was leaning towards a separate repo until I started thinking about all
the overhead and complications this would cause. It's another repo for
cores to watch. It would cause everyone extra complication in setting up
their CI, which is already one of the biggest roadblocks. It would make
it a little harder to do things like https://review.openstack.org/297140
and https://review.openstack.org/346470, which generate this:
http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
setup, more moving parts to break, and just generally more
complications.

All things that can be solved for sure. I just question whether it would
be worth having that overhead. Frankly, there are better things I'd like
to spend my time on.

I think at this point my first preference would actually be to define a
new tag. This addresses both the driver removal issue as well as the
backporting of driver bug fixes. I would like to see third party drivers
recognized and treated as being different, because in reality they are
very different than the rest of the code. Having something like
follows_deprecation_but_has_third_party_drivers_that_dont would make a
clear statement that there is a vendor component to this project that
really has to be treated differently and has different concerns
deployers need to be aware of.

Barring that, I think my next choice would be to remove the tag. That
would really be unfortunate as we do want to make it clear to users that
Cinder will not arbitrarily break APIs or do anything between releases
without warning when it comes to non-third party drivers. But if that is
what we need to do to effectively communicate what to expect from
Cinder, then I'm OK with that.

My last choice (of the ones I'm favorable towards) would be marking a
driver as untested/unstable/abandoned/etc rather than removing it. We
could flag these a certain way and have them spam the logs like crazy
after upgrade to make it painfully clear that they are not
being maintained. But as Duncan pointed out, this doesn't have as much
impact for getting vendor attention. It's amazing the level of executive
involvement that can happen after a patch is put up for driver removal
due to non-compliance.

Sean



[openstack-dev] [nova] Fix evaluation of host disk usage by volume-backed instances

2016-08-12 Thread Kekane, Abhishek
Hi Nova developers,

This is about the patch: https://review.openstack.org/#/c/200870/19

We would like to fix this issue in Newton and back port it to Mitaka.

Reason:
Ubuntu 16.04 LTS supports Mitaka release. If we wait for this fix until Ocata 
release (~April 2017), then Ubuntu team might need some more time to release 
Ocata in 16.04 (~Oct 2017). I think it will be too late to fix such an 
important and critical issue. Now on the other hand, if we fix this issue in 
Newton and back port it to Mitaka, the chances of getting this fix in Ubuntu 
16.04 increases and it would be available to the Ubuntu users anytime between 
Oct and Dec of this year.

We admit that this patch is a hack but considering its severity, it's important 
to get it fixed as early as possible. Moreover, this code has been reviewed by 
many eyes so far and I don't see it breaking current functionality. After this 
issue is fixed in the Ocata release during resource-providers implementation, 
we can delete these changes.

This issue was discussed in the Thu Aug 11 14:00:18 2016 UTC Nova meeting [1] and 
the community came to the conclusion that:

We need to fix this issue in Newton but

1. Not willing to modify instance root_gb that is stored in instances db table.
2. Suggested to fix this issue in RT but that won't solve the scheduler 
DiskFilter issue completely.


We have following approach in mind:

1. Scheduler DiskFilter should ignore root_gb from RequestSpec if instance is 
booted from volume.

IMO boot server doesn't accept both image_id and volume_id to launch a new 
server. That means, if the instance is booted from volume, image_ref will 
always be None in the instances db table. i.e. instance.image_ref should be 
None. So, in the RequestSpec class, we should add an attribute 
"is_volume_backed" and set it to True when image is None. The DiskFilter has 
access to spec_obj, so simply check if is_volume_backed is True, if yes, ignore 
root_gb else count root_gb and take further action. This will solve the 
scheduler DiskFilter issue.

2. Resource tracker should also ignore root_gb while updating compute disk 
metrics.
Again in "_get_usage_dict" method of resource_tracker.py, check if image is 
None, if yes, simply set root_gb to 0. This way each compute node will report 
disk metrics to the scheduler correctly.

So the entire logic is based on the instance's image_ref: it should be None if 
the instance is booted from volume.
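
A minimal sketch of the check described above (class and attribute names follow the proposal and are hypothetical, not actual Nova code):

```python
# Hypothetical sketch of the proposal: a RequestSpec-like object flags
# volume-backed boots, and the disk filter ignores root_gb for them.
class FakeRequestSpec:
    def __init__(self, image_ref, root_gb):
        self.root_gb = root_gb
        # image_ref is None when the instance is booted from volume
        self.is_volume_backed = image_ref is None

def disk_filter_passes(spec, free_disk_gb):
    # Ignore the flavor's root disk for volume-backed instances
    requested = 0 if spec.is_volume_backed else spec.root_gb
    return requested <= free_disk_gb

# A volume-backed instance no longer counts its root disk against the host:
print(disk_filter_passes(FakeRequestSpec(None, 80), 10))      # True
print(disk_filter_passes(FakeRequestSpec('image-id', 80), 10))  # False
```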

I am working on a POC with this approach and will test all possible scenarios 
(boot, resize, reboot, compute service stop/start, shelved-unshelved etc).

Please let me know your opinion about the same or you have any other solution 
in mind.

[1] 
http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-08-11-14.00.log.html

Thank you,

Abhishek Kekane



Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-12 Thread Thierry Carrez
Duncan Thomas wrote:
> [...]
> To turn the question around, what is the downside of losing the tag?

The tag does not exist in a vacuum. It describes a behavior that
operators want.

They want a sane deprecation policy so that the blanket is not pulled
from under them without a warning. They want a sane stable policy so
that they can deploy from trusted stable branches and not necessarily
review patch-per-patch to see if one happens to change behavior in an
unwanted way.

> Are people going to suddenly stop deploying cinder? That seems rather
> unlikely.

This sounds a bit like "you'll have to swallow this or just stop
deploying cinder altogether" which is not really a place I want us to
put our users in.

Asking the same questions on the -operators list, telling them that you
won't have a deprecation policy or a stable policy anymore and that if
you don't like it you should drop OpenStack altogether, I suspect you'd
get a pretty sad response.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [nova] os-capabilities library created

2016-08-12 Thread Jay Pipes

On 08/12/2016 04:05 AM, Daniel P. Berrange wrote:

On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:

Hi Novas and anyone interested in how to represent capabilities in a
consistent fashion.

I spent an hour creating a new os-capabilities Python library this evening:

http://github.com/jaypipes/os-capabilities

Please see the README for examples of how the library works and how I'm
thinking of structuring these capability strings and symbols. I intend
os-capabilities to be the place where the OpenStack community catalogs and
collates standardized features for hardware, devices, networks, storage,
hypervisors, etc.

Let me know what you think about the structure of the library and whether
you would be interested in owning additions to the library of constants in
your area of expertise.


How are you expecting these constants to be used? It seems unlikely
that, say, the nova code is going to be explicitly accessing any of the
individual CPU flag constants.


These capability strings are what deployers will associate with a flavor 
in Nova and they will be passed in the request to the placement API in 
either a "requirements" or a "preferences" list. In order to ensure that 
two OpenStack clouds refer to various capabilities in the same way (not 
just CPU flags, see below), we need a curated list of these standardized 
constants.
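
As a rough illustration of the naming scheme, the ALL_CAPS constants map to colon-delimited capability strings (the transformation rule here is my assumption from the README example quoted later in the thread, not the library's actual code):

```python
# Assumed mapping from ALL_CAPS constant names to colon-delimited
# capability strings (HW_CPU_X86_SSE42 -> hw:cpu:x86:sse42).
def cap_string(constant_name: str) -> str:
    return constant_name.lower().replace('_', ':')

HW_CPU_X86_SSE42 = cap_string('HW_CPU_X86_SSE42')
DISK_SSD = cap_string('DISK_SSD')

print(HW_CPU_X86_SSE42)  # hw:cpu:x86:sse42
print(DISK_SSD)          # disk:ssd
```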


> It should surely just be entirely metadata

driven - eg libvirt driver would just parse libvirt capabilities XML and
extract all the CPU flag strings & simply export them.


You are just thinking in terms of (lib)virt/compute capabilities. 
os-capabilities intends to provide a standard set of capability 
constants for more than virt/compute, including storage, network devices 
and more.


But, yes, I imagine discovery code running on a compute node with the 
*libvirt* virt driver could indeed simply query the libvirt capabilities 
XML snippet and translate those capability codes into os-capabilities 
constants. Remember, VMWare and Hyper-V also need to do this discovery 
and translation to a standardized set of constants. So does 
ironic-inspector when it queries an IPMI interface of course.


> It would be very

undesirable to have to add new code to os-capabilities every time that
Intel/AMD create new CPU flags for new features, and force users to upgrade
openstack to be able to express requirements on those CPU flags.


I don't see how we would be able to expose a particular new CPU flag 
*across disparate OpenStack clouds* unless we have some standardized set 
of constants that has been curated. Not all OpenStack clouds run 
libvirt. And again, think bigger than just virt/compute.


Best,
-jay


Next steps for the library include:

* Bringing in other top-level namespaces like disk: or net: and working with
contributors to fill in the capability strings and symbols.
* Adding constraints functionality to the library. For instance, building in
information to the os-capabilities interface that would allow a set of
capabilities to be cross-checked for set violations. As an example, a
resource provider having DISK_GB inventory cannot have *both* the disk:ssd
*and* the disk:hdd capability strings associated with it -- clearly the disk
storage is either SSD or spinning disk.
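
The set-violation check described above could be sketched as follows (illustrative only, not the os-capabilities API):

```python
# Illustrative mutual-exclusion check over capability strings: each set
# lists capabilities that cannot all apply to one resource provider.
EXCLUSIVE_SETS = [{'disk:ssd', 'disk:hdd'}]

def find_violations(capabilities):
    caps = set(capabilities)
    # Return every exclusive set that is fully contained in the input
    return [s for s in EXCLUSIVE_SETS if s <= caps]

print(find_violations(['disk:ssd', 'disk:hdd']))  # one violation reported
print(find_violations(['disk:ssd']))              # no violations
```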


Regards,
Daniel





Re: [openstack-dev] [nova] os-capabilities library created

2016-08-12 Thread Jay Pipes

On 08/12/2016 07:49 AM, Jim Rollenhagen wrote:

On Thu, Aug 11, 2016 at 9:03 PM, Jay Pipes  wrote:

On 08/11/2016 05:46 PM, Clay Gerrard wrote:


On Thu, Aug 11, 2016 at 2:25 PM, Ed Leafe wrote:


Overall this looks good, although it seems a bit odd to have
ALL_CAPS_STRINGS to represent all:caps:strings throughout. The
example you gave:

>>> print os_caps.HW_CPU_X86_SSE42
hw:cpu:x86:sse42


Just to be clear, this project doesn't *do* anything right?  Like it
won't parse `/proc/cpuinfo` and actually figure out a machine's CPU flags
that can then be broadcast as "capabilities"?

Like, TBH I think it took me longer than I would prefer to honestly
admit to find out about /sys/block//queue/rotational [1]

So if there was a library about standardizing how hardware capabilities
are discovered and reported - that maybe seems like a sane sort of thing
for a collection of related projects to agree on.  But I'm not sure if
this does that?



Hi Clay!

It does not currently do that, but I'm interested in adding this capability
(pun intended).
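
For a flavor of what such discovery could look like, the rotational flag mentioned earlier in the thread can be read with the stdlib alone (a sketch under the assumption of a Linux sysfs layout, not ironic-python-agent or os-capabilities code):

```python
# Stdlib-only sketch: Linux exposes whether a block device is rotational
# (spinning disk) vs non-rotational (SSD) via sysfs.
from pathlib import Path

def is_rotational(device: str):
    path = Path('/sys/block') / device / 'queue' / 'rotational'
    try:
        return path.read_text().strip() == '1'
    except OSError:
        return None  # unknown device, or not a Linux host

# e.g. is_rotational('sda') -> True on a spinning disk, False on an SSD
```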


ironic-python-agent does some of this discovery. It isn't
comprehensive, but it's a good starting point if we want to
lift some of that code out. The classes are here, and the
discovery things are in the same file if you grep around. :)

https://github.com/openstack/ironic-python-agent/blob/master/ironic_python_agent/hardware.py#L186-L251


Rock on. Duly noted :)

-jay



Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-12 Thread Duncan Thomas
On 12 August 2016 at 16:09, Thierry Carrez  wrote:

>
> How about: 4. Take 3rd-party drivers to a separate cinder-extra-drivers
> repository/deliverable under the Cinder team, one that would /not/ have
> follows-stable-policy or follows-standard-deprecation tags ? That
> repository would still get core-reviewed by the Cinder team, so you
> would keep the centralized code review value. It would be in a single
> repository, so you would keep most of the "all drivers checked out in
> one place" benefits. But you could have a special stable branch policy
> there and that would also solve that other issue in the thread about
> removing unmaintained drivers without deprecation notices.
>
> Or is there another benefit in shipping everything inside a single
> repository that you didn't mention ?
>

The development process is definitely smoother with everything in one repo.
Cross-repo changes (even repos under the same team, like os-brick is for
cinder) are painful, because you have to get the change into the 'child'
repo, wait for it to merge, then wait for it to be released in some form
that is usable to the parent project (e.g. a pip release), then finally you
can merge the cinder change.

To turn the question around, what is the downside of losing the tag? Are
people going to suddenly stop deploying cinder? That seems rather unlikely.

Nobody has yet given a single benefit to shipping a broken driver.


-- 
Duncan Thomas


Re: [openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-12 Thread Sean McGinnis
On Fri, Aug 12, 2016 at 05:55:47AM -0400, Sean Dague wrote:
> A devstack patch was pushed earlier this cycle around os-brick -
> https://review.openstack.org/341744
> 
> Apparently there are some os-brick operations that are only safe if the
> nova and cinder lock paths are set to be the same thing. Though that
> hasn't hit release notes or other documentation yet that I can see.

Patrick East submitted a patch to add a release note on the Cinder side
last night: https://review.openstack.org/#/c/354501/

> Is this a thing that everyone is aware of at this point? Are project
> teams ok with this new requirement? Given that lock_path has no default,
> this means we're potentially shipping corruption by default to users.
> The other way forward would be to revisit that lock_path by default
> concern, and have a global default. Or have some way that users are
> warned if we think they aren't in a compliant state.

This is a very good point that we are shipping corruption by default. I
would actually be in favor of having a global default. Other than
requiring tooz for default global locking (with a lot of extra overhead
for small deployments), I don't see a better way of making sure the
defaults are safe for those not aware of the issue.
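
To illustrate why a shared lock_path matters: oslo.concurrency's external locks are file locks taken under lock_path, so two services only exclude each other if they resolve the same directory. A stdlib-only sketch of the failure mode (the underlying flock mechanism, not os-brick or oslo code):

```python
import fcntl
import os
import tempfile

def grab_lock(lock_dir, name):
    """Non-blocking exclusive flock on lock_dir/name -- the same basic
    mechanism file-based external locks use across processes."""
    os.makedirs(lock_dir, exist_ok=True)
    fd = os.open(os.path.join(lock_dir, name), os.O_CREAT | os.O_RDWR)
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    return fd

shared = tempfile.mkdtemp()
holder = grab_lock(shared, 'os-brick-connect')

# Same lock_path: a second locker is correctly excluded.
try:
    grab_lock(shared, 'os-brick-connect')
    excluded = False
except OSError:
    excluded = True

# Different lock_path (e.g. nova and cinder configured differently):
# "the same" lock is acquired freely -- no exclusion at all.
other = tempfile.mkdtemp()
unguarded = grab_lock(other, 'os-brick-connect')  # succeeds silently

print(excluded)  # True
```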

And IMO, having the release note is just a CYA step. We can hope someone
reads it - and understands its implications - but it likely will be
missed.

Anyway, that's my 2 cents.

Sean

> 
> I've put the devstack patch on a -2 hold until we get ACK from both Nova
> and Cinder teams that everyone's cool with this.
> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Thierry Carrez
Sean Dague wrote:
> I 100% understand the cinder policy of kicking drivers out without CI.
> And I think there is a lot of value in ensuring what's in tree is tested.
> 
> However, from a user perspective basically it means that if you deploy
> Newton cinder and build a storage infrastructure around anything other
> than ceph, lvm, or NFS, you have a very real chance of never being able
> to upgrade to Ocata, because your driver was fully deleted, unless you
> are willing to completely change up your storage architecture during the
> upgrade.
> 
> That is the kind of reality that should be front and center to the
> users. Because it's not just a drop of standard deprecation, it's also a
> removal of 'supports upgrade', as Newton cinder config won't work with
> Ocata.
> 
> Could there be more of an off ramp / on ramp here to the drivers? If a
> driver CI fails to meet the reporting window mark it deprecated for the
> next delete window. If a driver is in a deprecated state they need some
> long window of continuous reporting to get out of that state (like 120
> days or something). Bring in all new drivers in a
> deprecated/experimental/untested state, which they only get to shrug off
> after the onramp window?
> 
> It's definitely important that the project has the ability to clean out
> the cruft, but it would be nice to not be overly brutal to our operators
> at the same time.
> 
> And if not, I think that tags (or lack there of) aren't fully
> communicating the situation here. Cinder docs should basically say "only
> use ceph / lvm / nfs, as those are the only drivers that we can
> guarantee will be in the next release".

+1

Both of the options (keeping cruft in tree vs. having no assurance at
all that your choice of driver will be around in 6 months) are horrible
from an operator's standpoint. But I think that's a false dichotomy and
we need a more subtle solution: communicate about sane drivers where we
trust the ability of the core team or the vendor to still provide a workable
solution in the next release (following standard deprecation policy)
while still being able to remove cruft if a driver goes stale /
untested. That means defining multiple tiers of trust, and having each
driver build that trust over time.

In that other thread I proposed two tiers (in openstack/cinder following
deprecation and stable policies and in a separate Cinder repository if
you don't trust it to follow the policies) since the Cinder team sees
value in keeping them cinder-core-reviewed and in a limited number of
repositories.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Duncan Thomas
Strictly speaking, we only guarantee lvm... If any other driver starts
failing CI and nobody steps up to fix it then it will be removed. I listed
ceph and NFS because I think there's enough knowledge and interest in the
core team to keep them working without needing any particular company to
help out.

We could make the windows larger as you suggest, but experience has shown
that this just causes vendors to make CI less of a priority, and
realistically we are already struggling to get meaningful results from CI.

If we remove a driver, it is highly likely that a forward port from the
previous release is trivial. Anybody building a deployment without some
sort of contracted driver commitment from their storage vendor is probably
doing themselves and the community a disservice though.

120 days of broken CI probably means we're shipping broken code, and I'm
not sure tags help most deployers - we have a lot of them, and the fine
details of their meaning are not obvious. There's only one tag appropriate
for a driver that has no passing CI - BROKEN. Shipping broken
code does not help anybody who's trying to rely on it, even during upgrade.
We might as well be honest and force them to do the forward port. If we
leave the broken driver in and they upgrade and everything breaks, it just
makes cinder look broken, without putting the blame squarely where it
belong - with the storage vendor who hadn't kept up the support for their
product. Giving a fake façade of 'support' just allows vendors to sell more
unsupported stuff, it doesn't help users, OpenStack developers or operators.

We could split and the drivers out into a new tree and give it different
tags, but it would slow down development, and frankly we've enough problems
on that front already. As far as I can tell, we (the cinder team) are
better off shrugging about the tags and carrying on as we are.

On 12 Aug 2016 15:54, "Sean Dague"  wrote:

> On 08/12/2016 08:40 AM, Duncan Thomas wrote:
> > On 12 Aug 2016 15:28, "Thierry Carrez" wrote:
> >>
> >> Duncan Thomas wrote:
> >
> >> I agree that leaving broken drivers in tree is not significantly better
> >> from an operational perspective. But I think the best operational
> >> experience would be to have an idea of how much risk you expose yourself
> >> when you pick a driver, and have a number of them that are actually
> >> /covered/ by the standard deprecation policy.
> >>
> >> So ideally there would be a number of in-tree drivers (on which the
> >> Cinder team would apply the standard deprecation policy), and a separate
> >> repository for 3rd-party drivers that can be removed at any time (and
> >> which would /not/ have the follows-standard-deprecation-policy tag).
> >
> > So we'd certainly have to move out all of the backends requiring
> > proprietary hardware, since we couldn't commit to keeping them working
> > if their vendors turn off their CI. That leaves ceph, lvm, NFS, drbd, and
> > sheepdog, I think. There is not enough broad knowledge in the core team
> > currently to support sheepdog or drbd without 'vendor' help. That would
> > leave us with three drivers in the tree, and not actually provide much
> > useful risk information to deployers at all.
>
> I 100% understand the cinder policy of kicking drivers out without CI.
> And I think there is a lot of value in ensuring what's in tree is tested.
>
> However, from a user perspective basically it means that if you deploy
> Newton cinder and build a storage infrastructure around anything other
> than ceph, lvm, or NFS, you have a very real chance of never being able
> to upgrade to Ocata, because your driver was fully deleted, unless you
> are willing to completely change up your storage architecture during the
> upgrade.
>
> That is the kind of reality that should be front and center to the
> users. Because it's not just a drop of standard deprecation, it's also a
> removal of 'supports upgrade', as Newton cinder config won't work with
> Ocata.
>
> Could there be more of an off ramp / on ramp here to the drivers? If a
> driver CI fails to meet the reporting window mark it deprecated for the
> next delete window. If a driver is in a deprecated state they need some
> long window of continuous reporting to get out of that state (like 120
> days or something). Bring in all new drivers in a
> deprecated/experimental/untested state, which they only get to shrug off
> after the onramp window?
>
> It's definitely important that the project has the ability to clean out
> the cruft, but it would be nice to not be overly brutal to our operators
> at the same time.
>
> And if not, I think that tags (or lack there of) aren't fully
> communicating the situation here. Cinder docs should basically say "only
> use ceph / lvm / nfs, as those are the only drivers that we can
> guarantee will be in the next release".
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-12 Thread Thierry Carrez
Duncan Thomas wrote:
> [...]
> Given this need, what are our options?
> 
> 1. We could do all this outside Openstack infrastructure. There are
> significant downsides to doing so from organisational, maintenance, cost
> etc points of view. Also means that the place vendors go for these
> patches is not obvious, and the process for getting patches in is not
> standard.
> 
> 2. We could have something not named 'stable' that has looser rules than
> stable branches, maybe just pep8 / unit / cinder in-tree tests. No
> devstack.
> 
> 3. We go with the Neutron model and take drivers out of tree. This is
> not something the cinder core team are in favour of - we see significant
> value in the code review that drivers currently get - the code quality
> improvements between when a driver is submitted and when it is merged
> are sometimes very significant. Also, taking the code out of tree makes
> it difficult to get all the drivers checked out in one place to analyse
> e.g. how a certain driver call is implemented across all the drivers,
> when reasoning or making changes to core code.

How about: 4. Take 3rd-party drivers to a separate cinder-extra-drivers
repository/deliverable under the Cinder team, one that would /not/ have
follows-stable-policy or follows-standard-deprecation tags ? That
repository would still get core-reviewed by the Cinder team, so you
would keep the centralized code review value. It would be in a single
repository, so you would keep most of the "all drivers checked out in
one place" benefits. But you could have a special stable branch policy
there and that would also solve that other issue in the thread about
removing unmaintained drivers without deprecation notices.

Or is there another benefit in shipping everything inside a single
repository that you didn't mention?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Sean Dague
On 08/12/2016 08:40 AM, Duncan Thomas wrote:
> On 12 Aug 2016 15:28, "Thierry Carrez" wrote:
>>
>> Duncan Thomas wrote:
> 
>> I agree that leaving broken drivers in tree is not significantly better
>> from an operational perspective. But I think the best operational
>> experience would be to have an idea of how much risk you expose yourself
>> when you pick a driver, and have a number of them that are actually
>> /covered/ by the standard deprecation policy.
>>
>> So ideally there would be a number of in-tree drivers (on which the
>> Cinder team would apply the standard deprecation policy), and a separate
>> repository for 3rd-party drivers that can be removed at any time (and
>> which would /not/ have the follows-standard-deprecation-policy tag).
> 
> So we'd certainly have to move out all of the backends requiring
> proprietary hardware, since we couldn't commit to keeping them working
> if their vendors turn off their CI. That leaves ceph, lvm, NFS, drbd, and
> sheepdog, I think. There is not enough broad knowledge in the core team
> currently to support sheepdog or drbd without 'vendor' help. That would
> leave us with three drivers in the tree, and not actually provide much
> useful risk information to deployers at all.

I 100% understand the cinder policy of kicking drivers out without CI.
And I think there is a lot of value in ensuring what's in tree is tested.

However, from a user perspective basically it means that if you deploy
Newton cinder and build a storage infrastructure around anything other
than ceph, lvm, or NFS, you have a very real chance of never being able
to upgrade to Ocata, because your driver was fully deleted, unless you
are willing to completely change up your storage architecture during the
upgrade.

That is the kind of reality that should be front and center to the
users. Because it's not just dropping the standard-deprecation tag, it's also
a removal of 'supports upgrade', as Newton cinder config won't work with
Ocata.

Could there be more of an off ramp / on ramp here to the drivers? If a
driver CI fails to meet the reporting window mark it deprecated for the
next delete window. If a driver is in a deprecated state they need some
long window of continuous reporting to get out of that state (like 120
days or something). Bring in all new drivers in a
deprecated/experimental/untested state, which they only get to shrug off
after the onramp window?
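
That off-ramp / on-ramp could be sketched as a simple state machine; everything
below (state names, windows, thresholds) is illustrative only, not actual
Cinder policy:

```python
# Illustrative sketch of the proposed driver lifecycle; the state names,
# windows, and day counts are assumptions, not actual Cinder policy.
REPORT_WINDOW_DAYS = 30   # assumed max gap between CI reports
ONRAMP_DAYS = 120         # the "long window of continuous reporting"

def next_state(state, days_since_last_ci_report, days_reporting_continuously):
    """Return a driver's state for the next release window."""
    if state == 'supported':
        if days_since_last_ci_report > REPORT_WINDOW_DAYS:
            return 'deprecated'            # off-ramp: missed the window
        return 'supported'
    if state in ('deprecated', 'new'):     # new drivers start deprecated
        if days_since_last_ci_report > REPORT_WINDOW_DAYS:
            return 'removed'               # deleted at the next delete window
        if days_reporting_continuously >= ONRAMP_DAYS:
            return 'supported'             # on-ramp completed
        return 'deprecated'
    return 'removed'
```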

It's definitely important that the project has the ability to clean out
the cruft, but it would be nice to not be overly brutal to our operators
at the same time.

And if not, I think that tags (or lack thereof) aren't fully
communicating the situation here. Cinder docs should basically say "only
use ceph / lvm / nfs, as those are the only drivers that we can
guarantee will be in the next release".

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] Barbican: Secure Setup & HSM-plugin

2016-08-12 Thread Praktikant HSM
Hi all,
As a member of Utimaco's pre-sales team I am currently testing an integration 
of Barbican with one of our HSMs.

We were able to generate MKEKs and HMAC keys on the HSM with the 
'pkcs11-key-generation' as well as 'barbican-manage hsm' commands. However, it 
is not fully clear to us how to use these keys to encrypt or sign data.

Additionally, we would appreciate further information concerning the secure 
setup of Barbican with an HSM-plugin.

Thank you in advance for your support.

Best regards,


Manuel Roth

---
System Engineering HSM

Utimaco IS GmbH
Germanusstr. 4
52080 Aachen
Germany

www.utimaco.com



Utimaco IS GmbH
Germanusstr. 4, D.52080 Aachen, Germany, Tel: +49-241-1696-0, www.utimaco.com
Seat: Aachen - Registergericht Aachen HRB 18922
VAT ID No.: DE 815 496 496
Managementboard: Malte Pollmann (Chairman) CEO, Dr. Frank J. Nellissen CFO



Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-12 Thread Duncan Thomas
Are there docs for it somewhere? Or some quick way of telling that
we've done it and gotten it right?

On 12 Aug 2016 08:17, "Andreas Jaeger"  wrote:

> On 08/12/2016 04:25 AM, Robert Collins wrote:
> > On 11 Aug 2016 3:13 PM, "Ben Swartzlander" wrote:
> >>
> >> ...
> >>
> >> I still don't agree with this stance. Code doesn't just magically stop
> > working. Code breaks when things change which aren't version controlled
> > properly or when you have undeclared dependencies.
> >
> > Well this is why the constraints work was and is being done. It's not
> > 100% rolled out as far as I know though, and stable branch support fills
> > all the gaps.
>
> As announced yesterday:
>
> Constraints work is *now* 100% rolled out from the infra side; it's up
> to projects to use it fully now,
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Duncan Thomas
On 12 Aug 2016 15:28, "Thierry Carrez"  wrote:
>
> Duncan Thomas wrote:

> I agree that leaving broken drivers in tree is not significantly better
> from an operational perspective. But I think the best operational
> experience would be to have an idea of how much risk you expose yourself
> when you pick a driver, and have a number of them that are actually
> /covered/ by the standard deprecation policy.
>
> So ideally there would be a number of in-tree drivers (on which the
> Cinder team would apply the standard deprecation policy), and a separate
> repository for 3rd-party drivers that can be removed at any time (and
> which would /not/ have the follows-standard-deprecation-policy tag).

So we'd certainly have to move out all of the backends requiring
proprietary hardware, since we couldn't commit to keeping them working if
their vendors turn off their CI. That leaves ceph, lvm, NFS, drbd, and
sheepdog, I think. There is not enough broad knowledge in the core team
currently to support sheepdog or drbd without 'vendor' help. That would
leave us with three drivers in the tree, and not actually provide much
useful risk information to deployers at all.

> I understand that this kind of reorganization is a bit painful for
> little (developer-side) gain, but I think it would provide the most
> useful information to our users and therefore the best operational
> experience...

In theory this might be true, but see above - in practice it doesn't work
that way.


Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-12 Thread Thierry Carrez
John Dickinson wrote:
> [...]
> I know the TC has no malicious intent here, and I do support the idea
> of having cross-project goals. The first goals proposed seem like
> great goals.  And I understand the significant challenges of
> coordinating goals between a multitude of different projects. However,
> I haven't yet added my own +1 to the proposed goals because the
> current process means that I am committing that every Swift project
> team contributor is now to prioritize that work above all else, no
> matter what is happening to their customers, their products, or their
> communities.
> [...]

I agree that the wording around "prioritization" is slightly suboptimal.
I think the intent here is that each project team commits to getting
that work done during the development cycle, barring exceptional
circumstances.

The way I see it, that doesn't mean you would prioritize that (as in "do
it first") over urgent things like fixing a bug that results in data
corruption or a significant vulnerability. It means it should be a
priority to get that done over the cycle. It should be seen as a "must
have" rather than a "nice to have" when you discuss cycle priorities.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [TripleO] additional git repo(s) for tripleo-quickstart

2016-08-12 Thread Paul Belanger
On Wed, Aug 10, 2016 at 03:26:18PM -0400, Wesley Hayutin wrote:
> Greetings,
> 
> In an effort to make TripleO CI composable, managed, and governed by the
> TripleO project, we have found the need to create additional git repos in
> openstack under the TripleO project.  This could also be done outside of
> the TripleO project, but ideally it's in TripleO.
> 
> I'm proposing the creation of a repo called tripleo-quickstart-extras that
> would contain some or all of the current third party roles used with
> TripleO-Quickstart.
> 
> The context behind this discussion is that we would like to use oooq to
> document baremetal deployments to supplement and/or replace the current
> TripleO documentation.  It would be ideal if the code used to create this
> documentation were part of the TripleO project.
> 
> We're looking for discussion and permission for a new TripleO git repo to
> be created.
> 
From an infrastructure point of view, creating additional git repos is
straight forward.

The way I see it is, either create tripleo-quickstart-extras repo with all your
roles, or start doing individual roles for example:

  ansible-role-tripleo-build-all-the-things

I'd be on board with using the ansible-role-tripleo prefix for roles specific
to tripleo. And it seems to be your current naming scheme too.





[openstack-dev] [tempest][plugins] Service clients stable interface in tempest.lib

2016-08-12 Thread Andrea Frittoli
Hi Tempest plugin developers / maintainers,

a new stable interface "ServiceClients" [0][1] is available in Tempest that
provides tests with a convenient way to access all available service
clients.
At the same time the plugin interface is extended with an optional method
"get_service_clients" [2][3], that allows plugins to declare any service
client they implement.

When a plugin uses "ServiceClients", all Tempest stable clients and all
clients exposed by installed plugins via the new interface will be
automatically available and pre-configured.
This makes it easy for plugins to access service clients and to write
integration tests which access APIs via service clients implemented in
different plugins.
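
For illustration, a plugin exposing its clients might implement the new method
roughly like this (the dict keys follow the "get_service_clients" contract in
the plugin docs [2]; the service, module, and client class names are invented,
and a real plugin would define this on its TempestPlugin subclass):

```python
class MyServiceTempestPlugin(object):
    """Hypothetical plugin sketch; in a real plugin this class would
    subclass tempest.test_discover.plugins.TempestPlugin."""

    def get_service_clients(self):
        # One dict per API version the plugin exposes; keys follow the
        # get_service_clients contract from the Tempest plugin docs.
        v1_params = {
            'name': 'myservice_v1',
            'service_version': 'myservice.v1',
            'module_path': 'myplugin.services.v1',
            'client_names': ['ThingsClient', 'WidgetsClient'],
        }
        return [v1_params]
```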

One caveat is that only three of the Tempest service clients are stable at
the moment: compute, network and image.
Work is in progress to make the other three (identity, volume and
object-storage) stable as well.

ServiceClients replaces the legacy "Manager" classes [4] and [5], which are
not stable interfaces and may change or disappear without prior notice.
We'll keep them both around unchanged for a reasonable amount of time to
allow for plugins to switch to the new interface - and at least as long as
the remaining Tempest service clients are available in tempest.lib.

I tried out the new interface with a couple of plugins already [6][7]. I
would heartily recommend migrating your Tempest plugin to the new
interface, especially if you're currently using one of the unstable
"Manager" interfaces. Please reach out for help in the #openstack-qa room if
you have questions on the new interfaces, trouble using them in your
plugin, requests for features or feedback.

Thanks for reading through.

Andrea Frittoli (andreaf)

[0]
http://docs.openstack.org/developer/tempest/library/clients.html#tempest.lib.services.clients.ServiceClients

[1]
http://docs.openstack.org/releasenotes/tempest/unreleased.html#new-features
[2] http://docs.openstack.org/developer/tempest/plugin.html#service-clients
[3]
http://docs.openstack.org/releasenotes/tempest/unreleased.html#new-features
[4]
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/manager.py#n26
[5]
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/clients.py#n35
[6] https://review.openstack.org/#/c/334596/
[7] https://review.openstack.org/#/c/338486/


Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Thierry Carrez
Duncan Thomas wrote:
> Given the options, I'd agree with Sean and John that removing the tag is
> a far lesser evil than changing our policy.

Agree that the tag should be removed. The Cinder core certainly follows
the standard deprecation policy, but the Cinder drivers do not.

> If we leave broken drivers in the tree, the end user (operator) is no
> better off - the thing they evaluated won't work - but it will be harder
> to tell why. The storage vendor won't suffer the pressure that comes
> from driver removal, so will have less incentive to fix their driver
> (there are enough examples of the threat of driver removal causing the
> immediate fix of things that have remained broken for months that we
> know, for certain, that the policy works).

I agree that leaving broken drivers in tree is not significantly better
from an operational perspective. But I think the best operational
experience would be to have an idea of how much risk you expose yourself
when you pick a driver, and have a number of them that are actually
/covered/ by the standard deprecation policy.

So ideally there would be a number of in-tree drivers (on which the
Cinder team would apply the standard deprecation policy), and a separate
repository for 3rd-party drivers that can be removed at any time (and
which would /not/ have the follows-standard-deprecation-policy tag).

I understand that this kind of reorganization is a bit painful for
little (developer-side) gain, but I think it would provide the most
useful information to our users and therefore the best operational
experience...

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][infra] Constraints are ready to be used for tox.ini

2016-08-12 Thread Andreas Jaeger
On 08/12/2016 01:44 PM, Ihar Hrachyshka wrote:
> Andreas Jaeger  wrote:
> 
>> On 08/12/2016 12:13 PM, Ihar Hrachyshka wrote:
>>> Jeremy Stanley  wrote:
>>>
 On 2016-08-11 20:19:51 +0200 (+0200), Ihar Hrachyshka wrote:
> Do I read it right that we can now use constraints for post queue too
> (releasenotes, cover, venv targets)?

 Yes, that was the hardest part to get working, but thanks to
 tireless efforts on the part of a number of people that has now been
 fixed and tested.
>>>
>>> Yay! Kudos to everyone who made it happen!
>>>
>>> I posted a bunch of patches for neutron repos to validate it works:
>>> https://review.openstack.org/#/q/I02b28d3b354c3b175147c5be36eea4dc7e05f2a3,n,z
>>>
>>
>> The real validation is checking the results of post, tag, and release
>> queue...
>>
>> I don't expect surprises in check/gate queue…
>>
> 
> Yeah, we will need to land those to check post queue. But at least we
> can validate that constraints are correctly applied for where we execute
> the targets in check queue (f.e. tox -e releasenotes). I checked
> releasenotes target in all neutron repos with those patches, and at
> least os-client-config is now not pulling the latest version, but the
> one in current upper-constraints.txt file, so it’s already something.

glad to hear.

>> One caveat: Constraints cannot be used for jobs running on long-lived
>> nodes unless special care is taken - like I did for translation jobs,
> 
> I probably lack some knowledge here. What are those jobs, and how do I
> detect them?

Sorry, that was more for completeness for the rest of the infra team.

The long-lived nodes have special purposes and you're not messing with
them directly,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [all] versioning the api-ref?

2016-08-12 Thread Andreas Jaeger
On 08/12/2016 01:43 PM, Jim Rollenhagen wrote:
> On Thu, Aug 11, 2016 at 6:13 PM, John Dickinson  wrote:
>>
>>
>> On 11 Aug 2016, at 15:02, Brian Rosmaita wrote:
>>
>>> I have a question about the api-ref. Right now, for example, the new
>>> images v1/v2 api-refs are accurate for Mitaka.  But DocImpact bugs are
>>> being generated as we speak for changes in master that won't be
>>> available to consumers until Newton is released (unless they build from
>>> source). If those bug fixes get merged, then the api-ref will no longer
>>> be accurate for Mitaka API consumers (since it's published upon update).
>>>
>>> My question is, how should we handle this? We want the api-ref to be
>>> accurate for users, but we also want to move quickly on updates (so that
>>> the updates actually get made in a timely fashion).
>>>
>>> My suggestion is that we should always have an api-ref available that
>>> reflects the stable releases, that is, one for each stable branch.  So
>>> right now, for instance, the default api-ref page would display the
>>> api-ref for Mitaka, with links to "older" (Liberty) and "development"
>>> (master).  But excellent as that suggestion is, it doesn't help right
>>> now, because the most accurate Mitaka api-ref for Glance, for instance,
>>> is in Glance master as part of the WADL to RST migration project.  What
>>> I'd like to do is publish a frozen version of that somewhere as we make
>>> the Newton updates along with the Newton code changes.
>>>
>>> Thus I guess I have two questions:
>>>
>>> (1) How should we version (and publish multiple versions of) the api-ref
>>> in general?
>>>
>>> (2) How do we do it right now?
>>>
>>> thanks,
>>> brian
>>>
>>
>> I was working with the oslosphinx project to try and solve this issue in a 
>> cross-project way for the dev docs. I think the ideas there could be useful 
>> here.
>>
>> Basically, if you have docs built every commit (instead of every release, 
>> like normally happens with library projects), you can set 
>> show_other_versions to True and get a sidebar link of versions based on 
>> tags. (Yeah, I know it wasn't working earlier, but that should be fixed now).
>>
>> So with this process, keep building docs per commit so you have the latest 
>> available. But turn on the sidebar links for other versions, and you can 
>> have a place for docs from the last few releases in your project. I'm not 
>> sure that it would work well for stable branches that are updated (but 
>> really, if you're updating stable, how "stable" is it?)
> 
> We actually publish per-release dev docs right now, though the sidebar
> seems broken:
> 
> master: http://docs.openstack.org/developer/swift/
> stable release: http://docs.openstack.org/developer/swift/mitaka/
> intermediate release: http://docs.openstack.org/developer/swift/2.9.0/


The latest oslosphinx theme should fix these.

Note also that you need to set a variable in conf.py to turn on the
sidebar - it was turned on by default for some time (and there was no
variable to turn it off) and now there's a variable and the default
is off,
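
As a sketch, the conf.py fragment would look something like this (the option
name comes from this thread; its exact behaviour and placement may differ
between oslosphinx releases):

```python
# Sphinx conf.py fragment -- assumes a recent oslosphinx release; the
# show_other_versions name is taken from this thread, not verified
# against a specific version.
extensions = ['oslosphinx']

# Turn on the per-tag "Other Versions" sidebar (now off by default).
show_other_versions = True
```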

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [nova] os-capabilities library created

2016-08-12 Thread Jim Rollenhagen
On Thu, Aug 11, 2016 at 9:03 PM, Jay Pipes  wrote:
> On 08/11/2016 05:46 PM, Clay Gerrard wrote:
>>
>> On Thu, Aug 11, 2016 at 2:25 PM, Ed Leafe wrote:
>>
>>
>> Overall this looks good, although it seems a bit odd to have
>> ALL_CAPS_STRINGS to represent all:caps:strings throughout. The
>> example you gave:
>>
>> >>> print os_caps.HW_CPU_X86_SSE42
>> hw:cpu:x86:sse42
>>
>>
>> Just to be clear, this project doesn't *do* anything right?  Like it
>> won't parse `/proc/cpuinfo` and actually figure out a machines cpu flags
>> that can then be broadcast as "capabilities"?
>>
>> Like, TBH I think it took me longer than I would prefer to honestly
>> admit to find out about /sys/block//queue/rotational [1]
>>
>> So if there was a library about standardizing how hardware capabilities
>> are discovered and reported - that maybe seems like a sane sort of thing
>> for a collection of related projects to agree on.  But I'm not sure if
>> this does that?
>
>
> Hi Clay!
>
> It does not currently do that, but I'm interested in adding this capability
> (pun intended).

ironic-python-agent does some of this discovery. It isn't
comprehensive, but it's a good starting point if we want to
lift some of that code out. The classes are here, and the
discovery things are in the same file if you grep around. :)

https://github.com/openstack/ironic-python-agent/blob/master/ironic_python_agent/hardware.py#L186-L251
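
If os-capabilities does grow discovery support, the probing could look roughly
like this (hypothetical helpers, not part of os-capabilities or
ironic-python-agent; the paths are Linux-specific):

```python
# Hypothetical discovery helpers -- shown only to illustrate the kind of
# probing discussed above, not an actual os-capabilities API.
def parse_cpu_flags(cpuinfo_text):
    """Extract the CPU feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            return set(line.split(':', 1)[1].split())
    return set()

def is_rotational(device, sysfs_root='/sys/block'):
    """Read the sysfs rotational hint (1 = spinning disk, 0 = SSD)."""
    with open('%s/%s/queue/rotational' % (sysfs_root, device)) as f:
        return f.read().strip() == '1'
```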

// jim



Re: [openstack-dev] [all][infra] Constraints are ready to be used for tox.ini

2016-08-12 Thread Ihar Hrachyshka

Andreas Jaeger  wrote:


On 08/12/2016 12:13 PM, Ihar Hrachyshka wrote:

Jeremy Stanley  wrote:


On 2016-08-11 20:19:51 +0200 (+0200), Ihar Hrachyshka wrote:

Do I read it right that we can now use constraints for post queue too
(releasenotes, cover, venv targets)?


Yes, that was the hardest part to get working, but thanks to
tireless efforts on the part of a number of people that has now been
fixed and tested.


Yay! Kudos to everyone who made it happen!

I posted a bunch of patches for neutron repos to validate it works:
https://review.openstack.org/#/q/I02b28d3b354c3b175147c5be36eea4dc7e05f2a3,n,z


The real validation is checking the results of post, tag, and release
queue...

I don't expect surprises in check/gate queue…



Yeah, we will need to land those to check post queue. But at least we can  
validate that constraints are correctly applied for where we execute the  
targets in check queue (f.e. tox -e releasenotes). I checked releasenotes  
target in all neutron repos with those patches, and at least  
os-client-config is now not pulling the latest version, but the one in  
current upper-constraints.txt file, so it’s already something.
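
For reference, upper-constraints.txt pins each package to an exact version; a
tiny hypothetical parser illustrates the idea (assuming the common
name===version line form, optionally followed by an environment marker):

```python
def parse_constraints(text):
    """Map package name -> pinned version from upper-constraints-style text.

    Hypothetical helper for illustration; assumes 'name===version' lines,
    keeping only the version when an environment marker follows it.
    """
    pins = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # drop comments
        if '===' not in line:
            continue
        name, version = line.split('===', 1)
        pins[name.strip()] = version.split(';', 1)[0].strip()
    return pins
```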



One caveat: Constraints cannot be used for jobs running on long-lived
nodes unless special care is taken - like I did for translation jobs,


I probably lack some knowledge here. What are those jobs, and how do I  
detect them?


Ihar



Re: [openstack-dev] [all] versioning the api-ref?

2016-08-12 Thread Jim Rollenhagen
On Thu, Aug 11, 2016 at 6:13 PM, John Dickinson  wrote:
>
>
> On 11 Aug 2016, at 15:02, Brian Rosmaita wrote:
>
>> I have a question about the api-ref. Right now, for example, the new
>> images v1/v2 api-refs are accurate for Mitaka.  But DocImpact bugs are
>> being generated as we speak for changes in master that won't be
>> available to consumers until Newton is released (unless they build from
>> source). If those bug fixes get merged, then the api-ref will no longer
>> be accurate for Mitaka API consumers (since it's published upon update).
>>
>> My question is, how should we handle this? We want the api-ref to be
>> accurate for users, but we also want to move quickly on updates (so that
>> the updates actually get made in a timely fashion).
>>
>> My suggestion is that we should always have an api-ref available that
>> reflects the stable releases, that is, one for each stable branch.  So
>> right now, for instance, the default api-ref page would display the
>> api-ref for Mitaka, with links to "older" (Liberty) and "development"
>> (master).  But excellent as that suggestion is, it doesn't help right
>> now, because the most accurate Mitaka api-ref for Glance, for instance,
>> is in Glance master as part of the WADL to RST migration project.  What
>> I'd like to do is publish a frozen version of that somewhere as we make
>> the Newton updates along with the Newton code changes.
>>
>> Thus I guess I have two questions:
>>
>> (1) How should we version (and publish multiple versions of) the api-ref
>> in general?
>>
>> (2) How do we do it right now?
>>
>> thanks,
>> brian
>>
>
> I was working with the oslosphinx project to try and solve this issue in a 
> cross-project way for the dev docs. I think the ideas there could be useful 
> here.
>
> Basically, if you have docs built every commit (instead of every release, 
> like normally happens with library projects), you can set show_other_versions 
> to True and get a sidebar link of versions based on tags. (Yeah, I know it 
> wasn't working earlier, but that should be fixed now).
>
> So with this process, keep building docs per commit so you have the latest 
> available. But turn on the sidebar links for other versions, and you can have 
> a place for docs from the last few releases in your project. I'm not sure 
> that it would work well for stable branches that are updated (but really, if 
> you're updating stable, how "stable" is it?)

We actually publish per-release dev docs right now, though the sidebar
seems broken:

master: http://docs.openstack.org/developer/swift/
stable release: http://docs.openstack.org/developer/swift/mitaka/
intermediate release: http://docs.openstack.org/developer/swift/2.9.0/

// jim

>
>
> --John
>
>
>
>
>>
>
>



Re: [openstack-dev] [all][infra] Constraints are ready to be used for tox.ini

2016-08-12 Thread Andreas Jaeger
On 08/12/2016 12:13 PM, Ihar Hrachyshka wrote:
> Jeremy Stanley  wrote:
> 
>> On 2016-08-11 20:19:51 +0200 (+0200), Ihar Hrachyshka wrote:
>>> Do I read it right that we can now use constraints for post queue too
>>> (releasenotes, cover, venv targets)?
>>
>> Yes, that was the hardest part to get working, but thanks to
>> tireless efforts on the part of a number of people that has now been
>> fixed and tested.
> 
> Yay! Kudos to everyone who made it happen!
> 
> I posted a bunch of patches for neutron repos to validate it works:
> https://review.openstack.org/#/q/I02b28d3b354c3b175147c5be36eea4dc7e05f2a3,n,z

The real validation is checking the results of post, tag, and release
queue...

I don't expect surprises in check/gate queue...

One caveat: Constraints cannot be used for jobs running on long-lived
nodes unless special care is taken - like I did for translation jobs,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] Is there a logging_exception_prefix option for Neutron (Juno) ?

2016-08-12 Thread Ihar Hrachyshka

Tom Li  wrote:


Hi,

Does anyone know if the Neutron Juno release logging conf allows options  
such as: logging_exception_prefix, logging_debug_format_suffix,  
logging_default_format_string, logging_context_format_string like the  
other general OS services?


From the Juno doc  
(http://docs.openstack.org/juno/config-reference/content/section_neutron.conf.html)  
the only relevant variables that I found are: log_format and
log_date_format


They seem to be included in Neutron Mitaka release:  
http://docs.openstack.org/mitaka/config-reference/networking/sample-configuration-files.html


Those options were available back in Juno, though the sample config files
shipped back then did not include them. You can still use them.


Ihar



Re: [openstack-dev] [all][infra] Constraints are ready to be used for tox.ini

2016-08-12 Thread Ihar Hrachyshka

Jeremy Stanley  wrote:


On 2016-08-11 20:19:51 +0200 (+0200), Ihar Hrachyshka wrote:

Do I read it right that we can now use constraints for post queue too
(releasenotes, cover, venv targets)?


Yes, that was the hardest part to get working, but thanks to
tireless efforts on the part of a number of people that has now been
fixed and tested.


Yay! Kudos to everyone who made it happen!

I posted a bunch of patches for neutron repos to validate it works:  
https://review.openstack.org/#/q/I02b28d3b354c3b175147c5be36eea4dc7e05f2a3,n,z


Thanks,
Ihar



Re: [openstack-dev] [all][infra] Constraints are ready to be used for tox.ini

2016-08-12 Thread Robert Collins
Fantastic news Andreas!

-Rob


[openstack-dev] [charms] out next week/PTL cover

2016-08-12 Thread James Page
Hi

I'm out of contact from everything electronic next week; back on the 22nd
August.

David Ames (thankyou!) will be covering any Charms PTL related matters in
my absence and generally keeping the wheels on development.

Cheers

James


[openstack-dev] manila service cannot work on liberty

2016-08-12 Thread Jevon Qiao

Hi All,

Currently, I'm trying to set up manila in an environment running
Liberty. Both the API and the scheduler work well, but when I
run 'manila service-list', I always get the following error:


2016-08-12 17:25:00.470 30562 INFO manila.api.openstack.wsgi [-] OPTIONS 
http://:::10.200.0.41:8786/ 
2016-08-12 17:25:00.471 30562 INFO manila.api.openstack.wsgi [-] 
http://:::10.200.0.41:8786/  returned with 
HTTP 300
2016-08-12 17:25:00.472 30562 INFO eventlet.wsgi.server [-] 
:::10.200.0.35 - - [12/Aug/2016 17:25:00] "OPTIONS / HTTP/1.0" 300 
900 0.002414
2016-08-12 17:25:01.053 30562 CRITICAL keystonemiddleware.auth_token [-] 
Unable to validate token: SSL exception connecting to 
https://127.0.0.1:35357: [SSL: UNKNOWN_PROTOCOL] unknown protocol 
(_ssl.c:765)
2016-08-12 17:25:01.055 30562 INFO eventlet.wsgi.server [-] 
:::10.200.0.35 - - [12/Aug/2016 17:25:01] "GET 
/v2/546e878e722d430492417b72f1072dd2/types/default HTTP/1.1" 503 254 
0.032591


I think I might be missing some configuration, but I cannot figure out
which one. I tried to google it but had no luck either. Has anyone
run into this issue before who can help me out?


BTW, my manila configuration is as follows,

[DEFAULT]
rpc_backend = rabbit
default_share_type = default_share_type
rootwrap_config = /etc/manila/rootwrap.conf
auth_strategy = keystone
my_ip = 10.200.0.41

[keystone_authtoken]
memcached_servers = lb.0.example200.ustack.in:11211 


auth_uri = http://lb.0.example200.ustack.in:5000
auth_url = http://lb.0.example200.ustack.in:35357/v2.0
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = services
username = manila
password = 123456

[database]
connection = 
mysql://manila:123...@lb.0.example200.ustack.in/manila?charset=utf8 



[oslo_concurrency]
lock_path = /var/lib/manila/tmp

[oslo_messaging_rabbit]
rabbit_hosts = 10.200.0.44:5672,10.200.0.45:5672,10.200.0.46:5672

rabbit_userid=openstack
rabbit_password=e3dc9ac817fa4c61414235e5

--
Best Regards
Jevon


[openstack-dev] [cinder] [nova] locking concern with os-brick

2016-08-12 Thread Sean Dague
A devstack patch was pushed earlier this cycle around os-brick -
https://review.openstack.org/341744

Apparently there are some os-brick operations that are only safe if the
nova and cinder lock paths are set to be the same thing, though that
hasn't hit release notes or other documentation yet, as far as I can see.
Is this a thing that everyone is aware of at this point? Are project
teams ok with this new requirement? Given that lock_path has no default,
this means we're potentially shipping corruption by default to users.
The other way forward would be to revisit that lock_path by default
concern, and have a global default. Or have some way that users are
warned if we think they aren't in a compliant state.
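For illustration, a minimal sketch (plain Python with flock, not the actual oslo.concurrency or os-brick code) of why the two services must agree on lock_path: external file locks only exclude each other when both processes resolve the lock name to the same file on disk. The lock name "connect_volume" is hypothetical.

```python
# Sketch: external file locks only work across services when both
# resolve the lock name to the SAME file, i.e. share lock_path.
import fcntl
import os
import tempfile

def lock_file_for(lock_path, name):
    """Map a lock name to a file in the configured lock directory."""
    return os.path.join(lock_path, name)

def try_lock(path):
    """Try to take an exclusive non-blocking flock; return fd or None."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        os.close(fd)
        return None

shared = tempfile.mkdtemp(prefix="demo-locks-")  # both services point here
path = lock_file_for(shared, "connect_volume")   # hypothetical lock name

fd1 = try_lock(path)  # "nova" takes the lock
fd2 = try_lock(path)  # "cinder" is correctly excluded
print(fd1 is not None, fd2 is None)  # True True
```

If the two services used different lock_path settings, the second flock would hit a different file and succeed, and the operations would no longer be serialized.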

I've put the devstack patch on a -2 hold until we get ACK from both Nova
and Cinder teams that everyone's cool with this.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] versioning the api-ref?

2016-08-12 Thread Sean Dague
On 08/11/2016 06:02 PM, Brian Rosmaita wrote:
> I have a question about the api-ref. Right now, for example, the new
> images v1/v2 api-refs are accurate for Mitaka.  But DocImpact bugs are
> being generated as we speak for changes in master that won't be
> available to consumers until Newton is released (unless they build from
> source). If those bug fixes get merged, then the api-ref will no longer
> be accurate for Mitaka API consumers (since it's published upon update).

I'm confused about this statement.

Are you saying that the Glance v2 API in Mitaka and Newton are different
in some user visible ways? But both are called the v2 API? How does an
end user know which to use?

The assumption with the api-ref work is that the API document should be
timeless (branchless), and hence why building from master is always
appropriate. That information works for all time.

We do support microversion markup in the document, you can see some of
that in action here in the Nova API Ref -
http://developer.openstack.org/api-ref/compute/?expanded=list-servers-detail


-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness

2016-08-12 Thread Miguel Angel Ajo Pelayo
That was my feeling Moshe, thanks for checking.

Anil, which card and drivers are you using exactly?

You should probably contact your card vendor and check whether they have
a fix for the issue, which looks more like a bug in their implementation
of the embedded switch, the card, or the driver.

Best regards,
Miguel  Ángel.

On Thu, Aug 11, 2016 at 12:49 PM, Moshe Levi  wrote:
> Hi Anil,
>
>
> I tested it with a Mellanox NIC and it is working:
>
> 16: enp6s0d1:  mtu 1500 qdisc mq state UP 
> mode DEFAULT group default qlen 1000
> link/ether 00:02:c9:e9:c2:12 brd ff:ff:ff:ff:ff:ff
> vf 0 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 1 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 2 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 3 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 4 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 5 MAC fa:16:3e:0d:8c:a2, vlan 192, spoof checking on, link-state enable
> vf 6 MAC fa:16:3e:0d:8c:a2, vlan 190, spoof checking on, link-state enable
> vf 7 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
>
> I guess the problem is with the SR-IOV NIC/driver you are using; maybe you
> should contact them.
>
>
> -Original Message-
> From: Moshe Levi
> Sent: Wednesday, August 10, 2016 5:59 PM
> To: 'Miguel Angel Ajo Pelayo' ; OpenStack Development 
> Mailing List (not for usage questions) 
> Cc: Armando M. 
> Subject: RE: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness
>
> Miguel,
>
> I talked to our driver architect, and according to him this is vendor
> implementation specific (according to him this should work with a Mellanox
> NIC). I need to verify that this is indeed working.
> I will update after I prepare an SR-IOV setup and try it myself.
>
>
> -Original Message-
> From: Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com]
> Sent: Wednesday, August 10, 2016 12:04 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Cc: Armando M. ; Moshe Levi 
> Subject: Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness
>
> @moshe, any insight on this?
>
> I guess that'd depend on the nic internal switch implementation and how the 
> switch ARP tables are handled there (per network, or global per switch).
>
> If that's the case for some SR-IOV vendors (or all), would it make sense to
> have a global config option to create globally unique MAC addresses (for the
> same neutron deployment, of course)?
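The two uniqueness scopes under discussion can be sketched like this (a plain-Python illustration, not neutron code; neutron's actual per-network rule is enforced in the database):

```python
# Sketch: per-network MAC uniqueness (current neutron behavior) vs the
# globally unique allocation suggested above for SR-IOV deployments.
def make_allocator(global_unique):
    used = set()
    def allocate(network_id, mac):
        # Uniqueness key is just the MAC globally, or (network, MAC) per net.
        key = mac if global_unique else (network_id, mac)
        if key in used:
            raise ValueError("duplicate MAC")
        used.add(key)
    return allocate

per_net = make_allocator(global_unique=False)
per_net("net-a", "fa:16:3e:0d:8c:a2")
per_net("net-b", "fa:16:3e:0d:8c:a2")   # fine: different networks

global_alloc = make_allocator(global_unique=True)
global_alloc("net-a", "fa:16:3e:0d:8c:a2")
try:
    global_alloc("net-b", "fa:16:3e:0d:8c:a2")  # rejected globally
    rejected = False
except ValueError:
    rejected = True
print(rejected)  # True
```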
>
> On Wed, Aug 10, 2016 at 7:38 AM, huangdenghui  wrote:
>> hi Armando
>> I think this feature causes problems in the SR-IOV scenario, since SR-IOV
>> NICs don't support two VFs having the same MAC, even when the ports belong
>> to different networks.
>>
>>
>> Sent from NetEase Mail mobile edition
>>
>>
>> On 2016-08-10 04:55, Armando M. wrote:
>>
>>
>>
>> On 9 August 2016 at 13:53, Anil Rao  wrote:
>>>
>>> Is the MAC address of a Neutron port on a tenant virtual network
>>> globally unique or unique just within that particular tenant network?
>>
>>
>> The latter:
>>
>> https://github.com/openstack/neutron/blob/master/neutron/db/models_v2.
>> py#L139
>>
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Anil
>>>
>>>
>>
>>
>>
>>



Re: [openstack-dev] [OpenStack-docs] [neutron] [api] [doc] API reference for neutron stadium projects (re: API status report)

2016-08-12 Thread Akihiro Motoki
This mail focuses on neutron-specific topics; I dropped the cinder and ironic tags.

2016-08-11 23:52 GMT+09:00 Anne Gentle :
>
>
> On Wed, Aug 10, 2016 at 2:49 PM, Anne Gentle 
> wrote:
>>
>> Hi all,
>> I wanted to report on status and answer any questions you all have about
>> the API reference and guide publishing process.
>>
>> The expectation is that we provide all OpenStack API information on
>> developer.openstack.org. In order to meet that goal, it's simplest for now
>> to have all projects use the RST+YAML+openstackdocstheme+os-api-ref
>> extension tooling so that users see available OpenStack APIs in a sidebar
>> navigation drop-down list.
>>
>> --Migration--
>> The current status for migration is that all WADL content is migrated
>> except for trove. There is a patch in progress and I'm in contact with the
>> team to assist in any way. https://review.openstack.org/#/c/316381/
>>
>> --Theme, extension, release requirements--
>> The current status for the theme, navigation, and Sphinx extension tooling
>> is contained in the latest post from Graham proposing a solution for the
>> release number switchover and offers to help teams as needed:
>> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101112.html I
>> hope to meet the requirements deadline to get those changes landed.
>> Requirements freeze is Aug 29.
>>
>> --Project coverage--
>> The current status for project coverage is that these projects are now
>> using the RST+YAML in-tree workflow and tools and publishing to
>> http://developer.openstack.org/api-ref/ so they will be
>> included in the upcoming API navigation sidebar intended to span all
>> OpenStack APIs:
>>
>> designate http://developer.openstack.org/api-ref/dns/
>> glance http://developer.openstack.org/api-ref/image/
>> heat http://developer.openstack.org/api-ref/orchestration/
>> ironic http://developer.openstack.org/api-ref/baremetal/
>> keystone http://developer.openstack.org/api-ref/identity/
>> manila http://developer.openstack.org/api-ref/shared-file-systems/
>> neutron-lib http://developer.openstack.org/api-ref/networking/
>> nova http://developer.openstack.org/api-ref/compute/
>> sahara http://developer.openstack.org/api-ref/data-processing/
>> senlin http://developer.openstack.org/api-ref/clustering/
>> swift http://developer.openstack.org/api-ref/object-storage/
>> zaqar http://developer.openstack.org/api-ref/messaging/
>>
>> These projects are using the in-tree workflow and common tools, but do not
>> have a publish job in project-config in the jenkins/jobs/projects.yaml file.
>>
>> ceilometer
>
>
> Sorry, in reviewing further today I found another project that does not have
> a publish job but has in-tree source files:
>
> cinder
>
> Team cinder: can you let me know where you are in your publishing comfort
> level? Please add an api-ref-jobs: line with a target of block-storage to
> jenkins/jobs/projects.yaml in the project-config repo to ensure publishing
> is correct.
>
> Another issue is the name of the target directory for the final URL. Team
> ironic can I change your api-ref-jobs: line to bare-metal instead of
> baremetal? It'll be better for search engines and for alignment with the
> other projects URLs: https://review.openstack.org/354135
>
> I've also uncovered a problem where a neutron project's API does not have an
> official service name, and am working on a solution but need help from the
> neutron team: https://review.openstack.org/#/c/351407

I followed the discussion in https://review.openstack.org/#/c/351407
and my understanding of the conclusion is to add the API reference
source of the neutron stadium projects to neutron-lib and publish them
under http://developer.openstack.org/api-ref/networking/ .
It sounds reasonable to me.

We can have a dedicated page for each stadium project, for example
api-ref/networking/service-function-chaining for networking-sfc.
Right now all APIs are placed under the v2/ directory, but that is not
good from either a user or a maintenance perspective.


So, the next thing we need to clarify is what names and directory
structure are appropriate from the documentation perspective.
My proposal is to prepare a dedicated directory per networking project
repository. The directory name should be a function name rather than a
project name. For example,
- neutron => ???
- neutron-lbaas => load-balancer
- neutron-vpnaas => vpn
- neutron-fwaas => firewall
- neutron-dynamic-routing => dynamic-routing
- networking-sfc => service-function-chaining
- networking-l2gw => layer2-gateway
- (networking-bgpvpn) => bgp-vpn

My remaining open questions are:

- Is a 'v2' directory needed?
  All networking APIs provided by stadium projects are extensions to
  the Networking v2 API, and v2 is the only API we have now.
  Do we place all APIs from stadium projects under a 'v2' directory,
  or is the 'v2' directory unnecessary?

- what is a good name for main neutron API (provided by 'neutron' repo)?

Any feedback is welcome.

Re: [openstack-dev] [nova] os-capabilities library created

2016-08-12 Thread Daniel P. Berrange
On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:
> Hi Novas and anyone interested in how to represent capabilities in a
> consistent fashion.
> 
> I spent an hour creating a new os-capabilities Python library this evening:
> 
> http://github.com/jaypipes/os-capabilities
> 
> Please see the README for examples of how the library works and how I'm
> thinking of structuring these capability strings and symbols. I intend
> os-capabilities to be the place where the OpenStack community catalogs and
> collates standardized features for hardware, devices, networks, storage,
> hypervisors, etc.
> 
> Let me know what you think about the structure of the library and whether
> you would be interested in owning additions to the library of constants in
> your area of expertise.

How are you expecting these constants to be used? It seems unlikely
that, say, the nova code is going to be explicitly accessing any of the
individual CPU flag constants. It should surely just be entirely metadata
driven - e.g. the libvirt driver would just parse the libvirt capabilities
XML, extract all the CPU flag strings, and simply export them. It would be
very undesirable to have to add new code to os-capabilities every time
Intel/AMD create new CPU flags for new features, and force users to upgrade
openstack to be able to express requirements on those CPU flags.
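The metadata-driven approach described above could look roughly like this (a sketch; the XML is a trimmed illustrative sample of libvirt capabilities output, and the "hw:cpu:" prefix is an assumption, not an os-capabilities symbol):

```python
# Sketch: derive CPU capability strings from libvirt capabilities XML
# instead of maintaining per-flag constants. New CPU flags reported by
# libvirt flow through with no code change.
import xml.etree.ElementTree as ET

SAMPLE_CAPS_XML = """
<capabilities>
  <host>
    <cpu>
      <model>Haswell</model>
      <feature name='avx2'/>
      <feature name='aes'/>
      <feature name='vmx'/>
    </cpu>
  </host>
</capabilities>
"""

def cpu_capabilities(caps_xml, prefix="hw:cpu:"):
    """Export every <feature> name as a prefixed capability string."""
    root = ET.fromstring(caps_xml)
    feats = [f.get("name") for f in root.iter("feature")]
    return sorted(prefix + f for f in feats)

print(cpu_capabilities(SAMPLE_CAPS_XML))
# ['hw:cpu:aes', 'hw:cpu:avx2', 'hw:cpu:vmx']
```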

> Next steps for the library include:
> 
> * Bringing in other top-level namespaces like disk: or net: and working with
> contributors to fill in the capability strings and symbols.
> * Adding constraints functionality to the library. For instance, building in
> information to the os-capabilities interface that would allow a set of
> capabilities to be cross-checked for set violations. As an example, a
> resource provider having DISK_GB inventory cannot have *both* the disk:ssd
> *and* the disk:hdd capability strings associated with it -- clearly the disk
> storage is either SSD or spinning disk.
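The set-violation check proposed above could be sketched as follows (the group contents are illustrative, not the real os-capabilities constants):

```python
# Sketch: validate that a resource provider's capability set does not
# contain more than one member of any mutually exclusive group, e.g. a
# disk cannot be both SSD and spinning.
EXCLUSIVE_GROUPS = [
    {"disk:ssd", "disk:hdd"},  # illustrative group, not real constants
]

def find_violations(capabilities):
    """Return the exclusive groups that the capability set violates."""
    caps = set(capabilities)
    return [g for g in EXCLUSIVE_GROUPS if len(caps & g) > 1]

print(len(find_violations({"disk:ssd", "net:sriov"})))  # 0
print(len(find_violations({"disk:ssd", "disk:hdd"})))   # 1
```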

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [Kuryr] Overlay MTU setup in docker remote driver

2016-08-12 Thread Liping Mao (limao)
Hi Kuryr team,

When a network in neutron uses an overlay for VMs,
neutron uses a DHCP option to control the VM interface MTU,
but for docker, the IP address is not obtained via DHCP,
so the proper MTU will not be set up in the container.

Two work-arounds in my mind now:
1. Set the default MTU in docker to 1450 or less.
2. Manually configure the MTU after the container starts up.

But neither of these is good. The ideal way, in my mind,
is: when libnetwork calls the remote driver to create a network,
kuryr creates the neutron network and then returns the proper MTU
to libnetwork, and docker uses that MTU for the network. But the
docker remote driver does not support this.

Or maybe let the user configure the MTU in the remote driver,
a little similar to the overlay driver:
https://github.com/docker/libnetwork/pull/1349

For now, though, it seems the remote driver will not do similar things.

Any idea to solve this problem? Thanks.
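For reference, the 1450 figure in work-around 1 comes from VXLAN encapsulation overhead; a sketch of the arithmetic:

```python
# Sketch: a VXLAN overlay consumes 50 bytes of each frame, so
# containers on a 1500-byte underlay need an MTU of at most 1450.
VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # outer Ethernet + IPv4 + UDP + VXLAN headers

def container_mtu(underlay_mtu, overhead=VXLAN_OVERHEAD):
    """Largest MTU a container can use without fragmenting overlay frames."""
    return underlay_mtu - overhead

print(container_mtu(1500))  # 1450
```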


Regards,
Liping Mao




Re: [openstack-dev] [Zun][Higgins] Proposing Sudipta Biswas and Wenzhi Yu for Zun core reviewer team

2016-08-12 Thread taget


+1 for both; they would be great additions to the Zun team.

On 2016年08月12日 10:26, Yanyan Hu wrote:


Both Sudipta and Wenzhi have been actively contributing to the Zun 
project for a while. Sudipta provided helpful advice for the project 
roadmap and architecture design. Wenzhi consistently contributed high 
quality patches and insightful reviews. I think both of them are 
qualified to join the core team.




--
Best Regards,
Eli Qiao (乔立勇), Intel OTC.




[openstack-dev] [release] Branching Gnocchi stable/2.2

2016-08-12 Thread Julien Danjou
Hi release team,

I've asked on IRC, but I guess it's safer on the ml.

We tagged Gnocchi 2.2.0 using the release repo, but as discussed earlier
on this list, the branching system is not ready yet. So we'd need
someone on your side to cut a stable/2.2 branch starting at the 2.2.0
tag.

Thanks!

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




[openstack-dev] [daisycloud-core] Agenda for IRC meeting Aug. 12 2016

2016-08-12 Thread hu . zhijiang
Sorry for publishing the agenda so late.

1) Roll Call
2) Daisycloud: Daisy Demo and doc
3) Daisycloud: Ironic status update
4) Daisycloud: Get backend type by calling host_get_all()
5) Daisy4nfv: Related status update


B.R.,
Zhijiang




[openstack-dev] [neutron][taas] : Return VLAN ID for tap-service

2016-08-12 Thread Varsha Jayaraj
When we create a tap-service, a VLAN ID is associated with that
tap-service. If we want to identify the flows associated with that
VLAN ID, so that we can add additional filters, the response returned after
creating a tap-service must include the VLAN ID information as well. Can
this be incorporated?

Thanks.

Regards,
Varsha