[openstack-dev] [TripleO] IPSEC integration

2017-11-15 Thread Juan Antonio Osorio
Hello folks!

A few months ago, Dan Sneddon and I worked on an Ansible role that
enables IPSEC for the overcloud [1]. Currently, one runs it as an extra
step after the overcloud deployment, but I would like to start integrating
it into TripleO itself, making it another option, probably as a composable
service.
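
For illustration, running that extra step looks roughly like the following;
the inventory and playbook names are placeholders rather than the role's
actual entry points (see [1] for the real instructions):

    # Hypothetical post-deployment invocation of the tripleo-ipsec role.
    ansible-playbook -i overcloud-inventory.ini deploy-ipsec.yml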

For this, I'm planning to move the tripleo-ipsec Ansible role repository
under the TripleO umbrella. Would that be fine with everyone? Or should I
add this Ansible role as part of another repository? Once it's available
and packaged in RDO, I'll look into the actual TripleO composable
service.

Any input and contributions are welcome!

[1] https://github.com/JAORMX/tripleo-ipsec

-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] openstack-tox-py27 job is not executed when spec files are added or modified in nova-specs

2017-11-15 Thread Takashi Natsume

Hi, Nova developers.

In the nova-specs project, there is an 'openstack-tox-py27' job (in the
Zuul check and gate pipelines).
The job checks whether spec files comply with the template, line
length, etc.
But there are cases where the job is not executed even though a spec file
was added or modified.


For example, in https://review.openstack.org/#/c/508164/, a spec file
was added, but the 'openstack-tox-py27' job was not executed.
As a result, the following error occurred after the change was merged.

* nova-specs: py27 fails "Line limited to a maximum of 79 characters."
  https://bugs.launchpad.net/nova/+bug/1732581

When a spec file is added or modified, the 'openstack-tox-py27' job 
should be executed.
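
For illustration, a sketch of the kind of Zuul v3 configuration that would
force the job to run whenever spec files change; the stanza placement and
the files regex are assumptions, not nova-specs' actual layout:

    - project:
        check:
          jobs:
            - openstack-tox-py27:
                files:
                  - ^specs/.*\.rst$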


Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][all] Removal of packages from bindep-fallback

2017-11-15 Thread Ian Wienand
Hello,

Some time ago we started the process of moving towards projects being
more explicit about their binary dependencies using bindep [1].

To facilitate the transition, we created a "fallback" set of
dependencies [2] which is installed when a project does not specify
its own bindep dependencies.  This essentially replicated the rather
ad-hoc environment provided by CI images before we started the
transition.

This list has acquired a few packages that cause some problems in
various situations today, particularly packages that aren't available
in the increasing number of distributions we support, or packages that
come from alternative repositories.

To this end, [3][4] propose the removal of

 liberasurecode-*
 mongodb-*
 python-zmq
 redis
 zookeeper
 ruby-*

from the fallback packages.  This has a small potential to affect some
jobs that tacitly rely on these packages.

NOTE: this does *not* affect devstack jobs (devstack manages its own
dependencies outside bindep).  If you want the packages back, it's just a
matter of putting them into the bindep file in your project (and as a
bonus, you get a better dependency description for your code).
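
For illustration, bindep.txt entries that would restore a few of these;
the platform selectors and exact package names are distro-specific, so
treat this as a sketch:

    redis [platform:rpm]
    redis-server [platform:dpkg]
    zookeeper
    python-zmq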

We should then be able to remove centos-release-openstack-* from our
centos base images too [5], which will make life easier for projects
such as TripleO that have to work around it.

If you have concerns, please reach out either via mail or in
#openstack-infra.

Thank you,

-i

[1] https://docs.openstack.org/infra/bindep/
[2] 
https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/data/bindep-fallback.txt
[3] https://review.openstack.org/519533
[4] https://review.openstack.org/519534
[5] https://review.openstack.org/519535

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Keynote: Governance and Trust -- from Company Led to Community Led - Sarah Novotny - YouTube

2017-11-15 Thread Joshua Harlow

Doug Hellmann wrote:

This keynote talk about the evolution of governance in the Kubernetes
community, and the CNCF more broadly, discusses some interesting parallels
with our own history.

https://m.youtube.com/watch?feature=youtu.be&v=Apw_fuTEhyA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Neat! Thanks for sharing.

So out of curiosity, why aren't we (as a whole, or part of it?) just
advocating (pushing for?) merging these two communities (ours and theirs)?


Call it the MegaCNCF if that helps.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday Nov 16th at 8:00 UTC

2017-11-15 Thread Ghanshyam Mann
Hello everyone,

A reminder that the weekly OpenStack QA team IRC meeting will be held
Thursday, Nov 16th at 8:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
 
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_Nov_16th_2017_.280800_UTC.29

Anyone is welcome to add an item to the agenda.

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] APIs schema consumption discussion

2017-11-15 Thread Gilles Dubreuil


On 15/11/17 03:07, Doug Hellmann wrote:

Excerpts from Gilles Dubreuil's message of 2017-11-14 10:15:02 +1100:

Hi,

Follow-up conversation from our last "API SIG feedback and discussion
session" at Sydney Summit [1], about APIs schema consumption.

Let's summarize the current situation.

Each OpenStack project has an "API-source" folder containing RST files
describing its API structure ([2] for example). Those files are in turn
consumed by the Sphinx library to generate each project's API reference
manual, which is then available in the API guide documentation [3]. This
effort has made the APIs harmoniously consistent across all OpenStack
projects and has also created a "de-facto" API schema.

While the RST files are used by the documentation, they are not readily
consumable by SDKs. Of course, the API schema can be extracted by web
crawling the reference guides, and the result can then be used from any
language. This approach works [4] and helped the Misty project [5] (a Ruby
SDK) get started; more languages could exploit the same approach.

Therefore, to allow better automation, the next step would be to have the
API schemas stored directly in each project's repo so the SDKs could
consume them straight from the source. This is why we've started
discussing how to have the schema either extracted from the RST files or,
alternatively, have the API described directly in its own file. The
latter would provide a different workflow: "YAML -> RST -> Reference
doco" instead of "RST -> Reference doco -> YAML".

So the question at this stage is: "Which workflow should we choose?"

To clarify the needs, it's important to note that we found out that none
of the SDK projects, besides OSC (OpenStack Client, thanks to Dean),
has a full-time team to maintain it. That, besides the natural
structural complexity inherent to the cloud context, makes the task of
keeping an SDK up to date very difficult, especially as projects move
forward. Automatically managing the OpenStack APIs is inevitable for
consumers. Another example/feedback was provided by the presenters of the
"AT&T's Strategy for Implementing a Next Generation OpenStack Cloud"
session during the Sydney keynotes, as they don't handle the OpenStack
APIs manually!

APIs consumers and providers, any thoughts?

[1]
https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20442/api-sig-feedback-and-discussion-session
[2] https://github.com/openstack/nova/tree/master/api-guide/source
[3] https://developer.openstack.org/api-guide/quick-start/index.html
[4] https://github.com/flystack/openstack-APIs
[5] https://github.com/flystack/misty

Regards,
Gilles

Please do not build something that looks like SOAP based on parsing RST
files. Surely we can at least work directly from JSONSchema inputs?


I'm glad you said that :).
Working directly from YAML or JSON files (format to be discussed) to
maintain the schema seems (to me too) the natural approach.


That would mean every project changing current practice: maintaining the
schema files instead of maintaining RST files.
I suppose parsing the RST files has been suggested instead because of the
impact the schema-first approach would have on current practice, but that
shouldn't be a blocker.
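
Purely as an illustration of the schema-first idea, a sketch of what a
per-repo API description file could contain; the format and field names
are invented for this example, since choosing them is exactly the open
question:

    servers:
      create:
        method: POST
        path: /v2.1/servers
        request:
          server:
            name: {type: string, required: true}
            flavorRef: {type: string, required: true}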


Gil



Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Gilles Dubreuil
Senior Software Engineer, Openstack DFG Integration
Mobile: +61 400 894 219
Email: gil...@redhat.com
GitHub/IRC: gildub



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Keynote: Governance and Trust -- from Company Led to Community Led - Sarah Novotny - YouTube

2017-11-15 Thread Doug Hellmann
This keynote talk about the evolution of governance in the Kubernetes
community, and the CNCF more broadly, discusses some interesting parallels
with our own history.

https://m.youtube.com/watch?feature=youtu.be&v=Apw_fuTEhyA


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] removing zuul v3 legacy jobs

2017-11-15 Thread Emilien Macchi
Some progress today:

https://review.openstack.org/#/q/topic:tripleo/migrate-to-zuulv3+(status:open+OR+status:merged)

- TripleO UI switched to new jobs
- tripleo-ci has playbooks, jobs and templates ready for review
- THT and puppet-tripleo have project layouts ready for review

Next in order: tripleo-common, python-tripleoclient,
instack-undercloud, t-q and t-q-e, then all other tripleo projects.

Any help reviewing this work is welcome.
Tests are running fine now, so this sounds like good progress.

Note: this is a first migration step. Of course, in the future we'll
think about how we can "ansiblelize" the tasks and optimize things.
But before that, we need to migrate and complete this first step now.

Thanks,

On Tue, Nov 14, 2017 at 4:18 PM, Emilien Macchi  wrote:
> Hi,
>
> I'm working on migrating all TripleO CI jobs to be in-tree, I'm also
> refactoring the layout and do some cleanup.
> It's a bit of work, that can be followed here:
> https://review.openstack.org/#/q/topic:tripleo/migrate-to-zuulv3
>
> The only thing I ask from our team is to let me know any change in
> project-config & zuul layout, so we can update my work in tripleo-ci
> patch, otherwise it will be lost when we land the patches.
>
> Thanks,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Joshua Harlow
Just a thought, because I have known/do know what Mathieu is talking about
and find the disconnect still oddly weird. Why aren't developer people
from other companies coming in to where Mathieu works (or where I
work) and seeing how it really works down on the ground here?


I mean, if we still have this weird disconnect, why aren't we
starting some kind of temporary assignments for developers who wish to
learn at the actual companies that are struggling; call it a working
vacation or something if that helps your management buy into it.


After all, if it's a community, we should be trying to break down
walls as much as we can...


Just a thought...

-Josh

Mathieu Gagné wrote:

Some clarifications below.

On Wed, Nov 15, 2017 at 4:52 AM, Bogdan Dobrelya  wrote:

Thank you Mathieu for the insights!


To add details to what happened:
* Upgrade was never made a #1 priority. It was a one man show for far
too long. (myself)


I suppose that confirms that upgrades are very nice to have in production
deployments, eventually, maybe... (please read below to continue)


* I also happen to manage and work on other priorities.
* Lot of work made to prepare for multiple versions support in our
deployment tools. (we use Puppet)
* Lot of work in the packaging area to speedup packaging. (we are
still using deb packages but with virtualenv to stay Puppet
compatible)
* We need to forward-port private patches which upstream won't accept
and/or are private business logic.


... yet long-term maintenance and the landing of fixes is the ops' *reality*
and pain #1, and upgrades are only pain #2. LTS cannot directly help with #2,
only indirectly: if the vendors' downstream teams could cooperate better on
#1, they would have more time and resources to dedicate to #2, the upgrade
stories for shipped products and distros.


We do not have a vendor. (anymore, if you consider Ubuntu
cloud-archive as a vendor)
We package and deploy ourselves.


Let's please not lower the real value of LTS branches, and not substitute
#1 with #2. This topic is not about bureaucracy and policies; it is about
how the community could help vendors cooperate on maintaining
commodity things, with as little bureaucracy as possible, to ease the
operators' pains in the end.


* Our developer teams didn't have enough free cycles to work right
away on the upgrade. (this means delays)
* We need to test compatibility with 3rd party systems which takes
some time. (and make them compatible)


Perhaps this confirms why it is vital to only run 3rd party CI jobs for LTS
branches?


For us, 3rd party systems are internal systems outside our control or
realm of influence.
They are often in-house systems that the outside world would care very
little about.


* We need to update systems over which we don't have full control.
This means serious delays when it comes to deployment.
* We need to test features/stability during some time in our dev
environment.
* We need to test features/stability during some time in our
staging/pre-prod environment.
* We need to announce and inform our users at least 2 weeks in advance
before performing an upgrade.
* We choose to upgrade one service at a time (in all regions) to avoid
a huge big bang upgrade. (this means more maintenance windows to plan
and you can't stack them too much)
* We need to swiftly respond to bug discovered by our users. This
means change of priorities and delay in other service upgrades.
* We will soon need to upgrade operating systems to support latest
OpenStack versions. (this means we have to stop OpenStack upgrades
until all nodes are upgraded)


It seems that the answer to the question "Why are upgrades so painful
and why do they take so much time for ops?" is "because upgrades are not
the priority; long-term support and maintenance are".


Performing an upgrade consumes both time and resources, and both are
limited.
And you need to sync the world around you to make it happen. It's not
a one-man decision/task.

When you remove all the external factors, dependencies, politics,
etc., upgrading can take an afternoon from A to Z for some projects.
We do have an internal cloud for our developers that lives in a
vacuum. Let me tell you that it's not very easy to upgrade it. We are
talking about hours/days, not years.

So if I can only afford to upgrade once per year, what are my options?

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-I18n] [I18n] survey about translation of project repo documentation

2017-11-15 Thread Ian Y. Choi

Hello Frank,

I have just answered the survey - hopefully the survey end time is based
on UTC and my answer will still be counted.

If you (developers, translators) are interested in seeing translated
versions of project repository documentation, please fill out the survey;
it is very helpful for the I18n team to gain good insight.


With many thanks,

/Ian

Frank Kloeker wrote on 11/9/2017 10:15 PM:

Dear all,

hopefully you had a nice Summit and you're safe at home or on the way
again.
During the week, the I18n team was thinking about translating
documentation in project repos. The doc migration is almost done, so now
is a good time to start. But there are lots of projects and the state of
things differs between them. To get an impression of how important this
topic is and which projects are interested, we've created a survey to
figure it out [1]. Please take part in the survey until 2017/11/15
23:59:59.


many thanks

Frank (PTL I18n)

[1] https://www.surveymonkey.de/r/HSD5YTD

___
OpenStack-I18n mailing list
openstack-i...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Please do not approve or recheck anything not related to CI alert bugs

2017-11-15 Thread Alex Schultz
Ok so here's the latest. We've switched scenario001 to non-voting[0]
for now until Bug 1731063[1] can be resolved. We should be OK to start
merging things in master as the other current issues don't appear to be
affecting the gate significantly as it stands.  We still need to
understand why we're hitting Bug 1731063 and address the problem so we
can revert the non-voting change ASAP.  Scenario001 provides lots of
coverage for TripleO so I do not want to see it non-voting for long.
If scenario001 is failing on your change, please make sure it is not
Bug 1731063 before rechecking or approving.  If you are approving
changes or rechecking and it fails, do not blindly recheck. Please
file a new bug and ping #tripleo so we can make sure we don't have
other things that may affect the gate.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/520155/
[1] https://bugs.launchpad.net/tripleo/+bug/1731063

On Sat, Nov 11, 2017 at 8:47 PM, Alex Schultz  wrote:
> Ok so here's the current status of things.  I've gone through some of
> the pending patches and sent them to the gate over the weekend since
> the gate was empty (yay!).  We've managed to land a bunch of patches.
> That being said, for any patch for master with scenario jobs, please do
> not recheck/approve. Currently the non-containerized scenario001/004
> jobs are broken due to Bug 1731688[0] (these run on
> tripleo-quickstart-extras/tripleo-ci).  There is a patch[1] out for a
> revert of the breaking change. The scenario001-container job is super
> flaky due to Bug 1731063[2] and we could use some help figuring out
> what's going on.  We're also seeing some issues around heat
> interactions[3][4] but those seems to be less of a problem than the
> previously mentioned bugs.
>
> So at the moment any changes that don't have scenario jobs associated
> with them may be approved/rechecked freely.  We can discuss on Monday
> what to do about the scenario jobs if we still are running into issues
> without a solution in sight.  Also please keep an eye on the gate
> queue[5] and don't approve things if it starts getting excessively
> long.
>
> Thanks,
> -Alex
>
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1731688
> [1] https://review.openstack.org/#/c/519041/
> [2] https://bugs.launchpad.net/tripleo/+bug/1731063
> [3] https://bugs.launchpad.net/tripleo/+bug/1731032
> [4] https://bugs.launchpad.net/tripleo/+bug/1731540
> [5] http://zuulv3.openstack.org/
>
> On Wed, Nov 8, 2017 at 3:39 PM, Alex Schultz  wrote:
>> So we have some good news and some bad news.  The good news is that
>> we've managed to get the gate queue[0] under control since we've held
>> off on pushing new things to the gate.  The bad news is that we've
>> still got some random failures occurring during the deployment of
>> master.  Since we're not seeing infra related issues, we should be OK
>> to merge things to stable/* branches.  Unfortunately until we resolve
>> the issues in master[1] we could potentially backup the queue.  Please
>> do not merge things that are not critical bugs.  I would ask that
>> folks please take a look at the open bugs and help figure out what is
>> going wrong. I've created two issues today that I've seen in the gate
>> that we don't appear to have open patches for. One appears to be an
>> issue in the heat deployment process[3] and the other is related to
>> the tempest verification of being able to launch a VM & ssh to it[4].
>>
>> Thanks,
>> -Alex
>>
>> [3] https://bugs.launchpad.net/tripleo/+bug/1731032
>> [4] https://bugs.launchpad.net/tripleo/+bug/1731063
>>
>> On Tue, Nov 7, 2017 at 8:33 AM, Alex Schultz  wrote:
>>> Hey Folks
>>>
>>> So we're at 24+ hours again in the gate[0] and the queue only
>>> continues to grow. We currently have 6 ci/alert bugs[1]. Please do not
>>> approve of recheck anything that isn't related to these bugs.  I will
>>> most likely need to go through the queue and abandon everything to
>>> clear it up as we are consistently hitting timeouts on various jobs
>>> which is preventing anything from merging.
>>>
>>> Thanks,
>>> -Alex
>>>
>> [0] http://zuulv3.openstack.org/
>> [1] 
>> https://bugs.launchpad.net/tripleo/+bugs?field.searchtext==-importance%3Alist=NEW%3Alist=CONFIRMED%3Alist=TRIAGED%3Alist=INPROGRESS%3Alist=CRITICAL_option=any=_reporter=_commenter==_subscriber==ci+alert_combinator=ALL_cve.used=_dupes.used=_dupes=on_me.used=_patch.used=_branches.used=_branches=on_no_branches.used=_no_branches=on_blueprints.used=_blueprints=on_no_blueprints.used=_no_blueprints=on=Search

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures][neutron][infra] Tag of openstack/neutron-fwaas-dashboard failed

2017-11-15 Thread Doug Hellmann
Excerpts from Akihiro Motoki's message of 2017-11-16 01:32:00 +0900:
> 2017-11-15 1:06 GMT+09:00 Andreas Jaeger :
> > On 2017-11-14 17:03, Doug Hellmann wrote:
> >> Excerpts from Andreas Jaeger's message of 2017-11-14 09:31:48 +0100:
> >>> On 2017-11-13 22:09, Doug Hellmann wrote:
> >>>> Excerpts from zuul's message of 2017-11-13 20:37:18 +0000:
> >>>>> Unable to freeze job graph: Unable to modify final job <Job
> >>>>> publish-openstack-releasenotes branches: None source:
> >>>>> openstack-infra/project-config/zuul.d/jobs.yaml@master#26> attribute
> >>>>> required_projects={'openstack/horizon': <... at 0x7ff848d06b70>}
> >>>>> with variant <Job publish-openstack-releasenotes branches: None
> >>>>> source: openstack-infra/openstack-zuul-jobs/zuul.d/project-templates.yaml@master#285>
> >
> 
>  It looks like there is a configuration issue with
>  neutron-fwaas-dashboard.
> >>>
> >>> Yes, we marked publish-openstack-releasenotes as final - and then the
> >>> job added requirements to it.
> >>>
> >>> I see at least these two different fixes:
> >>> - remove the final: true from the job
> >>> - add neutron and horizon to the job like we've done for the release
> >>> job. But there are other projects that have even more requirements.
> >>>
> >>> Infra team, what's the best approach here?
> >>>
> >>> Andreas
> >>
> >> No project should even need to install *itself*, much less its
> >> other dependencies, to build release notes. It should be possible to
> >> build release notes with just sphinx and reno (and their dependencies).
> >
> > It should - but that's not how those projects seem to be set up ;(
> 
> The current setup is what most horizon plugins do. It is not special.
> 
> The release notes of neutron-fwaas-dashboard can be built with sphinx,
> openstackdocstheme, reno and neutron-fwaas-dashboard itself.
> I am fine with changing this appropriately, but what is the right thing to do?
> Do we need to change the 'releasenotes' env to depend on only sphinx,
> openstackdocstheme and reno?
> 
> Akihiro

Someone needs to figure that out. Unfortunately, I don't have the
bandwidth right now. Maybe you, or someone else from one of the affected 
projects, can work on it?

Two options have been discussed so far:

1. Create special jobs for neutron and horizon projects that install
   neutron or horizon, like we do for the release jobs.

2. Redefine the release notes job so that it doesn't use tox
   but installs only the pieces it needs to run a sphinx build. The
   CTI is already defined in a way to support that [1].

My current preference is for option 2. There may be other options
that we haven't explored, though.
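
For illustration, a minimal sketch of what option 2 might boil down to
inside the job, without tox; the package list follows the CTI [1], and the
sphinx-build invocation shown is the conventional one:

    pip install sphinx openstackdocstheme reno
    sphinx-build -a -E -W -d releasenotes/build/doctrees \
        -b html releasenotes/source releasenotes/build/html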

Doug

[1] 
https://governance.openstack.org/tc/reference/project-testing-interface.html#release-notes

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] bugs for up-coming bugsmash and reviews

2017-11-15 Thread Jay S Bryant

Team,

I wanted to send out a note for those of you who were not able to
attend today's weekly meeting.


There is a bugsmash coming up next week in China [1].  If you have bugs
that are appropriate to be addressed during the bugsmash, please update
them in Launchpad with the 'bugsmash' tag.  We currently have 12 bugs on
the list; it never hurts to add more.


TommyLikeHu will be at the bugsmash.  Please keep an eye out for
requests to review patches coming from the bugsmash and give them
priority so we can try to make this a success.  There is an etherpad [2]
where they will be tracking their work; it wouldn't hurt for the core team
to also keep an eye on it to help keep things moving along.


Thanks for your support of this effort!

Jay

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-October/123854.html


[2] 
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Queens-Wuhan-Bugs-List



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Mathieu Gagné
Some clarifications below.

On Wed, Nov 15, 2017 at 4:52 AM, Bogdan Dobrelya  wrote:
> Thank you Mathieu for the insights!
>
>> To add details to what happened:
>> * Upgrade was never made a #1 priority. It was a one man show for far
>> too long. (myself)
>
>
> I suppose that confirms that upgrades are very nice to have in production
> deployments, eventually, maybe... (please read below to continue)
>
>> * I also happen to manage and work on other priorities.
>> * Lot of work made to prepare for multiple versions support in our
>> deployment tools. (we use Puppet)
>> * Lot of work in the packaging area to speedup packaging. (we are
>> still using deb packages but with virtualenv to stay Puppet
>> compatible)
>> * We need to forward-port private patches which upstream won't accept
>> and/or are private business logic.
>
>
> ... yet long-term maintenance and the landing of fixes is the ops' *reality*
> and pain #1, and upgrades are only pain #2. LTS cannot directly help with #2,
> only indirectly: if the vendors' downstream teams could cooperate better on
> #1, they would have more time and resources to dedicate to #2, the upgrade
> stories for shipped products and distros.

We do not have a vendor. (anymore, if you consider Ubuntu
cloud-archive as a vendor)
We package and deploy ourselves.

> Let's please not lower the real value of LTS branches, and not substitute
> #1 with #2. This topic is not about bureaucracy and policies; it is about
> how the community could help vendors cooperate on maintaining
> commodity things, with as little bureaucracy as possible, to ease the
> operators' pains in the end.
>
>> * Our developer teams didn't have enough free cycles to work right
>> away on the upgrade. (this means delays)
>> * We need to test compatibility with 3rd party systems which takes
>> some time. (and make them compatible)
>
>
> Perhaps this confirms why it is vital to only run 3rd party CI jobs for LTS
> branches?

For us, 3rd party systems are internal systems outside our control or
realm of influence.
They are often in-house systems that the outside world would care very
little about.

>> * We need to update systems over which we don't have full control.
>> This means serious delays when it comes to deployment.
>> * We need to test features/stability during some time in our dev
>> environment.
>> * We need to test features/stability during some time in our
>> staging/pre-prod environment.
>> * We need to announce and inform our users at least 2 weeks in advance
>> before performing an upgrade.
>> * We choose to upgrade one service at a time (in all regions) to avoid
>> a huge big bang upgrade. (this means more maintenance windows to plan
>> and you can't stack them too much)
>> * We need to swiftly respond to bug discovered by our users. This
>> means change of priorities and delay in other service upgrades.
>> * We will soon need to upgrade operating systems to support latest
>> OpenStack versions. (this means we have to stop OpenStack upgrades
>> until all nodes are upgraded)
>
>
> It seems that the answer to the question "Why are upgrades so painful
> and why do they take so much time for ops?" is "because upgrades are not
> the priority; long-term support and maintenance are".

Performing an upgrade consumes both time and resources, and both are
limited.
And you need to sync the world around you to make it happen. It's not
a one-man decision/task.

When you remove all the external factors, dependencies, politics,
etc., upgrading can take an afternoon from A to Z for some projects.
We do have an internal cloud for our developers that lives in a
vacuum. Let me tell you that it's not very easy to upgrade it. We are
talking about hours/days, not years.

So if I can only afford to upgrade once per year, what are my options?

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [policy] [keystone] Support for deprecating policies

2017-11-15 Thread Lance Bragstad
I messed up the links in the previous note.

Merged implementation: https://review.openstack.org/#/c/509909/
Documentation:
https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#oslo_policy.policy.DeprecatedRule


On 11/15/2017 11:34 AM, Lance Bragstad wrote:
> Hey all,
>
> I wanted to let everyone know that we just merged a patch [0] that
> allows developers to deprecate policies (just like deprecating
> configuration options). The functionality is implemented using the
> DeprecatedRule object [1] and emits a warning to operators when
> deprecated policies are used. There is also compatibility logic built in
> to aid in transitions for operators.
>
> If you've been waiting to change or improve default policies in a
> project, you should be able to start making those changes with a new
> version of oslo.policy. If you notice anything strange in the
> implementation or find the documentation lacking, please let me know.
> I'd like to make sure we get any issues ironed out as early as possible.
>
> Thanks,
>
> Lance
>
>
> [1] https://review.openstack.org/#/c/509909/
> [0]
> https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#oslo_policy.policy.DeprecatedRule
>
>




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [policy] [keystone] Support for deprecating policies

2017-11-15 Thread Lance Bragstad
Hey all,

I wanted to let everyone know that we just merged a patch [0] that
allows developers to deprecate policies (just like deprecating
configuration options). The functionality is implemented using the
DeprecatedRule object [1] and emits a warning to operators when
deprecated policies are used. There is also compatibility logic built in
to aid in transitions for operators.

If you've been waiting to change or improve default policies in a
project, you should be able to start making those changes with a new
version of oslo.policy. If you notice anything strange in the
implementation or find the documentation lacking, please let me know.
I'd like to make sure we get any issues ironed out as early as possible.
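
For illustration, a minimal sketch of how a project might use this, based
on the documentation above; the rule names, check strings, and release
name are made up for the example:

    from oslo_policy import policy

    # The old policy name being phased out.
    deprecated_rule = policy.DeprecatedRule(
        name='foo:post_bar',
        check_str='role:fizz'
    )

    # The new default carries a pointer back to the deprecated rule, so
    # oslo.policy can warn operators and honor overrides of the old name.
    rule = policy.DocumentedRuleDefault(
        name='foo:create_bar',
        check_str='role:bang',
        description='Create a bar.',
        operations=[{'path': '/v1/bars', 'method': 'POST'}],
        deprecated_rule=deprecated_rule,
        deprecated_reason='foo:post_bar was renamed for consistency',
        deprecated_since='Queens'
    )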

Thanks,

Lance


[1] https://review.openstack.org/#/c/509909/
[0]
https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#oslo_policy.policy.DeprecatedRule




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures][neutron][infra] Tag of openstack/neutron-fwaas-dashboard failed

2017-11-15 Thread Akihiro Motoki
2017-11-15 1:06 GMT+09:00 Andreas Jaeger :
> On 2017-11-14 17:03, Doug Hellmann wrote:
>> Excerpts from Andreas Jaeger's message of 2017-11-14 09:31:48 +0100:
>>> On 2017-11-13 22:09, Doug Hellmann wrote:
>>>> Excerpts from zuul's message of 2017-11-13 20:37:18 +0000:
>>>>> Unable to freeze job graph: Unable to modify final job <Job
>>>>> publish-openstack-releasenotes branches: None source:
>>>>> openstack-infra/project-config/zuul.d/jobs.yaml@master#26> attribute
>>>>> required_projects={'openstack/horizon': <... at 0x7ff848d06b70>}
>>>>> with variant <Job publish-openstack-releasenotes branches: None
>>>>> source: openstack-infra/openstack-zuul-jobs/zuul.d/project-templates.yaml@master#285>
>

 It looks like there is a configuration issue with
 neutron-fwaas-dashboard.
>>>
>>> Yes, we marked publish-openstack-releasenotes as final - and then the
>>> job added requirements to it.
>>>
>>> I see at least these two different fixes:
>>> - remove the final: true from the job
>>> - add neutron and horizon to the job like we've done for the release
>>> job. But there are other projects that have even more requirements.
>>>
>>> Infra team, what's the best approach here?
>>>
>>> Andreas
>>
>> No project should even need to install *itself*, much less its
>> other dependencies, to build release notes. It should be possible to
>> build release notes with just sphinx and reno (and their dependencies).
>
> It should - but that's not how those projects seem to be set up ;(

The current setup is what most horizon plugins do. It is not special.

The release notes of neutron-fwaas-dashboard can be built with sphinx,
openstackdocstheme, reno and neutron-fwaas-dashboard itself.
I am fine with changing this appropriately, but what is the right thing to do?
Do we need to change the 'releasenotes' env to depend on only sphinx,
openstackdocstheme and reno?

Akihiro

>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Does glance_store swift driver support range requests ?

2017-11-15 Thread John Dickinson


On 15 Nov 2017, at 7:40, Jay Pipes wrote:

> On 11/15/2017 06:28 AM, Duncan Thomas wrote:
>> On 15 November 2017 at 11:15, Matt Keenan  wrote:
>>> On 13/11/17 22:51, Nikhil Komawar wrote:
>>>
>>> I think it will be a rather hard problem to solve, as the swift store
>>> can be configured to store objects in different ways. I guess the next
>>> question would be: what is your underlying problem -- multiple build
>>> requests, or a retry of a single download?
>>>
>>> If the image is in the image cache and you are hitting a glance node with
>>> the cached image (which is quite possible for tiny deployments), this
>>> feature will be relatively easy.
>>>
>>>
>>> So the specific image stored in glance is a Unified Archive
>>> (https://docs.oracle.com/cd/E36784_01/html/E38524/gmrlo.html).
>>>
>>> During a UAR deployment the archive UUID is required and it is contained in
>>> the first 33 characters of the UAR image, thus a range request for this
>>> portion is required when initiating the deployment. Then the rest of the
>>> archive is extracted and deployed.
>>
>> Given the range you want is always at the beginning, is a range
>> request any different from doing a full GET request and dropping the
>> connection when you've got the bytes you want?
>
> Or just store the UAR UUID in the image metadata...
>
> -jay

Swift supports range requests (and multiple ranges at the same time)

eg: "Range: bytes=1-34, 100-1024"


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [monasca] New Team Meeting Time doodle

2017-11-15 Thread Bedyk, Witold
Hello everyone,

We would like to choose the optimal time for our team meeting. I have
created a doodle [1] for that. Please enter your preferences by Monday
12:00 UTC if you want to attend the meeting.

Cheers
Witek

[1] https://doodle.com/poll/s5fwqtu7ik898p57

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-11-15 Thread Matthew Thode
On 17-11-15 09:32:33, Dmitry Tantsur wrote:
> On 10/31/2017 12:11 AM, richard.pi...@dell.com wrote:
> > > From: Dmitry Tantsur [mailto:dtant...@redhat.com]
> > 
> > > Cons:
> > > 1. more work for both the requirements team and the vendor teams
> > 
> > Please elaborate on the additional work you envision for the vendor teams.
> 
> Any requirements updates will have to be submitted to the requirements
> repo. It may take longer (may not).
> 

We (requirements) are pretty good about being on top of reviews :P

> > 
> > > 2. inability to use ironic release notes to explain driver requirements 
> > > changes
> > 
> > Where could that information move to?
> 
> I think it's a generic question, to be honest. We don't inform operators of
> requirements changes via release notes. I don't have an easy answer.
> 

Should each driver have its own release notes then?  Not sure if that'd help.

> > 
> > > We either will have one list:
> > > 
> > > [extras]
> > > drivers =
> > > sushy>=a.b
> > > python-dracclient>=x.y
> > > python-prolianutils>=v.w
> > > ...
> > > 
> > > or (and I like this more) we'll have a list per hardware type:
> > > 
> > > [extras]
> > > redfish =
> > > sushy>=a.b
> > > idrac =
> > > python-dracclient>=x.y
> > > ilo =
> > > ...
> > > ...
> > > 
> > > WDYT?
> > > 
> > 
> > Overall, a big +1. I prefer the second approach.
> > 
> > A couple of questions ...
> > 
> > 1. If two (2) hardware types have the same requirement, would they both
> > enter it in their lists?
> 
> Yes
> 
> > 2. And would that be correctly handled?
> 
> Tony checked it (see his response to this thread) - yes.
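
For illustration, with the per-hardware-type layout above, a deployer
would pull in only the dependencies for the hardware types in use; the
extra names mirror the sketch and the version pins are placeholders:

    pip install ironic[redfish]         # installs sushy>=a.b
    pip install ironic[redfish,idrac]   # multiple hardware types at once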
> 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Matthew Thode (prometheanfire)


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Flavio Percoco

On 14/11/17 15:10 -0500, Doug Hellmann wrote:

Excerpts from Chris Friesen's message of 2017-11-14 14:01:58 -0600:

On 11/14/2017 01:28 PM, Dmitry Tantsur wrote:

>> The quality of backported fixes is expected to be a direct (and only?)
>> interest of those new teams of new cores, coming from users and operators and
>> vendors.
>
> I'm not assuming bad intentions, not at all. But there is a lot involved
> in a decision whether to make a backport or not. Will these people be able to
> evaluate the risk of each patch? Do they have enough context on how that release
> was implemented and what can break? Do they understand why feature backports are
> bad? And why they should not skip (supported) releases when backporting?
>
> I know a lot of very reasonable people who do not understand the things above
> really well.

I would hope that the core team for upstream LTS would be the (hopefully
experienced) people doing the downstream work that already happens within the
various distros.

Chris



Presumably those are the same people we've been trying to convince
to work on the existing stable branches for the last 5 years. What
makes these extended branches more appealing to those people than
the existing branches? Is it the reduced requirements on maintaining
test jobs? Or maybe some other policy change that could be applied
to the stable branches?


Guessing based on the feedback so far, I would say that these branches are more
appealing because they are the ones these folks are actually running in
production.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Does glance_store swift driver support range requests ?

2017-11-15 Thread Jay Pipes

On 11/15/2017 06:28 AM, Duncan Thomas wrote:

On 15 November 2017 at 11:15, Matt Keenan  wrote:

On 13/11/17 22:51, Nikhil Komawar wrote:

I think it will a rather hard problem to solve. As swift store can be
configured to store objects in different configurations. I guess the next
question would be, what is your underlying problem -- multiple build
requests or is this for retry for a single download?

If the image is in image cache and you are hitting the glance node with
cached image (which is quite possible for tiny deployments), this feature
will be relatively easier.


So the specific image stored in glance is a Unified Archive
(https://docs.oracle.com/cd/E36784_01/html/E38524/gmrlo.html).

During a UAR deployment the archive UUID is required and it is contained in
the first 33 characters of the UAR image, thus a range request for this
portion is required when initiating the deployment. Then the rest of the
archive is extracted and deployed.


Given the range you want is always at the beginning, is a range
request any different to doing a full get request and dripping the
connection when you've got the bytes you want?


Or just store the UAR UUID in the image metadata...

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-15 Thread Doug Hellmann
Excerpts from Fox, Kevin M's message of 2017-11-15 00:37:26 +:
> I can think of a few ideas, though some sound painful on paper. Not really
> recommending anything, just thinking out loud...
> 
> One idea is the one at the root of chaos monkey. If something is hard, do it
> frequently. If upgrading is hard, we need to be doing it constantly so the
> pain gets largely eliminated. One idea would be to discourage devs from
> standing up a fresh devstack all the time and have them upgrade them
> instead. If it's hard, then it's likely someone will chip in to make it less
> hard.
> 
> Another is devstack in general. the tooling used by devs and that used by ops 
> are so different as to isolate the devs from ops' pain. If they used more 
> opsish tooling, then they would hit the same issues and would be more likely 
> to find solutions that work for both parties.
> 
> A third one is supporting multiple version upgrades in the gate. I rarely
> have a problem with a cloud whose database is one version back. I have seen lots
> of issues with databases that contain data dating back to when the cloud was
> instantiated and then upgraded multiple times.
> 
> Another option is trying to unify/detangle the upgrade procedure. upgrading 
> compute kit should be one or two commands if you can live with the defaults. 
> Not weeks of poring through release notes, finding correct orders from pages 
> of text and testing vigorously on test systems.

This sounds like an opportunity for some knowledge sharing. Maybe when
the Operators' Guide makes it into the wiki?

> 
> How about some tool that does the following: dump the database somewhere
> temporary, iterate over all the upgrade job components, and see if the
> upgrade successfully avoids corrupting your database. That takes a while to
> do manually. Ideally it could even upload stacktraces back to a bug tracker
> for attention.
> 
> Thanks,
> Kevin
> 
> From: Davanum Srinivas [dava...@gmail.com]
> Sent: Tuesday, November 14, 2017 4:08 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: openstack-oper.
> Subject: Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases
> 
> On Wed, Nov 15, 2017 at 10:44 AM, John Dickinson  wrote:
> >
> >
> > On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
> >
> >> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M  wrote:
> >>> The pressure for #2 comes from the inability to skip upgrades and the 
> >>> fact that upgrades are hugely time consuming still.
> >>>
> >>> If you want to reduce the push for number #2 and help developers get 
> >>> their wish of getting features into users hands sooner, the path to 
> >>> upgrade really needs to be much less painful.
> >>>
> >>
> >> +1000
> >>
> >> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
> >> execute the upgrade. (and we skipped a version)
> >> Scheduling all the relevant internal teams is a monumental task
> >> because we don't have dedicated teams for those projects and they have
> >> other priorities.
> >> Upgrading affects a LOT of our systems, some we don't fully have
> >> control over. And it can takes months to get new deployment on those
> >> systems. (and after, we have to test compatibility, of course)
> >>
> >> So I guess you can understand my frustration when I'm told to upgrade
> >> more often and that skipping versions is discouraged/unsupported.
> >> At the current pace, I'm just falling behind. I *need* to skip
> >> versions to keep up.
> >>
> >> So for our next upgrades, we plan on skipping even more versions if
> >> the database migration allows it. (except for Nova which is a huge
> >> PITA to be honest due to CellsV1)
> >> I just don't see any other ways to keep up otherwise.
> >
> > ?!?!
> >
> > What does it take for this to never happen again? No operator should need 
> > to plan and execute an upgrade for a whole year to upgrade one year's worth 
> > of code development.
> >
> > We don't need new policies, new teams, more releases, fewer releases, or 
> > anything like that. The goal is NOT "let's have an LTS release". The goal 
> > should be "How do we make sure Mattieu and everyone else in the world can 
> > actually deploy and use the software we are writing?"
> >
> > Can we drop the entire LTS discussion for now and focus on "make upgrades 
> > take less than a year" instead? After we solve that, let's come back around 
> > to LTS versions, if needed. I know there's already some work around that. 
> > Let's focus there and not be distracted about the best bureaucracy for not 
> > deleting two-year-old branches.
> >
> >
> > --John
> 
> John,
> 
> So... Any concrete ideas on how to achieve that?
> 
> Thanks,
> Dims
> 
> >
> >
> > /me puts on asbestos pants
> >
> >>
> >> --
> >> Mathieu
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> openstack-operat...@lists.openstack.org
> >> 

Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-15 Thread Jeremy Stanley
On 2017-11-15 00:37:26 + (+), Fox, Kevin M wrote:
[...]
> One idea is the one at the root of chaos monkey. If something is
> hard, do it frequently. If upgrading is hard, we need to be doing
> it constantly so the pain gets largely eliminated. One idea would
> be to discourage devs from standing up a fresh devstack all the
> time and have them upgrade them instead. If it's hard, then
> it's likely someone will chip in to make it less hard.

This is also the idea behind running grenade in CI. The previous
OpenStack release is deployed, an attempt at a representative (if
small) dataset is loaded into it, and then it is upgraded to the
release under development with the proposed change applied and
exercised to make sure the original resources built under the
earlier release are still in working order. We can certainly do more
to make this a better representation of "The Real World" within the
resource constraints of our continuous integration, but we do at
least have a framework in place to attempt it.

> Another is devstack in general. the tooling used by devs and that
> used by ops are so different as to isolate the devs from ops'
> pain. If they used more opsish tooling, then they would hit the
> same issues and would be more likely to find solutions that work
> for both parties.

Keep in mind that DevStack was developed to have a quick framework
anyone could use to locally deploy an all-in-one OpenStack from
source. It was not actually developed for CI automation, to the
extent that we developed a separate wrapper project to make DevStack
usable within our CI (the now somewhat archaically-named
devstack-gate project). It's certainly possible to replace that with
a more mainstream deployment tool, I think, so long as it maintains
the primary qualities we rely on: 1. rapid deployment, 2. can work
on a single system with fairly limited resources, 3. can deploy from
source and incorporate proposed patches, 4. pluggable/extensible so
that new services can be easily integrated even before they're
officially released.

> A third one is supporting multiple version upgrades in the gate. I
> rarely have a problem with a cloud whose database is one version
> back. I have seen lots of issues with databases that contain data
> dating back to when the cloud was instantiated and then upgraded
> multiple times.

I believe this will be necessary anyway if we want to officially
support so-called "fast forward" upgrades, since anything that's not
tested is assumed to be (and in fact usually is) broken.

> Another option is trying to unify/detangle the upgrade procedure.
> upgrading compute kit should be one or two commands if you can
> live with the defaults. Not weeks of poring through release notes,
> finding correct orders from pages of text and testing vigorously
> on test systems.

This also sounds like a defect in our current upgrade testing, if
we're somehow embedding upgrade automation in our testing without
providing the same tools to easily perform those steps in production
upgrades.

> How about some tool that does the following: dump the database
> somewhere temporary, iterate over all the upgrade job components,
> and see if the upgrade successfully avoids corrupting your database.
> That takes a while to do manually. Ideally it could even upload
> stacktraces back to a bug tracker for attention.

Without a clearer definition of "successfully not corrupt your
database" suitable for automated checking, I don't see how this one
is realistic. Do we have a database validation tool now? If we do,
is it deficient in some way? If we don't, what specifically should
it be checking? Seems like something we would also want to run at
the end of all our upgrade tests too.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-15 Thread Thierry Carrez
John Dickinson wrote:
> What I heard from ops in the room is that they want (to start) one
> release a year whose branch isn't deleted after a year. What if that's
> exactly what we did? I propose that OpenStack only do one release a year
> instead of two. We still keep N-2 stable releases around. We still do
> backports to all open stable branches. We still do all the things we're
> doing now, we just do it once a year instead of twice.

I started a thread around this specific suggestion on the -sigs list at:

http://lists.openstack.org/pipermail/openstack-sigs/2017-November/000149.html

Please continue the discussion there, to avoid the cross-posting.

If you haven't already, please subscribe at:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] TripleO CI end of sprint status

2017-11-15 Thread Arx Cruz
Hello,

On November 13 we reached the end of the sprint using our new team
structure [1], and here are the highlights:

Sprint Review:

The sprint epic was reproducing upstream CI jobs against an RDO Cloud
personal tenant [2], in order to help our Ruck and Rover reproduce CI
issues.

We set up several cards, each with one specific task towards our
objective, and I am glad to report that we were able to complete it; the
Ruck and Rover now have an easy tool to reproduce CI issues upstream.

There are some reviews pending merge, but we are considering the work
done. You can try it by following the documentation [3]! I'm also
happy to say that in this sprint we have only one card in tech debt!

One can see the results of the sprint via https://tinyurl.com/ybfds8p3

List of what was done by the Ruck and Rover:

   - https://bugs.launchpad.net/tripleo/+bug/1729586
   - https://bugs.launchpad.net/tripleo/+bug/1729328
   - https://bugs.launchpad.net/tripleo/+bug/1728135
   - https://bugs.launchpad.net/tripleo/+bug/1728070


We also have our new Ruck and Rover for this week:

   - Ruck
  - Attila Darazs - adaras|ruck
   - Rover
  - Ronelle Landy - rlandy|rover


If you have any questions and/or suggestions, please contact us.

Kind regards,
Arx Cruz

[1] https://review.openstack.org/#/c/509280/

[2] https://trello.com/c/aPuHTfo4

[3] https://etherpad.openstack.org/p/ruck-rover-reproduce-jobs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Thierry Carrez
As suggested by Rocky, I moved the discussion to the -sigs list by
posting my promised summary of the session at:

http://lists.openstack.org/pipermail/openstack-sigs/2017-November/000148.html

Please continue the discussion there, to avoid the cross-posting.

If you haven't already, please subscribe at:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] multiple agents with segment access

2017-11-15 Thread Legacy, Allain
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Tuesday, November 14, 2017 4:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] multiple agents with segment access
> 
> In general, you should be able to run both regular l2 agent (ovs) and sriov
> agent. If you have problems with it, we should probably assume it's a bug.
> Please report.


Ok, since this affects two distinct parts of the system, I created two
separate bug reports:

https://bugs.launchpad.net/neutron/+bug/1732445
https://bugs.launchpad.net/neutron/+bug/1732448
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-15 Thread Thierry Carrez
Rochelle Grober wrote:
> Folks,
> 
> This discussion and the people interested in it seem like a perfect 
> application of the SIG process.  By turning LTS into a SIG, everyone can 
> discuss the issues on the SIG mailing list and the discussion shouldn't end 
> up split.  If it turns into a project, great.  If a solution is found that 
> doesn't need a new project, great.  Even once  there is a decision on how to 
> move forward, there will still be implementation issues and enhancements, so 
> the SIG could very well be long-lived.  But the important aspect of this is:  
> keeping the discussion in a place where both devs and ops can follow the 
> whole thing and act on recommendations.

That's an excellent suggestion, Rocky.

Moving the discussion to a SIG around LTS / longer-support / post-EOL
support would also be a great way to form a team to work on that.

Yes, there is a one-time pain involved with subscribing to the -sigs ML,
but I'd say that it's a good idea anyway, and this minimal friction
might reduce the discussion to people that might actually help with
setting something up.

So join:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

While I'm not sure that's the best name for it, as suggested by Rocky
let's use [lts] as a prefix there.

I'll start a couple of threads.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] removing zuul v3 legacy jobs

2017-11-15 Thread Emilien Macchi
On Tue, Nov 14, 2017 at 11:44 PM, Andreas Jaeger  wrote:
> On 2017-11-15 01:18, Emilien Macchi wrote:
>> Hi,
>>
>> I'm working on migrating all TripleO CI jobs to be in-tree, I'm also
>> refactoring the layout and do some cleanup.
>
> Please don't move *all* in tree - only the legacy ones. There's a
> specific set of jobs we (infra, release team) like to keep in
> project-config, see
> https://docs.openstack.org/infra/manual/zuulv3.html#what-not-to-convert

Yes, sorry for confusion, I'm only working on legacy jobs.

>> It's a bit of work, that can be followed here:
>> https://review.openstack.org/#/q/topic:tripleo/migrate-to-zuulv3
>>
>> The only thing I ask from our team is to let me know about any change in
>> project-config & zuul layout, so we can update my work in the tripleo-ci
>> patch; otherwise it will be lost when we land the patches.
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [acceleration]Cyborg Team Weekly Meeting 2017.11.15

2017-11-15 Thread Zhipeng Huang
Hi Team,

As agreed last week, we will begin our weekly video conference to speed up
development. The Zoom meeting link can be found at
https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Next_meeting_:_Nov_8th.2C_2017

The meeting will start as usual at UTC 1500, and we will log the necessary
info in #openstack-cyborg.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Does glance_store swift driver support range requests ?

2017-11-15 Thread Duncan Thomas
On 15 November 2017 at 11:15, Matt Keenan  wrote:
> On 13/11/17 22:51, Nikhil Komawar wrote:
>
> I think it will be a rather hard problem to solve, as the swift store can be
> configured to store objects in different configurations. I guess the next
> question would be, what is your underlying problem -- multiple build
> requests or is this for retry for a single download?
>
> If the image is in the image cache and you are hitting the glance node with
> a cached image (which is quite possible for tiny deployments), this feature
> will be relatively easy.
>
>
> So the specific image stored in glance is a Unified Archive
> (https://docs.oracle.com/cd/E36784_01/html/E38524/gmrlo.html).
>
> During a UAR deployment the archive UUID is required and it is contained in
> the first 33 characters of the UAR image, thus a range request for this
> portion is required when initiating the deployment. Then the rest of the
> archive is extracted and deployed.

Given the range you want is always at the beginning, is a range
request any different from doing a full GET request and dropping the
connection once you've got the bytes you want?
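
For illustration, a minimal sketch of that full-GET approach with
python-requests; the endpoint and token are placeholders, and the image
ID is the one from this thread:

    import requests

    url = ("http://glance.example.com/image/v2/images/"
           "29b7aa5e-3ec2-49b5-ab6b-d6cc5099f46c/file")

    # Stream the body, read only the first 33 bytes (the UAR UUID), then
    # drop the connection rather than issuing a range request.
    resp = requests.get(url, headers={"X-Auth-Token": "<token>"}, stream=True)
    resp.raise_for_status()
    uar_uuid = resp.raw.read(33)
    resp.close()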

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Does glance_store swift driver support range requests ?

2017-11-15 Thread Matt Keenan

On 13/11/17 22:51, Nikhil Komawar wrote:
I think it will be a rather hard problem to solve, as the swift store can
be configured to store objects in different configurations. I guess the
next question would be, what is your underlying problem -- multiple
build requests or is this for retry for a single download?


If the image is in the image cache and you are hitting the glance node
with a cached image (which is quite possible for tiny deployments), this
feature will be relatively easy.




So the specific image stored in glance is a Unified Archive 
(https://docs.oracle.com/cd/E36784_01/html/E38524/gmrlo.html).


During a UAR deployment the archive UUID is required and it is contained 
in the first 33 characters of the UAR image, thus a range request for 
this portion is required when initiating the deployment. Then the rest 
of the archive is extracted and deployed.
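
As an aside, that 33-byte ranged read sketched with python-requests
(placeholder token; endpoint as in the curl example below) would look
like this; a store that supports ranges should answer 206 Partial
Content rather than the 400 shown later:

    import requests

    image_url = ("http://10.169.104.255/image/v2/images/"
                 "29b7aa5e-3ec2-49b5-ab6b-d6cc5099f46c/file")
    token = "..."  # placeholder auth token

    # Ask for bytes 0-32 inclusive, i.e. the 33-byte archive UUID.
    resp = requests.get(image_url,
                        headers={"X-Auth-Token": token,
                                 "Range": "bytes=0-32"})
    print(resp.status_code, len(resp.content))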


I just want to know whether this capability is possible with swift.

If I change the default_store in glance-api.conf to "file" (i.e.
default_store = file), restart devstack@g-api, and then upload an archive,
the curl request succeeds. So file-based range requests are working; just
the default swift setup is failing, and I thought some conf file setting
might be required to enable the capability. Pretty sure cinder works
(well, it used to work the last time I tried this, way back in Mitaka :) ).


Anyhow, if it's not supported then I can work around it, at least
initially, using the file store.


thanks

Matt




On Mon, Nov 13, 2017 at 6:47 AM, Matt Keenan wrote:


Hi,

Just configured devstack on Fedora 26, and by default
glance_store uses swift for image storage. When attempting to get
a specific range from a glance-stored image, it reports that range
requests are not supported, e.g.:

    $ curl -i -X GET -r 0-32 -H "X-Auth-Token: $auth_token" \
      http://10.169.104.255/image/v2/images/29b7aa5e-3ec2-49b5-ab6b-d6cc5099f46c/file
    HTTP/1.1 400 Bad Request
    Date: Mon, 13 Nov 2017 10:43:23 GMT
    Server: Apache/2.4.27 (Fedora) OpenSSL/1.1.0f-fips mod_wsgi/4.5.15 Python/2.7
    Content-Length: 205
    Content-Type: text/html; charset=UTF-8
    x-openstack-request-id: req-5ed2239f-165b-406f-969b-5cc4ab8c632d
    Connection: close

    400 Bad Request
    Getting images randomly from this store is not supported.
    Offset: 0, length: 33
Upon investigation, the glance-api log is emitting:

    Nov 13 10:45:31 devstack@g-api.service[22783]: ERROR glance.location
    [None req-ad6da3f0-ead1-486a-a873-d301f02b0888 demo demo] Glance tried
    all active locations to get data for image
    29b7aa5e-3ec2-49b5-ab6b-d6cc5099f46c but all have failed.:
    StoreRandomGetNotSupported: Getting images randomly from this store is
    not supported. Offset: 0, length: 33

The exception StoreRandomGetNotSupported is emitted by
glance_store from glance_store/capabilities.py:

    op_exec_map = {
        'get': (exceptions.StoreRandomGetNotSupported
                if kwargs.get('offset') or kwargs.get('chunk_size') else
                exceptions.StoreGetNotSupported),
        'add': exceptions.StoreAddDisabled,
        'delete': exceptions.StoreDeleteNotSupported}

Looking at _drivers/swift/store.py I think range requests are
supported, but I've been unsuccessful in configuring it.

Does the glance_store swift driver support range requests?

Can it be configured within a conf file, by somehow adding a
capability?

thanks

Matt

-- 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
HHGS : http://www.hh-gs.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Bogdan Dobrelya

Thank you Mathieu for the insights!


To add details to what happened:
* Upgrade was never made a #1 priority. It was a one man show for far
too long. (myself)


I suppose that confirms that upgrades are very nice to have in production
deployments, eventually, maybe... (please read below to continue)



* I also happen to manage and work on other priorities.
* Lot of work made to prepare for multiple versions support in our
deployment tools. (we use Puppet)
* Lot of work in the packaging area to speedup packaging. (we are
still using deb packages but with virtualenv to stay Puppet
compatible)
* We need to forward-port private patches which upstream won't accept
and/or are private business logic.


... yet long-term maintenance and landing fixes is the ops' *reality*
and pain #1, and upgrades are only pain #2. LTS cannot directly help
with #2, only indirectly: if the vendors' downstream teams could
cooperate better on #1, they would have more time and resources to
dedicate to #2, the upgrade stories for shipped products and distros.


Let's please not lower the real value of LTS branches and not
substitute #1 with #2. This topic is not about bureaucracy and policies;
it is about how the community could help vendors cooperate on
maintaining commodity things, with as little bureaucracy as possible,
to ease the operators' pains in the end.



* Our developer teams didn't have enough free cycles to work right
away on the upgrade. (this means delays)
* We need to test compatibility with 3rd party systems which takes
some time. (and make them compatible)


Perhaps this confirms why it is vital to run only 3rd party CI jobs for
LTS branches?



* We need to update systems over which we don't have full control.
This means serious delays when it comes to deployment.
* We need to test features/stability for some time in our dev environment.
* We need to test features/stability for some time in our
staging/pre-prod environment.
* We need to announce and inform our users at least 2 weeks in advance
before performing an upgrade.
* We choose to upgrade one service at a time (in all regions) to avoid
a huge big bang upgrade. (this means more maintenance windows to plan
and you can't stack them too much)
* We need to swiftly respond to bugs discovered by our users. This
means change of priorities and delay in other service upgrades.
* We will soon need to upgrade operating systems to support the latest
OpenStack versions. (this means we have to stop OpenStack upgrades
until all nodes are upgraded)


It seems that the answer to the question "Why are upgrades so painful
and why do they take so much time for ops?" is: "because upgrades are
not the priority; long-term support and maintenance are".


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] inclusion of openstack/networking-generic-switch project under OpenStack baremetal program

2017-11-15 Thread Shivanand Tendulker
Thank you. I too vote for 'Option 1'.

Thanks and Regards
Shiv



On Wed, Nov 15, 2017 at 1:03 AM, Villalovos, John L <
john.l.villalo...@intel.com> wrote:

> Thanks for sending this out.
>
>
>
> I would vote for Option 1.
>
>
>
> Thanks,
>
> John
>
>
>
> *From:* Pavlo Shchelokovskyy [mailto:pshchelokovs...@mirantis.com]
> *Sent:* Tuesday, November 14, 2017 8:16 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [ironic] inclusion of
> openstack/networking-generic-switch project under OpenStack baremetal
> program
>
>
>
> Hi all,
>
>
>
> as this topic was recently brought up in the ironic IRC meeting, I'd like
> to start a discussion on the subject.
>
>
>
> A quick recap - the networking-generic-switch project (n-g-s) was born out
> of the necessity to do two things:
>
>
>
> -  test the "network isolation for baremetal nodes" (a.k.a. multi-tenancy)
> feature of ironic on upstream gates in virtualized environment and
>
> - do the same on cheap/simple/dumb hardware switches that are not
> supported by other various openstack/networking-* projects.
>
>
>
> Back when it was created AFAIR neutron governance (neutron stadium) was
> under some changes, so in the end n-g-s ended up not belonging to any
> official program.
>
>
>
> Over time n-g-s grew to be an essential part of ironic gate testing
> (similar to virtualbmc). What's more, we have reports that it is already
> being used in production.
>
>
>
> Currently the core reviewers team of n-g-s consists of 4 people (2 of
> those are currently core reviewers in ironic too), all of them are working
> for the same company (Mirantis). This poses some risk as companies and
> people come and go, plus since some voting ironic gate jobs depend on n-g-s
> stability, a more diverse group of core reviewers from the baremetal program
> might be beneficial to be able to land patches in case of severe gate
> troubles.
>
>
>
> Currently I know of 3 proposed ways to change the current situation:
>
>
>
> 1) include n-g-s under ironic (OpenStack Baremetal program) governance,
> effectively adding the ironic-core team to the core team of n-g-s, similar
> to how ironic-inspector is currently governed (keeping an extended sub-core
> team). Reasoning for addition is the same as with virtualbmc/sushy
> projects, with the debatable difference that the actual scope of n-g-s is
> quite bigger and apparently includes production use-cases;
>
>
>
> 2) keep things as they are now, just add the ironic-stable-maint team to the
> n-g-s core reviewers to decrease low diversity risks;
>
>
>
> 3) merge the code from n-g-s into the networking-baremetal project, which is
> already under ironic governance.
>
>
>
> As a core in n-g-s myself I'm happy with either 1) or 2), but not really
> fond of 3) as it kind of stretches the networking-baremetal scope too much
> IMHO.
>
>
>
> Eager to hear your comments and proposals.
>
>
>
> Cheers,
>
> --
>
> Dr. Pavlo Shchelokovskyy
>
> Senior Software Engineer
>
> Mirantis Inc
>
> www.mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] moving the weekly meeting to our channel?

2017-11-15 Thread Dmitry Tantsur

Hi all,

Due to a technical issue we had to have our weekly meeting in our main channel 
this time. And we liked it :) I wonder if we should switch to it.


Pros:
* easier to find
* no channel switching

Cons:
* potential conflicts with other meetings (already a problem, given how many 
rooms we have)

* blocks the main channel for 1 hour for other business

Opinions?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-11-15 Thread Dmitry Tantsur

On 10/31/2017 12:11 AM, richard.pi...@dell.com wrote:

From: Dmitry Tantsur [mailto:dtant...@redhat.com]



Cons:
1. more work for both the requirements team and the vendor teams


Please elaborate on the additional work you envision for the vendor teams.


Any requirements updates will have to be submitted to the requirements repo.
It may take longer (or may not).





2. inability to use ironic release notes to explain driver requirements changes


Where could that information move to?


I think it's a generic question, to be honest. We don't inform operators of 
requirements changes via release notes. I don't have an easy answer.





We either will have one list:

[extras]
drivers =
sushy>=a.b
python-dracclient>=x.y
python-prolianutils>=v.w
...

or (and I like this more) we'll have a list per hardware type:

[extras]
redfish =
sushy>=a.b
idrac =
python-dracclient>=x.y
ilo =
...
...

WDYT?



Overall, a big +1. I prefer the second approach.

A couple of questions ...

1. If two (2) hardware types have the same requirement, would they both
enter it in their lists?


Yes


2. And would that be correctly handled?


Tony checked it (see his response to this thread) - yes.
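
For what it's worth, the consumer side of the second layout would stay
simple: assuming the (purely illustrative) extras names sketched above, a
packager or operator could run e.g. pip install "ironic[redfish,idrac]" to
pull in just sushy and python-dracclient at their minimum versions, while a
plain pip install ironic would keep today's behaviour of not pulling in any
driver dependencies.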


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-11-15 Thread Dmitry Tantsur
I don't think it affects containers directly. Depending on how you build
containers, you may have to do nothing (if you use packages, for example) or
update your pip install to do a different thing (or things).


On 10/30/2017 09:48 PM, arkady.kanev...@dell.com wrote:

The second seems better suited for per-driver requirement handling, per HW
type and per function.
Which option is easier to handle for per-dependency containers in the future?


Thanks,
Arkady

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com]
Sent: Monday, October 30, 2017 2:47 PM
To: openstack-dev 
Subject: Re: [openstack-dev] [ironic] [requirements] moving driver dependencies 
to global-requirements?

Excerpts from Dmitry Tantsur's message of 2017-10-30 17:51:49 +0100:

Hi all,

So far driver requirements [1] have been managed outside of global-requirements.
This was mostly necessary because some dependencies were not on PyPI.
This is no longer the case, and I'd like to consider managing them
just like any other dependencies. Pros:

1. making these dependencies (and their versions) more visible for packagers
2. following the same policies for regular and driver dependencies
3. ensuring co-installability of these dependencies with each other and with
   the remaining openstack
4. potentially using upper-constraints in 3rd party CI to test what packagers
   will probably package
5. we'll be able to finally create a tox job running unit tests with all
   these dependencies installed (FYI these often break in RDO CI)

Cons:
1. more work for both the requirements team and the vendor teams
2. inability to use ironic release notes to explain driver requirements changes
3. any objections from the requirements team?

If we make this change, we'll drop driver-requirements.txt, and will
use setuptools extras to list them in setup.cfg (this way is supported
by g-r) similar to what we do in ironicclient [2].

We either will have one list:

[extras]
drivers =
sushy>=a.b
python-dracclient>=x.y
python-prolianutils>=v.w
...

or (and I like this more) we'll have a list per hardware type:

[extras]
redfish =
sushy>=a.b
idrac =
python-dracclient>=x.y
ilo =
...
...

WDYT?


The second option is what I would expect.

Doug



[1] https://github.com/openstack/ironic/blob/master/driver-requirements.txt
[2] https://github.com/openstack/python-ironicclient/blob/master/setup.cfg#L115



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-11-15 Thread Dmitry Tantsur

On 10/30/2017 11:28 PM, Matthew Thode wrote:

On 17-10-30 20:48:37, arkady.kanev...@dell.com wrote:

The second seems better suited for per-driver requirement handling, per HW
type and per function.
Which option is easier to handle for per-dependency containers in the future?


Thanks,
Arkady

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com]
Sent: Monday, October 30, 2017 2:47 PM
To: openstack-dev 
Subject: Re: [openstack-dev] [ironic] [requirements] moving driver dependencies 
to global-requirements?

Excerpts from Dmitry Tantsur's message of 2017-10-30 17:51:49 +0100:

Hi all,

So far driver requirements [1] have been managed outside of global-requirements.
This was mostly necessary because some dependencies were not on PyPI.
This is no longer the case, and I'd like to consider managing them
just like any other dependencies. Pros:

1. making these dependencies (and their versions) more visible for packagers
2. following the same policies for regular and driver dependencies
3. ensuring co-installability of these dependencies with each other and with
   the remaining openstack
4. potentially using upper-constraints in 3rd party CI to test what packagers
   will probably package
5. we'll be able to finally create a tox job running unit tests with all
   these dependencies installed (FYI these often break in RDO CI)

Cons:
1. more work for both the requirements team and the vendor teams
2. inability to use ironic release notes to explain driver requirements changes
3. any objections from the requirements team?

If we make this change, we'll drop driver-requirements.txt, and will
use setuptools extras to list them in setup.cfg (this way is supported
by g-r) similar to what we do in ironicclient [2].

We either will have one list:

[extras]
drivers =
sushy>=a.b
python-dracclient>=x.y
python-prolianutils>=v.w
...

or (and I like this more) we'll have a list per hardware type:

[extras]
redfish =
sushy>=a.b
idrac =
python-dracclient>=x.y
ilo =
...
...

WDYT?


The second option is what I would expect.

Doug



[1] https://github.com/openstack/ironic/blob/master/driver-requirements.txt
[2] https://github.com/openstack/python-ironicclient/blob/master/setup.cfg#L115



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Meant to reply from this address, but below is my original response.

The first question I have is whether ALL the drivers are supposed to be
co-installable with each other.  If so, adding them to requirements sounds
fine, as long as each one follows
https://github.com/openstack/requirements/#for-new-requirements .


Yes, an ironic installation can have all drivers enabled at the same time on the 
same conductor.




As far as the format goes, I prefer option 2 (the breakout option). I'm not
sure if the bot will need an update, but I suspect not, as it tries to keep
ordering, iirc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev