Re: [openstack-dev] [stable][meta] Proposing Retiring the Stable Branch project team and Opening the Extended Maintenance SIG

2018-07-19 Thread Tony Breeds
On Fri, Jul 20, 2018 at 01:05:26PM +1000, Tony Breeds wrote:
> 
> Hello folks,
> So really the subject says it all.  I feel like at the time we
> created the Stable Branch project team that was the only option.  Since
> then we have created the SIG structure, and in my opinion that's a better
> fit.  We've also transitioned from 'Stable Branch Maintenance' to
> 'Extended Maintenance'.
> 
> Being a SIG will make it explicit that we *need* operator, user and
> developer contributions.

I meant to say I've created:
https://review.openstack.org/584205 and
https://review.openstack.org/584206

To make this transition.

Thoughts?

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][meta] Proposing Retiring the Stable Branch project team and Opening the Extended Maintenance SIG

2018-07-19 Thread Tony Breeds

Hello folks,
So really the subject says it all.  I feel like at the time we
created the Stable Branch project team that was the only option.  Since
then we have created the SIG structure, and in my opinion that's a better
fit.  We've also transitioned from 'Stable Branch Maintenance' to
'Extended Maintenance'.

Being a SIG will make it explicit that we *need* operator, user and
developer contributions.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [neutron-fwaas] Feature Freeze for logging feature

2018-07-19 Thread Furukawa, Yushiro
Hi Miguel,

I'd like to ask for a Feature Freeze Exception for the FWaaS v2 logging feature.
The following patches are under review now.  Could you please add these
patches to the FFE?

01. openstack/neutron-fwaas https://review.openstack.org/#/c/530694
02. openstack/neutron-fwaas https://review.openstack.org/#/c/553738
03. openstack/neutron-fwaas https://review.openstack.org/#/c/580976
04. openstack/neutron-fwaas https://review.openstack.org/#/c/574128
05. openstack/neutron-fwaas https://review.openstack.org/#/c/532792
06. openstack/neutron-fwaas https://review.openstack.org/#/c/576338
07. openstack/neutron-fwaas https://review.openstack.org/#/c/530715
08. openstack/neutron-fwaas https://review.openstack.org/#/c/578718
09. openstack/neutron   https://review.openstack.org/#/c/534227
10. openstack/neutron   https://review.openstack.org/#/c/529814
11. openstack/neutron   https://review.openstack.org/#/c/580575
12. openstack/neutron   https://review.openstack.org/#/c/582498

We're focused on reviewing/testing these patches now.  In addition, please take
a look at the 4 neutron patches :)  It would be very helpful for us.

Best regards,

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes

2018-07-19 Thread Ben Nemec



On 07/19/2018 03:37 PM, Emilien Macchi wrote:
Today I played a little bit with Standalone deployment [1] to deploy a 
single OpenStack cloud without the need of an undercloud and overcloud.

The use-case I am testing is the following:
"As an operator, I want to deploy a single node OpenStack, that I can 
extend with remote compute nodes on the edge when needed."


We still have a bunch of things to figure out so it works out of the
box, but so far I was able to build something that worked, and I found
it useful to share it early to gather some feedback:

https://gitlab.com/emacchi/tripleo-standalone-edge

Keep in mind this is a proof of concept, based on upstream documentation 
and re-using 100% of what is in TripleO today. The only thing I'm doing is
to change the environment and the roles for the remote compute node.
I plan to work on cleaning up the manual steps that I had to do to make it
work, like hardcoding some hiera parameters, and figuring out how to
override ServiceNetMap.


Anyway, feel free to test / ask questions / provide feedback.


What is the benefit of doing this over just using deployed server to 
install a remote server from the central management system?  You need to 
have connectivity back to the central location anyway.  Won't this 
become unwieldy with a large number of edge nodes?  I thought we told 
people not to use Packstack for multi-node deployments for exactly that 
reason.


I guess my concern is that eliminating the undercloud makes sense for 
single-node PoC's and development work, but for what sounds like a 
production workload I feel like you're cutting off your nose to spite 
your face.  In the interest of saving one VM's worth of resources, now 
all of your day 2 operations have no built-in orchestration.  Every time
you want to change a configuration it's "copy new script to system, ssh
to system, run script, repeat for all systems".  So maybe this is a
backdoor way to make Ansible our API? ;-)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes

2018-07-19 Thread Emilien Macchi
Today I played a little bit with Standalone deployment [1] to deploy a
single OpenStack cloud without the need of an undercloud and overcloud.
The use-case I am testing is the following:
"As an operator, I want to deploy a single node OpenStack, that I can
extend with remote compute nodes on the edge when needed."

We still have a bunch of things to figure out so it works out of the box,
but so far I was able to build something that worked, and I found it useful to
share it early to gather some feedback:
  https://gitlab.com/emacchi/tripleo-standalone-edge

Keep in mind this is a proof of concept, based on upstream documentation
and re-using 100% of what is in TripleO today. The only thing I'm doing is to
change the environment and the roles for the remote compute node.
I plan to work on cleaning up the manual steps that I had to do to make it
work, like hardcoding some hiera parameters, and figuring out how to
override ServiceNetMap.

Anyway, feel free to test / ask questions / provide feedback.

Thanks,
[1]
https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed

2018-07-19 Thread Artom Lifshitz
I've proposed [1] to add extra logging on the Nova side. Let's see if
that helps us catch the root cause of this.

[1] https://review.openstack.org/584032

On Thu, Jul 19, 2018 at 12:50 PM, Artom Lifshitz  wrote:
> Because we're waiting for the volume to become available before we
> continue with the test [1], its tag still being present means Nova's
> not cleaning up the device tags on volume detach. This is most likely
> a bug. I'll look into it.
>
> [1] 
> https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L378
>
> On Thu, Jul 19, 2018 at 7:09 AM, Slawomir Kaplonski  
> wrote:
>> Hi,
>>
>> For some time we have seen that the test
>> tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment
>> fails intermittently.
>> A bug about this is currently reported against Tempest [1], but after a small patch
>> [2] was merged I was today able to check what causes this issue.
>>
>> The failing test is [3], and it looks like everything goes fine
>> up to the last line of the test. So the volume and port are created, attached,
>> tags are set properly, and both devices are also detached properly, but at the
>> end the test fails because
>> http://169.254.169.254/openstack/latest/meta_data.json still has some device
>> inside.
>> And it now looks from [4] that it is the volume which isn't removed from this
>> meta_data.json.
>> So I think it would be good if people from the Nova and Cinder teams could
>> look at it and try to figure out what is going on there and how it can be
>> fixed.
>>
>> Thanks in advance for help.
>>
>> [1] https://bugs.launchpad.net/tempest/+bug/1775947
>> [2] https://review.openstack.org/#/c/578765/
>> [3] 
>> https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L330
>> [4] 
>> http://logs.openstack.org/69/567369/15/check/tempest-full/528bc75/job-output.txt.gz#_2018-07-19_10_06_09_273919
>>
>> —
>> Slawek Kaplonski
>> Senior software engineer
>> Red Hat
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> --
> Artom Lifshitz
> Software Engineer, OpenStack Compute DFG



-- 
--
Artom Lifshitz
Software Engineer, OpenStack Compute DFG

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disk space requirement - any way to lower it a little?

2018-07-19 Thread Ben Nemec



On 07/19/2018 11:55 AM, Paul Belanger wrote:

On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote:

Hello,

While trying to get a new validation¹ in the undercloud preflight
checks, I hit a (not so) unexpected issue with the CI:
it doesn't provide flavors with the minimal requirements, at least
regarding the disk space.

A quick-fix is to disable the validations in the CI - Wes has already
pushed a patch for that in the upstream CI:
https://review.openstack.org/#/c/583275/
We can consider this as a quick'n'temporary fix².

The issue is on the RDO CI: apparently, they provide instances with
"only" 55G of free space, making the checks fail:
https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46

So, the question is: would it be possible to lower the requirement to,
let's say, 50G? Where does that 60G³ come from?

Thanks for your help/feedback.

Cheers,

C.



¹ https://review.openstack.org/#/c/582917/

² as you might know, there's a BP for a unified validation framework,
and it will allow getting injected configuration in the CI env in order to
lower the requirements if necessary:
https://blueprints.launchpad.net/tripleo/+spec/validation-framework

³
http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements


Keep in mind, upstream we don't really have control over the partitioning of nodes; in
some cases it is a single partition, in others multiple. I'd suggest looking more at:

   https://docs.openstack.org/infra/manual/testing.html


And this isn't just a testing thing.  As I mentioned in the previous 
thread, real-world users often use separate partitions for some data 
(logs, for example).  Looking at the existing validation [1] I don't know
that it would handle multiple partitions sufficiently well to turn it on 
by default.  It's only checking /var and /, and I've seen much more 
complex partition layouts than that.


1: 
https://github.com/openstack/tripleo-validations/blob/master/validations/tasks/disk_space.yaml
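
For illustration only (this is not the tripleo-validations task linked above), here is a minimal Python sketch of the kind of check that could cover an arbitrary set of mount points instead of just / and /var; the paths and thresholds below are made up:

    # Illustrative sketch only (not the tripleo-validations task above):
    # check free space on several mount points instead of just / and /var.
    import shutil

    # Hypothetical thresholds in GiB; the real numbers live in the TripleO docs.
    REQUIRED_GIB = {'/': 25, '/var': 25, '/var/log': 5}

    def check_disk_space(requirements=REQUIRED_GIB):
        failures = []
        for path, minimum in requirements.items():
            try:
                free_gib = shutil.disk_usage(path).free / 2 ** 30
            except FileNotFoundError:
                # Path not present in this partition layout; skip it.
                continue
            if free_gib < minimum:
                failures.append('%s: %.1f GiB free, %d GiB required'
                                % (path, free_gib, minimum))
        return failures

    if __name__ == '__main__':
        for line in check_disk_space():
            print(line)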




As for downstream RDO, the same is going to apply once we start adding more
cloud providers. I would look to see if you actually need that much space for
deployments, and maybe try to mock the testing of that logic.


It's also worth noting that what we can get away with in CI is not
necessarily appropriate for production.  Being able to run a
short-lived, single-use deployment in 50 GB doesn't mean that you could
realistically run that on a long-lived production cloud.  Log and
database storage tends to increase over time.  There should be a ceiling
to how large that all grows if rotation and db cleanup are configured
correctly, but that ceiling is much higher than anything CI is ever
going to hit.


Anecdotally, I bumped my development flavor disk space to >50 GB because 
I ran out of space when I built containers locally.  I don't know if 
that's something we expect users to be doing, but it is definitely 
possible to exhaust 50 GB in a short period of time.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disk space requirement - any way to lower it a little?

2018-07-19 Thread Paul Belanger
On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote:
> Hello,
> 
> While trying to get a new validation¹ in the undercloud preflight
> checks, I hit a (not so) unexpected issue with the CI:
> it doesn't provide flavors with the minimal requirements, at least
> regarding the disk space.
> 
> A quick-fix is to disable the validations in the CI - Wes has already
> pushed a patch for that in the upstream CI:
> https://review.openstack.org/#/c/583275/
> We can consider this as a quick'n'temporary fix².
> 
> The issue is on the RDO CI: apparently, they provide instances with
> "only" 55G of free space, making the checks fail:
> https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46
> 
> So, the question is: would it be possible to lower the requirement to,
> let's say, 50G? Where does that 60G³ come from?
> 
> Thanks for your help/feedback.
> 
> Cheers,
> 
> C.
> 
> 
> 
> ¹ https://review.openstack.org/#/c/582917/
> 
> ² as you might know, there's a BP for a unified validation framework,
> and it will allow getting injected configuration in the CI env in order to
> lower the requirements if necessary:
> https://blueprints.launchpad.net/tripleo/+spec/validation-framework
> 
> ³
> http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements
> 
Keep in mind, upstream we don't really have control over the partitioning of nodes; in
some cases it is a single partition, in others multiple. I'd suggest looking more at:

  https://docs.openstack.org/infra/manual/testing.html

As for downstream RDO, the same is going to apply once we start adding more
cloud providers. I would look to see if you actually need that much space for
deployments, and maybe try to mock the testing of that logic.

- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] Make amphora-agent support http rest api

2018-07-19 Thread Michael Johnson
I saw your storyboard for this.  Thank you for creating a story.

Since the controllers manage the certificates for the amphora (both
generation and rotation), the overhead to an operator should be
extremely low and limited to the initial installation configuration.
Since we have automated the certificate handling, we felt it was better
to only allow TLS connections for the management traffic to the
amphora.
Please feel free to discuss on the Storyboard story.
Michael

On Wed, Jul 18, 2018 at 7:52 PM Jeff Yang  wrote:
>
> In some private cloud environments, the possibility of a VM being attacked is
> very small, and all personnel are trusted. In that case, the administrator
> hopes to reduce the complexity of Octavia deployment, operation, and
> maintenance. We could let the amphora-agent provide an HTTP API so that the
> administrator can ignore the certificate issue.
> https://storyboard.openstack.org/#!/story/2003027
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed

2018-07-19 Thread Artom Lifshitz
Because we're waiting for the volume to become available before we
continue with the test [1], its tag still being present means Nova's
not cleaning up the device tags on volume detach. This is most likely
a bug. I'll look into it.

[1] 
https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L378

On Thu, Jul 19, 2018 at 7:09 AM, Slawomir Kaplonski  wrote:
> Hi,
>
> For some time we have seen that the test
> tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment
> fails intermittently.
> A bug about this is currently reported against Tempest [1], but after a small patch
> [2] was merged I was today able to check what causes this issue.
>
> The failing test is [3], and it looks like everything goes fine
> up to the last line of the test. So the volume and port are created, attached,
> tags are set properly, and both devices are also detached properly, but at the end
> the test fails because http://169.254.169.254/openstack/latest/meta_data.json
> still has some device inside.
> And it now looks from [4] that it is the volume which isn't removed from this
> meta_data.json.
> So I think it would be good if people from the Nova and Cinder teams could
> look at it and try to figure out what is going on there and how it can be
> fixed.
>
> Thanks in advance for help.
>
> [1] https://bugs.launchpad.net/tempest/+bug/1775947
> [2] https://review.openstack.org/#/c/578765/
> [3] 
> https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L330
> [4] 
> http://logs.openstack.org/69/567369/15/check/tempest-full/528bc75/job-output.txt.gz#_2018-07-19_10_06_09_273919
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
--
Artom Lifshitz
Software Engineer, OpenStack Compute DFG

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-07-19 Thread Chris Dent


Greetings OpenStack community,

Today's meeting was again very brief as this time elmiko and dtantsur were out. 
There were no major items of discussion, but we made plans to check on the 
status of the GraphQL prototyping (Hi! How's it going?).

In addition to the light discussion there was also one guideline that was frozen for
wider review and a new one introduced (see below). Both are related to the handling of
the "code" attribute in error responses.

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* Expand error code document to expect clarity
  https://review.openstack.org/#/c/577118/

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Add links to errors-example.json
  https://review.openstack.org/#/c/578369/

# Guidelines Currently Under Review [3]

* Expand schema for error.codes to reflect reality
  https://review.openstack.org/#/c/580703/

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the OpenStack 
developer mailing list[1] with the tag "[api]" in the subject. In your email, 
you should include any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-19 Thread Fox, Kevin M
The primary issue, I think, is that the Nova folks think there is too much in
Nova already.

So there are probably more features that could be added to make it more in line
with vCenter, and more features to make it more functionally like AWS. And at
this point, neither is probably easy to get in.

Until Nova changes this stance, they are kind of forcing an either/or (or
neither), as Nova's position in the OpenStack community currently drives
decisions in most of the other OpenStack projects.

I'm not laying blame on anyone. They have a hard job to do and not enough
people to do it. That forces less than ideal solutions.

Not really sure how to resolve this.

Deciding "we will support both" is a good first step, but there are other big 
problems like this that need solving before it can be more then words on a page.

Thanks,
Kevin


From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, July 19, 2018 5:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Zane Bitter wrote:
> [...]
>> And I'm not convinced that's an either/or choice...
>
> I said specifically that it's an either/or/and choice.

I was speaking more about the "we need to pick between two approaches,
let's document them" that the technical vision exercise started as.
Basically I mean I'm missing clear examples of where pursuing AWS would
mean breaking vCenter.

> So it's not a binary choice but it's very much a ternary choice IMHO.
> The middle ground, where each project - or even each individual
> contributor within a project - picks an option independently and
> proceeds on the implicit assumption that everyone else chose the same
> option (although - spoiler alert - they didn't)... that's not a good
> place to be.

Right, so I think I'm leaning for an "and" choice.

Basically OpenStack wants to be an AWS, but ended up being used a lot as
a vCenter (for multiple reasons, including the limited success of
US-based public cloud offerings in 2011-2016). IMHO we should continue
to target an AWS, while doing our best to not break those who use it as
a vCenter. Would explicitly acknowledging that (we still want to do an
AWS, but we need to care about our vCenter users) get us the alignment
you seek?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all] Front page template for project team documentation

2018-07-19 Thread Petr Kovar
Hi all,

A spin-off discussion in https://review.openstack.org/#/c/579177/ resulted
in an idea to update our RST conventions for level 2 and level 3 headings so that
our guidelines follow recommendations from
http://docutils.sourceforge.net/docs/user/rst/quickstart.html#sections.

The updated conventions also better reflect what most projects have been
using already, regardless of what was previously in our conventions.

To sum up, for level 2 headings, use dashes:

Heading 2
---------

For level 3 headings, use tildes:

Heading 3
~~~~~~~~~

For details on the change, see:

https://review.openstack.org/#/c/583239/1/doc/doc-contrib-guide/source/rst-conv/titles.rst

Thanks,
pk


On Fri, 29 Jun 2018 16:45:53 +0200
Petr Kovar  wrote:

> Hi all,
> 
> Feedback from the Queens PTG included requests for the Documentation
> Project to provide guidance and recommendations on how to structure common
> content typically found on the front page for project team docs, located at
> doc/source/index.rst in the project team repository.
> 
> I've created a new docs spec, proposing a template to be used by project
> teams, and would like to ask the OpenStack community and, specifically, the
> project teams, to take a look, submit feedback on the spec, share
> comments, ideas, or concerns:
> 
>   https://review.openstack.org/#/c/579177/
> 
> The main goal of providing and using this template is to make it easier for
> users to find, navigate, and consume project team documentation, and for
> contributors to set up and maintain the project team docs.
> 
> The template would also serve as the basis for one of the future governance
> docs tags, which is a long-term plan for the docs team.
> 
> Thank you,
> pk
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-5, July 23-27

2018-07-19 Thread Matthew Thode
On 18-07-19 10:42:11, Sean McGinnis wrote:
> 
> Development Focus
> -
> 
> Teams should be focused on implementing planned work. Work should be wrapping
> up on client libraries to meet the client lib deadline Thursday, the 26th.
> 
> General Information
> ---
> 
> The final client library release is on Thursday the 26th. Releases will only 
> be
> allowed for critical fixes in libraries after this point as we stabilize
> requirements and give time for any unforeseen impacts from lib changes to
> trickle through.
> 
> If release critical library or client library releases are needed for Rocky
> past the freeze dates, you must request a Feature Freeze Exception (FFE) from
> the requirements team before we can do a new release to avoid having something
> released in Rocky that is not actually usable. This is done by posting to the
> openstack-dev mailing list with a subject line similar to:
> 
> [$PROJECT][requirements] FFE requested for $PROJECT_LIB
> 
> Include justification/reasoning for why a FFE is needed for this lib. If/when
> the requirements team OKs the post-freeze update, we can then process a new
> release. Including a link to the FFE in the release request is not required,
> but would be helpful in making sure we are clear to do a new release.
> 
> When requesting these library releases, you should also include the stable
> branching request with the review (as an example, see the "branches" section
> here:
> 
> http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2)
> 
> Cycle-trailing projects are reminded that all reviews to the requirements
> project will have a procedural -2 until stable/rocky is branched, unless they
> receive an FFE.
> 
> Upcoming Deadlines & Dates
> --
> 
> Stein PTL nominations: July 24-31 (pending finalization)
> Final client library release deadline: July 26
> Rocky-3 Milestone: July 26
> RC1 deadline: August 9
> 

Projects should also make sure their requirements files are up to date,
as OpenStack now uses per-project requirements.  Further, projects should
make sure they have a release containing the update.  This means that
updates to the requirements files fall to the individual projects and
not the requirements bot.  It is recommended that you have a
lower-constraints.txt file and test with it to know when you need to
update.  See the following example for how to run a basic tox LC job:
https://github.com/openstack/oslo.db/blob/master/tox.ini#L76-L81
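
As a rough, unofficial illustration (the supported mechanism is the tox lower-constraints job linked above), a small script along these lines can flag when the lower bounds in requirements.txt drift away from the pins in lower-constraints.txt:

    # Unofficial sketch: compare the lower bounds declared in requirements.txt
    # with the pins in lower-constraints.txt (both assumed to be in the current
    # directory).  The supported check is the tox job linked above, which
    # installs with lower-constraints and runs the unit tests; parsing here is
    # deliberately simplistic (no extras or environment-marker handling).
    import re

    def parse(path, op):
        """Return {package: version} for lines containing 'pkg<op>1.2.3'."""
        versions = {}
        with open(path) as handle:
            for raw in handle:
                line = raw.split('#')[0].strip()
                if not line or op not in line:
                    continue
                name = re.split(r'[<>=!~;\[\s]', line, maxsplit=1)[0].lower()
                version = re.split(r'[,;\s]', line.split(op, 1)[1], maxsplit=1)[0]
                if name:
                    versions[name] = version
        return versions

    lower_bounds = parse('requirements.txt', '>=')
    pins = parse('lower-constraints.txt', '==')

    for package, minimum in sorted(lower_bounds.items()):
        if package in pins and pins[package] != minimum:
            print('%s: requirements say >=%s, lower-constraints pin ==%s'
                  % (package, minimum, pins[package]))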

-- 
Matthew Thode (prometheanfire)


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-5, July 23-27

2018-07-19 Thread Sean McGinnis

Development Focus
-

Teams should be focused on implementing planned work. Work should be wrapping
up on client libraries to meet the client lib deadline Thursday, the 26th.

General Information
---

The final client library release is on Thursday the 26th. Releases will only be
allowed for critical fixes in libraries after this point as we stabilize
requirements and give time for any unforeseen impacts from lib changes to
trickle through.

If release critical library or client library releases are needed for Rocky
past the freeze dates, you must request a Feature Freeze Exception (FFE) from
the requirements team before we can do a new release to avoid having something
released in Rocky that is not actually usable. This is done by posting to the
openstack-dev mailing list with a subject line similar to:

[$PROJECT][requirements] FFE requested for $PROJECT_LIB

Include justification/reasoning for why a FFE is needed for this lib. If/when
the requirements team OKs the post-freeze update, we can then process a new
release. Including a link to the FFE in the release request is not required,
but would be helpful in making sure we are clear to do a new release.

When requesting these library releases, you should also include the stable
branching request with the review (as an example, see the "branches" section
here:

http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2)

Cycle-trailing projects are reminded that all reviews to the requirements
project will have a procedural -2 until stable/rocky is branched, unless they
receive an FFE.

Upcoming Deadlines & Dates
--

Stein PTL nominations: July 24-31 (pending finalization)
Final client library release deadline: July 26
Rocky-3 Milestone: July 26
RC1 deadline: August 9

--
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disk space requirement - any way to lower it a little?

2018-07-19 Thread Cédric Jeanneret
Hello,

While trying to get a new validation¹ in the undercloud preflight
checks, I hit a (not so) unexpected issue with the CI:
it doesn't provide flavors with the minimal requirements, at least
regarding the disk space.

A quick-fix is to disable the validations in the CI - Wes has already
pushed a patch for that in the upstream CI:
https://review.openstack.org/#/c/583275/
We can consider this as a quick'n'temporary fix².

The issue is on the RDO CI: apparently, they provide instances with
"only" 55G of free space, making the checks fail:
https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46

So, the question is: would it be possible to lower the requirement to,
let's say, 50G? Where does that 60G³ come from?

Thanks for your help/feedback.

Cheers,

C.



¹ https://review.openstack.org/#/c/582917/

² as you might know, there's a BP for a unified validation framework,
and it will allow getting injected configuration in the CI env in order to
lower the requirements if necessary:
https://blueprints.launchpad.net/tripleo/+spec/validation-framework

³
http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements

-- 
Cédric Jeanneret
Software Engineer
DFG:DF





signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation

2018-07-19 Thread Jimmy McArthur

Hi all -

Follow up on the Edge paper specifically: 
https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192


This is now available. As I mentioned on IRC this morning, it should be 
VERY close to the PDF.  Probably just needs a quick review.


Let me know if I can assist with anything.

Thank you to i18n team for all of your help!!!

Cheers,
Jimmy

Jimmy McArthur wrote:

Ian raises some great points :) I'll try to address below...

Ian Y. Choi wrote:

Hello,

When I saw the overall translation source strings of the container
whitepaper, I inferred that the new edge computing whitepaper
source strings would include HTML markup tags.

One of the things I discussed with Ian and Frank in Vancouver is the 
expense of recreating PDFs with new translations.  It's prohibitively 
expensive for the Foundation as it requires design resources which we 
just don't have.  As a result, we created the Containers whitepaper in 
HTML, so that it could be easily updated w/o working with outside 
design contractors.  I indicated that we would also be moving the Edge 
paper to HTML so that we could prevent that additional design resource 
cost.

On the other hand, the source strings of the edge computing whitepaper
which the I18n team previously translated do not include HTML markup
tags, since the source strings are just plain text.

The version that Akihiro put together was based on the Edge PDF, which 
we unfortunately didn't have the resources to implement in the same 
format.


I really appreciate Akihiro's work on RST-based support for publishing
translated edge computing whitepapers, since
translators do not have to re-translate all the strings.

I would like to second this. It took a lot of initiative to work on 
the RST-based translation.  At the moment, it's just not usable for 
the reasons mentioned above.

On the other hand, it seems that the I18n team would need to investigate
translating similar strings in the HTML-based edge computing whitepaper
source, which would discourage translators.

Can you expand on this? I'm not entirely clear on why the HTML-based
translation is more difficult.


That's my point of view on translating edge computing whitepaper.

For translating the container whitepaper, I want to further ask the
following, since *I18n-based tools* would mean
that translators can test and publish
translated whitepapers locally:


- How to build the translated container whitepaper using the original
SilverStripe-based repository?
  https://docs.openstack.org/i18n/latest/tools.html describes well
how to build translated artifacts for RST-based OpenStack repositories,
  but I could not find how to build the translated container
whitepaper with the translated resources on Zanata.
This is a little tricky.  It's possible to set up a local version of 
the OpenStack website 
(https://github.com/OpenStackweb/openstack-org/blob/master/installation.md).  
However, we have to manually ingest the po files as they are completed 
and then push them out to production, so that wouldn't do much to help 
with your local build.  I'm open to suggestions on how we can make 
this process easier for the i18n team.


Thank you,
Jimmy



With many thanks,

/Ian

Jimmy McArthur wrote on 7/17/2018 11:01 PM:

Frank,

I'm sorry to hear about the displeasure around the Edge paper.  As
mentioned in a prior thread, the RST format that Akihiro worked on did
not work with the Zanata process that we have been using with our
CMS.  Additionally, the existing Edge page is a PDF, so we had to
build a new template to work with the new HTML whitepaper layout we
created for the Containers paper. I outlined this in the thread
"[OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing
Whitepaper Translation" on 6/25/18 and mentioned we would be ready
with the template around 7/13.


We completed the work on the new whitepaper template and then put 
out the pot files on Zanata so we can get the po language files 
back. If this process is too cumbersome for the translation team, 
I'm open to discussion, but right now our entire translation process 
is based on the official OpenStack Docs translation process outlined 
by the i18n team: 
https://docs.openstack.org/i18n/latest/en_GB/tools.html


Again, I realize Akihiro put in some work on his own proposing the 
new translation type. If the i18n team is moving to this format 
instead, we can work on redoing our process.


Please let me know if I can clarify further.

Thanks,
Jimmy

Frank Kloeker wrote:

Hi Jimmy,

permission was added for you and Sebastian. The Container 
Whitepaper is on the Zanata frontpage now. But we removed Edge 
Computing whitepaper last week because there is a kind of 
displeasure in the team since the results of translation are still 
not published beside Chinese version. It would be nice if we have a 
commitment from the Foundation that results are published in a 
specific timeframe. This includes your 

[openstack-dev] [neutron][upgrade] Skip Neutron upgrade IRC meeting on July 19th

2018-07-19 Thread Lujin Luo
Hi everyone,

Because two of our core members cannot join the weekly meeting, we
think it would be better to skip this meeting and resume next week.

If you have any questions, please reply to this thread.

Best regards,
Lujin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker)

2018-07-19 Thread Emilien Macchi
On Thu, Jul 19, 2018 at 6:02 AM Raoul Scarazzini  wrote:
[...]

> Small question aside related to all-in-one: we're talking about use
> cases in which we might want to go from 1 to 3 controllers, but how can
> this become a thing? I always thought of all-in-one as a developer/CI
> "tool", so why should we care about giving the possibility to expand?
>

We have a few other use-cases, but 2 of them are:

- PoC deployed in the field, starting with one controller and scaling up to 3
controllers (with compute services deployed as well).
- Edge Computing, where we could think of a controller being scaled out as
well, or a remote compute node being added, with VMs in HA with Pacemaker.

But I agree that the first target for now is to fulfil the developer use
case and the PoC use case (on one node).

This question is related also to the main topic of this thread: it was
> proposed to replace Keepalived with anything (instead of Pacemaker), and
> one of the outcomes was that this approach would not guarantee some of
> the goals, like undercloud HA and keeping 1:1 structure between
> undercloud and overcloud. But what else are we supposed to control with
> Pacemaker on the undercloud apart from the IPs?
>

Nothing, AFAIK. The VIPs were the only things we wanted to manage on a
single-node undercloud.
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-19 Thread Thierry Carrez

Zane Bitter wrote:
> [...]
>> And I'm not convinced that's an either/or choice...
>
> I said specifically that it's an either/or/and choice.

I was speaking more about the "we need to pick between two approaches,
let's document them" that the technical vision exercise started as.
Basically I mean I'm missing clear examples of where pursuing AWS would
mean breaking vCenter.

> So it's not a binary choice but it's very much a ternary choice IMHO.
> The middle ground, where each project - or even each individual
> contributor within a project - picks an option independently and
> proceeds on the implicit assumption that everyone else chose the same
> option (although - spoiler alert - they didn't)... that's not a good
> place to be.


Right, so I think I'm leaning for an "and" choice.

Basically OpenStack wants to be an AWS, but ended up being used a lot as 
a vCenter (for multiple reasons, including the limited success of 
US-based public cloud offerings in 2011-2016). IMHO we should continue 
to target an AWS, while doing our best to not break those who use it as 
a vCenter. Would explicitly acknowledging that (we still want to do an 
AWS, but we need to care about our vCenter users) get us the alignment 
you seek?


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed

2018-07-19 Thread Slawomir Kaplonski
Hi,

For some time we have seen that the test
tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment
fails intermittently.
A bug about this is currently reported against Tempest [1], but after a small patch [2]
was merged I was today able to check what causes this issue.

The failing test is [3], and it looks like everything goes fine with
it up to the last line of the test. So the volume and port are created, attached, tags are
set properly, and both devices are also detached properly, but at the end the test
fails because http://169.254.169.254/openstack/latest/meta_data.json still has
some device inside.
And it now looks from [4] that it is the volume which isn't removed from this
meta_data.json.
So I think it would be good if people from the Nova and Cinder teams could
look at it and try to figure out what is going on there and how it can be fixed.

Thanks in advance for help.

[1] https://bugs.launchpad.net/tempest/+bug/1775947
[2] https://review.openstack.org/#/c/578765/
[3] 
https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L330
[4] 
http://logs.openstack.org/69/567369/15/check/tempest-full/528bc75/job-output.txt.gz#_2018-07-19_10_06_09_273919
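
For anyone trying to reproduce this outside the gate, here is a rough sketch (not the actual Tempest code) of the final check the test performs from inside the guest; it assumes the usual device-tagging layout with a "devices" list in meta_data.json:

    # Rough sketch only (not the Tempest test itself).  Run from inside the
    # guest: after the tagged volume and port are detached, the "devices"
    # list in meta_data.json should become empty.  The failure in [4] is that
    # the detached volume never disappears, so a loop like this would time out.
    import json
    import time
    import urllib.request

    METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'

    def tagged_devices():
        with urllib.request.urlopen(METADATA_URL) as response:
            return json.loads(response.read().decode()).get('devices', [])

    def wait_for_devices_gone(timeout=120, interval=5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            if not tagged_devices():
                return True
            time.sleep(interval)
        return False

    if __name__ == '__main__':
        print('devices cleaned up:', wait_for_devices_gone())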

— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker)

2018-07-19 Thread Raoul Scarazzini
On 18/07/2018 22:36, Michele Baldessari wrote:
[...]
> Besides E), I think a reasonable use case is to be able to have a small
> all-in-one installation that mimics a more "real-world" overcloud.
> I think there is a bit of value in that, as long as the code to make it
> happen is not horribly huge and complex (and I was under the impression
> from Emilien's patchset that this is not the case)
[...]

Small question aside related to all-in-one: we're talking about use
cases in which we might want to go from 1 to 3 controllers, but how can
this become a thing? I always thought of all-in-one as a developer/CI
"tool", so why should we care about giving the possibility to expand?

This question is related also to the main topic of this thread: it was
proposed to replace Keepalived with anything (instead of Pacemaker), and
one of the outcomes was that this approach would not guarantee some of
the goals, like undercloud HA and keeping a 1:1 structure between
undercloud and overcloud. But what else are we supposed to control with
Pacemaker on the undercloud apart from the IPs?

-- 
Raoul Scarazzini
ra...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-07-19 Thread Thomas Goirand
On 07/18/2018 06:42 AM, Ian Wienand wrote:
> While I'm reserved about the
> idea of full platform functional tests, essentially having a
> wide variety of up-to-date tox environments using some of the methods
> discussed there is, I think, a very practical way to be cow-catching
> some of the bigger issues with Python version updates.  If we are to
> expend resources, my 2c worth is that pushing in that direction gives
> the best return on effort.
> 
> -i

Hi Ian,

Thanks a lot for your reply, that's very useful. I very much agree that
testing the latest Qemu / libvirt could be a problem if it fails too
often, and the same with other components; however, these need to be
addressed anyway at some point. If we can't do it this way, then we have
to define a mechanism to find out. Maybe a dsvm periodic task unrelated
to a specific project would do?

Anyway, my post was *not* about functional testing, so let's not talk
about this. What I would love to get addressed is catching problems with
newer language updates. Catching them early avoids the downstream
distributions doing the heavy work, which is not sustainable considering
the number of people involved (about 1 or 2 per distro).

For example, "async" becoming a keyword in Python 3.7 is something I
would have very much like to be caught by some kind of upstream CI
running unit tests, rather than Debian and Ubuntu package maintainers
fixing the problems as we get FTBFS (Fails To Build From Source) bugs
filed in the BTS, and when we find out by ourselves that some package
cannot be installed or built. This happened with oslo.messaging,
taskflow, etc. This is just the new Python 3.7 things, though there was
numerous problems with Python 3.6. Currently, it looks like Heat also
has unit test failures in Sid (not sure yet what the issue is).
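
To make the "async" example concrete, here is a tiny hypothetical illustration (not the real oslo.messaging code or API) of the kind of breakage a Python 3.7 unit-test job would have caught:

    # Hypothetical illustration only, not the real oslo.messaging code or API.
    # Up to Python 3.6 the commented-out definition below was legal; on 3.7
    # 'async' is a reserved keyword, so it is a SyntaxError before any test
    # even runs, which is exactly what a 3.7 unit-test job would flag early.

    # def cast(context, method, async=True):    # SyntaxError on Python 3.7
    #     ...

    def cast(context, method, run_async=True):  # renamed parameter, 3.7-safe
        return context, method, run_async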

Waiting for Bionic to be released to start gating unit tests on Python
3.6 is IMO way too late, as for example Debian Sid was running Python
3.6 about a year before that, and that's what I would like to see fixed.

Using either Fedora or SuSE is fine by me, as long as it gets the latest
Python release fast enough (does it move as fast as Debian testing?). If
it's for doing unit testing only (i.e. no functional tests using Qemu,
libvirt, and other components of this type), it looks like a good plan.

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron][qa] Should we add a tempest-slow job?

2018-07-19 Thread Slawomir Kaplonski
Hi,

Thanks. I just sent patch [1] to add this new job to the Neutron failure rate
Grafana dashboard.

[1] https://review.openstack.org/#/c/583870/

> Message written by Ghanshyam Mann on 19.07.2018 at 06:55:
> 
>> On Sun, May 13, 2018 at 1:20 PM, Ghanshyam Mann  
>> wrote: 
>>> On Fri, May 11, 2018 at 10:45 PM, Matt Riedemann  
>>> wrote: 
 The tempest-full job used to run API and scenario tests concurrently, and 
 if 
 you go back far enough I think it also ran slow tests. 
 
 Sometime in the last year or so, the full job was changed to run the 
 scenario tests in serial and exclude the slow tests altogether. So the API 
 tests run concurrently first, and then the scenario tests run in serial. 
 During that change, some other tests were identified as 'slow' and marked 
 as 
 such, meaning they don't get run in the normal tempest-full job. 
 
 There are some valuable scenario tests marked as slow, however, like the 
 only encrypted volume testing we have in tempest is marked slow so it 
 doesn't get run on every change for at least nova. 
>>> 
>>> Yes, basically slow tests were selected based on 
>>> https://ethercalc.openstack.org/nu56u2wrfb2b and there were frequent 
>>> gate failures for heavy tests, mainly from ssh checks, so we tried to 
>>> mark more tests as slow. 
>>> I agree that some of them are not really slow, at least in today's situation. 
>>> 
 
 There is only one job that can be run against nova changes which runs the 
 slow tests but it's in the experimental queue so people forget to run it. 
>>> 
>>> Tempest job 
>>> "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" 
>>> run those slow tests including migration and LVM  multibackend tests. 
>>> This job runs on tempest check pipeline and experimental (as you 
>>> mentioned) on nova and cinder [3]. We marked this as n-v to check its 
>>> stability and now it is good to go as voting on tempest. 
>>> 
 
 As a test, I've proposed a nova-slow job [1] which only runs the slow 
 tests 
 and only the compute API and scenario tests. Since there currently no 
 compute API tests marked as slow, it's really just running slow scenario 
 tests. Results show it runs 37 tests in about 37 minutes [2]. The overall 
 job runtime was 1 hour and 9 minutes, which is on average less than the 
 tempest-full job. The nova-slow job is also running scenarios that nova 
 patches don't actually care about, like the neutron IPv6 scenario tests. 
 
 My question is, should we make this a generic tempest-slow job which can 
 be 
 run either in the integrated-gate or at least in nova/neutron/cinder 
 consistently (I'm not sure if there are slow tests for just keystone or 
 glance)? I don't know if the other projects already have something like 
 this 
 that they gate on. If so, a nova-specific job for nova changes is fine for 
 me. 
>>> 
>>> +1 on idea. As of now slow marked tests are from nova, cinder and 
>>> neutron scenario tests and 2 API swift tests only [4]. I agree that 
>>> making a generic job in tempest is better for maintainability. We can 
>>> use existing job for that with below modification- 
>>> -  We can migrate 
>>> "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job 
>>> zuulv3 in tempest repo 
>>> -  We can see if we can move migration tests out of it and use 
>>> "nova-live-migration" job (in tempest check pipeline ) which is much 
>>> better in live migration env setup and controlled by nova. 
>>> -  then it can be name something like 
>>> "tempest-scenario-multinode-lvm-multibackend". 
>>> -  run this job in nova, cinder, neutron check pipeline instead of 
>>> experimental. 
>> 
>> Like this - 
>> https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:scenario-tests-job
>>  
>> 
>> That makes scenario job as generic with running all scenario tests 
>> including slow tests with concurrency 2. I made few cleanup and moved 
>> live migration tests out of it which is being run by 
>> 'nova-live-migration' job. Last patch making this job as voting on 
>> tempest side. 
>> 
>> If looks good, we can use this to run on project side pipeline as voting. 
> 
> Update on this thread:
> The old scenario job
> "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" has been
> migrated to Tempest as a new job named "tempest-scenario-all" [1].
>
> Changes from the old job to the new job:
> - The new job will run all the scenario tests, including slow tests, with LVM
> multibackend, the same as the old job.
> - The live migration API tests were moved out of it; they run in the separate
> nova job "nova-live-migration".
> - The new job runs as voting on the Tempest check and gate pipelines.
>
> This is ready to use cross-project also. I have pushed patches to nova,
> neutron, and cinder to use this new job [3] and remove
>