Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-16 Thread Cédric Jeanneret


On 08/17/2018 12:25 AM, Steve Baker wrote:
> 
> 
> On 15/08/18 21:32, Cédric Jeanneret wrote:
>> Dear Community,
>>
>> As you may know, a move toward Podman as replacement of Docker is starting.
>>
>> One of the issues with podman is the lack of daemon, precisely the lack
>> of a socket allowing to send commands and get a "computer formatted
>> output" (like JSON or YAML or...).
>>
>> In order to work that out, Podman has added support for varlink¹, using
>> the "socket activation" feature in Systemd.
>>
>> On my side, I would like to push forward the integration of varlink in
>> TripleO deployed containers, especially since it will allow the following:
>> # proper interface with Paunch (via python link)
> I'm not sure this would be desirable. If we're going to do all container
> management via a socket I think we'd be better supported by using CRI-O.
> One of the advantages I see of podman is being able to manage services
> with systemd again.

Using the socket wouldn't prevent a "per service" systemd unit. Varlink
would just provide another way to manage the containers.
It's NOT like the docker daemon - it will not manage the containers on
startup for example. It's just an API endpoint, without any "automated
powers".

See it as an interesting complement to the CLI, making it easy to access
container data from a computer-oriented language like python3.

>> # a way to manage containers from within specific containers (think
>> "healthcheck", "monitoring") by mounting the socket as a shared volume
>>
>> # a way to get container statistics (think "metrics")
>>
>> # a way, if needed, to get an ansible module being able to talk to
>> podman (JSON is always better than plain text)
>>
>> # a way to secure the accesses to Podman management (we have to define
>> how varlink talks to Podman, maybe providing dedicated socket with
>> dedicated rights so that we can have dedicated users for specific tasks)
> Some of these cases might prove to be useful, but I do wonder if just
> making podman calls would be just as simple without the complexity of
> having another host-level service to manage. We can still do podman
> operations inside containers by bind-mounting in the container state.

I wouldn't mount the container state as-is, mainly for security reasons.
I'd rather use the varlink abstraction than the plain `podman' CLI - in
addition, it is far, far easier for applications to get proper JSON
instead of some random plain text, even if `podman' seems to have a
"--format" option. I really dislike shelling out via "subprocess" when
there is a nice API interface - maybe that's just me ;).
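
To make that concrete, here is a rough, untested sketch of what such a
call could look like from python3. Note that the socket path, the
interface name ("io.podman") and the method name ("ListContainers") are
assumptions to be checked against the shipped varlink definition, and
the exact client API of the python3 bindings may differ slightly:

    # Rough sketch only - socket path, interface and method names are
    # assumptions, not a verified API.
    import varlink

    client = varlink.Client("unix:/run/podman/io.podman")
    with client.open("io.podman") as podman:
        reply = podman.ListContainers()
        # the reply is already structured data (parsed JSON),
        # no text scraping of CLI output needed
        for container in reply.get("containers", []):
            print(container)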

In addition, the state is apparently managed in an sqlite DB -
concurrent access to that DB isn't really a good idea; we really don't
want corruption, do we?

> 
>> That said, I have some questions:
>> ° Does any of you have some experience with varlink and podman interface?
>> ° What do you think about that integration wish?
>> ° Does any of you have concern with this possible addition?
> I do worry a bit that it is advocating for a solution before we really
> understand the problems. The biggest unknown for me is what we do about
> healthchecks. Maybe varlink is part of the solution here, or maybe its a
> systemd timer which executes the healthcheck and restarts the service
> when required.

Maybe. My main concern is: would it be interesting to compare both
solutions?
The healthchecks are clearly Docker-specific; no interface exists for
them in libpod at the moment, so we have to mimic them as best we can.
Maybe healthchecks belong in systemd, and varlink would be used only for
external monitoring and metrics. That would also be a nice avenue to
explore.

I would not focus on only one of the possibilities I've listed. There
are probably even more possibilities I didn't see - once we get a proper
socket, anything is possible, the good and the bad ;).

>> Thank you for your feedback and ideas.
>>
>> Have a great day (or evening, or whatever suits the time you're reading
>> this ;))!
>>
>> C.
>>
>>
>> ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF




Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-16 Thread Cédric Jeanneret

>> On my side, I would like to push forward the integration of varlink in
>> TripleO deployed containers, especially since it will allow the
>> following:
>> # proper interface with Paunch (via python link)
> 
> "integration of varlink in TripleO deployed containers" sounds like we'd
> need to make some changes to the containers themselves, but is that the
> case? As i read the docs, it seems like a management API wrapper for
> Podman, so just an alternative interface to Podman CLI. I'd expect we'd
> use varlink from Paunch, but probably not from the containers
> themselves? (Perhaps that's what you meant, just making sure we're on
> the same page.)

In fact, the "podman varlink thing" is already distributed with the
podman package. In order to activate that socket, we just need to enable
a systemd unit that creates the socket - the "service" itself is started
only when the socket is accessed.
The only packages we might need to add are libvarlink-util (which
provides the "varlink" command) and the python3 bindings
(python3-libvarlink, iirc).

Varlink "activation" itself doesn't affect the containers.

And yep, it's just an alternative to the `podman' CLI, providing a
nicer programmatic interface with python3 bindings - so we can avoid
"subprocess.Popen" and the like - and JSON output (well, mostly: I
spotted at least one output that isn't properly formatted).
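
For comparison, the subprocess-based approach we would otherwise end up
with looks roughly like this (assuming `podman ps --format json' really
emits a JSON array - to be double-checked against the podman version we
ship):

    # The CLI-scraping alternative: shell out to podman and parse its
    # output. It works, but every caller re-implements the spawning,
    # error handling and parsing.
    import json
    import subprocess

    def list_containers():
        # assumes "podman ps --all --format json" prints a JSON array
        proc = subprocess.run(
            ["podman", "ps", "--all", "--format", "json"],
            check=True, stdout=subprocess.PIPE)
        return json.loads(proc.stdout)

    for container in list_containers():
        print(container)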

> 
>>
>> # a way to manage containers from within specific containers (think
>> "healthcheck", "monitoring") by mounting the socket as a shared volume
> 
> I think healthchecks are currently quite Docker-specific, so we could
> have a Podman-specific alternative here. We should be careful about how
> much container runtime specificity we introduce and keep though, and
> we'll probably have to amend our tools (e.g. pre-upgrade validations
> [2]) to work with both, at least until we decide whether to really make
> a full transition to Podman or not.

Of course - I just listed the possibilities activating varlink would
provide - proper PoCs and tests are to be done ;).

> 
>>
>> # a way to get container statistics (think "metrics")
>>
>> # a way, if needed, to get an ansible module being able to talk to
>> podman (JSON is always better than plain text)
>>
>> # a way to secure the accesses to Podman management (we have to define
>> how varlink talks to Podman, maybe providing dedicated socket with
>> dedicated rights so that we can have dedicated users for specific tasks)
>>
>> That said, I have some questions:
>> ° Does any of you have some experience with varlink and podman interface?
>> ° What do you think about that integration wish?
>> ° Does any of you have concern with this possible addition?
> 
> I like it, but we should probably sync up with Podman community if they
> consider varlink a "supported" interface for controlling Podman, and
> it's not just an experiment which will vanish. To me it certainly looks
> like a much better programmable interface than composing CLI calls and
> parsing their output, but we should make sure Podman folks think so too :)

I think we can say "supported", since they provide the varlink socket
and service directly in the podman package. In addition, it was a request:
https://trello.com/c/8RQ6ZF4A/565-8-add-podman-varlink-subcommand
https://github.com/containers/libpod/pull/627

and it's pretty actively followed, both for issues and for libpod API updates.

I'll ping them in order to validate that feeling.

> 
> Thanks for looking into this
> 
> Jirka
> 
> [2] https://review.openstack.org/#/c/582502/
> 
>>
>> Thank you for your feedback and ideas.
>>
>> Have a great day (or evening, or whatever suits the time you're reading
>> this ;))!
>>
>> C.
>>
>>
>> ¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/
>>
>>
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cédric Jeanneret
Software Engineer
DFG:DF



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] A multi-cell instance-list performance test

2018-08-16 Thread Zhenyu Zheng
Hi,

Thanks a lot for the reply. For your question #2, we tested two kinds of
deployments: 1. a single DB holding all 10 cells (plus cell0), on the
same server as the API; 2. 5 of the cell DBs moved to another machine on
the same rack, to test whether it matters - and it turns out there is no
big difference.

For question #3, we did a test with limit = 1000 and 10 cells:
as the attached graphs show, CPU usage from both the API process and the
MySQL queries is high in the first 3 seconds, but from the 4th second on
only the API process occupies the CPU, and memory consumption is low
compared to the CPU consumption. This was tested with the patch fix
posted in the previous mail.

[image: image.png]

[image: image.png]

BR,

Kevin

On Fri, Aug 17, 2018 at 2:45 AM Dan Smith  wrote:

> >  yes, the DB query was in serial, after some investigation, it seems
> that we are unable to perform eventlet.monkey_patch in uWSGI mode, so
> >  Yikun made this fix:
> >
> >  https://review.openstack.org/#/c/592285/
>
> Cool, good catch :)
>
> >
> >  After making this change, we test again, and we got this kind of data:
> >
> >                       total    collect  sort    view
> >  before monkey_patch  13.5745  11.7012  1.1511  0.5966
> >  after monkey_patch   12.8367  10.5471  1.5642  0.6041
> >
> >  The performance improved a little, and from the log we can saw:
>
> Since these all took ~1s when done in series, but now take ~10s in
> parallel, I think you must be hitting some performance bottleneck in
> either case, which is why the overall time barely changes. Some ideas:
>
> 1. In the real world, I think you really need to have 10x database
>servers or at least a DB server with plenty of cores loading from a
>very fast (or separate) disk in order to really ensure you're getting
>full parallelism of the DB work. However, because these queries all
>took ~1s in your serialized case, I expect this is not your problem.
>
> 2. What does the network look like between the api machine and the DB?
>
> 3. What do the memory and CPU usage of the api process look like while
>this is happening?
>
> Related to #3, even though we issue the requests to the DB in parallel,
> we still process the result of those calls in series in a single python
> thread on the API. That means all the work of reading the data from the
> socket, constructing the SQLA objects, turning those into nova objects,
> etc, all happens serially. It could be that the DB query is really a
> small part of the overall time and our serialized python handling of the
> result is the slow part. If you see the api process pegging a single
> core at 100% for ten seconds, I think that's likely what is happening.
>
> >  so, now the queries are in parallel, but the whole thing still seems
> >  serial.
>
> In your table, you show the time for "1 cell, 1000 instances" as ~3s and
> "10 cells, 1000 instances" as 10s. The problem with comparing those
> directly is that in the latter, you're actually pulling 10,000 records
> over the network, into memory, processing them, and then just returning
> the first 1000 from the sort. A closer comparison would be the "10
> cells, 100 instances" with "1 cell, 1000 instances". In both of those
> cases, you pull 1000 instances total from the db, into memory, and
> return 1000 from the sort. In that case, the multi-cell situation is
> faster (~2.3s vs. ~3.1s). You could also compare the "10 cells, 1000
> instances" case to "1 cell, 10,000 instances" just to confirm at the
> larger scale that it's better or at least the same.
>
> We _have_ to pull $limit instances from each cell, in case (according to
> the sort key) the first $limit instances are all in one cell. We _could_
> try to batch the results from each cell to avoid loading so many that we
> don't need, but we punted this as an optimization to be done later. I'm
> not sure it's really worth the complexity at this point, but it's
> something we could investigate.
>
> --Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][qa][congress] help with tempest plugin jobs against stable/queens

2018-08-16 Thread Eric K
On Tue, Aug 14, 2018 at 9:34 PM, Ghanshyam Mann  wrote:
>   On Wed, 15 Aug 2018 09:37:18 +0900 Eric K  
> wrote 
>  > I'm adding jobs [1] to the tempest plugin to run tests against
>  > congress stable/queens. The job output seems to show stable/queens
>  > getting checked out [2], but I know the test is *not* run against
>  > queens because it's using features not available in queens. The
>  > expected result is for several tests to fail as seen here [3]. All
>  > hints and tips much appreciated!
>
> You are doing it the right way with 'override-checkout: stable/queens'. And as
> the log also shows, congress is checked out from stable/queens. I tried to check
> the results but could not work out which tests should fail and why.
>
> If you can give me more idea, i can debug that.
>
> -gmann
Thanks so much gmann!
For example, looking at
'congress_tempest_plugin.tests.scenario.congress_datasources.test_vitrage.TestVitrageDriver'
here:
http://logs.openstack.org/61/591861/1/check/congress-devstack-api-mysql/36bacbe/logs/testr_results.html.gz

It shows passing 1 of 1, but that feature is not in the queens branch
at all. The expected result can be seen here:
http://logs.openstack.org/05/591805/2/check/congress-devstack-api-mysql/7d7b28e/logs/testr_results.html.gz
>
>  >
>  > [1] https://review.openstack.org/#/c/591861/1
>  > [2] 
> http://logs.openstack.org/61/591861/1/check/congress-devstack-api-mysql-queens/f7b5752/job-output.txt.gz#_2018-08-14_22_30_36_899501
>  > [3] https://review.openstack.org/#/c/591805/ (the depends-on is
>  > irrelevant because that patch has been merged)
>  >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware] need help triaging a vmware driver bug

2018-08-16 Thread melanie witt

Hello VMware peeps,

I've been trying to triage a bug in New status for the VMware driver 
without success:


https://bugs.launchpad.net/nova/+bug/1744182 - can not create instance 
when using vmware nova driver


I tend to think the problem is not related to nova because the instance 
create fails with a message that sounds related to the VMware backend:


2018-01-18 06:40:01.738 7 ERROR nova.compute.manager 
[req-bc40738a-a3ee-4d9c-bd67-32e6fb32df08 
32e0ed602bc549f48f7caf401420b628 7179dd1be7ef4cf2906b41b97970a0f6 - 
default default] [instance: b4b7cabe-f78b-40d9-8856-3b6c213efd73] 
Instance failed to spawn: VimFaultException: An error occurred during 
host configuration.

Faults: ['PlatformConfigFault']

And VMware CI has been running in the gate and successfully creating 
instances during the tempest tests.


Can anyone help triage this bug?

Thanks in advance.

Best,
-melanie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-16 Thread Steve Baker



On 15/08/18 21:32, Cédric Jeanneret wrote:

Dear Community,

As you may know, a move toward Podman as replacement of Docker is starting.

One of the issues with podman is the lack of daemon, precisely the lack
of a socket allowing to send commands and get a "computer formatted
output" (like JSON or YAML or...).

In order to work that out, Podman has added support for varlink¹, using
the "socket activation" feature in Systemd.

On my side, I would like to push forward the integration of varlink in
TripleO deployed containers, especially since it will allow the following:
# proper interface with Paunch (via python link)
I'm not sure this would be desirable. If we're going to do all container
management via a socket I think we'd be better supported by using CRI-O. 
One of the advantages I see of podman is being able to manage services 
with systemd again.

# a way to manage containers from within specific containers (think
"healthcheck", "monitoring") by mounting the socket as a shared volume

# a way to get container statistics (think "metrics")

# a way, if needed, to get an ansible module being able to talk to
podman (JSON is always better than plain text)

# a way to secure the accesses to Podman management (we have to define
how varlink talks to Podman, maybe providing dedicated socket with
dedicated rights so that we can have dedicated users for specific tasks)
Some of these cases might prove to be useful, but I do wonder if just 
making podman calls would be just as simple without the complexity of 
having another host-level service to manage. We can still do podman 
operations inside containers by bind-mounting in the container state.



That said, I have some questions:
° Does any of you have some experience with varlink and podman interface?
° What do you think about that integration wish?
° Does any of you have concern with this possible addition?
I do worry a bit that it is advocating for a solution before we really 
understand the problems. The biggest unknown for me is what we do about 
healthchecks. Maybe varlink is part of the solution here, or maybe its a 
systemd timer which executes the healthcheck and restarts the service 
when required.

Thank you for your feedback and ideas.

Have a great day (or evening, or whatever suits the time you're reading
this ;))!

C.


¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2

2018-08-16 Thread Jeremy Stanley
On 2018-08-17 06:56:11 +1000 (+1000), Tony Breeds wrote:
[...]
> How terrible would it be to branch openstackdocstheme and backport
> the fix without the pbr changes?  It might also be possible,
> though I'm not sure how we'd land it, to branch (stable/ocata)
> openstackdocstheme today and just revert the pbr changes to set
> the lower bound.
[...]

I think it would also be entirely reasonable to just not worry about
it, and let the people who asked for extended maintenance on older
branches do the legwork. We previously limited the number we'd keep
open because keeping those older branches updatable does in fact
require quite a bit of effort. When we agreed to the suggestion of
not closing branches, it was with the understanding that they won't
just suddenly get taken care of by the same people who were already
not doing that because they considered it a lot of extra work.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][oslo][release] FFE request for castellan

2018-08-16 Thread Ben Nemec
The backport has merged and I've proposed the release here: 
https://review.openstack.org/592746


On 08/15/2018 11:58 AM, Ade Lee wrote:

Done.

https://review.openstack.org/#/c/592154/

Thanks,
Ade

On Wed, 2018-08-15 at 09:20 -0500, Ben Nemec wrote:


On 08/14/2018 01:56 PM, Sean McGinnis wrote:

On 08/10/2018 10:15 AM, Ade Lee wrote:

Hi all,

I'd like to request a feature freeze exception to get the
following
change in for castellan.

https://review.openstack.org/#/c/575800/

This extends the functionality of the vault backend to provide
previously unimplemented functionality, so it should not break
anyone.

The castellan vault plugin is used behind barbican in the
barbican-
vault plugin.  We'd like to get this change into Rocky so that
we can
release Barbican with complete functionality on this backend
(along
with a complete set of passing functional tests).


This does seem fairly low risk since it's just implementing a
function that
previously raised a NotImplemented exception.  However, with it
being so
late in the cycle I think we need the release team's input on
whether this
is possible.  Most of the release FFE's I've seen have been for
critical
bugs, not actual new features.  I've added that tag to this
thread so
hopefully they can weigh in.



As far as releases go, this should be fine. If this doesn't affect
any other
projects and would just be a late merging feature, as long as the
castellan
team has considered the risk of adding code so late and is
comfortable with
that, this is OK.

Castellan follows the cycle-with-intermediary release model, so the
final Rocky
release just needs to be done by next Thursday. I do see the
stable/rocky
branch has already been created for this repo, so it would need to
merge to
master first (technically stein), then get cherry-picked to
stable/rocky.


Okay, sounds good.  It's already merged to master so we're good
there.

Ade, can you get the backport proposed?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2

2018-08-16 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2018-08-17 06:56:11 +1000:
> On Thu, Aug 16, 2018 at 10:45:32AM -0400, Doug Hellmann wrote:
> 
> > Is there any reason we can't uncap pbr, at least within the CI jobs?
> 
> It might work for the docs builds but jumping a major version of pbr,
> which if I recall caused problems at the time (hence the lower bound)
> for all ocata projects, wouldn't happen.
> 
> How terrible would it be to branch openstackdocstheme and backport the fix
> without the pbr changes?  It might also be possible, though I'm not sure
> how we'd land it, to branch (stable/ocata) openstackdocstheme today and
> just revert the pbr changes to set the lower bound.
> 
> If you let me know what the important changes are to functionality in
> openstackdocstheme I can play with it next week.  Having said that I'm aware
> there is time pressure here so I'm happy for others to do it
> 
> Yours Tony.

The thing we need is the deprecation badge support in
https://review.openstack.org/#/c/585517/

If backporting that to an older version of the theme is going to be
easier, and we don't care about adding a feature to a stable branch for
that, then I'm OK with doing it that way.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][tc] Technical Vision statement: feedback sought

2018-08-16 Thread Zane Bitter
The TC has undertaken to attempt to write a technical vision statement 
for OpenStack that documents the community's consensus on what we're 
trying to build. To date the only thing we've had to guide us is the 
mission statement[1], which is exactly one sentence long and uses 
undefined terms (like 'cloud'). That can lead to diverging perspectives 
and poor communication.


No group is charged with designing OpenStack at a high level - it is the 
sum of what individual teams produce. So the only way we're going to end 
up with a coherent offering is if we're all moving in the same direction.


The TC has also identified that we're having conversations about whether 
a new project fits with the OpenStack mission too late - only after the 
project applies to become official. We're hoping that updates to this 
document can provide a mechanism to have those conversations earlier.


A first draft review is now available for comment:

https://review.openstack.org/592205

We're soliciting feedback on the review, on the mailing list, on IRC 
during TC office hours or any time that's convenient to you in 
#openstack-tc, and during the PTG in Denver.


If the vision as written broadly matches yours then we'd like to hear 
from you, and if it does not we *need* to hear from you. The goal is to 
have something that the entire community can buy into, and although that 
means not everyone will be able to get their way on every topic we are 
more than willing to make changes in order to find consensus. Everything 
is up for grabs, including the form and structure of the document itself.


cheers,
Zane.

[1] 
https://docs.openstack.org/project-team-guide/introduction.html#the-mission


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] migrating to storyboard

2018-08-16 Thread Kendall Nelson
Hello :)

On Thu, Aug 16, 2018 at 12:47 PM Jay S Bryant  wrote:

> Hey,
>
> Well, the attachments are one of the things holding us up along with
> reduced participation in the project and a number of other challenges.
> Getting the time to prepare for the move has been difficult.
>
>
I wouldn't really say we have reduced participation - we've always been a
small team. In the last year, we've actually seen more involvement from new
contributors (new and future users of sb), which has been awesome :) We even
had/have an Outreachy intern who has been working on making searching and
filtering even better.

Prioritizing when to invest time to migrate has been hard for several
projects so Cinder isn't alone, no worries :)


> I am planning to take some time before the PTG to look at how Ironic has
> been using Storyboard and take this forward to the team at the PTG to try
> and spur the process along.
>
>
Glad to hear it! Once I get the SB room on the schedule, you are welcome to
join the conversations there.  We would love any feedback you have on what
the 'other challenges' are that you mentioned above.

> Jay Bryant - (jungleboyj)
>
> On 8/16/2018 2:22 PM, Kendall Nelson wrote:
>
> Hey :)
>
> Yes, I know attachments are important to a few projects. They are on our
> todo list and we plan to talk about how to implement them at the upcoming
> PTG[1].
>
> Unfortunately, we have had other things that are taking priority over
> attachments. We would really love to migrate you all, but if attachments is
> what is really blocking you and there is no other workable solution, I'm
> more than willing to review patches if you want to help out to move things
> along a little faster :)
>
> -Kendall Nelson (diablo_rojo)
>
> [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning
>
> On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant  wrote:
>
>>
>>
>> On 8/15/2018 11:43 AM, Chris Friesen wrote:
>> > On 08/14/2018 10:33 AM, Tobias Urdin wrote:
>> >
>> >> My goal is that we will be able to swap to Storyboard during the
>> >> Stein cycle but
>> >> considering that we have a low activity on
>> >> bugs my opinion is that we could do this swap very easily anything
>> >> soon as long
>> >> as everybody is in favor of it.
>> >>
>> >> Please let me know what you think about moving to Storyboard?
>> >
>> > Not a puppet dev, but am currently using Storyboard.
>> >
>> > One of the things we've run into is that there is no way to attach log
>> > files for bug reports to a story.  There's an open story on this[1]
>> > but it's not assigned to anyone.
>> >
>> > Chris
>> >
>> >
>> > [1] https://storyboard.openstack.org/#!/story/2003071
>> >
>> Cinder is planning on holding on any migration, like Manila, until the
>> file attachment issue is resolved.
>>
>> Jay
>> >
>> __
>> >
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>

Thanks!

- Kendall (diablo_rojo)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2

2018-08-16 Thread Tony Breeds
On Thu, Aug 16, 2018 at 10:45:32AM -0400, Doug Hellmann wrote:

> Is there any reason we can't uncap pbr, at least within the CI jobs?

It might work for the docs builds but jumping a major version of pbr,
which if I recall caused problems at the time (hence the lower bound)
for all ocata projects, wouldn't happen.

How terrible would it be to branch openstackdocstheme and backport the fix
without the pbr changes?  It might also be possible, though I'm not sure
how we'd land it, to branch (stable/ocata) openstackdocstheme today and
just revert the pbr changes to set the lower bound.

If you let me know what the important changes are to functionality in
openstackdocstheme I can play with it next week.  Having said that I'm aware
there is time pressure here so I'm happy for others to do it

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] migrating to storyboard

2018-08-16 Thread Kendall Nelson
We can definitely add that to our PTG discussion agenda if you want to come
by with feedback. Octavia wrote up an Etherpad and passed that along to
us. Either way works.

Once the PTG bot is ready to go I plan to reserve a day or part of a day
dedicated to StoryBoard.

-Kendall

On Thu, Aug 16, 2018 at 1:35 PM Ben Nemec  wrote:

> Is there any plan to have a session where the current and future users
> of storyboard can get together and discuss how it's going?
>
> On 08/16/2018 02:47 PM, Jay S Bryant wrote:
> > Hey,
> >
> > Well, the attachments are one of the things holding us up along with
> > reduced participation in the project and a number of other challenges.
> > Getting the time to prepare for the move has been difficult.
> >
> > I am planning to take some time before the PTG to look at how Ironic has
> > been using Storyboard and take this forward to the team at the PTG to
> > try and spur the process along.
> >
> > Jay Bryant - (jungleboyj)
> >
> >
> > On 8/16/2018 2:22 PM, Kendall Nelson wrote:
> >> Hey :)
> >>
> >> Yes, I know attachments are important to a few projects. They are on
> >> our todo list and we plan to talk about how to implement them at the
> >> upcoming PTG[1].
> >>
> >> Unfortunately, we have had other things that are taking priority over
> >> attachments. We would really love to migrate you all, but if
> >> attachments is what is really blocking you and there is no other
> >> workable solution, I'm more than willing to review patches if you want
> >> to help out to move things along a little faster :)
> >>
> >> -Kendall Nelson (diablo_rojo)
> >>
> >> [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning
> >>
> >> On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant  >> > wrote:
> >>
> >>
> >>
> >> On 8/15/2018 11:43 AM, Chris Friesen wrote:
> >> > On 08/14/2018 10:33 AM, Tobias Urdin wrote:
> >> >
> >> >> My goal is that we will be able to swap to Storyboard during the
> >> >> Stein cycle but
> >> >> considering that we have a low activity on
> >> >> bugs my opinion is that we could do this swap very easily
> anything
> >> >> soon as long
> >> >> as everybody is in favor of it.
> >> >>
> >> >> Please let me know what you think about moving to Storyboard?
> >> >
> >> > Not a puppet dev, but am currently using Storyboard.
> >> >
> >> > One of the things we've run into is that there is no way to
> >> attach log
> >> > files for bug reports to a story.  There's an open story on
> this[1]
> >> > but it's not assigned to anyone.
> >> >
> >> > Chris
> >> >
> >> >
> >> > [1] https://storyboard.openstack.org/#!/story/2003071
> >> 
> >> >
> >> Cinder is planning on holding on any migration, like Manila, until
> >> the
> >> file attachment issue is resolved.
> >>
> >> Jay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] migrating to storyboard

2018-08-16 Thread Ben Nemec
Is there any plan to have a session where the current and future users 
of storyboard can get together and discuss how it's going?


On 08/16/2018 02:47 PM, Jay S Bryant wrote:

Hey,

Well, the attachments are one of the things holding us up along with 
reduced participation in the project and a number of other challenges.  
Getting the time to prepare for the move has been difficult.


I am planning to take some time before the PTG to look at how Ironic has 
been using Storyboard and take this forward to the team at the PTG to 
try and spur the process along.


Jay Bryant - (jungleboyj)


On 8/16/2018 2:22 PM, Kendall Nelson wrote:

Hey :)

Yes, I know attachments are important to a few projects. They are on 
our todo list and we plan to talk about how to implement them at the 
upcoming PTG[1].


Unfortunately, we have had other things that are taking priority over 
attachments. We would really love to migrate you all, but if 
attachments is what is really blocking you and there is no other 
workable solution, I'm more than willing to review patches if you want 
to help out to move things along a little faster :)


-Kendall Nelson (diablo_rojo)

[1]https://etherpad.openstack.org/p/sb-stein-ptg-planning

On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant wrote:




On 8/15/2018 11:43 AM, Chris Friesen wrote:
> On 08/14/2018 10:33 AM, Tobias Urdin wrote:
>
>> My goal is that we will be able to swap to Storyboard during the
>> Stein cycle but
>> considering that we have a low activity on
>> bugs my opinion is that we could do this swap very easily anything
>> soon as long
>> as everybody is in favor of it.
>>
>> Please let me know what you think about moving to Storyboard?
>
> Not a puppet dev, but am currently using Storyboard.
>
> One of the things we've run into is that there is no way to
attach log
> files for bug reports to a story.  There's an open story on this[1]
> but it's not assigned to anyone.
>
> Chris
>
>
> [1] https://storyboard.openstack.org/#!/story/2003071

>
Cinder is planning on holding on any migration, like Manila, until
the
file attachment issue is resolved.

Jay





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] deployements fails when using custom nic config

2018-08-16 Thread Samuel Monderer
Hi,

I'm using the attached file for the controller nic configuration and I'm
referencing it as follows:
resource_registry:
  # Network Interface templates to use (these files must exist). You can
  # override these by including one of the net-*.yaml environment files,
  # such as net-bond-with-vlans.yaml, or modifying the list here.
  # Port assignments for the Controller
  OS::TripleO::Controller::Net::SoftwareConfig:
/home/stack/templates/nic-configs/controller.yaml

and I get the following error

2018-08-16 15:51:59Z
[overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0]:
CREATE_FAILED  Error: resources[0]: Deployment to server failed:
deploy_status_code : Deployment exited with non-zero status code: 2
2018-08-16 15:51:59Z
[overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_FAILED
Resource CREATE failed: Error: resources[0]: Deployment to server failed:
deploy_status_code : Deployment exited with non-zero status code: 2
2018-08-16 15:52:00Z
[overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_FAILED
Error: resources.ControllerDeployment_Step1.resources[0]: Deployment to
server failed: deploy_status_code: Deployment exited with non-zero status
code: 2
2018-08-16 15:52:00Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED
Resource CREATE failed: Error:
resources.ControllerDeployment_Step1.resources[0]: Deployment to server
failed: deploy_status_code: Deployment exited with non-zero status code: 2
2018-08-16 15:52:01Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED  Error:
resources.AllNodesDeploySteps.resources.ControllerDeployment_Step1.resources[0]:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 2
2018-08-16 15:52:01Z [overcloud]: CREATE_FAILED  Resource CREATE failed:
Error:
resources.AllNodesDeploySteps.resources.ControllerDeployment_Step1.resources[0]:
Deployment to server failed: deploy_status_code: Deployment exited with
non-zero status code: 2

 Stack overcloud CREATE_FAILED

overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0:
  resource_type: OS::Heat::StructuredDeployment
  physical_resource_id: 8edfbb96-9b4d-4839-8b17-f8abf0644475
  status: CREATE_FAILED
  status_reason: |
Error: resources[0]: Deployment to server failed: deploy_status_code :
Deployment exited with non-zero status code: 2
  deploy_stdout: |
...
"2018-08-16 18:51:54,967 ERROR: 23177 -- ERROR configuring
neutron",
"2018-08-16 18:51:54,967 ERROR: 23177 -- ERROR configuring
horizon",
"2018-08-16 18:51:54,968 ERROR: 23177 -- ERROR configuring
heat_api_cfn"
]
}
to retry, use: --limit
@/var/lib/heat-config/heat-config-ansible/48a5902a-5987-46e4-a06b-e3f5487bf3d2_playbook.retry

PLAY RECAP
*
localhost  : ok=26   changed=13   unreachable=0
failed=1

(truncated, view all with --long)
  deploy_stderr: |

Heat Stack create failed.
Heat Stack create failed.
(undercloud) [stack@staging-director ~]$

When I checked the controller node I found that it had no default gateway
configured

Regards,
Samuel


controller.yaml
Description: application/yaml
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] migrating to storyboard

2018-08-16 Thread Jay S Bryant

Hey,

Well, the attachments are one of the things holding us up along with 
reduced participation in the project and a number of other challenges.  
Getting the time to prepare for the move has been difficult.


I am planning to take some time before the PTG to look at how Ironic has 
been using Storyboard and take this forward to the team at the PTG to 
try and spur the process along.


Jay Bryant - (jungleboyj)


On 8/16/2018 2:22 PM, Kendall Nelson wrote:

Hey :)

Yes, I know attachments are important to a few projects. They are on 
our todo list and we plan to talk about how to implement them at the 
upcoming PTG[1].


Unfortunately, we have had other things that are taking priority over 
attachments. We would really love to migrate you all, but if 
attachments is what is really blocking you and there is no other 
workable solution, I'm more than willing to review patches if you want 
to help out to move things along a little faster :)


-Kendall Nelson (diablo_rojo)

[1]https://etherpad.openstack.org/p/sb-stein-ptg-planning

On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant wrote:




On 8/15/2018 11:43 AM, Chris Friesen wrote:
> On 08/14/2018 10:33 AM, Tobias Urdin wrote:
>
>> My goal is that we will be able to swap to Storyboard during the
>> Stein cycle but
>> considering that we have a low activity on
>> bugs my opinion is that we could do this swap very easily anything
>> soon as long
>> as everybody is in favor of it.
>>
>> Please let me know what you think about moving to Storyboard?
>
> Not a puppet dev, but am currently using Storyboard.
>
> One of the things we've run into is that there is no way to
attach log
> files for bug reports to a story.  There's an open story on this[1]
> but it's not assigned to anyone.
>
> Chris
>
>
> [1] https://storyboard.openstack.org/#!/story/2003071

>
Cinder is planning on holding on any migration, like Manila, until
the
file attachment issue is resolved.

Jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] migrating to storyboard

2018-08-16 Thread Kendall Nelson
Hey :)

Yes, I know attachments are important to a few projects. They are on our
todo list and we plan to talk about how to implement them at the upcoming
PTG[1].

Unfortunately, we have had other things that are taking priority over
attachments. We would really love to migrate you all, but if attachments is
what is really blocking you and there is no other workable solution, I'm
more than willing to review patches if you want to help out to move things
along a little faster :)

-Kendall Nelson (diablo_rojo)

[1]https://etherpad.openstack.org/p/sb-stein-ptg-planning

On Wed, Aug 15, 2018 at 1:49 PM Jay S Bryant  wrote:

>
>
> On 8/15/2018 11:43 AM, Chris Friesen wrote:
> > On 08/14/2018 10:33 AM, Tobias Urdin wrote:
> >
> >> My goal is that we will be able to swap to Storyboard during the
> >> Stein cycle but
> >> considering that we have a low activity on
> >> bugs my opinion is that we could do this swap very easily anything
> >> soon as long
> >> as everybody is in favor of it.
> >>
> >> Please let me know what you think about moving to Storyboard?
> >
> > Not a puppet dev, but am currently using Storyboard.
> >
> > One of the things we've run into is that there is no way to attach log
> > files for bug reports to a story.  There's an open story on this[1]
> > but it's not assigned to anyone.
> >
> > Chris
> >
> >
> > [1] https://storyboard.openstack.org/#!/story/2003071
> >
> Cinder is planning on holding on any migration, like Manila, until the
> file attachment issue is resolved.
>
> Jay
> >
> __
> >
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] migrating to storyboard

2018-08-16 Thread Kendall Nelson
Hey :)

I created all the puppet openstack repos in the storyboard-dev environment
and made a project group[1]. I am struggling a bit with finding all of your
launchpad projects to run the migrations against - can you share a list
of all of them?

-Kendall (diablo_rojo)

[1] https://storyboard-dev.openstack.org/#!/project_group/60

On Wed, Aug 15, 2018 at 12:08 AM Tobias Urdin 
wrote:

> Hello Kendall,
>
> Thanks for your reply, that sounds awesome!
> We can then dig around and see how everything looks when all project bugs
> are imported to stories.
>
> I see no issues with being able to move to Storyboard anytime soon if the
> feedback for
> moving is positive.
>
> Best regards
>
> Tobias
>
>
> On 08/14/2018 09:06 PM, Kendall Nelson wrote:
>
> Hello!
>
> The error you hit can be resolved by adding launchpadlib to your tox.ini
> if I recall correctly..
>
> also, if you'd like, I can run a test migration of puppet's launchpad
> projects into our storyboard-dev db (where I've done a ton of other test
> migrations) if you want to see how it looks/works with a larger db. Just
> let me know and I can kick it off.
>
> As for a time to migrate, if you all are good with it, we usually schedule
> for Friday's so there is even less activity. Its a small project config
> change and then we just need an infra core to kick off the script once the
> change merges.
>
> -Kendall (diablo_rojo)
>
> On Tue, Aug 14, 2018 at 9:33 AM Tobias Urdin 
> wrote:
>
>> Hello all incredible Puppeters,
>>
>> I've tested setting up a Storyboard instance and test-migrated
>> puppet-ceph and it went without any issues there using the documentation
>> [1] [2]
>> with just one minor issue during the SB setup [3].
>>
>> My goal is that we will be able to swap to Storyboard during the Stein
>> cycle but considering that we have a low activity on
>> bugs my opinion is that we could do this swap very easily anything soon
>> as long as everybody is in favor of it.
>>
>> Please let me know what you think about moving to Storyboard?
>> If everybody is in favor of it we can request a migration to infra
>> according to documentation [2].
>>
>> I will continue to test the import of all our project while people are
>> collecting their thoughts and feedback :)
>>
>> Best regards
>> Tobias
>>
>> [1] https://docs.openstack.org/infra/storyboard/install/development.html
>> [2] https://docs.openstack.org/infra/storyboard/migration.html
>> [3] It failed with an error about launchpadlib not being installed,
>> solved with `tox -e venv pip install launchpadlib`
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] A multi-cell instance-list performance test

2018-08-16 Thread Dan Smith
>  yes, the DB query was in serial, after some investigation, it seems that we 
> are unable to perform eventlet.monkey_patch in uWSGI mode, so
>  Yikun made this fix:
>
>  https://review.openstack.org/#/c/592285/

Cool, good catch :)

>
>  After making this change, we test again, and we got this kind of data:
>
>                       total    collect  sort    view
>  before monkey_patch  13.5745  11.7012  1.1511  0.5966
>  after monkey_patch   12.8367  10.5471  1.5642  0.6041
>
>  The performance improved a little, and from the log we can saw:

Since these all took ~1s when done in series, but now take ~10s in
parallel, I think you must be hitting some performance bottleneck in
either case, which is why the overall time barely changes. Some ideas:

1. In the real world, I think you really need to have 10x database
   servers or at least a DB server with plenty of cores loading from a
   very fast (or separate) disk in order to really ensure you're getting
   full parallelism of the DB work. However, because these queries all
   took ~1s in your serialized case, I expect this is not your problem.

2. What does the network look like between the api machine and the DB?

3. What do the memory and CPU usage of the api process look like while
   this is happening?

Related to #3, even though we issue the requests to the DB in parallel,
we still process the result of those calls in series in a single python
thread on the API. That means all the work of reading the data from the
socket, constructing the SQLA objects, turning those into nova objects,
etc, all happens serially. It could be that the DB query is really a
small part of the overall time and our serialized python handling of the
result is the slow part. If you see the api process pegging a single
core at 100% for ten seconds, I think that's likely what is happening.
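
To illustrate the shape of the problem, here's a toy sketch (not nova
code - the per-cell query and the object building are faked): the
"queries" overlap thanks to eventlet, but the CPU-bound handling of the
results still runs one result at a time in the single API thread.

    # Toy sketch, not nova code.
    import time
    import eventlet
    eventlet.monkey_patch()  # make blocking I/O yield to other green threads

    def query_cell(cell_id):
        eventlet.sleep(1.0)  # stand-in for the ~1s per-cell DB call (I/O)
        return [{"cell": cell_id, "row": i} for i in range(1000)]

    def process(rows):
        # stand-in for building SQLA/nova objects, serializing, etc. (CPU)
        return [dict(r, processed=True) for r in rows]

    pool = eventlet.GreenPool()
    threads = [pool.spawn(query_cell, c) for c in range(10)]

    start = time.time()
    results = []
    for t in threads:
        results.extend(process(t.wait()))  # gather + serial python work
    print(len(results), "rows in", round(time.time() - start, 2), "s")

If the processing dominates, the elapsed time stays close to the serial
case no matter how parallel the queries are.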

>  so, now the queries are in parallel, but the whole thing still seems
>  serial.

In your table, you show the time for "1 cell, 1000 instances" as ~3s and
"10 cells, 1000 instances" as 10s. The problem with comparing those
directly is that in the latter, you're actually pulling 10,000 records
over the network, into memory, processing them, and then just returning
the first 1000 from the sort. A closer comparison would be the "10
cells, 100 instances" with "1 cell, 1000 instances". In both of those
cases, you pull 1000 instances total from the db, into memory, and
return 1000 from the sort. In that case, the multi-cell situation is
faster (~2.3s vs. ~3.1s). You could also compare the "10 cells, 1000
instances" case to "1 cell, 10,000 instances" just to confirm at the
larger scale that it's better or at least the same.

We _have_ to pull $limit instances from each cell, in case (according to
the sort key) the first $limit instances are all in one cell. We _could_
try to batch the results from each cell to avoid loading so many that we
don't need, but we punted this as an optimization to be done later. I'm
not sure it's really worth the complexity at this point, but it's
something we could investigate.
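
As a toy illustration of the merge (made-up sort keys standing in for
instances already sorted within each cell):

    import heapq
    from itertools import islice

    limit = 5
    cell_results = {
        "cell1": [1, 2, 3, 4, 5],        # this cell holds the global top-5
        "cell2": [6, 7, 8, 9, 10],
        "cell3": [11, 12, 13, 14, 15],
    }

    # Each cell had to return its own top-$limit rows, because they might
    # all be the global winners; the API then merges the pre-sorted lists
    # and keeps only the first $limit.
    merged = list(islice(heapq.merge(*cell_results.values()), limit))
    print(merged)  # -> [1, 2, 3, 4, 5]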

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Personal tool patterns in .gitignore cookiecutter

2018-08-16 Thread Sean McGinnis
On Thu, Aug 16, 2018 at 06:24:22PM +, Jeremy Stanley wrote:
> In response to some recent but misguided proposals from well-meaning
> contributors in various projects, I've submitted a change[*] for the
> openstack-dev/cookiecutter .gitignore template inserting a comment
> which recommends against including patterns related to personal
> choices of tooling (arbitrary editors, IDEs, operating systems...).
> It includes one suggestion for a popular alternative (creating a
> personal excludesfile specific to the tools you use), but there are
> of course multiple ways it can be solved.
> 
> This is not an attempt to set policy, but merely provides a
> recommended default for new repositories in hopes that projects can
> over time reduce some of the noise related to unwanted .gitignore
> additions. If it merges, projects who disagree with this default can
> of course modify or remove the comment at the top of the file as
> they see fit when bootstrapping content for a new repository.
> Projects with existing repositories on which they'd like to apply
> this can also easily copy the comment text or port the patch.
> 
> If there seems to be some consensus that this change is appreciated,
> I'll remove the WIP flag and propose similar changes to our other
> cookiecutters for consistency.
> 
> [*] https://review.openstack.org/592520
> -- 
> Jeremy Stanley

The comments match my personal preference, and I do see it is just advisory, so
it is not mandating any policy that must be followed by all projects. I think
it is a good comment to include if for no other reason than to potentially
inform folks that there are other ways to address this than copying and pasting
the same change to every repo.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Personal tool patterns in .gitignore cookiecutter

2018-08-16 Thread Jeremy Stanley
In response to some recent but misguided proposals from well-meaning
contributors in various projects, I've submitted a change[*] for the
openstack-dev/cookiecutter .gitignore template inserting a comment
which recommends against including patterns related to personal
choices of tooling (arbitrary editors, IDEs, operating systems...).
It includes one suggestion for a popular alternative (creating a
personal excludesfile specific to the tools you use), but there are
of course multiple ways it can be solved.

This is not an attempt to set policy, but merely provides a
recommended default for new repositories in hopes that projects can
over time reduce some of the noise related to unwanted .gitignore
additions. If it merges, projects who disagree with this default can
of course modify or remove the comment at the top of the file as
they see fit when bootstrapping content for a new repository.
Projects with existing repositories on which they'd like to apply
this can also easily copy the comment text or port the patch.

If there seems to be some consensus that this change is appreciated,
I'll remove the WIP flag and propose similar changes to our other
cookiecutters for consistency.

[*] https://review.openstack.org/592520
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova-specs is open for Stein

2018-08-16 Thread melanie witt

Hey all,

Just wanted to give a quick heads up that the nova-specs repo [1] is now 
open for Stein spec proposals. Here's a link to the docs on the spec 
process:


https://specs.openstack.org/openstack/nova-specs/readme.html

Cheers,
-melanie

[1] https://github.com/openstack/nova-specs

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] User Committee Nominations Closing Soon!

2018-08-16 Thread Ed Leafe
As I write this, there are just over 12 hours left to get in your nominations 
for the OpenStack User Committee. Nominations close at August 17, 05:59 UTC.

If you are an AUC and thinking about running, what's stopping you? If you know
of someone who would make a great committee member, nominate them (with their
permission, of course)! Help make a difference for Operators, Users and the
Community!

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-08-16 Thread Ed Leafe
Greetings OpenStack community,

Another cozy meeting today, as conferences and summer holidays reduced our 
attendees. We mainly focused on  the agenda [7] for the upcoming Denver PTG 
[8]. One item we added was consideration of a spec for common healthcheck 
middleware across projects [9]. This had been proposed back in January, and 
seemed to have a lot of initial interest, but there hasn't been any activity on 
it since March. There does seem to be some interest in it still, but no one 
with enough free cycles to keep it updated. So we invite anyone who has an 
interest in this to come to the API-SIG session at the PTG on Monday, or, if 
you can't make it, add your comments to the review.

Two of the patches [10][11] introduced by cdent last week were deemed to not be 
changes to the guidelines, but rather minor additions, so given that they were 
approved by the cores, they were merged. The remaining patch [12] involves a 
lot more thought and discussion; it could be the subject of a book all by 
itself! But we'd like to keep it short and to the point. We also don't have 
time to write a book!

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* None

# API Guidelines Proposed for Freeze

* None

# Guidelines that are ready for wider review by the whole community.

* None

# Guidelines Currently Under Review [3]

* Add an api-design doc with design advice
  https://review.openstack.org/592003

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the 
OpenStack developer mailing list[1] with the tag "[api]" in the subject. In 
your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://etherpad.openstack.org/p/api-sig-stein-ptg
[8] https://www.openstack.org/ptg/
[9] https://review.openstack.org/#/c/531456/
[10] https://review.openstack.org/589131
[11] https://review.openstack.org/589132
[12] https://review.openstack.org/592003

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][Edge][FEMDC] Edge clouds and controlplane updates

2018-08-16 Thread Alan Bishop
On Tue, Aug 14, 2018 at 9:20 AM Bogdan Dobrelya  wrote:

> On 8/13/18 9:47 PM, Giulio Fidente wrote:
> > Hello,
> >
> > I'd like to get some feedback regarding the remaining
> > work for the split controlplane spec implementation [1]
> >
> > Specifically, while for some services like nova-compute it is not
> > necessary to update the controlplane nodes after an edge cloud is
> > deployed, for other services, like cinder (or glance, probably
> > others), it is necessary to do an update of the config files on the
> > controlplane when a new edge cloud is deployed.
> >
> > In fact for services like cinder or glance, which are hosted in the
> > controlplane, we need to pull data from the edge clouds (for example
> > the newly deployed ceph cluster keyrings and fsid) to configure cinder
> > (or glance) with a new backend.
> >
> > It looks like this demands some architectural changes to solve the
> > following two:
> >
> > - how do we trigger/drive updates of the controlplane nodes after the
> > edge cloud is deployed?
>
> Note, there is also a strict(?) requirement of local management
> capabilities for edge clouds temporarily disconnected from the central
> controlplane. That complicates the updates triggering even more. We'll
> need at least a notification-and-triggering system to perform the required
> state synchronizations, including conflict resolution. If that's the
> case, the architecture changes for the TripleO deployment framework are
> inevitable AFAICT.
>

This is another interesting point. I don't mean to disregard it, but want to
highlight the issue that Giulio and I (and others, I'm sure) are focused on.

As a cinder guy, I'll use cinder as an example. Cinder services running in the
control plane need to be aware of the storage "backends" deployed at the
Edge. So if a split-stack deployment includes edge nodes running a ceph
cluster, the cinder services need to be updated to add the ceph cluster as a
new cinder backend. So, not only is control plane data needed in order to
deploy an additional stack at the edge, data from the edge deployment needs
to be fed back into a subsequent stack update in the controlplane. Otherwise,
cinder (and other storage services) will have no way of utilizing ceph
clusters at the edge.

>
> > - how do we scale the controlplane parameters to accomodate for N
> > backends of the same type?
>

Yes, this is also a big problem for me. Currently, TripleO can deploy cinder
with multiple heterogeneous backends (e.g. one each of ceph, NFS, Vendor X,
Vendor Y, etc.). However, the current THT do not let you deploy multiple
instances of the same backend (e.g. more than one ceph). If the goal is to
deploy multiple edge nodes consisting of Compute+Ceph, then TripleO will need
the ability to deploy multiple homogeneous cinder backends. This requirement
will likely apply to glance and manila as well.
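
For a concrete picture, the kind of rendered configuration we would need (and
cannot currently express in THT for more than one instance of a backend type)
is roughly the following cinder.conf fragment; the backend names are made up,
and the option names follow cinder's RBD driver:

    [DEFAULT]
    enabled_backends = ceph-edge1,ceph-edge2

    [ceph-edge1]
    volume_backend_name = ceph-edge1
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf = /etc/ceph/edge1.conf
    rbd_pool = volumes
    rbd_user = openstack

    [ceph-edge2]
    volume_backend_name = ceph-edge2
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf = /etc/ceph/edge2.conf
    rbd_pool = volumes
    rbd_user = openstack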


> > A very rough approach to the latter could be to use jinja to scale up
> > the CephClient service so that we can have multiple copies of it in the
> > controlplane.
> >
> > Each instance of CephClient should provide the ceph config file and
> > keyring necessary for each cinder (or glance) backend.
> >
> > Also note that Ceph is only a particular example but we'd need a similar
> > workflow for any backend type.
> >
> > The etherpad for the PTG session [2] touches this, but it'd be good to
> > start this conversation before then.
> >
> > 1.
> >
> https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html
> >
> > 2.
> https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI is blocked

2018-08-16 Thread Wesley Hayutin
On Wed, Aug 15, 2018 at 10:13 PM Wesley Hayutin  wrote:

> On Wed, Aug 15, 2018 at 7:13 PM Alex Schultz  wrote:
>
>> Please do not approve or recheck anything until further notice. We've
>> got a few issues that have basically broken all the jobs.
>>
>> https://bugs.launchpad.net/tripleo/+bug/1786764
>
>
fix posted: https://review.openstack.org/#/c/592577/


>
>> https://bugs.launchpad.net/tripleo/+bug/1787226
>
>
Dupe of 1786764 


>
>> https://bugs.launchpad.net/tripleo/+bug/1787244
>
>
Fixed Released: https://review.openstack.org/592146


>
>> https://bugs.launchpad.net/tripleo/+bug/1787268
>
>
Proposed:
https://review.openstack.org/#/c/592233/
https://review.openstack.org/#/c/592275/



> https://bugs.launchpad.net/tripleo/+bug/1736950
>
> w
>

Will post a patch to skip the above tempest test.

Also, the patch to re-enable build-test-packages (the code that injects your
change into an RPM) is about to merge:
https://review.openstack.org/#/c/592218/

Thanks Steve, Alex, Jistr and others :)


>
>>
>> Thanks,
>> -Alex
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
>
> Wes Hayutin
>
> Associate MANAGER
>
> Red Hat
>
> 
>
> whayu...@redhat.com  T: +1919 <+19197544114> 4232509  IRC: weshay
> 
>
> View my calendar and check my availability for meetings HERE
> 
>
-- 

Wes Hayutin

Associate MANAGER

Red Hat



whayu...@redhat.com  T: +1919 <+19197544114> 4232509  IRC: weshay


View my calendar and check my availability for meetings HERE

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [community][Rocky] Save the Date: Community Meeting: Rocky + project updates

2018-08-16 Thread Anne Bertucio
Hi all,

Save the date for an OpenStack community meeting on August 30 at 3pm UTC. This 
is the evolution of the “Marketing Community Release Preview” meeting that 
we’ve had each cycle. While that meeting has always been open to all, we wanted 
to expand the topics and encourage anyone who was interested in getting updates 
on the Rocky release or the newer projects at OSF to attend. 

We’ll cover:
—What’s new in Rocky
(This info will still be at a fairly high level, so might not be new 
information if you’re someone who stays up to date in the dev ML or is actively 
involved in upstream work)

—Updates from Airship, Kata Containers, StarlingX, and Zuul

—What you can expect at the Berlin Summit in November

This meeting will be run over Zoom (look for info closer to the 30th) and will 
be recorded, so if you can’t make the time, don’t panic! 

Cheers,
Anne Bertucio
OpenStack Foundation
a...@openstack.org | irc: annabelleB





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-16 Thread Zane Bitter

On 15/08/18 16:34, Ben Nemec wrote:
Since there were no objections, I've added Zane to the oslo.service core 
team.  Thanks and welcome, Zane!


Thanks team! I'll try not to mess it up :)


On 08/03/2018 11:58 AM, Ben Nemec wrote:

Hi,

Zane has been doing some good work in oslo.service recently and I 
would like to add him to the core team.  I know he's got a lot on his 
plate already, but he has taken the time to propose and review patches 
in oslo.service and has demonstrated an understanding of the code.


Please respond with +1 or any concerns you may have.  Thanks.

-Ben

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-16 Thread Eric Fried
Thanks for this, gibi.

TL;DR: a).

I didn't look, but I'm pretty sure we're not caching allocations in the
report client. Today, nobody outside of nova (specifically the resource
tracker via the report client) is supposed to be mucking with instance
allocations, right? And given the global lock in the resource tracker,
it should be pretty difficult to race e.g. a resize and a delete in any
meaningful way. So short term, IMO it is reasonable to treat any
generation conflict as an error. No retries. Possible wrinkle on delete,
where it should be a failure unless forced.

Long term, I also can't come up with any scenario where it would be
appropriate to do a narrowly-focused GET+merge/replace+retry. But
implementing the above short-term plan shouldn't prevent us from adding
retries for individual scenarios later if we do uncover places where it
makes sense.

Here's some stream-of-consciousness that led me to the above opinions:

- On spawn, we send the allocation with a consumer gen of None because
we expect the consumer not to exist. If it exists, that should be a hard
fail. (Hopefully the only way this happens is a true UUID conflict.)

- On migration, when we create the migration UUID, ditto above ^

- On migration, when we transfer the allocations in either direction, a
conflict means someone managed to resize (or otherwise change
allocations?) since the last time we pulled data. Given the global lock
in the report client, this should have been tough to do. If it does
happen, I would think any retry would need to be done all the way back
at the claim, which I imagine is higher up than we should go. So again,
I think we should fail the migration and make the user retry.

- On destroy, a conflict again means someone managed a resize despite
the global lock. If I'm deleting an instance and something about it
changes, I would think I want the opportunity to reevaluate my decision
to delete it. That said, I would definitely want a way to force it (in
which case we can just use the DELETE call explicitly). But neither case
should be a retry, and certainly there is no destroy scenario where I
would want a "merging" of allocations to happen.
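
To make "treat it as an error" concrete, here is a minimal sketch of the
pattern against the placement REST API (microversion 1.28) -- plain `requests`
purely for illustration, auth and exact error handling simplified, and field
names per my reading of the placement API reference, so double-check before
relying on it:

    import requests

    PLACEMENT = "http://placement.example.com"  # illustrative endpoint
    HEADERS = {"OpenStack-API-Version": "placement 1.28",
               "X-Auth-Token": "..."}           # auth elided

    def clear_allocations(consumer_uuid):
        # Read the consumer's current allocations and generation.
        resp = requests.get("%s/allocations/%s" % (PLACEMENT, consumer_uuid),
                            headers=HEADERS)
        resp.raise_for_status()
        body = resp.json()

        # Try to empty the allocations using the generation we just read.
        payload = {"allocations": {},
                   "consumer_generation": body["consumer_generation"],
                   "project_id": body["project_id"],
                   "user_id": body["user_id"]}
        put = requests.put("%s/allocations/%s" % (PLACEMENT, consumer_uuid),
                           headers=HEADERS, json=payload)
        if put.status_code == 409:
            # Someone changed the allocations between our GET and PUT:
            # surface the conflict to the caller, no blind retry.
            raise RuntimeError("consumer generation conflict")
        put.raise_for_status()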

Thanks,
efried


On 08/16/2018 06:43 AM, Balázs Gibizer wrote:
> reformatted for readabiliy, sorry:
> 
> Hi,
> 
> tl;dr: To properly use consumer generation (placement 1.28) in Nova we
> need to decide how to handle consumer generation conflict from Nova
> perspective:
> a) Nova reads the current consumer_generation before the allocation
>   update operation and use that generation in the allocation update
>   operation.  If the allocation is changed between the read and the
>   update then nova fails the server lifecycle operation and let the
>   end user retry it.
> b) Like a) but in case of conflict nova blindly retries the
>   read-and-update operation pair couple of times and if only fails
>   the life cycle operation if run out of retries.
> c) Nova stores its own view of the allocation. When a consumer's
>   allocation needs to be modified then nova reads the current state
>   of the consumer from placement. Then nova combines the two
>   allocations to generate the new expected consumer state. In case
>   of generation conflict nova retries the read-combine-update
>   operation triplet.
> 
> Which way we should go now?
> 
> What should be or long term goal?
> 
> 
> Details:
> 
> There are plenty of affected lifecycle operations. See the patch series
> starting at [1].
> 
> For example:
> 
> The current patch[1] that handles the delete server case implements
> option b).  It simly reads the current consumer generation from
> placement and uses that to send a PUT /allocatons/{instance_uuid} with
> "allocations": {} in its body.
> 
> Here implementing option c) would mean that during server delete nova
> needs:
> 1) to compile its own view of the resource need of the server
>   (currently based on the flavor but in the future based on the
>   attached port's resource requests as well)
> 2) then read the current allocation of the server from placement
> 3) then subtract the server resource needs from the current allocation
>   and send the resulting allocation back in the update to placement
> 
> In the simple case this subtraction would result in an empty allocation
> sent to placement. Also in this simple case c) has the same effect as
> b) currently implementated in [1].
> 
> However if somebody outside of nova modifies the allocation of this
> consumer in a way that nova does not know about such changed resource
> need then b) and c) will result in different placement state after
> server delete.
> 
> I only know of one example, the change of neutron port's resource
> request while the port is attached. (Note, it is out of scope in the
> first step of bandwidth implementation.) In this specific example
> option c) can work if nova re-reads the port's resource request during
> delete when recalculates its own view of the server resource needs. But
> I don't know if 

Re: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2

2018-08-16 Thread Doug Hellmann
Excerpts from Andreas Jaeger's message of 2018-08-16 08:42:22 +0200:
> On 2018-08-16 07:38, Tony Breeds wrote:
> > On Thu, Aug 16, 2018 at 06:27:39AM +0200, Andreas Jaeger wrote:
> > 
> >> Ocata should be retired by now ;) Let's drop it...
> > 
> > *cough* extended maintenance *cough*  ;P
> 
> Ah, forget about that.
> 
> > So we don't need the Ocata docs to be rebuilt with this version?
> 
> Ocata uses older sphinx etc. It would be nice - but not sure about the
> effort,

We want *all* of the docs rebuilt with this version.

Is there any reason we can't uncap pbr, at least within the CI jobs?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] network isolation!!! do we still need to configure VLAN , CIDR, ... in network-environment.yaml

2018-08-16 Thread Samuel Monderer
Hi,

In Ocata we used the network environment file to configure network parameters
as follows:

  InternalApiNetCidr: '172.16.2.0/24'
  TenantNetCidr: '172.16.0.0/24'
  ExternalNetCidr: '192.168.204.0/24'
  # Customize the VLAN IDs to match the local environment
  InternalApiNetworkVlanID: 711
  TenantNetworkVlanID: 714
  ExternalNetworkVlanID: 204
  InternalApiAllocationPools: [{'start': '172.16.2.4', 'end':
'172.16.2.250'}]
  TenantAllocationPools: [{'start': '172.16.0.4', 'end': '172.16.0.250'}]
  # Leave room if the external network is also used for floating IPs
  ExternalAllocationPools: [{'start': '192.168.204.6', 'end':
'192.168.204.99'}]

In Queens, now that we use network_data.yaml, do we still need to set the
parameters above?

Samuel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-16 Thread Jiří Stránský

On 16.8.2018 07:39, Cédric Jeanneret wrote:



On 08/16/2018 12:10 AM, Jason E. Rist wrote:

On 08/15/2018 03:32 AM, Cédric Jeanneret wrote:

Dear Community,

As you may know, a move toward Podman as replacement of Docker is starting.

One of the issues with podman is the lack of daemon, precisely the lack
of a socket allowing to send commands and get a "computer formatted
output" (like JSON or YAML or...).

In order to work that out, Podman has added support for varlink¹, using
the "socket activation" feature in Systemd.

On my side, I would like to push forward the integration of varlink in
TripleO deployed containers, especially since it will allow the following:
# proper interface with Paunch (via python link)

# a way to manage containers from within specific containers (think
"healthcheck", "monitoring") by mounting the socket as a shared volume

# a way to get container statistics (think "metrics")

# a way, if needed, to get an ansible module being able to talk to
podman (JSON is always better than plain text)

# a way to secure the accesses to Podman management (we have to define
how varlink talks to Podman, maybe providing dedicated socket with
dedicated rights so that we can have dedicated users for specific tasks)

That said, I have some questions:
° Does any of you have some experience with varlink and podman interface?
° What do you think about that integration wish?
° Does any of you have concern with this possible addition?

Thank you for your feedback and ideas.

Have a great day (or evening, or whatever suits the time you're reading
this ;))!

C.


¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



How might this effect upgrades?


What exactly? addition of varlink, or the whole podman thingy? The
question was more about "varlink" than "podman" in fact - I should maybe
have worded things otherwise... ?


Varlink shouldn't be a problem as it's just an additive interface. 
Switching container runtime might be a bit difficult though :)


When running any upgrade, we stop any containers that need updating, and 
replace them with new ones. In theory we could just as well start the 
new ones using a different runtime, all we need is to keep the same bind 
mounts etc. What would need to be investigated is whether support for 
this (stopping on one runtime, starting on another) needs to be 
implemented directly into tools like Paunch and Pacemaker, or if we can 
handle this one-time scenario just with additional code in 
upgrade_tasks. It might be a combination of both.
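
As a rough illustration of that one-time swap (the image name and bind mounts
below are placeholders, and in practice this would go through Paunch rather
than raw CLI calls):

    docker stop nova_compute && docker rm nova_compute
    podman run --detach --name nova_compute \
        --volume /var/lib/config-data/nova:/var/lib/kolla/config_files:ro \
        --volume /var/log/containers/nova:/var/log/nova \
        <nova-compute-image>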


Problem might come with sidecar containers for Neutron, which generally 
don't like being restarted (it can induce data plane downtime). Advanced 
hackery might be needed on this front... :)


Either way i think we'd have to do some PoC of such migration before 
fully committing to it.


Jirka





-J





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-16 Thread Jiří Stránský

On 15.8.2018 11:32, Cédric Jeanneret wrote:

Dear Community,

As you may know, a move toward Podman as replacement of Docker is starting.

One of the issues with podman is the lack of daemon, precisely the lack
of a socket allowing to send commands and get a "computer formatted
output" (like JSON or YAML or...).

In order to work that out, Podman has added support for varlink¹, using
the "socket activation" feature in Systemd.

On my side, I would like to push forward the integration of varlink in
TripleO deployed containers, especially since it will allow the following:
# proper interface with Paunch (via python link)


"integration of varlink in TripleO deployed containers" sounds like we'd 
need to make some changes to the containers themselves, but is that the 
case? As i read the docs, it seems like a management API wrapper for 
Podman, so just an alternative interface to Podman CLI. I'd expect we'd 
use varlink from Paunch, but probably not from the containers 
themselves? (Perhaps that's what you meant, just making sure we're on 
the same page.)




# a way to manage containers from within specific containers (think
"healthcheck", "monitoring") by mounting the socket as a shared volume


I think healthchecks are currently quite Docker-specific, so we could 
have a Podman-specific alternative here. We should be careful about how 
much container runtime specificity we introduce and keep though, and 
we'll probably have to amend our tools (e.g. pre-upgrade validations 
[2]) to work with both, at least until we decide whether to really make 
a full transition to Podman or not.




# a way to get container statistics (think "metrics")

# a way, if needed, to get an ansible module being able to talk to
podman (JSON is always better than plain text)

# a way to secure the accesses to Podman management (we have to define
how varlink talks to Podman, maybe providing dedicated socket with
dedicated rights so that we can have dedicated users for specific tasks)

That said, I have some questions:
° Does any of you have some experience with varlink and podman interface?
° What do you think about that integration wish?
° Does any of you have concern with this possible addition?


I like it, but we should probably sync up with Podman community if they 
consider varlink a "supported" interface for controlling Podman, and 
it's not just an experiment which will vanish. To me it certainly looks 
like a much better programmable interface than composing CLI calls and 
parsing their output, but we should make sure Podman folks think so too :)


Thanks for looking into this

Jirka

[2] https://review.openstack.org/#/c/582502/



Thank you for your feedback and ideas.

Have a great day (or evening, or whatever suits the time you're reading
this ;))!

C.


¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] No meeting today

2018-08-16 Thread Telles Nobrega
Hi folks,

since a couple of our core reviewers are on PTO today we have decided not
to host a meeting today.

If you have any questions just ping us at #openstack-sahara

Thanks,
-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat Brasil  

Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 
 Red Hat is recognized as one of the best companies to work for in Brazil by
Great Place to Work.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-1, August 20-24

2018-08-16 Thread Sean McGinnis
The end is near!

Development Focus
-

Teams should be working on release critical bugs in preparation for the final
release candidate deadline this Thursday the 23rd.

Teams attending the PTG should also be preparing for those discussions and
capturing information in the etherpads:

https://wiki.openstack.org/wiki/PTG/Stein/Etherpads

General Information
---

Thursday, August 23 is the deadline for final Rocky release candidates. We will
then enter a quiet period until we tag the final release on August 29.

Actions
-

Watch for any translation patches coming through and merge them quickly. If
your project has a stable/rocky branch created, please make sure those patches
are also getting merged there. (Do not backport the ones from master)

Liaisons for projects with independent deliverables should import the release
history by preparing patches to openstack/releases.

Projects following the cycle-trailing model should be getting ready for the
cycle-trailing RC deadline coming up on August 30.

Please drop by #openstack-release with any questions or concerns about the
upcoming release.


Upcoming Deadlines & Dates
--

Final RC deadline: August 23
Rocky Release: August 29
Cycle trailing RC deadline: August 30
Cycle trailing Rocky release: November 28

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-16 Thread Jiří Stránský

On 16.8.2018 10:38, Steven Hardy wrote:

On Wed, Aug 15, 2018 at 10:48 PM, Jay Pipes  wrote:

On 08/15/2018 04:01 PM, Emilien Macchi wrote:


On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:

 More seriously here: there is an ongoing effort to converge the
 tools around containerization within Red Hat, and we, TripleO are
 interested to continue the containerization of our services (which
 was initially done with Docker & Docker-Distribution).
 We're looking at how these containers could be managed by k8s one
 day but way before that we plan to swap out Docker and join CRI-O
 efforts, which seem to be using Podman + Buildah (among other things).

I guess my wording wasn't the best but Alex explained way better here:

http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52

If I may have a chance to rephrase, I guess our current intention is to
continue our containerization and investigate how we can improve our tooling
to better orchestrate the containers.
We have a nice interface (openstack/paunch) that allows us to run multiple
container backends, and we're currently looking outside of Docker to see how
we could solve our current challenges with the new tools.
We're looking at CRI-O because it happens to be a project with a great
community, focusing on some problems that we, TripleO have been facing since
we containerized our services.

We're doing all of this in the open, so feel free to ask any question.



I appreciate your response, Emilien, thank you. Alex' responses to Jeremy on
the #openstack-tc channel were informative, thank you Alex.

For now, it *seems* to me that all of the chosen tooling is very Red Hat
centric. Which makes sense to me, considering Triple-O is a Red Hat product.


Just as a point of clarification - TripleO is an OpenStack project,
and yes there is a downstream product derived from it, but we could
e.g support multiple container backends in TripleO if there was
community interest in supporting that.

Also I think Alex already explained that fairly clearly in the IRC
link that this is initially about proving our existing abstractions
work to enable alternate container backends.


+1, and with my upgrade-centric hat on, we've had a fair share of 
trouble with Docker -- update of the daemon causing otherwise needless 
downtime of services and sometimes the data plane too. The most recent example I 
can think of is here [1][2] -- a satisfactory solution still doesn't 
exist. So my 2 cents: I am very interested in exploring alternative 
container runtimes, and daemon-less sounds to me like a promising direction.


Jirka

[1] https://bugs.launchpad.net/tripleo/+bug/1777146
[2] https://review.openstack.org/#/c/575758/1/puppet/services/docker.yaml



Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [freezer][tc] removing freezer from governance

2018-08-16 Thread Ivan Kolodyazhny
Hi Rong,

If your company uses Freezer in production clouds, maybe you can help the
community with supporting it?

I know that it could be a long and difficult decision, but it's the only way
you can be sure that the project stays alive.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Sat, Aug 4, 2018 at 5:33 AM, Rong Zhu  wrote:

> Hi, all
>
> I think backup/restore and disaster recovery is one of the important things in
> OpenStack, and our company (ZTE) has already integrated Freezer in our
> production. We have also built some features on top of Freezer, and we could
> push those features to the community. Could you give us a chance to take over
> Freezer in the Stein cycle? If there is still no progress, we could revisit
> this after the Stein cycle.
>
> Thank you for your consideration.
>
> --
> Thanks,
> Rong Zhu
>
> On Sat, Aug 4, 2018 at 3:16 AM Doug Hellmann 
> wrote:
>
>> Based on the fact that the Freezer team missed the Rocky release and
>> Stein PTL elections, I have proposed a patch to remove the project from
>> governance. If the project is still being actively maintained and
>> someone wants to take over leadership, please let us know here in this
>> thread or on the patch.
>>
>> Doug
>>
>> https://review.openstack.org/#/c/588645/
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-16 Thread Ivan Kolodyazhny
Hi all,

Auto-created bugs could help in this effort. Honestly, nobody can say whether
it will work or not before we try it.

From a Horizon perspective, we need some solution that helps us learn about
new features that would be good to add to the UI in the future. We started
with an etherpad [1] as a first step for now.

[1] https://etherpad.openstack.org/p/horizon-feature-gap

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Thu, Aug 2, 2018 at 11:38 PM, Jeremy Stanley  wrote:

> On 2018-08-02 14:16:10 -0500 (-0500), Sean McGinnis wrote:
> [...]
> > Interesting... I hadn't looked into Gerrit functionality enough to know
> about
> > these. Looks like this is probably what you are referring to?
> >
> > https://gerrit.googlesource.com/plugins/its-storyboard/
>
> Yes, that. Khai Do (zaro) did the bulk of the work implementing it
> for us but isn't around as much these days (we miss you!).
>
> > It's been awhile since I did anything significant with Java, but that
> might be
> > an option. Maybe a fun weekend project at least to see what it would
> take to
> > create an its-launchpad plugin.
> [...]
>
> Careful; if you let anyone know you've touched a Gerrit plug-in the
> requests for more help will never end.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-16 Thread Balázs Gibizer

reformatted for readability, sorry:

Hi,

tl;dr: To properly use consumer generation (placement 1.28) in Nova we
need to decide how to handle consumer generation conflict from Nova
perspective:
a) Nova reads the current consumer_generation before the allocation
  update operation and use that generation in the allocation update
  operation.  If the allocation is changed between the read and the
  update then nova fails the server lifecycle operation and let the
  end user retry it.
b) Like a) but in case of conflict nova blindly retries the
  read-and-update operation pair couple of times and if only fails
  the life cycle operation if run out of retries.
c) Nova stores its own view of the allocation. When a consumer's
  allocation needs to be modified then nova reads the current state
  of the consumer from placement. Then nova combines the two
  allocations to generate the new expected consumer state. In case
  of generation conflict nova retries the read-combine-update
  operation triplet.

Which way we should go now?

What should be our long term goal?


Details:

There are plenty of affected lifecycle operations. See the patch series
starting at [1].

For example:

The current patch[1] that handles the delete server case implements
option b).  It simply reads the current consumer generation from
placement and uses that to send a PUT /allocations/{instance_uuid} with
"allocations": {} in its body.

Here implementing option c) would mean that during server delete nova
needs:
1) to compile its own view of the resource need of the server
  (currently based on the flavor but in the future based on the
  attached port's resource requests as well)
2) then read the current allocation of the server from placement
3) then subtract the server resource needs from the current allocation
  and send the resulting allocation back in the update to placement
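
A toy sketch of step 3), just to make the "subtract" concrete (the allocation
format is simplified here to {rp_uuid: {resource_class: amount}}; this is
illustrative only, not nova code):

    def subtract_allocations(current, own_view):
        # Remove nova's own view of the server's resource needs from the
        # allocation read back from placement; whatever remains was put
        # there by someone else and is preserved.
        result = {}
        for rp_uuid, resources in current.items():
            remaining = {}
            for rc, amount in resources.items():
                left = amount - own_view.get(rp_uuid, {}).get(rc, 0)
                if left > 0:
                    remaining[rc] = left
            if remaining:
                result[rp_uuid] = remaining
        return result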

In the simple case this subtraction would result in an empty allocation
sent to placement. Also in this simple case c) has the same effect as
b) currently implemented in [1].

However if somebody outside of nova modifies the allocation of this
consumer in a way that nova does not know about such changed resource
need then b) and c) will result in different placement state after
server delete.

I only know of one example, the change of neutron port's resource
request while the port is attached. (Note, it is out of scope in the
first step of bandwidth implementation.) In this specific example
option c) can work if nova re-reads the port's resource request during
delete when it recalculates its own view of the server resource needs. But
I don't know if every other resource (e.g.  accelerators) used by a
server can be / will be handled this way.


Other examples of affected lifecycle operations:

During a server migration moving the source host allocation from the
instance_uuid to the migration_uuid fails with consumer generation
conflict because of the instance_uuid consumer generation. [2]

Confirming a migration fails as the deletion of the source host
allocation fails due to the consumer generation conflict of the
migration_uuid consumer that is being emptied.[3]

During scheduling of a new server putting allocation to instance_uuid
fails as the scheduler assumes that it is a new consumer and therefore
uses consumer_generation: None for the allocation, but placement
reports generation conflict. [4]

During a non-forced evacuation the scheduler tries to claim the
resource on the destination host with the instance_uuid, but that
consumer already holds the source allocation therefore the scheduler
cannot assume that the instance_uuid is a new consumer. [4]


[1] https://review.openstack.org/#/c/591597
[2] https://review.openstack.org/#/c/591810
[3] https://review.openstack.org/#/c/591811
[4] https://review.openstack.org/#/c/583667






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] how nova should behave when placement returns consumer generation conflict

2018-08-16 Thread Balázs Gibizer

Hi,

tl;dr: To properly use consumer generation (placement 1.28) in Nova we
need to decide how to handle consumer generation conflict from Nova
perspective:
a) Nova reads the current consumer_generation before the allocation
   update operation and use that generation in the allocation update
   operation. If the allocation is changed between the read and the
   update then nova fails the server lifecycle operation and let the
   end user retry it.
b) Like a) but in case of conflict nova blindly retries the
   read-and-update operation pair couple of times and if only fails
   the life cycle operation if run out of retries.
c) Nova stores its own view of the allocation. When a consumer's
   allocation needs to be modified then nova reads the current state
   of the consumer from placement. Then nova combines the two
   allocations to generate the new expected consumer state. In case
   of generation conflict nova retries the read-combine-update
   operation triplet.

Which way we should go now?

What should be our long term goal?


Details:

There are plenty of affected lifecycle operations. See the patch series
starting at [1].

For example:

The current patch[1] that handles the delete server case implements option b).
It simply reads the current consumer generation from placement and uses that to
send a PUT /allocations/{instance_uuid} with "allocations": {} in its body.

Here implementing option c) would mean that during server delete nova needs:
1) to compile its own view of the resource need of the server (currently based
   on the flavor but in the future based on the attached port's resource
   requests as well)
2) then read the current allocation of the server from placement
3) then subtract the server resource needs from the current allocation and
   send the resulting allocation back in the update to placement

In the simple case this subtraction would result in an empty allocation sent to
placement. Also in this simple case c) has the same effect as b) currently
implemented in [1].

However if somebody outside of nova modifies the allocation of this consumer in
a way that nova does not know about such changed resource need then b) and c)
will result in different placement state after server delete.

I only know of one example, the change of neutron port's resource request while
the port is attached. (Note, it is out of scope in the first step of bandwidth
implementation.) In this specific example option c) can work if nova re-reads
the port's resource request during delete when it recalculates its own view of
the server resource needs. But I don't know if every other resource (e.g.
accelerators) used by a server can be / will be handled this way.


Other examples of affected lifecycle operations:

During a server migration moving the source host allocation from the
instance_uuid to the migration_uuid fails with consumer generation conflict
because of the instance_uuid consumer generation. [2]

Confirming a migration fails as the deletion of the source host allocation
fails due to the consumer generation conflict of the migration_uuid consumer
that is being emptied. [3]

During scheduling of a new server putting allocation to instance_uuid fails as
the scheduler assumes that it is a new consumer and therefore uses
consumer_generation: None for the allocation, but placement reports a
generation conflict. [4]

During a non-forced evacuation the scheduler tries to claim the resource on the
destination host with the instance_uuid, but that consumer already holds the
source allocation, therefore the scheduler cannot assume that the instance_uuid
is a new consumer. [4]


Cheers,
gibi

[1] https://review.openstack.org/#/c/591597
[2] https://review.openstack.org/#/c/591810
[3] https://review.openstack.org/#/c/591811
[4] https://review.openstack.org/#/c/583667




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG

2018-08-16 Thread Tobias Rydberg

Hi folks,

Time for a new meeting for the Public Cloud WG. Agenda draft can be 
found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to 
add items to that list.


See you all later this afternoon at IRC 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators][nova] ask deployment question

2018-08-16 Thread Rambo
Hi all,
   I have some questions about deploying a large-scale OpenStack cloud, such
as:
   1. In a single-region deployment, what is the biggest deployment scale (in
physical machines) in our community?
   Can you tell me more about this, combined with your own practice? Could you
point me to some resources to learn from, such as websites, blogs and so on?
Thank you very much! Looking forward to hearing from you.

Best Regards
Rambo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][qa] QA Rocky release status

2018-08-16 Thread Ghanshyam Mann
Hi All,

QA has a lot of sub-projects and this mail is to track their release status for
the Rocky cycle. I will be on vacation from this coming Monday for the next 2
weeks (visiting India) but will be online to complete the IN-PROGRESS items
below and update the status here.

IN-PROGRESS: 

1. devstack: Branch. Patch is pushed to branch for Rocky which is in hold 
state - IN-PROGRESS [1]

2. grenade: Branch. Patch is pushed to branch for Rocky which is in hold 
state - IN-PROGRESS [1]

3. patrole: Release done, patch is under review[2] - IN-PROGRESS

4. tempest: Release done, patch is under review[3] - IN-PROGRESS

COMPLETED (Done or no release required): 

5. bashate: independent release | Branch-less.  version 0.6.0 is released 
last month and no further release required in Rocky cycle.  - COMPLETED

6. coverage2sql: Branch-less.  Not any release yet and no specific release 
required for Rocky too. - COMPLETED 
  
7. devstack-plugin-ceph: Branch-less. Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

8. devstack-plugin-cookiecutter: Branch-less. Not any release yet and no 
specific release required for Rocky. - COMPLETED 

9. devstack-tools: Branch-less. version 0.4.0 is the latest version 
released and no further release required in Rocky cycle.  - COMPLETED

10. devstack-vagrant: Branch-less.  Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

11. eslint-config-openstack: Branch-less. version 4.0.1 is the latest 
version released. no further release required in Rocky cycle.  - COMPLETED

12. hacking: Branch-less. version 11.1.0 is the latest version released. no 
further release required in Rocky cycle.  - COMPLETED

13. karma-subunit-reporter: Branch-less. version v0.0.4 is the latest 
version released. no further release required in Rocky cycle.  - COMPLETED

14. openstack-health: Branch-less.  Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

15. os-performance-tools: Branch-less.  Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

16. os-testr: Branch-less. version 1.0.0 is the latest version released. no 
further release required in Rocky cycle.  - COMPLETED

17. qa-specs: Spec repo, no release needed. - COMPLETED

18. stackviz: Branch-less.  Not any release yet and no specific release 
required for Rocky too. - COMPLETED 

19. tempest-plugin-cookiecutter: Branch-less.  Not any release yet and no 
specific release required for Rocky too. - COMPLETED

20. tempest-lib: Deprecated repo, No released needed for Rocky - COMPLETED

21. tempest-stress: Branch-less.  Not any release yet and no specific 
release required for Rocky too. - COMPLETED

22. devstack-plugin-container: Branch. Release and Branched done[4] - 
COMPLETED


[1] 
https://review.openstack.org/#/q/topic:rocky-branch-devstack-grenade+(status:open+OR+status:merged)
 
[2] https://review.openstack.org/#/c/592277/
[3] https://review.openstack.org/#/c/592276/
[4] https://review.openstack.org/#/c/591804/ 

-gmann



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] A multi-cell instance-list performance test

2018-08-16 Thread Yikun Jiang
Some more information:
*1. How did we record the time when listing?*
You can see all our changes in:
http://paste.openstack.org/show/728162/

Total cost:  L26
Construct view: L43
Data gather per cell cost:   L152
Data gather all cells cost: L174
Merge Sort cost: L198

*2. Why is it not parallel in the first result?*
The root reason the data gathering in the first table is not parallel is that
we don't enable eventlet.monkey_patch (in particular, the time flag is not
True) under the uwsgi mode.

Then the oslo_db thread yield [2] doesn't work, and all DB data gathering
threads are blocked until they get all their data from the DB [1].

Finally, the gathering process is effectively executed serially, so we fix it
in [2].

But after the fix [2] it still does not improve as much as we expected; it
looks like the threads influence each other, so we need your ideas. : )

[1]
https://github.com/openstack/oslo.db/blob/256ebc3/oslo_db/sqlalchemy/engines.py#L51
[2] https://review.openstack.org/#/c/592285/
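
A tiny standalone illustration of the monkey_patch effect (not nova code): with
time=True the ten greenthreads below sleep concurrently and the loop finishes
in about 1 second; without it the sleeps do not yield and run back to back:

    import eventlet
    eventlet.monkey_patch(time=True)  # flip to time=False to compare

    import time
    from eventlet.greenpool import GreenPool

    def fake_db_call(cell):
        time.sleep(1)  # stands in for a blocking per-cell DB query
        return cell

    pool = GreenPool()
    start = time.time()
    results = list(pool.imap(fake_db_call, range(10)))
    print("gathered %d results in %.1fs" % (len(results), time.time() - start))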

Regards,
Yikun

Jiang Yikun(Kero)
Mail: yikunk...@gmail.com


Zhenyu Zheng  wrote on Thu, Aug 16, 2018 at 3:54 PM:

> Hi, Nova
>
> As the Cells v2 architecture is getting mature, and CERN has used it and it
> seems to work well, *Huawei* is also willing to consider using it in our
> Public Cloud deployments.
> As we still have concerns about the performance when doing multi-cell
> listing, recently *Yikun Jiang* and I have done a performance test for
> ``instance list`` across a multi-cell deployment, and we would like to share
> our test results and findings.
>
> First, I want to point out our testing environment. As we (Yikun and I) are
> doing this as a concept test (to show the ratio between the time spent
> querying data from the DB, sorting, etc.), we are doing it on our own
> machine. The machine has 16 CPUs and 80 GB RAM, and as it is old, the disk
> might be slow. So we will not be judging the time consumption data itself,
> but the overall logic and the ratios between the different steps. We are
> doing it with a devstack deployment on this single machine.
>
> Then I would like to share our test plan: we will set up 10 cells
> (cell1~cell10) and generate 10,000 instance records in each of those cells
> (considering 20 instances per host, that would be about 500 hosts, which
> seems a good size for a cell). cell0 is kept empty, as the number of errored
> instances should be very small and it doesn't really matter.
> We will test the time consumption for listing instances across 1, 2, 5, and
> 10 cells (cell0 will always be queried, so it is actually 2, 3, 6 and 11
> cells) with limits of 100, 200, 500 and 1000, as the default maximum limit
> is 1000. In order to get more general results, we tested the list with the
> default sort key and direction, sorting by instance_uuid, and sorting by
> uuid & name.
>
> This is what we got (the time unit is seconds):
>
> Default sort
>
>   Cell Num | Limit | Total Cost | Data Gather Cost | Merge Sort Cost | Construct View
>   ---------+-------+------------+------------------+-----------------+----------------
>         10 |   100 |     2.3313 |           2.1306 |          0.1145 |         0.0672
>         10 |   200 |     3.5979 |           3.2137 |          0.2287 |         0.1265
>         10 |   500 |     7.1952 |           6.2597 |          0.5704 |         0.3029
>         10 |  1000 |    13.5745 |          11.7012 |          1.1511 |         0.5966
>          5 |   100 |     1.3142 |           1.1003 |          0.1163 |         0.0706
>          5 |   200 |     2.0151 |           1.6063 |          0.2645 |         0.1255
>          5 |   500 |     4.2109 |           3.1358 |          0.7033 |         0.3343
>          5 |  1000 |     7.841  |           5.8881 |          1.2027 |         0.6802
>          2 |   100 |     0.6736 |           0.4727 |          0.1113 |         0.0822
>          2 |   200 |     1.1226 |           0.7229 |          0.2577 |         0.1255
>          2 |   500 |     2.2358 |           1.3506 |          0.5595 |         0.3026
>          2 |  1000 |     4.2079 |           2.3367 |          1.2053 |         0.5986
>          1 |   100 |     0.4857 |                  |                 |
>
> uuid sort
>
>   Cell Num | Limit | Total Cost | Data Gather Cost | Merge Sort Cost | Construct View
>   ---------+-------+------------+------------------+-----------------+----------------
>         10 |   100 |     2.3693 |           2.1343 |          0.1148 |         0.1016
>         10 |   200 |     3.5316 |           3.1509 |          0.2265 |         0.1255
>         10 |   500 |     7.5057 |           6.4761 |          0.6263 |         0.341
>         10 |  1000 |    13.8408 |          11.9007 |          1.2268 |         0.5939
>          5 |   100 |     1.2458 |           1.0498 |          0.1163 |         0.0665
>          5 |   200 |     1.9866 |           1.5386 |          0.2668 |         0.1615
>          5 |   500 |     4.1605 |           3.0893 |          0.6951 |         0.3384
>          5 |  1000 |     7.7135 |           5.9121 |          1.1363 |         0.5969
>          2 |   100 |     0.605  |           0.4192 |          0.1105 |         0.0656
>          2 |   200 |     1.0268 |           0.6671 |          0.2255 |         0.1254
>          2 |   500 |     2.3307 |           1.2748 |          0.6581 |         0.3362
>          2 |  1000 |     4.2384 |           2.4071 |          1.2017 |         0.633
>
> uuid+name sort
>
>   Cell Num | Limit | Total Cost | Data Gather Cost | Merge Sort Cost | Construct View
>   ---------+-------+------------+------------------+-----------------+----------------
>         10 |   100 |     2.3284 |           2.1264 |          0.1145 |         0.0679
>         10 |   200 |     3.481  |           3.054  |          0.2697 |         0.1284
>         10 |   500 |     7.4885 |           6.4623 |          0.6239 |         0.3404
>         10 |  1000 |    13.8813 |          11.913  |          1.2301 |         0.6187
>          5 |   100 |     1.2528 |           1.0579 |          0.1161 |         0.066
>          5 |   200 |     2.0352 |           1.6246 |          0.2646 |         0.1262
>          5 |   500 |     4.1972 |           3.2461 |          0.6104 |         0.3028
>          5 |  1000 |     7.8377 |           5.9385 |          1.1936 |         0.6376
>          2 |   100 |     0.688  |           0.4613 |          0.1126 |         0.0682
>          2 |   200 |     1.2805 |           0.8171 |          0.      |        0.1258
>          2 |   500 |     2.741  |           1.6023 |          0.633  |         0.3365
>          2 |  1000 |     4.3437 |           2.4136 |          1.217  |         0.6394

Re: [openstack-dev] [docs][nova] about update flavor

2018-08-16 Thread Zhenyu Zheng
I mean this https://review.openstack.org/#/c/491442/ and the related ML

http://lists.openstack.org/pipermail/openstack-dev/2017-August/120540.html


On Thu, Aug 16, 2018 at 4:30 PM Rambo  wrote:

> Sorry, I don't understand what has been removed - the docs about updating
> the flavor's CPU? Otherwise, why don't we consider adding the ability to
> update the flavor's CPU?
>
>
> -- Original --
> *From:* "Zhenyu Zheng";
> *Date:* 2018年8月16日(星期四) 下午3:56
> *To:* "OpenStack Developmen";
> *Subject:* Re: [openstack-dev] [docs][nova] about update flavor
>
> We only allow updating flavor descriptions (added in microversion 2.55) in
> Nova; what Horizon did was simply delete the old flavor and create a new
> one, and I think that behaviour was removed last year.
>
> On Thu, Aug 16, 2018 at 3:19 PM Rambo  wrote:
>
>> Hi, all
>>
>>   The documentation on doc.openstack.org [1] says we can update the
>> flavor name, VCPUs, RAM, root disk, ephemeral disk and so on, but in fact
>> only the flavor properties can be changed. Is the document wrong? Can you
>> tell me more about this? Thank you very much.
>>
>> [1]https://docs.openstack.org/horizon/latest/admin/manage-flavors.html
>>
>>
>>
>>
>>
>>
>>
>>
>> Best Regards
>> Rambo
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-16 Thread Steven Hardy
On Wed, Aug 15, 2018 at 10:48 PM, Jay Pipes  wrote:
> On 08/15/2018 04:01 PM, Emilien Macchi wrote:
>>
>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi wrote:
>>
>> More seriously here: there is an ongoing effort to converge the
>> tools around containerization within Red Hat, and we, TripleO are
>> interested to continue the containerization of our services (which
>> was initially done with Docker & Docker-Distribution).
>> We're looking at how these containers could be managed by k8s one
>> day but way before that we plan to swap out Docker and join CRI-O
>> efforts, which seem to be using Podman + Buildah (among other things).
>>
>> I guess my wording wasn't the best but Alex explained way better here:
>>
>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52
>>
>> If I may have a chance to rephrase, I guess our current intention is to
>> continue our containerization and investigate how we can improve our tooling
>> to better orchestrate the containers.
>> We have a nice interface (openstack/paunch) that allows us to run multiple
>> container backends, and we're currently looking outside of Docker to see how
>> we could solve our current challenges with the new tools.
>> We're looking at CRI-O because it happens to be a project with a great
>> community, focusing on some problems that we, TripleO have been facing since
>> we containerized our services.
>>
>> We're doing all of this in the open, so feel free to ask any question.
>
>
> I appreciate your response, Emilien, thank you. Alex' responses to Jeremy on
> the #openstack-tc channel were informative, thank you Alex.
>
> For now, it *seems* to me that all of the chosen tooling is very Red Hat
> centric. Which makes sense to me, considering Triple-O is a Red Hat product.

Just as a point of clarification - TripleO is an OpenStack project,
and yes, there is a downstream product derived from it, but we could,
for example, support multiple container backends in TripleO if there
were community interest in doing so.

Also, I think Alex already explained fairly clearly in the IRC link
that this is initially about proving that our existing abstractions
work, in order to enable alternate container backends.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][nova] about update flavor

2018-08-16 Thread Rambo
Sorry, I don't understand what has been removed - the docs about updating the 
flavor's CPU? Otherwise, why don't we consider adding the ability to update 
the flavor's CPU?
 
 
-- Original --
From: "Zhenyu Zheng"; 
Date: 2018年8月16日(星期四) 下午3:56
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [docs][nova] about update flavor

 
We only allow updating flavor descriptions (added in microversion 2.55) in 
Nova; what Horizon did was simply delete the old flavor and create a new one, 
and I think that behaviour was removed last year.

On Thu, Aug 16, 2018 at 3:19 PM Rambo  wrote:

Hi, all


  The documentation on doc.openstack.org [1] says we can update the flavor 
name, VCPUs, RAM, root disk, ephemeral disk and so on, but in fact only the 
flavor properties can be changed. Is the document wrong? Can you tell me more 
about this? Thank you very much.


[1]https://docs.openstack.org/horizon/latest/admin/manage-flavors.html

Best Regards
Rambo


__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] QA work

2018-08-16 Thread Ameet Gandhare
Hi Everybody,

Does anybody want some QA/testing work done on their modules?
-Regards,
Ameet
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][nova] about update flavor

2018-08-16 Thread Zhenyu Zheng
We only allow updating flavor descriptions (added in microversion 2.55) in
Nova; what Horizon did was simply delete the old flavor and create a new
one, and I think that behaviour was removed last year.
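
For reference, a rough sketch of the only flavor update the compute API
accepts - setting the description at microversion 2.55 or later, as I
understand it. The endpoint URL, token and flavor id below are placeholders,
not values from this thread:

import requests

NOVA_ENDPOINT = "http://controller/compute/v2.1"  # placeholder endpoint
TOKEN = "gAAAA..."                                # placeholder Keystone token
FLAVOR_ID = "42"                                  # placeholder flavor id

# Only the description can be updated in place; vcpus/ram/disk changes
# still require creating a new flavor and deleting the old one.
resp = requests.put(
    f"{NOVA_ENDPOINT}/flavors/{FLAVOR_ID}",
    headers={"X-Auth-Token": TOKEN,
             "OpenStack-API-Version": "compute 2.55"},
    json={"flavor": {"description": "general purpose, 4 vCPU / 8 GB"}},
)
resp.raise_for_status()
print(resp.json()["flavor"]["description"])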

On Thu, Aug 16, 2018 at 3:19 PM Rambo  wrote:

> Hi, all
>
>   The documentation on doc.openstack.org [1] says we can update the
> flavor name, VCPUs, RAM, root disk, ephemeral disk and so on, but in fact
> only the flavor properties can be changed. Is the document wrong? Can you
> tell me more about this? Thank you very much.
>
> [1]https://docs.openstack.org/horizon/latest/admin/manage-flavors.html
>
>
>
>
>
>
>
>
> Best Regards
> Rambo
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] A multi-cell instance-list performance test

2018-08-16 Thread Zhenyu Zheng
Hi, Nova

As the Cells v2 architecture is getting mature, and CERN has used it and it
seems to work well, *Huawei* is also willing to consider using it in our
Public Cloud deployments.
As we still have concerns about the performance of multi-cell listing,
*Yikun Jiang* and I recently ran a performance test for ``instance list``
across a multi-cell deployment, and we would like to share our test results
and findings.

First, I want to describe our testing environment. We (Yikun and I) are
doing this as a proof of concept (to show the ratio between the time spent
querying data from the DB, sorting, and so on), so we ran it on our own
machine. The machine has 16 CPUs and 80 GB RAM; as it is old, the disk
might be slow. So we will not judge the absolute time figures themselves,
but rather the overall logic and the ratios between the different steps.
We are doing it with a devstack deployment on this single machine.

Then I would like to share our test plan. We set up 10 cells (cell1~cell10)
and generated 10,000 instance records in each cell (at 20 instances per
host, that corresponds to about 500 hosts, which seems a good size for a
cell). cell0 is kept empty, as the number of errored instances should be
very small and it doesn't really matter here.
We measured the time taken to list instances across 1, 2, 5 and 10 cells
(cell0 is always queried, so it is actually 2, 3, 6 and 11 cells) with
limits of 100, 200, 500 and 1000, as the default maximum limit is 1000.
To get more general results, we tested the listing with the default sort
key and direction, sorted by instance_uuid, and sorted by uuid & name.

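For clarity, the kind of call we timed for each cell-count/limit/sort
combination looks roughly like the sketch below. NOVA_ENDPOINT and TOKEN are
placeholders for a valid compute endpoint and Keystone token, and the sort
parameters shown are just one of the combinations we used:

import os
import time

import requests

NOVA_ENDPOINT = os.environ["NOVA_ENDPOINT"]  # placeholder, e.g. the devstack compute endpoint
TOKEN = os.environ["TOKEN"]                  # placeholder Keystone token

def timed_list(limit, sort_key="uuid", sort_dir="asc"):
    # List servers across all projects with an explicit limit and sort order,
    # and measure the wall-clock time of the whole API round trip.
    params = {"all_tenants": 1, "limit": limit,
              "sort_key": sort_key, "sort_dir": sort_dir}
    start = time.monotonic()
    resp = requests.get(f"{NOVA_ENDPOINT}/servers/detail",
                        params=params, headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()
    return time.monotonic() - start, len(resp.json()["servers"])

for limit in (100, 200, 500, 1000):
    elapsed, count = timed_list(limit)
    print(f"limit={limit}: returned {count} servers in {elapsed:.4f}s")
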
This is what we got (all times are in seconds):

Cells  Limit |          Default sort           |            uuid sort            |          uuid+name sort
             |  Total  Gather   Merge    View  |  Total  Gather   Merge    View  |  Total  Gather   Merge    View
-------------+---------------------------------+---------------------------------+--------------------------------
   10    100 |  2.3313  2.1306  0.1145  0.0672 |  2.3693  2.1343  0.1148  0.1016 |  2.3284  2.1264  0.1145  0.0679
   10    200 |  3.5979  3.2137  0.2287  0.1265 |  3.5316  3.1509  0.2265  0.1255 |  3.481   3.054   0.2697  0.1284
   10    500 |  7.1952  6.2597  0.5704  0.3029 |  7.5057  6.4761  0.6263  0.341  |  7.4885  6.4623  0.6239  0.3404
   10   1000 | 13.5745 11.7012  1.1511  0.5966 | 13.8408 11.9007  1.2268  0.5939 | 13.8813 11.913   1.2301  0.6187
    5    100 |  1.3142  1.1003  0.1163  0.0706 |  1.2458  1.0498  0.1163  0.0665 |  1.2528  1.0579  0.1161  0.066
    5    200 |  2.0151  1.6063  0.2645  0.1255 |  1.9866  1.5386  0.2668  0.1615 |  2.0352  1.6246  0.2646  0.1262
    5    500 |  4.2109  3.1358  0.7033  0.3343 |  4.1605  3.0893  0.6951  0.3384 |  4.1972  3.2461  0.6104  0.3028
    5   1000 |  7.841   5.8881  1.2027  0.6802 |  7.7135  5.9121  1.1363  0.5969 |  7.8377  5.9385  1.1936  0.6376
    2    100 |  0.6736  0.4727  0.1113  0.0822 |  0.605   0.4192  0.1105  0.0656 |  0.688   0.4613  0.1126  0.0682
    2    200 |  1.1226  0.7229  0.2577  0.1255 |  1.0268  0.6671  0.2255  0.1254 |  1.2805  0.8171  0.      0.1258
    2    500 |  2.2358  1.3506  0.5595  0.3026 |  2.3307  1.2748  0.6581  0.3362 |  2.741   1.6023  0.633   0.3365
    2   1000 |  4.2079  2.3367  1.2053  0.5986 |  4.2384  2.4071  1.2017  0.633  |  4.3437  2.4136  1.217   0.6394
    1    100 |  0.4857  0.2869  0.1097  0.069  |  0.4205  0.233   0.1131  0.0672 |  0.6372  0.3305  0.196   0.0681
    1    200 |  0.6835  0.3236  0.2212  0.1256 |  0.      0.3754  0.261   0.13   |  0.9245  0.4527  0.227   0.129
    1    500 |  1.5848  0.6415  0.6251  0.3043 |  1.6472  0.6554  0.6292  0.3053 |  1.9455  0.8201  0.5918  0.3447
    1   1000 |  3.1692  1.2124  1.2246  0.6762 |  3.0836  1.2286  1.2055  0.643  |  3.0991  1.2248  1.2615  0.6028

(Gather = Data Gather Cost, Merge = Merge Sort Cost, View = Construct View Cost)

Our conclusions from the data are:
1. The time consumed by the *MERGE SORT* step correlates strongly with the
*LIMIT*, and seems *not* to be affected by the *number of cells*;
2. The major share of the overall time is actually spent in the data
gathering step, so we took a closer look at that.

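To illustrate point 1, a toy sketch (not Nova's actual code) of merging
per-cell result pages that are each already sorted by the sort key: the lazy
merge only has to walk roughly `limit` records in total, regardless of how
many cells contributed a page.

import heapq
import itertools
import operator

def merge_sorted_cell_results(per_cell_results, sort_key, limit):
    # Each element of per_cell_results is a list of dicts already sorted by
    # sort_key (ascending). heapq.merge consumes the lists lazily, so cutting
    # the output at `limit` keeps the merge cost proportional to the limit.
    merged = heapq.merge(*per_cell_results, key=operator.itemgetter(sort_key))
    return list(itertools.islice(merged, limit))

# toy data: two "cells", each returning a sorted page of results
cells = [
    [{"uuid": "aaa"}, {"uuid": "ccc"}, {"uuid": "eee"}],
    [{"uuid": "bbb"}, {"uuid": "ddd"}, {"uuid": "fff"}],
]
print(merge_sorted_cell_results(cells, "uuid", 4))  # -> aaa, bbb, ccc, ddd
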
We added some audit logging in the code, and from the logs we can see:

02:24:53.376705 db begin, nova_cell0
02:24:53.425836 db end, nova_cell0: 0.0487968921661
02:24:53.426622 db begin, nova_cell1
02:24:54.451235 db end, nova_cell1: 1.02400803566
02:24:54.451991 db begin, nova_cell2
02:24:55.715769 db end, nova_cell2: 1.26333093643
02:24:55.716575 db begin, nova_cell3
02:24:56.963428 db end, nova_cell3: 1.24626398087
02:24:56.964202 db begin, nova_cell4
02:24:57.980187 db end, nova_cell4: 1.01546406746
02:24:57.980970 db begin, nova_cell5
02:24:59.279139 db end, nova_cell5: 1.29762792587
02:24:59.279904 db begin, nova_cell6
02:25:00.311717 db end, nova_cell6: 1.03130197525
02:25:00.312427 db begin, nova_cell7
02:25:01.654819 db end, nova_cell7: 1.34187483788
02:25:01.655643 db begin, nova_cell8
02:25:02.689731 db end, nova_cell8: 1.03352093697
02:25:02.690502 db begin, nova_cell9
02:25:04.076885 db end, nova_cell9: 1.38588285446
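
The begin/end lines above come from simple timing instrumentation; a minimal
sketch of such a timer is below. The audit() helper and query_cell_db() call
are hypothetical names used for illustration, not the actual patch:

import contextlib
import time
from datetime import datetime

@contextlib.contextmanager
def audit(label):
    # Print a "db begin" line, run the wrapped block, then print a
    # "db end" line with the elapsed wall-clock time in seconds.
    print(f"{datetime.now().time()} db begin, {label}")
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed = time.monotonic() - start
        print(f"{datetime.now().time()} db end, {label}: {elapsed}")

# usage per cell, e.g.:
# with audit("nova_cell1"):
#     rows = query_cell_db("nova_cell1")   # hypothetical per-cell DB query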


Yes, the DB queries were executed serially; after some investigation, it
seems that we are unable to 

[openstack-dev] [docs][nova] about update flavor

2018-08-16 Thread Rambo
Hi, all


  The documentation on doc.openstack.org [1] says we can update the flavor 
name, VCPUs, RAM, root disk, ephemeral disk and so on, but in fact only the 
flavor properties can be changed. Is the document wrong? Can you tell me more 
about this? Thank you very much.


[1]https://docs.openstack.org/horizon/latest/admin/manage-flavors.html

Best Regards
Rambo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] Retire rst2bash

2018-08-16 Thread Andreas Jaeger
The rst2bash repo is dead and unused. It was created to help us with
Install Guide testing, but that implementation was never finished and the
Install Guide now looks completely different.

I propose to retire it, see
https://review.openstack.org/#/q/topic:retire-rst2bash for changes,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][release][docs] FFE for openstackdocstheme 1.21.2

2018-08-16 Thread Andreas Jaeger
On 2018-08-16 07:38, Tony Breeds wrote:
> On Thu, Aug 16, 2018 at 06:27:39AM +0200, Andreas Jaeger wrote:
> 
>> Ocata should be retired by now ;) Let's drop it...
> 
> *cough* extended maintenance *cough*  ;P

Ah, forget about that.

> So we don't need the Ocata docs to be rebuilt with this version?

Ocata uses older sphinx etc. It would be nice - but not sure about the
effort,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev