Re: [openstack-dev] [taas] LP project changes

2018-07-12 Thread Takashi Yamamoto
I went through the existing bugs and prioritized them.
I'd recommend the others do the same; there are not too many of them.

I also updated the series and milestones.

On Mon, Jul 2, 2018 at 7:02 PM, Takashi Yamamoto  wrote:
> hi,
>
> I created a LP team "tap-as-a-service-drivers",
> whose initial members are same as the existing tap-as-a-service-core
> group on gerrit.
> I made the team the Maintainer and Driver of the tap-as-a-service project.
> This way, someone on the team can take it over even if I disappear
> suddenly. :-)



[openstack-dev] [tripleo][pre] removing default ssh rule from tripleo::firewall::pre

2018-07-12 Thread Lars Kellogg-Stedman
I've had a few operators complain about the permissive rule tripleo
creates for ssh.  The current alternatives seem to be either disabling
tripleo firewall management completely, or moving from the default-deny
model to a set of rules that include higher-priority blacklist rules
for ssh traffic.

I've just submitted a pair of reviews [1] that (a) remove the default
"allow ssh from everywhere" rule in tripleo::firewall:pre and (b) add
a DefaultFirewallRules parameter to the tripleo-firewall service.

The default value for this new parameter is the same rule that was
previously in tripleo::firewall::pre, but now it can be replaced by an
operator as part of the deployment configuration.

For example, a deployment can include:

parameter_defaults:
  DefaultFirewallRules:
    tripleo.tripleo_firewall.firewall_rules:
      '003 allow ssh from internal networks':
        source: '172.16.0.0/22'
        proto: 'tcp'
        dport: 22
      '003 allow ssh from bastion host':
        source: '192.168.1.10'
        proto: 'tcp'
        dport: 22

[1] 
https://review.openstack.org/#/q/topic:feature/firewall%20(status:open%20OR%20status:merged)

-- 
Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
http://blog.oddbit.com/|



Re: [openstack-dev] [tripleo] Rocky blueprints

2018-07-12 Thread Tony Breeds
On Wed, Jul 11, 2018 at 10:39:30AM -0600, Alex Schultz wrote:
> Currently open with pending patches (may need FFE):
> - https://blueprints.launchpad.net/tripleo/+spec/multiarch-support

I'd like an FFE for this; the open reviews are in pretty good shape and
mostly merged (or +W'd).

We'll need another tripleo-common release after
https://review.openstack.org/537768 merges which I'd really like to do
next week if possible.

There is some cleanup that can be done but nothing that's *needed* for
rocky.

After that there is still a validation that I need to write, and docs to
update.

I appreciate the help and support I've had from the TripleO community to
get to this point.

Yours Tony.




Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Tony Breeds
On Thu, Jul 12, 2018 at 11:31:34AM -0500, Monty Taylor wrote:

> there is also
> 
> https://review.openstack.org/#/c/580730/
> 
> which adds a role to install docker and configure it to use the correct
> registry.

Shiny! That'll take care of all the docker setup nicely!

Can I create a job that Depends-On that one and see what happens when I
try to build/run containers?

/me suspects so but sometimes I like to check :)

Yours Tony.




Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Tony Breeds
On Thu, Jul 12, 2018 at 11:05:09AM -0500, Matthew Thode wrote:

> I'm of the opinion that we should decouple from distro supported python
> versions and rely on what versions upstream python supports (longer
> lifetimes than our releases iirc).

Using docker/pyenv does this decoupling, but I'm not convinced that any
option really means that we don't end up running something that's EOL
somewhere.


Yours Tony.




Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Tony Breeds
On Thu, Jul 12, 2018 at 11:31:34AM -0500, Monty Taylor wrote:
 
> FWIW, I use pyenv for python versions on my laptop and love it. I've
> completely given up on distro-provided python for my own usage.

Hmm, okay, I'll look at that and at how it'd play with the generate job.
It's quite possible I'm being short-sighted, but I'd really like to *not*
have to build anything.

Yours Tony.




Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Tony Breeds
On Thu, Jul 12, 2018 at 01:52:56PM +, Jeremy Stanley wrote:
> On 2018-07-12 06:37:52 -0700 (-0700), Clark Boylan wrote:
> [...]
> > I think most of the problems with Fedora stability are around
> > bringing up a new Fedora every 6 months or so. They tend to change
> > sufficiently within that time period to make this a fairly
> > involved exercise. But once working they work for the ~13 months
> > of support they offer. I know Paul Belanger would like to iterate
> > more quickly and just keep the most recent Fedora available
> > (rather than ~2).
> [...]
> 
> Regardless its instability/churn makes it unsuitable for stable
> branch jobs because the support lifetime of the distro release is
> shorter than the maintenance lifetime of our stable branches. Would
> probably be fine for master branch jobs but not beyond, right?

Yup, we only run the generate job on master; once we branch it's up to
people to update/review the lists.  So I'd hope that we'd have f28 and
f29 overlap, and roll forward as needed/able.

Yours Tony.




Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Tony Breeds
On Thu, Jul 12, 2018 at 06:37:52AM -0700, Clark Boylan wrote:
> On Wed, Jul 11, 2018, at 9:34 PM, Tony Breeds wrote:
> > 1. Build pythons from source and use that to construct the venv
> >[please no]
> 
> Fungi mentions that 3.3 and 3.4 don't build easily on modern linux distros. 
> However, 3.3 and 3.4 are also unsupported by Python at this point, maybe we 
> can ignore them and focus on 3.5 and forward? We don't build new freeze lists 
> for the stable branches, this is just a concern for master right?

The focus is master, but it came up in the context of whether we should
just remove the python_version=='3.4' markers.  It turns out that at
least one OS that will support rocky will be running python 3.4, so
while 3.4 is EOL I have to admit I'd quite like to be able to keep the
3.4 stuff around for rocky (and probably stein).

It isn't a hard requirement.

> > 2. Generate the constraints in an F28 image.  My F28 has ample python
> >versions:
> >  - /usr/bin/python2.6
> >  - /usr/bin/python2.7
> >  - /usr/bin/python3.3
> >  - /usr/bin/python3.4
> >  - /usr/bin/python3.5
> >  - /usr/bin/python3.6
> >  - /usr/bin/python3.7
> >I don't know how valid this still is but in the past fedora images
> >have been seen as unstable and hard to keep current.  If that isn't
> >still the feeling then we could go down this path.  Currently there a
> >few minor problems with bindep.txt on fedora and generate-constraints
> >doesn't work with py3 but these are pretty minor really.
> 
> I think most of the problems with Fedora stability are around  bringing up a 
> new Fedora every 6 months or so. They tend to change sufficiently within that 
> time period to make this a fairly involved exercise. But once working they 
> work for the ~13 months of support they offer. I know Paul Belanger would 
> like to iterate more quickly and just keep the most recent Fedora available 
> (rather than ~2).

Ok, that's good context.  It isn't that the images break once they're
built; it's that they're hard-ish to build in the first place.  I'd love
to think that between Paul, Ian and me we'd be okay here, but then again
I don't really know what I'm saying ;P

> > 3. Use docker images for python and generate the constraints with
> >them.  I've hacked up something we could use as a base for that in:
> >   https://review.openstack.org/581948
> > 
> >There are lots of open questions:
> >  - How do we make this nodepool/cloud provider friendly ?
> >* Currently the containers just talk to the main debian mirrors.
> >  Do we have debian packages? If so we could just do sed magic.
> 
> http://$MIRROR/debian (http://mirror.dfw.rax.openstack.org/debian for 
> example) should be a working amd64 debian package mirror.

\o/
 
> >  - Do/Can we run a registry per provider?
> 
> We do not, but we do have a caching dockerhub registry proxy in each 
> region/provider. http://$MIRROR:8081/registry-1.docker if using older docker 
> and http://$MIRROR:8082 for current docker. This was a compromise between 
> caching the Internet and reliability.

That'll do as long as it's easy to configure or transparent.
 
> >  - Can we generate and caches these images and only run pip install -U
> >g-r to speed up the build
> 
> Between cached upstream python docker images and prebuilt wheels mirrored in 
> every cloud provider region I wonder if this will save a significant amount 
> of time? May be worth starting without this and working from there if it 
> remains slow.

Yeah, it may be that I'm overthinking it.  For me (locally) it's really
slow, but perhaps with the infrastructure you've mentioned it isn't worth
it.  Certainly something to look at later if it's a problem.
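
For concreteness, the docker-based generation is essentially just a
per-version loop along these lines (image tags and file names are
illustrative, not what the final job would use):

  # Run pip install + freeze inside each official python image, giving
  # one freeze list per interpreter version.
  for ver in 3.5 3.6 3.7; do
    docker run --rm -v "$PWD:/src" "python:${ver}" \
      sh -c "pip install -q -r /src/global-requirements.txt && pip freeze" \
      > "constraints-py${ver}.txt"
  done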

> >  - Are we okay with using docker this way?
> 
> Should be fine, particularly if we are consuming the official Python images.

Yup that's the plan.  I've sent a PR to get some images we'd need built
that aren't there today.
> 
> > 
> > I like #2 the most but I wanted to seek wider feedback.
> 
> I think each proposed option should work as long as we understand the 
> limitations each presents. #2 should work fine if we have individuals 
> interested and able to spin up new Fedora images and migrate jobs to that 
> image after releases happen.

Yours Tony.




Re: [openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks

2018-07-12 Thread Emilien Macchi
On Tue, Jul 10, 2018 at 10:22 AM Jiří Stránský  wrote:

> Hi,
>
> with the move to config-download deployments, we'll be moving from
> executing external installers (like ceph-ansible) via Heat resources
> encapsulating Mistral workflows towards executing them via Ansible
> directly (nested Ansible process via external_deploy_tasks).
>
> Updates and upgrades still need to be addressed here. I think we should
> introduce external_update_tasks and external_upgrade_tasks for this
> purpose, but I see two options for how to construct the workflow with them.
>
> During update (mentioning just updates, but upgrades would be done
> analogously) we could either:
>
> A) Run external_update_tasks, then external_deploy_tasks.
>
> This works with the assumption that updates are done very similarly to
> deployment. The external_update_tasks could do some prep work and/or
> export Ansible variables which then could affect what
> external_deploy_tasks do (e.g. in case of ceph-ansible we'd probably
> override the playbook path). This way we could also disable specific
> parts of external_deploy_tasks on update, in case reuse is undesirable
> in some places.
>
> B) Run only external_update_tasks.
>
> This would mean code for updates/upgrades of externally deployed
> services would be completely separated from how their deployment is
> done. If we wanted to reuse some of the deployment tasks, we'd have to
> use the YAML anchor referencing mechanisms (&anchor, *anchor).
>
> I think the options are comparable in terms of what is possible to
> implement with them, the main difference is what use cases we want to
> optimize for.
>
> Looking at what we currently have in external_deploy_tasks (e.g.
> [1][2]), i think we'd have to do a lot of explicit reuse if we went with
> B (inventory and variables generation, ...). So i'm leaning towards
> option A (WIP patch at [3]) which should give us this reuse more
> naturally. This approach would also be more in line with how we already
> do normal updates and upgrades (also reusing deployment tasks). Please
> let me know in case you have any concerns about such approach (looking
> especially at Ceph and OpenShift integrators :) ).
>

+1 for Option A as well; I feel like it's the one which would give us
the most flexibility, and I'm also not a big fan of using anchors for
this use case.
Some folks are currently working on extracting these tasks out of THT and I
can already see something like:

external_deploy_tasks:
  - include_role:
      name: my-service
      tasks_from: deploy

external_update_tasks:
  - include_role:
      name: my-service
      tasks_from: update

Or we could re-use the same playbooks, but use tags maybe.
Anyway, I like your proposal and I vote for option A.
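
For contrast, the anchor-based reuse that option B would require looks
roughly like this (hypothetical service, for illustration only):

  external_deploy_tasks: &deploy_tasks
    - include_role:
        name: my-service
        tasks_from: deploy
  # Reuse the exact same task list for updates via the alias:
  external_update_tasks: *deploy_tasks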



> Thanks
>
> Jirka
>
> [1]
>
> https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/docker/services/ceph-ansible/ceph-base.yaml#L340-L467
> [2]
>
> https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/extraconfig/services/openshift-master.yaml#L70-L231
> [3] https://review.openstack.org/#/c/579170/
>


-- 
Emilien Macchi


Re: [openstack-dev] Fwd: [TIP] tox release 3.1.1

2018-07-12 Thread Eric Fried
Here it is for nova.

https://review.openstack.org/#/c/582392/

>> also don't love that immediately bumping the lower bound for tox is
>> going to be kind of disruptive to a lot of people.

By "kind of disruptive," do you mean:

 $ tox -e blah
 ERROR: MinVersionError: tox version is 1.6, required is at least 3.1.1
 $ sudo pip install --upgrade tox
 
 $ tox -e blah
 

?
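
For reference, here is a minimal sketch of the cleanup being discussed,
assuming tox 3.1's ignore_basepython_conflict option (the exact version
floor shown is an assumption):

  [tox]
  minversion = 3.1.1
  # ignore_basepython_conflict (new in tox 3.1) lets a single global
  # basepython coexist with pyXY environments, whose names pick their
  # own interpreter, so per-env basepython lines can be dropped.
  ignore_basepython_conflict = true

  [testenv]
  basepython = python3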

Thanks,
efried

On 07/09/2018 03:58 PM, Doug Hellmann wrote:
> Excerpts from Ben Nemec's message of 2018-07-09 15:42:02 -0500:
>>
>> On 07/09/2018 11:16 AM, Eric Fried wrote:
>>> Doug-
>>>
>>> How long til we can start relying on the new behavior in the gate?  I
>>> gots me some basepython to purge...
>>
>> I want to point out that most projects require a rather old version of 
>> tox [1], so chances are most people are not staying up to date with the very 
>> latest version.  I don't love the repetition in tox.ini right now, but I 
>> also don't love that immediately bumping the lower bound for tox is 
>> going to be kind of disruptive to a lot of people.
>>
>> 1: http://codesearch.openstack.org/?q=minversion&i=nope&files=tox.ini&repos=
> 
> Good point. Any patches to clean up the repetition should probably
> go ahead and update that minimum version setting, too.
> 
> Doug
> 



Re: [openstack-dev] [nova] What do we lose if the reshaper stuff doesn't land in Rocky?

2018-07-12 Thread Jay Pipes

DB work is now pushed for the single transaction reshape() function:

https://review.openstack.org/#/c/582383

Note that in working on that, I uncovered a bug in 
AllocationList.delete_all() which needed to be fixed first:


https://bugs.launchpad.net/nova/+bug/1781430

A fix has been pushed here:

https://review.openstack.org/#/c/582382/

Best,
-jay

On 07/12/2018 10:45 AM, Matt Riedemann wrote:
Continuing the discussion from the nova meeting today [1], I'm trying to 
figure out what the risk / benefit / contingency is if we don't get the 
reshaper stuff done in Rocky.


In a nutshell, we need reshaper to migrate VGPU inventory for the 
libvirt and xenapi drivers from the root compute node resource provider 
to child providers in the compute node provider tree, because then we 
can support multiple VGPU type inventory on the same compute host. [2]


Looking at the status of the vgpu-rocky blueprint [3], the libvirt 
changes are in merge conflict but the xenapi changes are ready to go.


What I'm wondering is if we don't get reshaper done in Rocky, what does 
that prevent us from doing in Stein? For example, does it mean we can't 
support modeling NUMA in placement until the T release? Or does it just 
mean that we lose the upgrade window from Rocky to Stein such that we 
expect people to run the reshaper migration so that Stein code can 
assume the migration has been done and model nested resource providers?


If the former (no NUMA modeling until T), that's a big deal. If the 
latter, it makes the Stein code more complicated but it doesn't sound 
impossible, right? Wouldn't the Stein code just need to add some 
checking to see if the migration has been done before it can support 
some new features?


Obviously if we don't have reshaper done in Rocky then the xenapi driver 
can't support multiple VGPU types on the same compute host in Rocky - 
but isn't that kind of the exact same situation if we don't get reshaper 
done until Stein?


[1] 
http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-07-12-14.00.log.html#l-71 

[2] 
https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/vgpu-rocky.html 

[3] 
https://review.openstack.org/#/q/topic:bp/vgpu-rocky+(status:open+OR+status:merged) 







[openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-07-12 Thread Thomas Goirand
Hi everyone!

It's yet another of these emails where I'm going to complain out of
frustration because of OpenStack having bugs when running with the
newest stuff... Sorry in advance! :)

tl;dr: It's urgent, we need Python 3.7 uwsgi + SSL gate jobs.

Longer version:

When Python 3.6 reached Debian, I already forwarded a few patches. It
went quite ok, but still... When switching services to Python 3 for
Newton, I discovered that many services still had issues with uwsgi /
mod_wsgi, and I spent a large amount of time trying to figure out ways
to fix the situation. Some patches are still not yet merged, even though
it was a community goal to have this support for Newton:

Neutron:
https://review.openstack.org/#/c/555608/
https://review.openstack.org/#/c/580049/

Neutron FWaaS:
https://review.openstack.org/#/c/580327/
https://review.openstack.org/#/c/579433/

Horizon tempest plugin:
https://review.openstack.org/#/c/575714/

oslotest (clearly, the -1 is from someone considering only Devstack /
venv, not understanding a packaging environment):
https://review.openstack.org/#/c/571962/

Designate:
As far as I know, it still doesn't support uwsgi / mod_wsgi (please let
me know if this has changed recently).

There may be more, I didn't have much time investigating some projects
which are less important to me.

Now, both Debian and Ubuntu have Python 3.7. Every package which I
upload to Sid needs to support that. Yet OpenStack's CI is still lagging
behind on Python 3.5, and there are lots of things currently broken.
We've fixed most of the "async" stuff, though we are failing to rebuild
oslo.messaging (from Queens) with Python 3.7: the unit tests just hang,
doing nothing.

I'm very happy to make small contributions to each and every component
here and there whenever possible, but this time it's becoming a little
bit frustrating. I even got replies like "hum ... OpenStack only
supports Python 3.5" a few times. That's not really acceptable,
unfortunately.

So moving forward, what I think needs to happen is:

- Get each and every project to actually gate using uwsgi for the API,
using both Python 3 and SSL (any other test environment is *NOT* a real
production environment). A minimal sketch of such a setup follows after
this list.

- The gating has to happen with whatever is the latest available Python 3
version. Best would even be if we could have that *BEFORE* it reaches
distributions like Debian and Ubuntu. I'm aware that there have been some
attempts in the OpenStack infra to have Debian Sid (which is probably
the distribution getting updates the fastest). This effort needs to be
restarted, and some (non-voting?) gate jobs need to be set up using
whatever the latest thing is. If it cannot happen with Sid, then I don't
know, choose another platform, and do the Python 3-latest gating...
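
To illustrate the first point, the gate setup I mean boils down to
serving the API over TLS with uwsgi under the newest Python 3; something
like the following (the module path, port and certificate files are
placeholders, not any specific project's layout):

  uwsgi --https :8443,server.crt,server.key \
        --plugin python3 \
        --module myservice.wsgi:application \
        --master --processes 2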

The current situation, with the gate still doing Python 3.5-only jobs, is
just not sustainable anymore. Moving forward, Python 2.7 will die. When
this happens, moving faster with Python 3 versions will be mandatory for
everyone, not only for fools like me who made the switch early.

 :)

Cheers,

Thomas Goirand (zigo)

P.S.: A big thanks to everyone who was helpful in making the switch to
Python 3 in Debian, especially Annp and the rest of the Neutron team.



[openstack-dev] [kolla][nova] Safe guest shutdowns with kolla?

2018-07-12 Thread Clint Byrum
Greetings! We've been deploying with Kolla on CentOS 7 now for a while, and
we've recently noticed a rather troubling behavior when we shut down
hypervisors.

Somewhere between systemd and libvirt's systemd-machined integration,
we see that guests get killed aggressively by SIGTERM'ing all of the
qemu-kvm processes. This seems to happen because they are scoped into
machine.slice, but systemd-machined is killed, which drops those scopes
and thus results in killing off the machines.

In the past, we've used the libvirt-guests service when our libvirt was
running outside of containers. This worked splendidly, as we could
have it wait 5 minutes for VMs to attempt a graceful shutdown, avoiding
interrupting any running processes. But this service isn't available on
the host OS, as it won't be able to talk to libvirt inside the container.

The solution I've come up with for now is this:

[Unit]
Description=Manage libvirt guests in kolla safely
After=docker.service systemd-machined.service
Requires=docker.service

[Install]
WantedBy=sysinit.target

[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutStopSec=400
ExecStart=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh start
ExecStart=/usr/bin/docker start nova_compute
ExecStop=/usr/bin/docker stop nova_compute
ExecStop=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh shutdown

This doesn't seem to work, though I'm still trying to work out
the ordering and such. It should ensure that before we stop
systemd-machined and destroy all of its scopes (thus killing all the
VMs), we run the libvirt-guests.sh script to try to shut them down. The
TimeoutStopSec=400 is because the script itself waits 300 seconds for any
VM that refuses to shut down cleanly, so this gives it a chance to wait
for at least one of those. This is an imperfect solution but it allows us
to move forward after having made a reasonable attempt at clean shutdowns.
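
For completeness: assuming the unit above is saved as
/etc/systemd/system/kolla-libvirt-guests.service (the name is arbitrary),
wiring it in is the usual:

  systemctl daemon-reload
  systemctl enable kolla-libvirt-guests.service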

Anyway, just wondering if anybody else using kolla-ansible or kolla
containers in general have run into this problem, and whether or not
there are better/known solutions.

Thanks!



[openstack-dev] [release][ptl] Release countdown for week R-6, July 16-20

2018-07-12 Thread Doug Hellmann

Development Focus
-

Teams should be focused on implementing planned work. Work should be
wrapping up on non-client libraries to meet the lib deadline Thursday,
the 19th.

General Information
---

We are now getting close to the end of the cycle. The non-client library
(typically any lib other than the "python-$PROJECTclient" deliverables)
deadline is 19 July, followed quickly by the final client library
release the next Thursday. Releases for critical fixes will be allowed
after this point, but we will be much more restrictive about what is
allowed if there are more lib release requests after this point. Please
keep this in mind.

When requesting these library releases, you should also include the
stable branching request with the review (as an example, see the
"branches" section here:
http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2)
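
For those who haven't added one before, the branches stanza in a
deliverable file is short; roughly (the version number is illustrative):

  branches:
    - name: stable/rocky
      location: 1.2.0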


Upcoming Deadlines & Dates
--

Final non-client library release deadline: July 19
Final client library release deadline: July 26
Rocky-3 Milestone: July 26



Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Monty Taylor

On 07/12/2018 11:05 AM, Matthew Thode wrote:

On 18-07-12 13:52:56, Jeremy Stanley wrote:

On 2018-07-12 06:37:52 -0700 (-0700), Clark Boylan wrote:
[...]

I think most of the problems with Fedora stability are around
bringing up a new Fedora every 6 months or so. They tend to change
sufficiently within that time period to make this a fairly
involved exercise. But once working they work for the ~13 months
of support they offer. I know Paul Belanger would like to iterate
more quickly and just keep the most recent Fedora available
(rather than ~2).

[...]

Regardless its instability/churn makes it unsuitable for stable
branch jobs because the support lifetime of the distro release is
shorter than the maintenance lifetime of our stable branches. Would
probably be fine for master branch jobs but not beyond, right?


I'm of the opinion that we should decouple from distro supported python
versions and rely on what versions upstream python supports (longer
lifetimes than our releases iirc).


Yeah. I don't want to boil the ocean too much ... but as I mentioned in 
my other reply, I'm very pleased with pyenv. I would not be opposed to 
switching to that for all of our python installation needs. OTOH, I'm 
not going to push for it, nor do I have time to implement such a switch. 
But I'd vote for it and cheer someone on if they did.
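
For anyone who hasn't tried it, the pyenv workflow is roughly this
(versions illustrative):

  # Build/install the interpreters once...
  pyenv install 3.6.6
  pyenv install 3.7.0
  # ...then expose both on PATH for the current directory, which is
  # enough for tox/venv creation to find them:
  pyenv local 3.6.6 3.7.0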




[openstack-dev] [all][api] POST /api-sig/news

2018-07-12 Thread Michael McCune
Greetings OpenStack community,

Today's meeting was very brief as both cdent and dtantsur were out.
There were no major items of discussion, but we did acknowledge the
efforts of the GraphQL proof of concept work[7] being led by Gilles
Dubreuil. This work continues to make progress and should provide an
interesting data point for the possibility of future GraphQL usage.

In addition to the light discussion there was also one guideline
update that was merged this week, and a small infrastructure-related
patch that was merged.

As always if you're interested in helping out, in addition to coming
to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for
changes over time. If you find something that's not quite right,
submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made
things easier but couldn't find any? Submit a patch and help others
[6].

# Newly Published Guidelines

* Expand error code document to expect clarity
  https://review.openstack.org/#/c/577118/

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None

# Guidelines Currently Under Review [3]

* Add links to errors-example.json
  https://review.openstack.org/#/c/578369/

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and
service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet
ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs
that you are developing or changing, please address your concerns in
an email to the OpenStack developer mailing list[1] with the tag
"[api]" in the subject. In your email, you should include any relevant
reviews, links, and comments to help guide the discussion of the
specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our
wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://storyboard.openstack.org/#!/story/2002782


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg



Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Monty Taylor

On 07/12/2018 08:37 AM, Clark Boylan wrote:

On Wed, Jul 11, 2018, at 9:34 PM, Tony Breeds wrote:

Hi Folks,
We have a bit of a problem in openstack/requirements and I'd like to
chat about it.

Currently when we generate constraints we create a venv for each
(system) python supplied on the command line, install all of
global-requirements into that venv and capture the pip freeze.

Where this falls down is if we want to generate a freeze for python 3.4
and 3.5 we need an image that has both of those.  We cheated and just
'clone' them so if python3 is 3.4 we copy the results to 3.5 and vice
versa.  This kinda worked for a while but it has drawbacks.

I can see a few of options:

1. Build pythons from source and use that to construct the venv
[please no]


Fungi mentions that 3.3 and 3.4 don't build easily on modern linux distros. 
However, 3.3 and 3.4 are also unsupported by Python at this point, maybe we can 
ignore them and focus on 3.5 and forward? We don't build new freeze lists for 
the stable branches, this is just a concern for master right?


FWIW, I use pyenv for python versions on my laptop and love it. I've 
completely given up on distro-provided python for my own usage.




2. Generate the constraints in an F28 image.  My F28 has ample python
versions:
  - /usr/bin/python2.6
  - /usr/bin/python2.7
  - /usr/bin/python3.3
  - /usr/bin/python3.4
  - /usr/bin/python3.5
  - /usr/bin/python3.6
  - /usr/bin/python3.7
I don't know how valid this still is but in the past fedora images
have been seen as unstable and hard to keep current.  If that isn't
still the feeling then we could go down this path.  Currently there a
few minor problems with bindep.txt on fedora and generate-constraints
doesn't work with py3 but these are pretty minor really.


I think most of the problems with Fedora stability are around  bringing up a 
new Fedora every 6 months or so. They tend to change sufficiently within that 
time period to make this a fairly involved exercise. But once working they work 
for the ~13 months of support they offer. I know Paul Belanger would like to 
iterate more quickly and just keep the most recent Fedora available (rather 
than ~2).



3. Use docker images for python and generate the constraints with
them.  I've hacked up something we could use as a base for that in:
   https://review.openstack.org/581948

There are lots of open questions:
  - How do we make this nodepool/cloud provider friendly ?
* Currently the containers just talk to the main debian mirrors.
  Do we have debian packages? If so we could just do sed magic.


http://$MIRROR/debian (http://mirror.dfw.rax.openstack.org/debian for example) 
should be a working amd64 debian package mirror.


  - Do/Can we run a registry per provider?


We do not, but we do have a caching dockerhub registry proxy in each 
region/provider. http://$MIRROR:8081/registry-1.docker if using older docker 
and http://$MIRROR:8082 for current docker. This was a compromise between 
caching the Internet and reliability.


there is also

https://review.openstack.org/#/c/580730/

which adds a role to install docker and configure it to use the correct 
registry.



  - Can we generate and caches these images and only run pip install -U
g-r to speed up the build


Between cached upstream python docker images and prebuilt wheels mirrored in 
every cloud provider region I wonder if this will save a significant amount of 
time? May be worth starting without this and working from there if it remains 
slow.


  - Are we okay with using docker this way?


Should be fine, particularly if we are consuming the official Python images.


Agree. python:3.6 and friends are great.



I like #2 the most but I wanted to seek wider feedback.


I think each proposed option should work as long as we understand the 
limitations each presents. #2 should work fine if we have individuals 
interested and able to spin up new Fedora images and migrate jobs to that image 
after releases happen.

Clark







[openstack-dev] [oslo] Stein PTG planning etherpad

2018-07-12 Thread Ben Nemec

All the cool kids are doing it, so here's one for Oslo:

https://etherpad.openstack.org/p/oslo-stein-ptg-planning

I've populated it with a few topics that I expect to discuss, but feel 
free to add anything you're interested in.


Thanks.

-Ben



Re: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume

2018-07-12 Thread Jay S Bryant



On 7/11/2018 1:20 AM, Luke Hinds wrote:



On Tue, Jul 10, 2018 at 9:08 PM, Jim Rollenhagen
<j...@jimrollenhagen.com> wrote:


On Tue, Jul 10, 2018 at 3:28 PM, Martin Chlumsky
<martin.chlum...@gmail.com> wrote:

It is the workaround that is right and the discussion part
that is wrong.

I am familiar with this bug. Using thin volumes
_and/or_ enabling zero padding DOES ensure data contained
in a volume is actually deleted.


Great, that's super helpful. Thanks!

Is there someone (Luke?) on the list that can send a correction
for this OSSN to all the lists it needs to go to?

// jim


It can, but I would want to be sure we have consensus. The 
note has already gone through a review cycle where a cinder core 
approved the contents:


https://review.openstack.org/#/c/579094/

If someone wants to put forward a patch with the needed amendments , I 
can send out a correction to the lists.



All,

I have forwarded this note on to Helen Walsh at Dell EMC, as they do
not monitor the mailing list as 
closely.  Hopefully we can get her engaged to ensure we get the right 
update out there.


Thanks!



On Tue, Jul 10, 2018 at 10:41 AM Jim Rollenhagen
<j...@jimrollenhagen.com> wrote:

On Tue, Jul 10, 2018 at 4:20 AM, Luke Hinds <lhi...@redhat.com> wrote:

Data retained after deletion of a ScaleIO volume
---

### Summary ###
Certain storage volume configurations allow newly
created volumes to
contain previous data. This could lead to leakage of
sensitive
information between tenants.

### Affected Services / Software ###
Cinder releases up to and including Queens with
ScaleIO volumes
using thin volumes and zero padding.


According to discussion in the bug, this bug occurs with
ScaleIO volumes using thick volumes and with zero padding
disabled.

If the bug is with thin volumes and zero padding, then the
workaround seems quite wrong. :)

I'm not super familiar with Cinder, so could some Cinder
folks check this out and re-issue a more accurate OSSN,
please?

// jim


### Discussion ###
Using both thin volumes and zero padding does not
ensure data contained
in a volume is actually deleted. The default volume
provisioning rule is
set to thick so most installations are likely not
affected. Operators
can check their configuration in `cinder.conf` or
check for zero padding
with this command `scli --query_all`.

#### Recommended Actions ####

Operators can use the following two workarounds, until
the release of
Rocky (planned 30th August 2018) which resolves the issue.

1. Swap to thin volumes

2. Ensure ScaleIO storage pools use zero-padding with:

`scli --modify_zero_padding_policy
    (((--protection_domain_id <id> |
    --protection_domain_name <name>)
    --storage_pool_name <name>) | --storage_pool_id <id>)
    (--enable_zero_padding | --disable_zero_padding)`

### Contacts / References ###
Author: Nick Tait
This OSSN :
https://wiki.openstack.org/wiki/OSSN/OSSN-0084

Original LaunchPad Bug :
https://bugs.launchpad.net/ossn/+bug/1699573

Mailing List : [Security] tag on
openstack-dev@lists.openstack.org

OpenStack Security Project :
https://launchpad.net/~openstack-ossg





Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Matthew Thode
On 18-07-12 13:52:56, Jeremy Stanley wrote:
> On 2018-07-12 06:37:52 -0700 (-0700), Clark Boylan wrote:
> [...]
> > I think most of the problems with Fedora stability are around
> > bringing up a new Fedora every 6 months or so. They tend to change
> > sufficiently within that time period to make this a fairly
> > involved exercise. But once working they work for the ~13 months
> > of support they offer. I know Paul Belanger would like to iterate
> > more quickly and just keep the most recent Fedora available
> > (rather than ~2).
> [...]
> 
> Regardless its instability/churn makes it unsuitable for stable
> branch jobs because the support lifetime of the distro release is
> shorter than the maintenance lifetime of our stable branches. Would
> probably be fine for master branch jobs but not beyond, right?

I'm of the opinion that we should decouple from distro supported python
versions and rely on what versions upstream python supports (longer
lifetimes than our releases iirc).

-- 
Matthew Thode (prometheanfire)




[openstack-dev] [horizon] Planning Etherpad for Denver PTG

2018-07-12 Thread Ivan Kolodyazhny
Hi team,

I've created an etherpad [1] to gather topics for PTG discussions in
Denver.

Please do not hesitate to add any topic you think is valuable, even if
you won't attend the PTG.


I hope to see all of you in September!


[1] https://etherpad.openstack.org/p/horizon-ptg-planning-denver-2018

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


Re: [openstack-dev] [tripleo] Rocky blueprints

2018-07-12 Thread Bogdan Dobrelya

On 7/11/18 7:39 PM, Alex Schultz wrote:

Hello everyone,

As milestone 3 is quickly approaching, it's time to review the open
blueprints[0] and their status.  It appears that we have made good
progress on implementing significant functionality this cycle but we
still have some open items.  Below is the list of blueprints that are
still open so we'll want to see if they will make M3 and if not, we'd
like to move them out to Stein and they won't make Rocky without an
FFE.

Currently not marked implemented but without any open patches (likely
implemented):
- https://blueprints.launchpad.net/tripleo/+spec/major-upgrade-workflow
- 
https://blueprints.launchpad.net/tripleo/+spec/tripleo-predictable-ctlplane-ips

Currently open with pending patches (may need FFE):
- https://blueprints.launchpad.net/tripleo/+spec/config-download-ui
- https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
- https://blueprints.launchpad.net/tripleo/+spec/containerized-undercloud


This needs an FFE please. The remaining work [0] is mostly cosmetic 
(switching defaults), though it's somewhat blocked on CI infrastructure 
readiness [1] for containerized undercloud and overcloud deployments. 
The situation has been drastically improved by recent changes, though, 
like longer container image caching, enabling ansible pipelining, using 
shared local container registries for undercloud and overcloud 
deployments, and maybe more I'm missing. There is also ongoing work to 
mitigate the CI walltime [2].


[0] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132126.html
[1] https://trello.com/c/1yDVHmqm/115-switch-remaining-ci-jobs
[2] 
https://trello.com/c/PpNtarue/126-ci-break-the-openstack-infra-3h-timeout-wall



- https://blueprints.launchpad.net/tripleo/+spec/bluestore
- https://blueprints.launchpad.net/tripleo/+spec/gui-node-discovery-by-range
- https://blueprints.launchpad.net/tripleo/+spec/multiarch-support
- 
https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-templates
- https://blueprints.launchpad.net/tripleo/+spec/sriov-vfs-as-network-interface
- https://blueprints.launchpad.net/tripleo/+spec/custom-validations

Currently open without work (should be moved to Stein):
- https://blueprints.launchpad.net/tripleo/+spec/automated-ui-testing
- https://blueprints.launchpad.net/tripleo/+spec/plan-from-git-in-gui
- https://blueprints.launchpad.net/tripleo/+spec/tripleo-ui-react-walkthrough
- 
https://blueprints.launchpad.net/tripleo/+spec/wrapping-workflow-for-node-operations
- https://blueprints.launchpad.net/tripleo/+spec/ironic-overcloud-ci


Please take some time to review this list and update it.  If you think
you are close to finishing out the feature and would like to request
an FFE please start getting that together with appropriate details and
justifications for the FFE.

Thanks,
-Alex

[0] https://blueprints.launchpad.net/tripleo/rocky





--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [tripleo] Rocky blueprints

2018-07-12 Thread Harald Jensås
On Wed, 2018-07-11 at 10:39 -0600, Alex Schultz wrote:
> Hello everyone,
> 
> As milestone 3 is quickly approaching, it's time to review the open
> blueprints[0] and their status.  It appears that we have made good
> progress on implementing significant functionality this cycle but we
> still have some open items.  Below is the list of blueprints that are
> still open so we'll want to see if they will make M3 and if not, we'd
> like to move them out to Stein and they won't make Rocky without an
> FFE.

Thanks for the reminder. I'd like an FFE for the tripleo-routed-
networks-templates blueprint. (Hope this is formal enough.)

> Currently open with pending patches (may need FFE):
> 
> - https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-templates
> 

I have made quite a bit of progress on this over the last couple of
weeks. There is a bit more to do, but the two sets of changes up there
do improve things incrementally.

All the patches are under this topic:
- https://review.openstack.org/#/q/topic:bp/tripleo-routed-networks-templates+(status:open+OR+status:merged)

If we manage to land the two patch series starting with ...
 - https://review.openstack.org/579580
and:
 - https://review.openstack.org/580235

... completing the ones starting with https://review.openstack.org/582180
and a couple more follow-ups should be achievable before RC1. (I
will be on PTO after tomorrow, returning August 13.)



Over the last couple of days I have also started using rdocloud and
OVB. I pushed this pull request yesterday:
https://github.com/cybertron/openstack-virtual-baremetal/pull/43.
(We should be able to re-use this
in CI to get better coverage.)


These changes greatly reduce the complexity of configuring routed
networks for the end user, i.e., use the same overcloud node network config
template for roles in different routed networks, and remove the need to
do hiera overrides such as:

ComputeLeaf2ExtraConfig:
  nova::compute::libvirt::vncserver_listen: "%{hiera('internal_api2')}"
  neutron::agents::ml2::ovs::local_ip: "%{hiera('tenant2')}"
  [ ... and so on ... ]



--
Harald Jensås





Re: [openstack-dev] [nova] What do we lose if the reshaper stuff doesn't land in Rocky?

2018-07-12 Thread Jay Pipes
Let's just get the darn thing done in Rocky. I will have the DB work up 
for review today.


-jay

On 07/12/2018 10:45 AM, Matt Riedemann wrote:
Continuing the discussion from the nova meeting today [1], I'm trying to 
figure out what the risk / benefit / contingency is if we don't get the 
reshaper stuff done in Rocky.


In a nutshell, we need reshaper to migrate VGPU inventory for the 
libvirt and xenapi drivers from the root compute node resource provider 
to child providers in the compute node provider tree, because then we 
can support multiple VGPU type inventory on the same compute host. [2]


Looking at the status of the vgpu-rocky blueprint [3], the libvirt 
changes are in merge conflict but the xenapi changes are ready to go.


What I'm wondering is if we don't get reshaper done in Rocky, what does 
that prevent us from doing in Stein? For example, does it mean we can't 
support modeling NUMA in placement until the T release? Or does it just 
mean that we lose the upgrade window from Rocky to Stein such that we 
expect people to run the reshaper migration so that Stein code can 
assume the migration has been done and model nested resource providers?


If the former (no NUMA modeling until T), that's a big deal. If the 
latter, it makes the Stein code more complicated but it doesn't sound 
impossible, right? Wouldn't the Stein code just need to add some 
checking to see if the migration has been done before it can support 
some new features?


Obviously if we don't have reshaper done in Rocky then the xenapi driver 
can't support multiple VGPU types on the same compute host in Rocky - 
but isn't that kind of the exact same situation if we don't get reshaper 
done until Stein?


[1] 
http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-07-12-14.00.log.html#l-71 

[2] 
https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/vgpu-rocky.html 

[3] 
https://review.openstack.org/#/q/topic:bp/vgpu-rocky+(status:open+OR+status:merged) 







[openstack-dev] [nova] What do we lose if the reshaper stuff doesn't land in Rocky?

2018-07-12 Thread Matt Riedemann
Continuing the discussion from the nova meeting today [1], I'm trying to 
figure out what the risk / benefit / contingency is if we don't get the 
reshaper stuff done in Rocky.


In a nutshell, we need reshaper to migrate VGPU inventory for the 
libvirt and xenapi drivers from the root compute node resource provider 
to child providers in the compute node provider tree, because then we 
can support multiple VGPU type inventory on the same compute host. [2]


Looking at the status of the vgpu-rocky blueprint [3], the libvirt 
changes are in merge conflict but the xenapi changes are ready to go.


What I'm wondering is if we don't get reshaper done in Rocky, what does 
that prevent us from doing in Stein? For example, does it mean we can't 
support modeling NUMA in placement until the T release? Or does it just 
mean that we lose the upgrade window from Rocky to Stein such that we 
expect people to run the reshaper migration so that Stein code can 
assume the migration has been done and model nested resource providers?


If the former (no NUMA modeling until T), that's a big deal. If the 
latter, it makes the Stein code more complicated but it doesn't sound 
impossible, right? Wouldn't the Stein code just need to add some 
checking to see if the migration has been done before it can support 
some new features?


Obviously if we don't have reshaper done in Rocky then the xenapi driver 
can't support multiple VGPU types on the same compute host in Rocky - 
but isn't that kind of the exact same situation if we don't get reshaper 
done until Stein?


[1] 
http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-07-12-14.00.log.html#l-71
[2] 
https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/vgpu-rocky.html
[3] 
https://review.openstack.org/#/q/topic:bp/vgpu-rocky+(status:open+OR+status:merged)


--

Thanks,

Matt



[openstack-dev] [Tripleo] New validation: ensure we actually have enough disk space on the undercloud

2018-07-12 Thread Cédric Jeanneret
Dear Stackers,

I'm currently looking for some inputs in order to get a new validation,
ran as a "preflight check" on the undercloud.

The aim is to ensure we actually have enough disk space for all the
files and, most importantly, the registry, being local on the
undercloud, or remote (provided the operator has access to it, of course).

Although the doc talks about minimum requirements, there's the "never
trust the user inputs" law, so it would be great to ensure the user
didn't overlook the requirements regarding disk space.

The "right" way would be to add a new validation directly in the
tripleo-validations repository, and run it at an early stage of the
undercloud deployment (and maybe once again before the overcloud deploy
starts, as disk space will probably change due to the registry and logs
and packages and so on).

There are a few details on this public trello card:
https://trello.com/c/QqBsMmP9/89-implement-storage-space-checks
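
To make that concrete, here is a minimal sketch of such a check as an
Ansible task (the play layout and the 60 GB threshold are assumptions,
not a proposal of exact values):

  - name: Fail when the undercloud root filesystem is low on space
    fail:
      msg: "Only {{ item.size_available }} bytes free on {{ item.mount }}"
    when:
      - item.mount == '/'
      - item.size_available < 60 * 1024 * 1024 * 1024
    loop: "{{ ansible_mounts }}"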

What do you think? Care to provide some hints and tips for the correct
implementation?

Thank you!

Bests,

C.



-- 
Cédric Jeanneret
Software Engineer
DFG:DF





Re: [openstack-dev] [ironic] Ironic Bug Day July 12 2018 1:00 - 2:00 PM UTC

2018-07-12 Thread Michael Turek

Hey all,

This month's bug day went pretty well! We discussed about 20 bugs (half 
old, half new). Many were triaged, some got marked invalid. For meeting 
minutes and details, see the etherpad [0].


Attendance was a bit low (thank you for attending, Julia and Adam!),
but that could be due to vacations that started last week just ending.
Either way, we decided to confirm the bug day for next month now, to
give ample notice and hopefully improve attendance. Next time, I'd also
like to encourage people to bring a bug with them that they consider
interesting, overlooked, or important.


Next bug day will be August 2nd @ 13:00 - 14:00 UTC. Etherpad can be 
found here https://etherpad.openstack.org/p/ironic-bug-day-august-2018


If you have any questions or have any ideas to improve bug day, please 
don't hesitate to reach out to me! Hope to see you there!


Thanks!
Mike Turek 

[0] https://etherpad.openstack.org/p/ironic-bug-day-july-2018


On 7/10/18 4:31 PM, Michael Turek wrote:

Hey all,

This month's bug day was delayed a week and will take place on 
Thursday the 12th from 13:00 UTC to 14:00 UTC.


For location, time, and agenda details please see 
https://etherpad.openstack.org/p/ironic-bug-day-july-2018


If you would like to propose topics, feel free to do it in the etherpad!

Thanks,
Mike Turek 






Re: [openstack-dev] [neutron] Stable review

2018-07-12 Thread Brian Haley

On 07/12/2018 02:53 AM, Takashi Yamamoto wrote:

hi,

The queens branch of networking-midonet has had no changes merged since
its creation.
The following commit shows how many gate blockers have
accumulated.
https://review.openstack.org/#/c/572242/

It seems the stable team doesn't have the bandwidth to review subprojects
in a timely manner. I'm afraid that we need some policy changes.


In the future I would recommend just adding someone from the neutron 
stable team to the review, as we (I) don't have the bandwidth to go 
through the reviews of every sub-project.  Between Miguel, Armando, Gary 
and myself we can usually get to things pretty quickly. 
https://review.openstack.org/#/admin/groups/539,members


-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Jeremy Stanley
On 2018-07-12 06:37:52 -0700 (-0700), Clark Boylan wrote:
[...]
> I think most of the problems with Fedora stability are around
> bringing up a new Fedora every 6 months or so. They tend to change
> sufficiently within that time period to make this a fairly
> involved exercise. But once working they work for the ~13 months
> of support they offer. I know Paul Belanger would like to iterate
> more quickly and just keep the most recent Fedora available
> (rather than ~2).
[...]

Regardless, its instability/churn makes it unsuitable for stable
branch jobs because the support lifetime of the distro release is
shorter than the maintenance lifetime of our stable branches. Would
probably be fine for master branch jobs but not beyond, right?
-- 
Jeremy Stanley




Re: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions

2018-07-12 Thread Clark Boylan
On Wed, Jul 11, 2018, at 9:34 PM, Tony Breeds wrote:
> Hi Folks,
> We have a bit of a problem in openstack/requirements and I'd like to
> chat about it.
> 
> Currently when we generate constraints we create a venv for each
> (system) python supplied on the command line, install all of
> global-requirements into that venv and capture the pip freeze.
> 
> Where this falls down is if we want to generate a freeze for python 3.4
> and 3.5 we need an image that has both of those.  We cheated and just
> 'clone' them so if python3 is 3.4 we copy the results to 3.5 and vice
> versa.  This kinda worked for a while but it has drawbacks.
> 
> I can see a few of options:
> 
> 1. Build pythons from source and use that to construct the venv
>[please no]

Fungi mentions that 3.3 and 3.4 don't build easily on modern linux distros. 
However, 3.3 and 3.4 are also unsupported by Python at this point, so maybe 
we can ignore them and focus on 3.5 and forward? We don't build new freeze 
lists for the stable branches, so this is just a concern for master, right?
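
(For reference, a minimal sketch of the per-interpreter freeze described
above -- the paths and names are illustrative, not the actual
generate-constraints code:)

import subprocess

def freeze_for(python, requirements="global-requirements.txt"):
    # Create a venv with the given interpreter, install the global
    # requirements into it, and capture the resulting pip freeze.
    env_dir = "/tmp/venv-" + python
    subprocess.check_call([python, "-m", "venv", env_dir])
    pip = env_dir + "/bin/pip"
    subprocess.check_call([pip, "install", "-r", requirements])
    return subprocess.check_output([pip, "freeze"]).decode()

for interpreter in ("python3.5", "python3.6"):
    print(freeze_for(interpreter))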

> 
> 2. Generate the constraints in an F28 image.  My F28 has ample python
>versions:
>  - /usr/bin/python2.6
>  - /usr/bin/python2.7
>  - /usr/bin/python3.3
>  - /usr/bin/python3.4
>  - /usr/bin/python3.5
>  - /usr/bin/python3.6
>  - /usr/bin/python3.7
>I don't know how valid this still is but in the past fedora images
>have been seen as unstable and hard to keep current.  If that isn't
>still the feeling then we could go down this path.  Currently there a
>few minor problems with bindep.txt on fedora and generate-constraints
>doesn't work with py3 but these are pretty minor really.

I think most of the problems with Fedora stability are around  bringing up a 
new Fedora every 6 months or so. They tend to change sufficiently within that 
time period to make this a fairly involved exercise. But once working they work 
for the ~13 months of support they offer. I know Paul Belanger would like to 
iterate more quickly and just keep the most recent Fedora available (rather 
than ~2).

> 
> 3. Use docker images for python and generate the constraints with
>them.  I've hacked up something we could use as a base for that in:
>   https://review.openstack.org/581948
> 
>There are lots of open questions:
>  - How do we make this nodepool/cloud provider friendly ?
>* Currently the containers just talk to the main debian mirrors.
>  Do we have debian packages? If so we could just do sed magic.

http://$MIRROR/debian (http://mirror.dfw.rax.openstack.org/debian for example) 
should be a working amd64 debian package mirror.

>  - Do/Can we run a registry per provider?

We do not, but we do have a caching dockerhub registry proxy in each 
region/provider. http://$MIRROR:8081/registry-1.docker if using older docker 
and http://$MIRROR:8082 for current docker. This was a compromise between 
caching the Internet and reliability.

>  - Can we generate and cache these images and only run pip install -U
>g-r to speed up the build

Between cached upstream python docker images and prebuilt wheels mirrored in 
every cloud provider region I wonder if this will save a significant amount of 
time? May be worth starting without this and working from there if it remains 
slow.

>  - Are we okay with using docker this way?

Should be fine, particularly if we are consuming the official Python images.
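
(To make that concrete, a rough sketch of what generating a freeze via
the official python images could look like -- the mirror and registry
configuration are omitted and the paths are placeholders, not actual job
code:)

import os
import subprocess

def docker_freeze(tag):
    # Run pip install + pip freeze inside the official python image for
    # `tag`; mirror/registry proxy configuration is omitted here.
    cmd = [
        "docker", "run", "--rm",
        "-v", os.getcwd() + ":/src",
        "python:" + tag,
        "sh", "-c",
        "pip install -r /src/global-requirements.txt >/dev/null && pip freeze",
    ]
    return subprocess.check_output(cmd).decode()

for tag in ("3.5", "3.6"):
    print(docker_freeze(tag))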

> 
> I like #2 the most but I wanted to seek wider feedback.

I think each proposed option should work as long as we understand the 
limitations each presents. #2 should work fine if we have individuals 
interested and able to spin up new Fedora images and migrate jobs to that image 
after releases happen.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stable review

2018-07-12 Thread Boden Russell
On 7/12/18 4:10 AM, Takashi Yamamoto wrote:
> On Thu, Jul 12, 2018 at 6:13 PM, Tony Breeds  wrote:
>>
>> No we need more contributors to stable and extended maintenance periods.
>> This is not a new problem, and one we're trying to correct.
> 
> actually it is a new problem. at least worse than before.
> 

I'm no expert, but wanted to add my $0.02 as a developer who's invested
substantial time in trying to keep a different networking project up to
date with all the underpinning changes, some of which are noted in your
midonet stable/queens patch.

IMHO it's not realistic to think an OpenStack project (master or stable)
can go without routine maintenance for an extended period of time in this
day and age; there are just too many dynamic underpinnings. A case in
point is the set of changes required for the Zuul v3 workstream, which
don't appear to be fully propagated into a number of networking projects
yet [1], midonet included.

With that in mind I'm not sure we can just point at the neutron stable
team; there are community-wide initiatives that ultimately drive
underpinning changes across many projects. I've found that you either
have to invest the time to "keep up", or "die". For reference I've been
spending nearly 4 person-weeks per release just on such "maintenance"
items. It certainly takes time away from functionality that can be
delivered per release, but it seems it's just part of the work necessary
to keep your project "current".

If we want to reduce the amount of work required for projects to "stay
current" then IMO it's certainly a bigger issue than neutron.


[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131801.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Can we support key wrapping mechanisms other than CKM_AES_CBC_PAD?

2018-07-12 Thread Ade Lee
You probably also need to change the parameters being added to the
structure to match the chosen padding mechanism.

mech = self.ffi.new("CK_MECHANISM *")
mech.mechanism = CKM_AES_CBC_PAD
# CKM_AES_CBC_PAD takes a 16-byte IV as its mechanism parameter.
iv = self._generate_random(16, session)
mech.parameter = iv
mech.parameter_len = 16

> > CKR_ARGUMENTS_BAD probably indicates that what's in mech.parameter
> > is bad.  
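
(If the target is CKM_RSA_PKCS, note that per the PKCS#11 spec that
mechanism takes no parameter at all, so a sketch of the change might
look like this -- untested, and following the same fragment style as the
snippet above:)

CKM_RSA_PKCS = 0x00000001  # standard PKCS#11 mechanism constant

mech = self.ffi.new("CK_MECHANISM *")
mech.mechanism = CKM_RSA_PKCS
# CKM_RSA_PKCS takes no mechanism parameter -- passing an IV here is a
# likely cause of CKR_ARGUMENTS_BAD.
mech.parameter = self.ffi.NULL
mech.parameter_len = 0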


On Wed, 2018-07-11 at 22:59 +1200, Lingxian Kong wrote:
> BTW, i am using `CKM_RSA_PKCS` because it's the only one of the
> suggested mechanisms that SoftHSM supports according to the output of
> `pkcs11-tool --module libsofthsm2.so --slot $slot --list-
> mechanisms`.
> 
> $ pkcs11-tool --module libsofthsm2.so --slot $slot --list-mechanisms
> ...
> RSA-PKCS, keySize={512,16384}, encrypt, decrypt, sign, verify, wrap,
> unwrap
> ...
> 
> 
> 
> 
> Cheers,
> Lingxian Kong
> 
> On Wed, Jul 11, 2018 at 10:48 PM, Lingxian Kong wrote:
> > Hi Ade,
> > 
> > Thanks for your reply.
> > 
> > I just replaced `CKM_AES_CBC_PAD` with `CKM_RSA_PKCS` here[1], of
> > course I defined `CKM_RSA_PKCS = 0x0001` in the code, but still
> > got the following error:
> > 
> > Jul 11 10:42:05 barbican-devstack devstack@barbican-svc.service[19897]:
> > 2018-07-11 10:42:05.309 19900 WARNING barbican.plugin.crypto.p11_crypto
> > [req-f2d27105-4811-4c77-a321-2ac1399cc9d2 b268f84aef814aeda17ad3fa38e0049d
> > 7abe0e02baec4df2b6046d7ef7f44998 - default default] Reinitializing
> > PKCS#11 library: HSM returned response code: 0x7L CKR_ARGUMENTS_BAD:
> > P11CryptoPluginException: HSM returned response code: 0x7L CKR_ARGUMENTS_BAD
> > 
> > [1]: https://github.com/openstack/barbican/blob/5dea5cec130b59ecfb8d46435cd7eb3212894b4c/barbican/plugin/crypto/pkcs11.py#L496
> > 
> > 
> > Cheers,
> > Lingxian Kong
> > 
> > On Wed, Jul 11, 2018 at 9:18 PM, Ade Lee  wrote:
> > > Lingxian, 
> > > 
> > > I don't see any reason not to provide support for other wrapping
> > > mechanisms.
> > > 
> > > Have you tried hacking the code to use one of the other wrapping
> > > mechanisms to see if it works?  Ultimately, what is passed are
> > > parameters to CFFI.  As long as you pass in the right input and
> > > your
> > > PKCS#11 library can support it, then there should be no problem.
> > > 
> > > If it works, it makes sense to make the wrapping algorithm
> > > configurable
> > > for the plugin.  
> > > 
> > > It may or may not make sense to store the wrapping algorithm used
> > > in
> > > the secret plugin-metadata if we want to support migration to
> > > other
> > > HSMs.
> > > 
> > > Ade 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stable review

2018-07-12 Thread Takashi Yamamoto
hi,

On Thu, Jul 12, 2018 at 6:13 PM, Tony Breeds  wrote:
> On Thu, Jul 12, 2018 at 03:53:22PM +0900, Takashi Yamamoto wrote:
>> hi,
>>
>> queens branch of networking-midonet has had no changes merged since
>> its creation.
>> the following commit shows how many gate blockers have accumulated:
>> https://review.openstack.org/#/c/572242/
>>
>> it seems the stable team doesn't have the bandwidth to review subprojects
>> in a timely manner.
>
> The project specific stable team is responsible for reviewing those
> changes.  The global stable team will review project specific changes
> if they're requested to.  I'll treat this email as such a request.
>
> Please ask a member of neutron-stable-maint[1] to take a look at your
> review.

i was talking about the neutron stable team, not about the global stable team.
sorry if it was confusing.

>
>> i'm afraid that we need some policy changes.
>
> No, we need more contributors to stable and extended maintenance periods.
> This is not a new problem, and one we're trying to correct.

actually it is a new problem. at least worse than before.

>
> Yours Tony.
>
> [1] https://review.openstack.org/#/admin/groups/539,members

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Namespace isolation options

2018-07-12 Thread Luis Tomas Bolivar
Hi folks,

I'm working on the kuryr-kubernetes namespace feature to enable
isolation between the different namespaces, i.e., pods on namespace A
cannot 'talk' to pods or services on namespace B.

For the pods isolation, there is already a patch working:
https://review.openstack.org/#/c/579181

However, for the services is a bit more complex. There is some initial
work on:
https://review.openstack.org/#/c/581421

The above patch ensures isolation between services by modifying the
security group associated with the load balancer VM to only allow traffic
from ports with a given security group, in our case the one associated
with the namespace.

However, it is missing how to handle special cases, such as routes and
services of LoadBalancer type. For the LoadBalancer type we have two options:
1) When the service is of LoadBalancer type, do not modify the security
group associated with it, as it is meant to be accessible from outside.
This is basically the out-of-the-box behaviour of Octavia. Pros: it is
simple to implement and does not require any extra information. Cons:
the svc can be accessed not only on the FIP, but also on the VIP.

2) Add a new security group rule also enabling the traffic from the
public-subnet CIDR (see the sketch below). Pros: it will not enable
access from the VIP, only from the FIP. Cons: it either needs admin
rights to get the public-subnet CIDR or a new config option where we
specify it.
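
(For illustration, a minimal sketch of option 2 using openstacksdk --
the security group ID, CIDR and connection setup are placeholders, not
the actual kuryr-kubernetes code:)

import openstack

# Placeholders -- in kuryr-kubernetes these would come from config or
# from a neutron lookup.
PUBLIC_SUBNET_CIDR = "203.0.113.0/24"
LB_SG_ID = "namespace-a-lb-sg-id"

conn = openstack.connect(cloud="envvars")
conn.network.create_security_group_rule(
    security_group_id=LB_SG_ID,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    remote_ip_prefix=PUBLIC_SUBNET_CIDR,  # allow traffic arriving via the FIP
)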

Any preferences? I already tested option 1) and will update the patch
set with it shortly, but if option 2) is preferred, I will of course
update the PS accordingly.

Thanks!

Best regards,
Luis
-- 
LUIS TOMÁS BOLÍVAR
SENIOR SOFTWARE ENGINEER
Red Hat
Madrid, Spain
ltoma...@redhat.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stable review

2018-07-12 Thread Tony Breeds
On Thu, Jul 12, 2018 at 03:53:22PM +0900, Takashi Yamamoto wrote:
> hi,
> 
> queens branch of networking-midonet has had no changes merged since
> its creation.
> the following commit shows how many gate blockers have accumulated:
> https://review.openstack.org/#/c/572242/
> 
> it seems the stable team doesn't have the bandwidth to review subprojects
> in a timely manner.

The project-specific stable team is responsible for reviewing those
changes.  The global stable team will review project-specific changes
if they're requested to.  I'll treat this email as such a request.

Please ask a member of neutron-stable-maint[1] to take a look at your
review.

> i'm afraid that we need some policy changes.

No, we need more contributors to stable and extended maintenance periods.
This is not a new problem, and one we're trying to correct.

Yours Tony.

[1] https://review.openstack.org/#/admin/groups/539,members




[openstack-dev] [requirements][storyboard] Updates between SB and LP

2018-07-12 Thread Tony Breeds
Hi all,
The requirements team is only a light user of Launchpad and we're
looking at moving to StoryBoard as it looks like for the most part it'll
be a better fit.

To date the thing that has stopped us doing this is the handling of
bugs/stories that are shared between LP and SB.

Assuming that requirements had migrated to SB, how would we deal with bugs
like: https://bugs.launchpad.net/openstack-requirements/+bug/1753969

Is there a, supportable, bi-directional path between SB and LP?

I suspect the answer is No.  I imagine if we only wanted to get
updates from LP reflected in our SB story we could just leave the
bug tracker open on LP and run the migration tool "often".
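
(To illustrate the polling idea, a rough sketch using launchpadlib --
one-way LP -> SB only; the push into StoryBoard is left as a stub since
I haven't checked the SB client API:)

from launchpadlib.launchpad import Launchpad

# Anonymous read-only access is enough for polling a public bug.
lp = Launchpad.login_anonymously("lp-sb-sync", "production", version="devel")
bug = lp.bugs[1753969]

for message in bug.messages:
    # Stub: push message.content into the matching StoryBoard story here.
    print(message.date_created, message.content[:72])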

Yours Tony.




[openstack-dev] [neutron] Stable review

2018-07-12 Thread Takashi Yamamoto
hi,

queens branch of networking-midonet has had no changes merged since
its creation.
the following commit shows how many gate blockers have accumulated:
https://review.openstack.org/#/c/572242/

it seems the stable team doesn't have the bandwidth to review subprojects
in a timely manner. i'm afraid that we need some policy changes.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev