Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-19 Thread Florian Engelmann



No, I mean, Consul would be an extra dependency in a big list of dependencies 
OpenStack already has. OpenStack has so many it is causing operators to 
reconsider adoption. I'm asking, if existing dependencies can be made to solve 
the problem without adding more?

Stateful dependencies are much harder to deal with than stateless ones, as they 
take much more operator care/attention. Consul is stateful, as is etcd, and etcd 
is already a dependency.

Can etcd be used instead so as not to put more load on the operators?


While etcd is a strong KV store, it lacks many features consul has. Using 
consul for DNS-based service discovery is very easy to implement without 
making it a dependency.
So we will start with an "external" consul and see how to handle the 
service registration without modifying the kolla containers or any 
kolla-ansible code.
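
As an example, registering keystone with a local consul agent and resolving 
it via consul's DNS interface could look roughly like this (an untested 
sketch; consul's default DNS port 8600 and a keystone API on port 5000 are 
assumptions):

$ cat > /etc/consul.d/keystone.json <<'EOF'
{
  "service": {
    "name": "keystone",
    "port": 5000,
    "check": {
      "http": "http://127.0.0.1:5000/v3",
      "interval": "10s"
    }
  }
}
EOF
$ consul reload

# every other service could then resolve the endpoint via consul DNS
$ dig @127.0.0.1 -p 8600 keystone.service.consul SRV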


All the best,
Flo




Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Wednesday, October 10, 2018 12:18 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
fabio and FQDN endpoints

by "another storage system" you mean the KV store of consul? That's just
someting consul brings with it...

consul is very strong at health checks

On 10/9/18 6:09 PM, Fox, Kevin M wrote:

etcd is an already approved openstack dependency. Could that be used instead of 
consul so as to not add yet another storage system? coredns with the 
https://coredns.io/plugins/etcd/ plugin would maybe do what you need?

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Monday, October 08, 2018 3:14 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio 
and FQDN endpoints

Hi,

I would like to start a discussion about some changes and additions I
would like to see in kolla and kolla-ansible.

1. Keepalived is a problem in layer3 spine-leaf networks as any floating
IP can only exist in one leaf (and VRRP is a problem in layer3). I would
like to use consul and registrator to get rid of the "internal" floating
IP and use consul's DNS service discovery to connect all services with
each other.

2. Using "ports" for external API (endpoint) access is a major headache
if a firewall is involved. I would like to configure the HAProxy (or
fabio) for the external access to use "Host:" like, eg. "Host:
keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS.
Any customer would just need HTTPS access and not have to open all those
ports in his firewall. For some enterprise customers it is not possible
to request FW changes like that.
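
To illustrate, a rough sketch of the HAProxy side (hostnames and backend
names are placeholders):

frontend external_api
    bind *:443 ssl crt /etc/haproxy/certs/
    # route on the Host header instead of one port per service
    acl host_keystone hdr(host) -i keystone.somedomain.tld
    acl host_nova     hdr(host) -i nova.somedomain.tld
    use_backend keystone_api if host_keystone
    use_backend nova_api     if host_nova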

3. HAProxy is not capable of handling a "read/write" split with Galera. I
would like to introduce ProxySQL to be able to scale Galera.
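
A minimal, untested sketch of such a read/write split, done through
ProxySQL's admin interface (default admin port 6032; the Galera node names
are placeholders):

$ mysql -u admin -padmin -h 127.0.0.1 -P 6032 <<'EOF'
-- hostgroup 10 = writer, hostgroup 20 = readers
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES
  (10, 'galera1', 3306),
  (20, 'galera2', 3306),
  (20, 'galera3', 3306);
-- SELECT ... FOR UPDATE goes to the writer, all other SELECTs to readers
INSERT INTO mysql_query_rules
  (rule_id, active, match_digest, destination_hostgroup, apply) VALUES
  (1, 1, '^SELECT.*FOR UPDATE', 10, 1),
  (2, 1, '^SELECT', 20, 1);
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
EOF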

4. HAProxy is fine, but fabio integrates well with consul and statsd, and
could be connected to a vault cluster to manage secure certificate access.
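
fabio would not need its own route configuration: it builds its routing
table from consul services whose tags start with "urlprefix-". The consul
registration sketched above would just gain a tag like (placeholder
hostname):

  "tags": ["urlprefix-keystone.somedomain.tld/"]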

5. I would like to add vault as a Barbican backend.

6. I would like to add an option to enable tokenless authentication for
all services with each other, to get rid of all the OpenStack service
passwords (security issue).
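
On the keystone side this could build on its X.509 tokenless authorization;
a sketch of the relevant keystone.conf section (the issuer DN is a
placeholder):

[tokenless_auth]
trusted_issuer = CN=ca.somedomain.tld,OU=ca,O=somedomain
protocol = x509
issuer_attribute = SSL_CLIENT_I_DN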

What do you think about it?

All the best,
Florian




--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelm...@everyware.ch
web: http://www.everyware.ch




Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-19 Thread Florian Engelmann


On 17.10.2018 15:45, Florian Engelmann wrote:

On 10.10.2018 09:06, Florian Engelmann wrote:
Now I get you. I would say all configuration templates need to be 
changed to allow, e.g.:


$ grep http /etc/kolla/cinder-volume/cinder.conf
glance_api_servers = http://10.10.10.5:9292
auth_url = http://internal.somedomain.tld:35357
www_authenticate_uri = http://internal.somedomain.tld:5000
auth_url = http://internal.somedomain.tld:35357
auth_endpoint = http://internal.somedomain.tld:5000

to look like:

glance_api_servers = http://glance.service.somedomain.consul:9292
auth_url = http://keystone.service.somedomain.consul:35357
www_authenticate_uri = http://keystone.service.somedomain.consul:5000
auth_url = http://keystone.service.somedomain.consul:35357
auth_endpoint = http://keystone.service.somedomain.consul:5000



The idea with Consul looks interesting.

But I don't get your issue with the VIP address and spine-leaf network.

What we have:
- controller1 behind leaf1 A/B pair with MLAG
- controller2 behind leaf2 A/B pair with MLAG
- controller3 behind leaf3 A/B pair with MLAG

The VIP address is active on one controller server.
When that server fails, the VIP will move to another controller 
server.

Where do you see a SPOF in this configuration?



So leaf1, leaf2 and leaf3 have to share the same L2 domain, right (in an 
IPv4 network)?



Yes, they share an L2 domain, but we have ARP and ND suppression enabled.

It is an EVPN network where there is L3 with VxLAN between the leafs and 
spines.


So we don't care where a server is connected. It can be connected to any 
leaf.


Ok, that sounds very interesting. Is it possible to share some internals? 
Which switch vendor/model do you use? What does your IP addressing scheme 
look like?
If VxLAN is used between the spines and leafs, are you using VxLAN networking 
for OpenStack as well? Where is your VTEP?






But we wanna deploy a layer3 spine-leaf network where every leaf is 
its own L2 domain and everything above is layer3.


eg:

leaf1 = 10.1.1.0/24
leaf2 = 10.1.2.0/24
leaf3 = 10.1.3.0/24

So a VIP like, e.g., 10.1.1.10 could only exist in leaf1


In my opinion it's a very constrained environment, I don't like the idea.


Regards,

Piotr





--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelm...@everyware.ch
web: http://www.everyware.ch




Re: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade

2018-10-19 Thread Giulio Fidente
On 10/14/18 5:07 PM, Emilien Macchi wrote:
> I recently wrote a blog post about how we could upgrade an host from
> Docker containers to Podman containers.
> 
> http://my1.fr/blog/openstack-containerization-with-podman-part-3-upgrades/
thanks Emilien, this looks nice and I believe the basic approach
consisting of:

1) create the podman systemd unit
2) delete the docker container
3) start the podman container

could be used to upgrade the Ceph containers as well (via ceph-ansible)
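
A minimal, untested sketch of those three steps for a single container
(unit and container names are placeholders; see Emilien's post for the
real details):

$ cat > /etc/systemd/system/tripleo_keystone.service <<'EOF'
[Unit]
Description=keystone container managed by podman

[Service]
Restart=always
# assumes the podman container was already created, e.g. via podman create
ExecStart=/usr/bin/podman start -a keystone
ExecStop=/usr/bin/podman stop -t 10 keystone

[Install]
WantedBy=multi-user.target
EOF
$ docker rm -f keystone                      # 2) delete the docker container
$ systemctl daemon-reload
$ systemctl enable --now tripleo_keystone    # 3) start the podman container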
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-19 Thread Florian Engelmann
Currently we are testing what is needed to get consul + registrator and 
kolla/kolla-ansible to play together nicely.


To get the services created in consul by registrator, all kolla 
containers running relevant services (e.g. keystone, nova, cinder, ... 
but also mariadb, memcached, es, ...) need to "--expose" their ports.

Registrator will use those "exposed" ports to add a service to consul.

Is there any (existing) option to add those ports to the container 
bootstrap?

What about "docker_common_options"?

The command should look like:

docker run -d --expose 5000/tcp --expose 35357/tcp --name=keystone ...
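
Registrator also honors SERVICE_* environment variables, so the service
name for an exposed port could be passed the same way (sketch):

docker run -d --expose 5000/tcp -e SERVICE_5000_NAME=keystone --name=keystone ...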



After testing registrator I realized the project seems to be 
unmaintained. So we won't use registrator.


I just need to find another method to register a container (service) in 
consul after the container has started.


I would like to do so without changing any kolla container or 
kolla-ansible code.
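
One idea (a very rough, untested sketch) would be to watch the docker event
stream from the outside and register each started container with the local
consul agent over its HTTP API; this assumes one exposed port per container:

$ docker events --filter event=start --format '{{.Actor.Attributes.name}}' |
  while read name; do
    # look up the (first) exposed port of the container
    port=$(docker inspect --format \
      '{{range $p, $_ := .Config.ExposedPorts}}{{$p}}{{end}}' "$name" | cut -d/ -f1)
    # register the container as a consul service
    curl -s -X PUT -d "{\"Name\": \"$name\", \"Port\": ${port:-0}}" \
      http://127.0.0.1:8500/v1/agent/service/register
  done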





On 10/10/18 9:18 AM, Florian Engelmann wrote:
by "another storage system" you mean the KV store of consul? That's 
just someting consul brings with it...


consul is very strong at health checks

On 10/9/18 6:09 PM, Fox, Kevin M wrote:
etcd is an already approved openstack dependency. Could that be used 
instead of consul so as to not add yet another storage system? 
coredns with the https://coredns.io/plugins/etcd/ plugin would maybe 
do what you need?


Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Monday, October 08, 2018 3:14 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] add service discovery, proxysql, 
vault, fabio and FQDN endpoints


Hi,

I would like to start a discussion about some changes and additions I
would like to see in kolla and kolla-ansible.

1. Keepalived is a problem in layer3 spine-leaf networks as any floating
IP can only exist in one leaf (and VRRP is a problem in layer3). I would
like to use consul and registrator to get rid of the "internal" floating
IP and use consul's DNS service discovery to connect all services with
each other.

2. Using "ports" for external API (endpoint) access is a major headache
if a firewall is involved. I would like to configure the HAProxy (or
fabio) for the external access to use "Host:" like, eg. "Host:
keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS.
Any customer would just need HTTPS access and not have to open all those
ports in his firewall. For some enterprise customers it is not possible
to request FW changes like that.

3. HAProxy is not capable of handling a "read/write" split with Galera. I
would like to introduce ProxySQL to be able to scale Galera.

4. HAProxy is fine, but fabio integrates well with consul and statsd, and
could be connected to a vault cluster to manage secure certificate 
access.


5. I would like to add vault as a Barbican backend.

6. I would like to add an option to enable tokenless authentication for
all services with each other, to get rid of all the OpenStack service
passwords (security issue).

What do you think about it?

All the best,
Florian




--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelm...@everyware.ch
web: http://www.everyware.ch




Re: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge"

2018-10-19 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi,

I’m adding the ECG mailing list to the discussion.

I think the root of the problem is that there is no single definition of "the 
edge" (except for [1]), but it changes from group to group and from use case 
to use case. What I recognise as the commonalities in these edge definitions 
are 1) a distributed cloud infrastructure (kind of a cloud of clouds), 2) a 
need for automation of everything, and 3) resource constraints for the 
control plane.

The different edge variants put different emphasis on these common needs 
based on the use case discussed.

To have a clearer understanding of these definitions we could try the 
following:

  1.  Always add the definition of these to the given context
  2.  Check what other groups are using and adopt that
  3.  Define our own language and expect everyone else to adopt it

Br,
Gerg0



[1]: https://en.wikipedia.org/wiki/The_Edge

From: Jim Rollenhagen 
Sent: Thursday, October 18, 2018 11:43 PM
To: ful...@redhat.com; OpenStack Development Mailing List (not for usage 
questions) 
Cc: openstack-s...@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the 
use of terms "Edge" and "Far Edge"

On Thu, Oct 18, 2018 at 4:45 PM John Fulton <johfu...@redhat.com> wrote:
On Thu, Oct 18, 2018 at 11:56 AM Jim Rollenhagen <j...@jimrollenhagen.com> wrote:
>
> On Thu, Oct 18, 2018 at 10:23 AM Dmitry Tantsur <dtant...@redhat.com> wrote:
>>
>> Hi all,
>>
>> Sorry for chiming in really late in this topic, but I think $subj is worth
>> discussing until we settle harder on the potentially confusing terminology.
>>
>> I think the difference between "Edge" and "Far Edge" is too vague to use 
>> these
>> terms in practice. Think about the "edge" metaphor itself: something rarely 
>> has
>> several layers of edges. A knife has an edge, there are no far edges. I 
>> imagine
>> zooming in and seeing more edges at the edge, and then it's quite cool 
>> indeed,
>> but is it really a useful metaphor for those who never used a strong 
>> microscope? :)
>>
>> I think in the trivial sense "Far Edge" is a tautology, and should be 
>> avoided.
>> As a weak proof of my words, I already see a lot of smart people confusing 
>> these
>> two and actually use Central/Edge where they mean Edge/Far Edge. I suggest we
>> adopt a different terminology, even if it less consistent with typical 
>> marketing
>> term around the "Edge" movement.
>
>
> FWIW, we created rough definitions of "edge" and "far edge" during the edge 
> WG session in Denver.
> It's mostly based on latency to the end user, though we also talked about 
> quantities of compute resources, if someone can find the pictures.

Perhaps these are the pictures Jim was referring to?
 
https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0#

That's it, thank you!

// jim



I'm involved in some TripleO work called the split control plane:
  
https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html

After the PTG I saw that the split control plane was compatible with
the type of deployment discussed at the edge WG session in Denver and
described the compatibility at:
  
https://etherpad.openstack.org/p/tripleo-edge-working-group-split-control-plane

> See the picture and table here: 
> https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Overview
>
>> Now, I don't have really great suggestions. Something that came up in TripleO
>> discussions [1] is Core/Hub/Edge, which I think reflects the idea better.
>
>
> I'm also fine with these names, as they do describe the concepts well. :)
>
> // jim

I'm fine with these terms too. In split control plane there's a
deployment method for deploying a central site and then deploying
remote sites independently. That deployment method could be used to
deploy  Core/Hub/Edge sites too. E.g. deploy the Core using Heat stack
N. Deploy a Hub using stack N+1 and then deploy an Edge using stack
N+2 etc.

  John

>>
>> I'd be very interested to hear your ideas.
>>
>> Dmitry
>>
>> [1] https://etherpad.openstack.org/p/tripleo-edge-mvp
>>

[openstack-dev] [cinder]ceph rbd replication group support

2018-10-19 Thread 王俊
Hi:
I have a question about rbd replication groups: I would like to know the plan 
or roadmap for this feature. Is anybody working on it?
Blueprint: 
https://blueprints.launchpad.net/cinder/+spec/ceph-rbd-replication-group-support

Thanks


Confidentiality: This message is intended solely for the named recipient. If you are not the intended recipient, please delete it immediately, do not use or disseminate it in any way, and notify the sender of the misdelivery. Thank you!



Re: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed

2018-10-19 Thread Sean McGinnis
This appears to be another in-transit job conflict with the py3 work.
Things should be fine, but we will need to manually propose the constraint
update since it was skipped.

On Fri, Oct 19, 2018, 09:14  wrote:

> Build failed.
>
> - release-openstack-python3
> http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python3/1cb87ba/
> : SUCCESS in 2m 44s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
> - release-openstack-python
> http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python/3a9339d/
> : POST_FAILURE in 2m 40s
>


[openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Zane Bitter
There hasn't been a Python 2 release in 8 years, and during that time 
we've gotten used to the idea that that's the way things go. However, 
with the switch to Python 3 looming (we will drop support for Python 2 
in the U release[1]), history is no longer a good guide: Python 3 
releases drop as often as every year. We are already feeling the pain 
from this, as Linux distros have largely already completed the shift to 
Python 3, and those that have are on versions newer than the py35 we 
currently have in gate jobs.


We have traditionally held to the principle that we want each release to 
support the latest release of CentOS and the latest LTS release of 
Ubuntu, as they existed at the beginning of the release cycle.[2] 
Currently this means in practice one version of py2 and one of py3, but 
in the future it will mean two, usually different, versions of py3.


There are two separate issues that we need to address: unit tests (we'll 
define this as code tested in isolation, within or spawned from within 
the testing process), and integration tests (we'll define this as code 
running in its own process, tested from the outside). I have two 
separate but related proposals for how to handle those.


I'd like to avoid discussing which versions of things we think should be 
supported in Stein in this thread. Let's come up with a process that we 
think is a good one to take into T and beyond, and then retroactively 
apply it to Stein. Competing proposals are of course welcome, in 
addition to feedback on this one.


Unit Tests
--

For unit tests, the most important thing is to test on the versions of 
Python we target. It's less important to be using the exact distro that 
we want to target, because unit tests generally won't interact with 
stuff outside of Python.


I'd like to propose that we handle this by setting up a unit test 
template in openstack-zuul-jobs for each release. So for Stein we'd have 
openstack-python3-stein-jobs. This template would contain:


* A voting gate job for the highest minor version of py3 we want to 
support in that release.
* A voting gate job for the lowest minor version of py3 we want to 
support in that release.

* A periodic job for any interim minor releases.
* (Starting late in the cycle) a non-voting check job for the highest 
minor version of py3 we want to support in the *next* release (if 
different), on the master branch only.


So, for example, (and this is still under active debate) for Stein we 
might have gating jobs for py35 and py37, with a periodic job for py36. 
The T jobs might only have voting py36 and py37 jobs, but late in the T 
cycle we might add a non-voting py38 job on master so that people who 
haven't switched to the U template yet can see what, if anything, 
they'll need to fix.
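
As a sketch, such a template could look roughly like this (assuming our
existing openstack-tox-py* job definitions):

- project-template:
    name: openstack-python3-stein-jobs
    check:
      jobs:
        - openstack-tox-py35
        - openstack-tox-py37
    gate:
      jobs:
        - openstack-tox-py35
        - openstack-tox-py37
    periodic:
      jobs:
        - openstack-tox-py36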


We'll run the unit tests on any distro we can find that supports the 
version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian 
unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a 
particular Python version before trying to test it.


Before the start of each cycle, the TC would determine which range of 
versions we want to support, on the basis of the latest one we can find 
in any distro and the earliest one we're likely to need in one of the 
supported Linux distros. There will be a project-wide goal to switch the 
testing template from e.g. openstack-python3-stein-jobs to 
openstack-python3-treasure-jobs for every repo before the end of the 
cycle. We'll have goal champions as usual following up and helping teams 
with the process. We'll know where the problem areas are because we'll 
have added non-voting jobs for any new Python versions to the previous 
release's template.


Integration Tests
-

Integration tests do test, amongst other things, integration with 
non-openstack-supplied things in the distro, so it's important that we 
test on the actual distros we have identified as popular.[2] It's also 
important that every project be testing on the same distro at the end of 
a release, so we can be sure they all work together for users.


When a new release of CentOS or a new LTS release of Ubuntu comes out, 
the TC will create a project-wide goal for the *next* release cycle to 
switch all integration tests over to that distro. It's up to individual 
projects to make the switch for the tests that they own (e.g. it'd be 
the QA team for Tempest, but other individual projects for their own 
jobs). Again, there'll be a goal champion to monitor and follow up.



[1] 
https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html
[2] 
https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions



Re: [openstack-dev] [manila] [contribute]

2018-10-19 Thread Tom Barron

On 19/10/18 15:27 +0300, Leni Kadali Mutungi wrote:

Hi all.

I've downloaded the manila project from GitHub as a zip 
file, unpacked it and have run `git fetch --depth=1` and 
been progressively running `git fetch --deepen=5` to get 
the commit history I need. For future reference, would a 
shallow clone, e.g. `git clone --depth=1`, be enough to start 
working on the project or should one have the full commit 
history of the project?


--
-- Kind regards,
Leni Kadali Mutungi


Hi Leni,

First I'd like to extend a warm welcome to you as a new manila project 
contributor!  We have some contributor/developer documentation [1] 
that you may find useful. If you find any gaps or misinformation, we 
will be happy to work with you to address these.  In addition to this 
email list, the #openstack-manila IRC channel on freenode is a good 
place to ask questions.  Many of us run irc bouncers so we'll see the 
question even if we're not looking right when it is asked.  Finally, 
we have a meeting most weeks on Thursdays at 1500UTC in 
#openstack-meeting-alt -- agendas are posted here [2].  Also, here is 
our work-plan for the current Stein development cycle [3].


Now for your question about shallow clones.  I hope others who know 
more will chime in but here are my thoughts ...


Although having the full commit history for the project is useful, it 
is certainly possible to get started with a shallow clone of the 
project.  That said, I'm not sure if the space and 
download-time/bandwidth gains are going to be that significant because 
once you have the workspace you will want to run unit tests, pep8, 
etc. using tox as explained in the developer documentation mentioned 
earlier. That will create virtual environments with manila's 
dependencies in your workspace (under the .tox directory) that dwarf the 
space used for manila proper.


$ git clone --depth=1 git@github.com:openstack/manila.git shallow-manila
Cloning into 'shallow-manila'...
...
$ git clone git@github.com:openstack/manila.git deep-manila
Cloning into 'deep-manila'...
...
$ du -sh shallow-manila deep-manila/
20M shallow-manila
35M deep-manila/

But after we run tox inside shallow-manila and deep-manila we see:

$ du -sh shallow-manila deep-manila/
589M    shallow-manila
603M    deep-manila/

Similarly, you are likely to want to run devstack locally and that 
will clone the repositories for the other openstack components you 
need and the savings from shallow clones won't be that significant 
relative to the total needed.
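
Note also that a shallow clone isn't a one-way door: if you later decide
you want the full history after all, git can fetch it in place:

$ git fetch --unshallow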


Happy developing!

-- Tom Barron (Manila PTL) irc: tbarron

[1] https://docs.openstack.org/manila/rocky/contributor/index.html
[2] https://wiki.openstack.org/wiki/Manila/Meetings
[3] https://wiki.openstack.org/wiki/Manila/SteinCycle



Re: [openstack-dev] Forum Schedule - Seeking Community Review

2018-10-19 Thread Jimmy McArthur




Colleen Murphy 
October 17, 2018 at 12:55 AM

Couple of things:

1. I noticed Julia's session "Community outreach when culture, time 
zones, and language differ" and Thierry's session "Getting OpenStack 
users involved in the project" are scheduled at the same time on 
Tuesday, but they're quite related topics and I think many people 
(especially in the TC) would want to attend both sessions.
Thanks! Just fixed this, per Thierry's suggestion.  "Community 
outreach..." is now 3:20-4pm.


2. The session "You don't know nothing about Public Cloud SDKs, yet" 
doesn't seem to have a moderator listed.

Good catch!  Thank you.  That's now corrected as well.


Colleen

Jimmy McArthur 
October 15, 2018 at 3:01 PM
Hi -

The Forum schedule is now up 
(https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262).  
If you see a glaring content conflict within the Forum itself, please 
let me know.


You can also view the Full Schedule in the attached PDF if that makes 
life easier...


NOTE: BoFs and WGs are still not all up on the schedule.  No need to 
let us know :)


Cheers,
Jimmy




Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread Boxiang Zhu


Hi melanie, thanks for your reply.


The version of my cinder and nova is Rocky. The scope of the cinder spec [1] 
is only for migration of available volumes between two pools in the same ceph 
cluster.
If the volume is in in-use status [2], it will call the generic migration 
function, so, as you described, on the nova side it raises 
NotImplementedError(_("Swap only supports host devices")).
The get_config of a net volume [3] has no source_path.


So has anyone succeeded in migrating an in-use volume with the ceph backend, 
or is anyone working on it?


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101




Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create the volume, then attach it to a vm.
I can migrate the volume(in-use) from one host to another. The nova
libvirt will call the 'rebase' to finish it. But if using ceph backend,
it raises exception 'Swap only supports host devices'. So now it does
not support to migrate volume(in-use). Does anyone do this work now? Or
Is there any way to let me migrate volume(in-use) with ceph backend?

What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices"). So in your
environment, the volume configs must be missing a source_path.

If you are using at least Queens version, then there must be something
additional missing that we would need to do to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie







[openstack-dev] Next Manila meeting cancelled

2018-10-19 Thread Tom Barron
We have a number of manila cores and regular participants who cannot 
attend the regular Thursday Manila meeting this coming week so it is 
cancelled.


We will meet as normal the following Thursday, 1 November, at 1500 UTC 
on #openstack-meeting-alt [1].


Cheers,

-- Tom Barron (tbarron)

[1] https://wiki.openstack.org/wiki/Manila/Meetings




Re: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer

2018-10-19 Thread Alex Schultz
+1
On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi  wrote:
>
> On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles 
>  wrote:
>>
>> I would like to propose Bob Fournier (bfournie) as a core reviewer in
>> TripleO. His patches and reviews have spanned quite a wide range in our
>> project, his reviews show great insight and quality, and I think he would
>> be a great addition to the core team.
>>
>> What do you folks think?
>
>
> Big +1, Bob is a solid contributor/reviewer. His area of knowledge has been 
> critical in all aspects of Hardware Provisioning integration but also in 
> other TripleO bits.
> --
> Emilien Macchi


Re: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed

2018-10-19 Thread Andreas Jaeger

On 19/10/2018 16.19, Sean McGinnis wrote:
This appears to be another in-transit job conflict with the py3 work. 
Things should be fine, but we will need to manually propose the 
constraint update since it was skipped.


On Fri, Oct 19, 2018, 09:14  wrote:


Build failed.

- release-openstack-python3

http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python3/1cb87ba/
: SUCCESS in 2m 44s
- announce-release announce-release : SKIPPED
- propose-update-constraints propose-update-constraints : SKIPPED
- release-openstack-python

http://logs.openstack.org/33/33d839da8acb93a36a45186d158262f269e9bbd6/release/release-openstack-python/3a9339d/
: POST_FAILURE in 2m 40s


We're not using tox venv anymore for the release job, so I think we need 
to remove the "fetch-tox-output" role completely:


https://review.openstack.org/611886

Andreas







--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread melanie witt

On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create the volume, then attach it to a vm. 
I can migrate the volume(in-use) from one host to another. The nova 
libvirt will call the 'rebase' to finish it. But if using ceph backend, 
it raises exception 'Swap only supports host devices'. So now it does 
not support to migrate volume(in-use). Does anyone do this work now? Or 
Is there any way to let me migrate volume(in-use) with ceph backend?


What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to 
enable migration of in-use volumes with ceph semi-recently (Queens).


On the nova side, the code looks for the source_path in the volume 
config, and if there is not one present, it raises 
NotImplementedError(_("Swap only supports host devices"). So in your 
environment, the volume configs must be missing a source_path.
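
For illustration, an rbd-backed disk in the libvirt domain XML is a network 
disk with no local source path (names below are placeholders), which is why 
the swap/rebase code can't operate on it:

<disk type='network' device='disk'>
  <source protocol='rbd' name='volumes/volume-UUID'>
    <host name='ceph-mon1' port='6789'/>
  </source>
</disk>

whereas the supported case is a host device, e.g. <source dev='/dev/sdb'/>.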


If you are using at least Queens version, then there must be something 
additional missing that we would need to do to make the migration work.


[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie







Re: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer

2018-10-19 Thread John Fulton
+1
On Fri, Oct 19, 2018 at 9:46 AM Alex Schultz  wrote:
>
> +1
> On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi  wrote:
> >
> > On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles 
> >  wrote:
> >>
> >> I would like to propose Bob Fournier (bfournie) as a core reviewer in
> >> TripleO. His patches and reviews have spanned quite a wide range in our
> >> project, his reviews show great insight and quality, and I think he would
> >> be a great addition to the core team.
> >>
> >> What do you folks think?
> >
> >
> > Big +1, Bob is a solid contributor/reviewer. His area of knowledge has been 
> > critical in all aspects of Hardware Provisioning integration but also in 
> > other TripleO bits.
> > --
> > Emilien Macchi


Re: [openstack-dev] [Openstack-operators] [goals][upgrade-checkers] Week R-26 Update

2018-10-19 Thread Matt Riedemann
Top posting just to try and summarize my thought that for the goal in 
Stein, I think we should focus on getting the base framework in place 
for each service project, along with any non-config (including policy) 
specific upgrade checks that make sense for each project.


As Ben mentioned, there are existing tools for validating config (I know 
BlueBox used to use the fatal_deprecations config in their CI/CD 
pipeline to know when they needed to change their deploy scripts because 
deploying new code from pre-prod would fail). Once we get the basics 
covered we can work, as a community, to figure out how best to integrate 
config validation into upgrade checks, because I don't really think we 
want to have upgrade checks that dump warnings for all deprecated 
options in addition to what is already provided by oslo.config/log. I 
have a feeling that would get so noisy that no one would ever pay 
attention to it. I'm mostly interested in the scenario that config is 
removed from code but still being set in the config file which could 
fail an upgrade on service restart (if an alias was removed for 
example), but I also tend to think those types of issues are case-by-case.
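
For reference, the oslo.config validator mentioned below can already be 
pointed at a config file to flag unknown or removed options, something along 
these lines (exact flags may differ by release):

$ oslo-config-validator --input-file /etc/nova/nova.conf --namespace nova.conf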


On 10/15/2018 3:29 PM, Ben Nemec wrote:



On 10/15/18 3:27 AM, Jean-Philippe Evrard wrote:

On Fri, 2018-10-12 at 17:05 -0500, Matt Riedemann wrote:

The big update this week is version 0.1.0 of oslo.upgradecheck was
released. The documentation along with usage examples can be found
here
[1]. A big thanks to Ben Nemec for getting that done since a few
projects were waiting for it.

In other updates, some changes were proposed in other projects [2].

And finally, Lance Bragstad and I had a discussion this week [3]
about
the validity of upgrade checks looking for deleted configuration
options. The main scenario I'm thinking about here is FFU where
someone
is going from Mitaka to Pike. Let's say a config option was
deprecated
in Newton and then removed in Ocata. As the operator is rolling
through
from Mitaka to Pike, they might have missed the deprecation signal
in
Newton and removal in Ocata. Does that mean we should have upgrade
checks that look at the configuration for deleted options, or
options
where the deprecated alias is removed? My thought is that if things
will
not work once they get to the target release and restart the service
code, which would definitely impact the upgrade, then checking for
those
scenarios is probably OK. If on the other hand the removed options
were
just tied to functionality that was removed and are otherwise not
causing any harm then I don't think we need a check for that. It was
noted that oslo.config has a new validation tool [4] so that would
take
care of some of this same work if run during upgrades. So I think
whether or not an upgrade check should be looking for config option
removal ultimately depends on the severity of what happens if the
manual
intervention to handle that removed option is not performed. That's
pretty broad, but these upgrade checks aren't really set in stone
for
what is applied to them. I'd like to get input from others on this,
especially operators and if they would find these types of checks
useful.

[1] https://docs.openstack.org/oslo.upgradecheck/latest/
[2] https://storyboard.openstack.org/#!/story/2003657
[3]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17 


[4]
http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html 





Hey,

Nice topic, thanks Matt!

TL;DR: I would rather fail explicitly for all removals, warning on all
deprecations. My concern is, by being more surgical, we'd have to
decide what's "not causing any harm" (and I think deployers/users are
best placed to determine what's not causing them any harm).
Also, it's probably more work to classify based on "severity".
The quick win here (for upgrade-checks) is not about being smart, but
being an exhaustive, standardized across projects, and _always used_
source of truth for upgrades, which is complemented by release notes.

Long answer:

At some point in the past, I was working full time on upgrades using
OpenStack-Ansible.

Our process was the following:
1) Read all the project's release notes to find upgrade documentation
2) With said release notes, adapt our deploy tools to handle the
upgrade, and/or write ourselves extra documentation+release notes for
our deployers.
3) Try the upgrade manually, fail because some release note was missing
x or y. Find root cause and retry from step 2 until success.

Here is where I see upgrade checkers improving things:
1) No need for deployment projects to parse all release notes for
configuration changes, as the upgrade check tooling would directly
output the things that need to change for scenario x or y that is
included in the deployment project. No need to iterate either.

2) Test real deployer use cases. The deployers using openstack-ansible
have ultimate flexibility without our code 

Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-19 Thread melanie witt

On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote:


The version of my cinder and nova is Rocky. The scope of the cinder spec [1]
is only for migration of available volumes between two pools in the same 
ceph cluster.
If the volume is in in-use status [2], it will call the generic migration 
function, so, as you described, on the nova side it raises 
NotImplementedError(_("Swap only supports host devices")).

The get_config of a net volume [3] has no source_path.


Ah, OK, so you're trying to migrate a volume across two separate ceph 
clusters, and that is not supported.


So has anyone succeeded in migrating an in-use volume with the ceph 
backend, or is anyone working on it?


Hopefully someone can share their experience with trying to migrate 
volumes across separate ceph clusters. I unfortunately don't know 
anything about it.


Best,
-melanie


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101








Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-19 Thread Thierry Carrez

Ed Leafe wrote:

On Oct 15, 2018, at 7:40 AM, Chris Dent  wrote:


I'd like some input from the community on how we'd like this to go.


I would say it depends on the long-term plans for paste. Are we planning on 
weaning ourselves off of paste, and simply need to maintain it until that can 
be completed, or are we planning on encouraging its use?


Agree with Ed... is this something we plan to minimally maintain because 
we depend on it, something that needs feature work and that we want to 
encourage the adoption of, or something that we want to keep on 
life-support while we move away from it?


My assumption is that it's "something we plan to minimally maintain 
because we depend on it", in which case all options would work: the 
exact choice depends on whether there is anybody interested in helping 
maintain it, and where those contributors prefer to do the work.


--
Thierry Carrez (ttx)



Re: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer

2018-10-19 Thread Alan Bishop
+1

On Fri, Oct 19, 2018 at 9:47 AM John Fulton  wrote:

> +1
> On Fri, Oct 19, 2018 at 9:46 AM Alex Schultz  wrote:
> >
> > +1
> > On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi 
> wrote:
> > >
> > > On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles <
> jaosor...@redhat.com> wrote:
> > >>
> > >> I would like to propose Bob Fournier (bfournie) as a core reviewer in
> > >> TripleO. His patches and reviews have spanned quite a wide range in our
> > >> project, his reviews show great insight and quality, and I think he
> > >> would be a great addition to the core team.
> > >>
> > >> What do you folks think?
> > >
> > >
> > > Big +1, Bob is a solid contributor/reviewer. His area of knowledge has
> been critical in all aspects of Hardware Provisioning integration but also
> in other TripleO bits.
> > > --
> > > Emilien Macchi


[openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-19 Thread Matt Riedemann
The big news this week is we have a couple of volunteer developers from 
NEC (Akhil Jain and Rajat Dhasmana) who are pushing the base framework 
changes across a lot of the projects [1]. I'm trying to review as many 
of these as I can. The request now is for the core teams on these 
projects to review them as well so we can keep moving, and then start 
thinking about non-placeholder specific checks for each project.
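
As a reminder, the framework gives each project a "$SERVICE-status upgrade 
check" command modeled on nova's; the placeholder checks just report success 
for now, roughly like this (output is illustrative):

$ nova-status upgrade check
+-------------------------+
| Upgrade Check Results   |
+-------------------------+
| Check: Placement API    |
| Result: Success         |
| Details: None           |
+-------------------------+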


The one other open question I have is about the Adjutant change [2]. I 
know Adjutant is very new and I'm not sure what upgrades look like for 
that project, so I don't really know how valuable adding the upgrade 
check framework is to that project. Is it like Horizon where it's mostly 
stateless and fed off plugins? Because we don't have an upgrade check 
CLI for Horizon for that reason.


[1] 
https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged)

[2] https://review.openstack.org/#/c/611812/

--

Thanks,

Matt



Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-19 Thread Fox, Kevin M
Adding a stateless service-discovery provider on top of the existing etcd key 
value store would be pretty easy with something like coredns, I think, without 
adding another stateful storage dependency.
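
For example (a rough, untested sketch assuming etcd v3 on its default port
2379), coredns' etcd plugin reads SkyDNS-style records, so a small Corefile
plus one record per service would be enough:

somedomain.tld {
    etcd {
        path /skydns
        endpoint http://127.0.0.1:2379
    }
}

$ etcdctl put /skydns/tld/somedomain/keystone '{"host":"10.10.10.5","port":5000}'
$ dig @127.0.0.1 keystone.somedomain.tld A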

I don't really have a horse in the game other than that I'm an operator, and 
we're feeling overwhelmed by all the stateful stuff we have to maintain.

If consul is entirely optional, it's probably fine to add the feature. But I 
worry operators may avoid it.

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Friday, October 19, 2018 1:17 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
fabio and FQDN endpoints

> No, I mean, Consul would be an extra dependency in a big list of dependencies 
> OpenStack already has. OpenStack has so many it is causing operators to 
> reconsider adoption. I'm asking, if existing dependencies can be made to 
> solve the problem without adding more?
>
> Stateful dependencies are much harder to deal with than stateless ones, as 
> they take much more operator care/attention. Consul is stateful, as is etcd, 
> and etcd is already a dependency.
>
> Can etcd be used instead so as not to put more load on the operators?

While etcd is a strong KV store, it lacks many features consul has. Using
consul for DNS-based service discovery is very easy to implement without
making it a dependency.
So we will start with an "external" consul and see how to handle the
service registration without modifying the kolla containers or any
kolla-ansible code.

All the best,
Flo


>
> Thanks,
> Kevin
> 
> From: Florian Engelmann [florian.engelm...@everyware.ch]
> Sent: Wednesday, October 10, 2018 12:18 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
> fabio and FQDN endpoints
>
> by "another storage system" you mean the KV store of consul? That's just
> something consul brings with it...
>
> consul is very strong at health checks
>
> On 10/9/18 6:09 PM, Fox, Kevin M wrote:
>> etcd is an already approved openstack dependency. Could that be used instead 
>> of consul so as to not add yet another storage system? coredns with the 
>> https://coredns.io/plugins/etcd/ plugin would maybe do what you need?
>>
>> Thanks,
>> Kevin
>> 
>> From: Florian Engelmann [florian.engelm...@everyware.ch]
>> Sent: Monday, October 08, 2018 3:14 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
>> fabio and FQDN endpoints
>>
>> Hi,
>>
>> I would like to start a discussion about some changes and additions I
>> would like to see in kolla and kolla-ansible.
>>
>> 1. Keepalived is a problem in layer3 spine-leaf networks as any floating
>> IP can only exist in one leaf (and VRRP is a problem in layer3). I would
>> like to use consul and registrator to get rid of the "internal" floating
>> IP and use consul's DNS service discovery to connect all services with
>> each other.
>>
>> 2. Using "ports" for external API (endpoint) access is a major headache
>> if a firewall is involved. I would like to configure the HAProxy (or
>> fabio) for the external access to use "Host:" like, eg. "Host:
>> keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS.
>> Any customer would just need HTTPS access and not have to open all those
>> ports in his firewall. For some enterprise customers it is not possible
>> to request FW changes like that.
>>
>> 3. HAProxy is not capable of handling a "read/write" split with Galera. I
>> would like to introduce ProxySQL to be able to scale Galera.
>>
>> 4. HAProxy is fine, but fabio integrates well with consul and statsd, and
>> could be connected to a vault cluster to manage secure certificate access.
>>
>> 5. I would like to add vault as a Barbican backend.
>>
>> 6. I would like to add an option to enable tokenless authentication for
>> all services with each other, to get rid of all the OpenStack service
>> passwords (security issue).
>>
>> What do you think about it?
>>
>> All the best,
>> Florian
>>

[openstack-dev] [glance] next python-glanceclient release

2018-10-19 Thread Brian Rosmaita
Hello Glancers,

I was looking at a Cinder patch [0] and it made me realize that we
should do a glanceclient release that includes the
multihash-download-verification [1] before the next scheduled Stein
release (which was to be 3.0.0, basically Rocky with v1 support removed;
see [2]).  I think it would be good to have the new verification change
released so other projects can consume the code and we can find out
sooner if it breaks anyone.  (I'm worried about allow_md5_fallback=False
[6], which I think is definitely the right thing for the CLI client, but
the discussion about allowing users to pick an os_hash_algo on Iain's
spec-lite [4] is making me worry about the effect that default value
could have on other services.)

Here are the options:
(1) backport [1] to stable/rocky and cut 2.12.1
(2) cut 2.13.0 from master and make it the first Stein glanceclient,
leaving legacy md5 checksum verification as the only validation option in Rocky
(3) wait for 3.0.0 to include [1]
(4) change the default for allow_md5_fallback to True for the data()
function [6] (the CLI code already explicitly sets it and won't need to
be adjusted [5]) and then do (1) or (2) or (3)

Obviously, I don't like (3).  Not sure I like (4) either, but figured we
should at least think about it.

If we pick (1), we should merge the periodic tips job change [3] to
master and immediately backport it to stable/rocky before cutting the
release.  That way we won't have any unreleased patches sitting in
stable/rocky.

Let me know what you think.

cheers,
brian

[0] https://review.openstack.org/#/c/611081/
[1]
http://git.openstack.org/cgit/openstack/python-glanceclient/commit/?id=a4ea9f0720214bd4aa6d72e81776e1260b30ad2f
[2] https://launchpad.net/python-glanceclient/+series
[3] https://review.openstack.org/#/c/599844/
[4] https://review.openstack.org/#/c/597648/
[5]
http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/v2/shell.py?id=a4ea9f0720214bd4aa6d72e81776e1260b30ad2f#n521
[6]
http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/v2/images.py?id=a4ea9f0720214bd4aa6d72e81776e1260b30ad2f#n201



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-19 Thread Alex Schultz
On Fri, Oct 19, 2018 at 10:53 AM James Slagle  wrote:
>
> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  wrote:
> > Additionally I took a stab at combining the puppet/docker service
> > definitions for the aodh services in a similar structure to start
> > reducing the overhead we've had from maintaining the docker/puppet
> > implementations separately.  You can see the patch
> > https://review.openstack.org/#/c/611188/ for an additional example of
> > this.
>
> That patch takes the approach of removing baremetal support. Is that
> what we agreed to do?
>

It's been deprecated since Queens[0], so yes? I think it is time to stop
supporting this method of installation.  Given that I'm not even sure
the upgrade process still works with baremetal, I don't think there's a
reason to keep it: it directly impacts the time it takes to perform
deployments and also contributes to increased complexity all around.

[0] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html

> I'm not specifically opposed, as I'm pretty sure the baremetal
> implementations are no longer tested anywhere, but I know that Dan had
> some concerns about that last time around.
>
> The alternative we discussed was using jinja2 to include common
> data/tasks in both the puppet/docker/ansible implementations. That
> would also result in reducing the number of Heat resources in these
> stacks and hopefully reduce the amount of time it takes to
> create/update the ServiceChain stacks.
>

I'd rather we officially get rid of the one of the two methods and
converge on a single method without increasing the complexity via
jinja to continue to support both. If there's an improvement to be had
after we've converged on a single structure for including the base
bits, maybe we could do that then?

Thanks,
-Alex

> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Clark Boylan
On Fri, Oct 19, 2018, at 8:17 AM, Zane Bitter wrote:
> There hasn't been a Python 2 release in 8 years, and during that time 
> we've gotten used to the idea that that's the way things go. However, 
> with the switch to Python 3 looming (we will drop support for Python 2 
> in the U release[1]), history is no longer a good guide: Python 3 
> releases drop as often as every year. We are already feeling the pain 
> from this, as Linux distros have largely already completed the shift to 
> Python 3, and those that have are on versions newer than the py35 we 
> currently have in gate jobs.
> 
> We have traditionally held to the principle that we want each release to 
> support the latest release of CentOS and the latest LTS release of 
> Ubuntu, as they existed at the beginning of the release cycle.[2] 
> Currently this means in practice one version of py2 and one of py3, but 
> in the future it will mean two, usually different, versions of py3.
> 
> There are two separate issues that we need to address: unit tests (we'll 
> define this as code tested in isolation, within or spawned from within 
> the testing process), and integration tests (we'll define this as code 
> running in its own process, tested from the outside). I have two 
> separate but related proposals for how to handle those. 
> 
> I'd like to avoid discussion which versions of things we think should be 
> supported in Stein in this thread. Let's come up with a process that we 
> think is a good one to take into T and beyond, and then retroactively 
> apply it to Stein. Competing proposals are of course welcome, in 
> addition to feedback on this one.
> 
> Unit Tests
> --
> 
> For unit tests, the most important thing is to test on the versions of 
> Python we target. It's less important to be using the exact distro that 
> we want to target, because unit tests generally won't interact with 
> stuff outside of Python.
> 
> I'd like to propose that we handle this by setting up a unit test 
> template in openstack-zuul-jobs for each release. So for Stein we'd have 
> openstack-python3-stein-jobs. This template would contain:

Because zuul config is branch specific we could set up every project to use a 
`openstack-python3-jobs` template then define that template differently on each 
branch. This would mean you only have to update the location where the template 
is defined and not need to update every other project after cutting a stable 
branch. I would suggest we take advantage of that to reduce churn.

> 
> * A voting gate job for the highest minor version of py3 we want to 
> support in that release.
> * A voting gate job for the lowest minor version of py3 we want to 
> support in that release.
> * A periodic job for any interim minor releases.
> * (Starting late in the cycle) a non-voting check job for the highest 
> minor version of py3 we want to support in the *next* release (if 
> different), on the master branch only.
> 
> So, for example, (and this is still under active debate) for Stein we 
> might have gating jobs for py35 and py37, with a periodic job for py36. 
> The T jobs might only have voting py36 and py37 jobs, but late in the T 
> cycle we might add a non-voting py38 job on master so that people who 
> haven't switched to the U template yet can see what, if anything, 
> they'll need to fix.
> 
> We'll run the unit tests on any distro we can find that supports the 
> version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian 
> unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a 
> particular Python version before trying to test it.
> 
> Before the start of each cycle, the TC would determine which range of 
> versions we want to support, on the basis of the latest one we can find 
> in any distro and the earliest one we're likely to need in one of the 
> supported Linux distros. There will be a project-wide goal to switch the 
> testing template from e.g. openstack-python3-stein-jobs to 
> openstack-python3-treasure-jobs for every repo before the end of the 
> cycle. We'll have goal champions as usual following up and helping teams 
> with the process. We'll know where the problem areas are because we'll 
> have added non-voting jobs for any new Python versions to the previous 
> release's template.

I don't know that this needs to be a project wide goal if you can just update 
the template on the master branch where the template is defined. Do that then 
every project is now running with the up to date version of the template. We 
should probably advertise when this is happening with some links to python 
version x.y breakages/features, but the process itself should be quick.

As for Python version range selection, I worry that the criteria above rely 
on too much guesswork. I do think we should do our best to test future 
incoming versions of Python even while not officially supporting them. We will 
have to support them at some point, either directly or via some later version 
that includes the changes from that intermediate version.

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-19 Thread James Slagle
On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  wrote:
> Additionally I took a stab at combining the puppet/docker service
> definitions for the aodh services in a similar structure to start
> reducing the overhead we've had from maintaining the docker/puppet
> implementations separately.  You can see the patch
> https://review.openstack.org/#/c/611188/ for an additional example of
> this.

That patch takes the approach of removing baremetal support. Is that
what we agreed to do?

I'm not specifically opposed, as I'm pretty sure the baremetal
implementations are no longer tested anywhere, but I know that Dan had
some concerns about that last time around.

The alternative we discussed was using jinja2 to include common
data/tasks in both the puppet/docker/ansible implementations. That
would also result in reducing the number of Heat resources in these
stacks and hopefully reduce the amount of time it takes to
create/update the ServiceChain stacks.
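
(Purely to illustrate the shape of that alternative -- the file and path
names here are invented, not actual tripleo-heat-templates layout:

    {# docker/services/aodh-api.yaml.j2, hypothetical #}
    {% include 'common/aodh-base.yaml.j2' %}

i.e. one shared fragment rendered into each implementation's template.)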

--
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][oslo][messaging] Removing “blocking” executor from oslo.messaging

2018-10-19 Thread Ken Giusti
Hi Renat,
After discussing this a bit with Ben on IRC we're going to push the removal
off to T milestone 1.

I really like Ben's idea re: adding a blocking entry to your project's
setup.cfg file.  We can remove the explicit check for blocking in
oslo.messaging so you won't get an annoying warning if you want to load
blocking on your own.
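
For illustration, a rough sketch of how an entry point registered under
that namespace gets resolved at runtime via stevedore (the namespace name
is taken from Ben's snippet below; everything else is illustrative):

    # sketch only; assumes stevedore is installed
    from stevedore import driver

    mgr = driver.DriverManager(
        namespace='oslo.messaging.executors',
        name='blocking',  # would resolve to futurist:SynchronousExecutor
        invoke_on_load=False,
    )
    executor_cls = mgr.driver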

Let me know what you think, thanks.

On Fri, Oct 19, 2018 at 12:02 AM Renat Akhmerov 
wrote:

> Hi,
>
>
> @Ken, I understand your considerations. I get that. I’m only asking not to
> remove it *for now*. And yes, if you think it should be discouraged from
> using it’s totally fine. But practically, it’s been the only reliable
> option for Mistral so far that may be our fault, I have to admit, because
> we weren’t able to make it work well with other executor types but we’ll
> try to fix that.
>
> By the way, I was playing with different options yesterday and it seems
> like that setting the executor to “threading” and the
> “executor_thread_pool_size” property to 1 behaves the same way as
> “blocking”. So may be that’s an option for us too, even if “blocking” is
> completely removed. But I would still be in favour of having some extra
> time to prove that with thorough testing.
>
> @Ben, including the executor via setup.cfg also looks OK to me. I see no
> issues with this approach.
>
>
> Thanks
>
> Renat Akhmerov
> @Nokia
> On 18 Oct 2018, 23:35 +0700, Ben Nemec , wrote:
>
>
>
> On 10/18/18 9:59 AM, Ken Giusti wrote:
>
> Hi Renat,
>
> The biggest issue with the blocking executor (IMHO) is that it blocks
> the protocol I/O while  RPC processing is in progress.  This increases
> the likelihood that protocol processing will not get done in a timely
> manner and things start to fail in weird ways.  These failures are
> timing related and are typically hard to reproduce or root-cause.   This
> isn't something we can fix as blocking is the nature of the executor.
>
> If we are to leave it in we'd really want to discourage its use.
>
>
> Since it appears the actual executor code lives in futurist, would it be
> possible to remove the entrypoint for blocking from oslo.messaging and
> have mistral just pull it in with their setup.cfg? Seems like they
> should be able to add something like:
>
> oslo.messaging.executors =
>     blocking = futurist:SynchronousExecutor
>
> to their setup.cfg to keep it available to them even if we drop it from
> oslo.messaging itself. That seems like a good way to strongly discourage
> use of it while still making it available to projects that are really
> sure they want it.
>
>
> However I'm ok with leaving it available if the policy for using
> blocking is 'use at your own risk', meaning that bug reports may have to
> be marked 'won't fix' if we have reason to believe that blocking is at
> fault.  That implies removing 'blocking' as the default executor value
> in the API and having applications explicitly choose it.  And we keep
> the deprecation warning.
>
> We could perhaps implement time duration checks around the executor
> callout and log a warning if the executor blocked for an extended amount
> of time (extended=TBD).
>
> Other opinions so we can come to a consensus?
>
>
> On Thu, Oct 18, 2018 at 3:24 AM Renat Akhmerov  > wrote:
>
> Hi Oslo Team,
>
> Can we retain “blocking” executor for now in Oslo Messaging?
>
>
> Some background..
>
> For a while we had to use Oslo Messaging with “blocking” executor in
> Mistral because of incompatibility of MySQL driver with green
> threads when choosing “eventlet” executor. Under certain conditions
> we would get deadlocks between green threads. Some time ago we
> switched to using PyMysql driver which is eventlet friendly and did
> a number of tests that showed that we could safely switch to
> “eventlet” executor (with that driver) so we introduced a new option
> in Mistral where we could choose an executor in Oslo Messaging. The
> corresponding bug is [1].
>
> The issue is that we recently found that not everything actually
> works as expected when using combination PyMysql + “eventlet”
> executor. We also tried “threading” executor and the system *seems*
> to work with it but surprisingly performance is much worse.
>
> Given all of that we’d like to ask Oslo Team not to remove
> “blocking” executor for now completely, if that’s possible. We have
> a strong motivation to switch to “eventlet” for other reasons
> (parallelism => better performance etc.) but seems like we need some
> time to make it smoothly.
>
>
> [1] https://bugs.launchpad.net/mistral/+bug/1696469
>
>
> Thanks
>
> Renat Akhmerov
> @Nokia
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> 

[openstack-dev] [oslo] Next meeting cancelled

2018-10-19 Thread Ben Nemec

Hi,

Sorry for the late notice, but I just found out I'm going to be 
travelling next week, which will mean I can't run the meeting. Since 
Doug is also out and it's really late to find someone else to run it, 
we're just going to skip it. We'll resume as normal the following week.


Also note that this means I may have somewhat limited availability on 
IRC next week. I'll try to keep up with emails, but I can't make any 
guarantees. If you need immediate assistance with Oslo, try ChangBo 
(gcb), Ken (kgiusti), or Stephen (stephenfin).


Thanks.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Zane Bitter

On 19/10/18 12:30 PM, Clark Boylan wrote:

On Fri, Oct 19, 2018, at 8:17 AM, Zane Bitter wrote:

Unit Tests
--

For unit tests, the most important thing is to test on the versions of
Python we target. It's less important to be using the exact distro that
we want to target, because unit tests generally won't interact with
stuff outside of Python.

I'd like to propose that we handle this by setting up a unit test
template in openstack-zuul-jobs for each release. So for Stein we'd have
openstack-python3-stein-jobs. This template would contain:


Because zuul config is branch specific we could set up every project to use a 
`openstack-python3-jobs` template then define that template differently on each 
branch. This would mean you only have to update the location where the template 
is defined and not need to update every other project after cutting a stable 
branch. I would suggest we take advantage of that to reduce churn.


There was a reason I didn't propose that approach: in practice you can't 
add a new gating test to a centralised zuul template definition. If you 
do, many projects will break because the change is not self-testing. At 
best you'll be pitchforked by an angry mob of people who can't get 
anything but py37 fixes through the gate, and at worst they'll all stop 
using the template to get unblocked and then never go back to it.


We don't need everyone to cut over at the same time. We just need them 
to do it in the space of one release cycle. One patch every 6 months is 
not an excessive amount of churn.



* A voting gate job for the highest minor version of py3 we want to
support in that release.
* A voting gate job for the lowest minor version of py3 we want to
support in that release.
* A periodic job for any interim minor releases.
* (Starting late in the cycle) a non-voting check job for the highest
minor version of py3 we want to support in the *next* release (if
different), on the master branch only.

So, for example, (and this is still under active debate) for Stein we
might have gating jobs for py35 and py37, with a periodic job for py36.
The T jobs might only have voting py36 and py37 jobs, but late in the T
cycle we might add a non-voting py38 job on master so that people who
haven't switched to the U template yet can see what, if anything,
they'll need to fix.

We'll run the unit tests on any distro we can find that supports the
version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian
unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a
particular Python version before trying to test it.

Before the start of each cycle, the TC would determine which range of
versions we want to support, on the basis of the latest one we can find
in any distro and the earliest one we're likely to need in one of the
supported Linux distros. There will be a project-wide goal to switch the
testing template from e.g. openstack-python3-stein-jobs to
openstack-python3-treasure-jobs for every repo before the end of the
cycle. We'll have goal champions as usual following up and helping teams
with the process. We'll know where the problem areas are because we'll
have added non-voting jobs for any new Python versions to the previous
release's template.


I don't know that this needs to be a project wide goal if you can just update 
the template on the master branch where the template is defined. Do that then 
every project is now running with the up to date version of the template. We 
should probably advertise when this is happening with some links to python 
version x.y breakages/features, but the process itself should be quick.


Either way, it'll be project teams themselves fixing any broken tests 
due to a new version being added. So we can either have a formal 
project-wide goal where we project-manage that process across the space 
of a release, or a de-facto project-wide goal where we break everybody 
and then nothing gets merged until they fix it.



As for Python version range selection, I worry that the criteria above 
rely on too much guesswork.


Some guesswork is going to be inevitable, unfortunately, (we have no way 
of knowing what will be in CentOS 8, for example) but I agree that we 
should try to tighten up the criteria as much as possible.



I do think we should do our best to test future incoming versions of python 
even while not officially supporting them. We will have to support them at some 
point, either directly or via some later version that includes the changes from 
that intermediate version.


+1, I think we should try to add support for higher versions as soon as 
possible. It may take a long time to get into an LTS release, but 
there's bound to be _some_ distro out there where people want to use it. 
(Case in point: Debian really wanted py37 support in Rocky, at which 
point a working 3.7 wasn't even available in _any_ Ubuntu release, let 
alone an LTS). That's why I said "the latest one we can find in any 
distro" - if we 

Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Zane Bitter

On 19/10/18 11:17 AM, Zane Bitter wrote:
I'd like to propose that we handle this by setting up a unit test 
template in openstack-zuul-jobs for each release. So for Stein we'd have 
openstack-python3-stein-jobs. This template would contain:


* A voting gate job for the highest minor version of py3 we want to 
support in that release.
* A voting gate job for the lowest minor version of py3 we want to 
support in that release.

* A periodic job for any interim minor releases.
* (Starting late in the cycle) a non-voting check job for the highest 
minor version of py3 we want to support in the *next* release (if 
different), on the master branch only.


So, for example, (and this is still under active debate) for Stein we 
might have gating jobs for py35 and py37, with a periodic job for py36. 
The T jobs might only have voting py36 and py37 jobs, but late in the T 
cycle we might add a non-voting py38 job on master so that people who 
haven't switched to the U template yet can see what, if anything, 
they'll need to fix.


Just to make it easier to visualise, here is an example of how the Zuul 
config _might_ look now if we had adopted this proposal during Rocky:


https://review.openstack.org/611947
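
A condensed sketch of what such a template could contain (job names
assumed; see the review above for the real thing):

    - project-template:
        name: openstack-python3-rocky-jobs
        check:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37
        gate:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37
        periodic:
          jobs:
            - openstack-tox-py36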

And instead of having a project-wide goal in Stein to add 
`openstack-python36-jobs` to the list that currently includes 
`openstack-python35-jobs` in each project's Zuul config[1], we'd have 
had a goal to change `openstack-python3-rocky-jobs` to 
`openstack-python3-stein-jobs` in each project's Zuul config.


- ZB


[1] 
https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] SSL errors polling amphorae and missing tenant network interface

2018-10-19 Thread Erik McCormick
Apologies for cross-posting, but in the event that these might be
worth filing as bugs, I wanted the Octavia devs to see it as well...

I've been wrestling with getting Octavia up and running and have
become stuck on two issues. I'm hoping someone has run into these
before. My google foo has come up empty.

Issue 1:
When the Octavia controller tries to poll the amphora instance, it
tries repeatedly and eventually fails. The error on the controller
side is:

2018-10-19 14:17:39.181 26 ERROR
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
retries (currently set to 300) exhausted.  The amphora is unavailable.
Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
SSLError(SSLError("bad handshake: Error([('rsa routines',
'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
'tls_process_server_certificate', 'certificate verify
failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
(Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
'tls_process_server_certificate', 'certificate verify
failed')],)",),))

On the amphora side I see:
[2018-10-19 17:52:54 +] [1331] [DEBUG] Error processing SSL request.
[2018-10-19 17:52:54 +] [1331] [DEBUG] Invalid request from
ip=:::10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
failure (_ssl.c:1754)

I've generated certificates both with the script in the Octavia git
repo, and with the Openstack Ansible playbook. I can see that they are
present in /etc/octavia/certs.

I'm using the Kolla (Queens) containers for the control plane so I'm
sure I've satisfied all the python library constraints.
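
(For anyone debugging the same thing, a quick way to exercise the
handshake by hand, mimicking what the controller does -- the paths here
are examples and may differ per deployment:

    openssl s_client -connect 10.7.0.112:9443 \
        -cert /etc/octavia/certs/client.pem \
        -key /etc/octavia/certs/client.key \
        -CAfile /etc/octavia/certs/server_ca.pem
)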

Issue 2:
I"m not sure how it gets configured, but the tenant network interface
(ens6) never comes up. I can spawn other instances on that network
with no issue, and I can see that Neutron has the port attached to the
instance. However, in the instance this is all I get:

ubuntu@amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: ens3:  mtu 9000 qdisc pfifo_fast
state UP group default qlen 1000
link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe30:c460/64 scope link
   valid_lft forever preferred_lft forever
3: ens6:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff

There's no evidence of the interface anywhere else including udev rules.

Any help with either or both issues would be greatly appreciated.

Cheers,
Erik

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Monty Taylor

On 10/19/2018 01:58 PM, Andreas Jaeger wrote:

On 19/10/2018 18.30, Clark Boylan wrote:
 > [...]
Because zuul config is branch specific we could set up every project 
to use a `openstack-python3-jobs` template then define that template 
differently on each branch. This would mean you only have to update 
the location where the template is defined and not need to update 
every other project after cutting a stable branch. I would suggest we 
take advantage of that to reduce churn.


Alternative we have a single "openstack-python3-jobs" template in an 
unbranched repo like openstack-zuul-jobs and define different jobs per 
branch.


The end result would be the same, each repo uses the same template and 
no changes are needed for the repo when branching...


Yes - I agree that we should take advantage of zuul's branching support. 
And I agree with Andreas that we should just use branch matchers in 
openstack-zuul-jobs to do it.
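
A tiny sketch of what that could look like in openstack-zuul-jobs (job
and variable names invented for illustration):

    - job:
        name: openstack-python3-unit
        parent: openstack-tox
        vars:
          tox_envlist: py35

    - job:
        name: openstack-python3-unit
        branches: master
        vars:
          tox_envlist: py36

i.e. one job name whose branch-matched variant picks the interpreter, so
branching a project needs no per-repo change.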


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Andreas Jaeger

On 19/10/2018 18.30, Clark Boylan wrote:
> [...]

Because zuul config is branch specific we could set up every project to use a 
`openstack-python3-jobs` template then define that template differently on each 
branch. This would mean you only have to update the location where the template 
is defined and not need to update every other project after cutting a stable 
branch. I would suggest we take advantage of that to reduce churn.


Alternative we have a single "openstack-python3-jobs" template in an 
unbranched repo like openstack-zuul-jobs and define different jobs per 
branch.


The end result would be the same, each repo uses the same template and 
no changes are needed for the repo when branching...


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-19 Thread Piotr Misiak


On 19.10.2018 10:21, Florian Engelmann wrote:


On 17.10.2018 15:45, Florian Engelmann wrote:

On 10.10.2018 09:06, Florian Engelmann wrote:
Now I get you. I would say all configuration templates need to be 
changed to allow, e.g.:


$ grep http /etc/kolla/cinder-volume/cinder.conf
glance_api_servers = http://10.10.10.5:9292
auth_url = http://internal.somedomain.tld:35357
www_authenticate_uri = http://internal.somedomain.tld:5000
auth_url = http://internal.somedomain.tld:35357
auth_endpoint = http://internal.somedomain.tld:5000

to look like:

glance_api_servers = http://glance.service.somedomain.consul:9292
auth_url = http://keystone.service.somedomain.consul:35357
www_authenticate_uri = http://keystone.service.somedomain.consul:5000
auth_url = http://keystone.service.somedomain.consul:35357
auth_endpoint = http://keystone.service.somedomain.consul:5000
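
(For the service-registration side, a minimal consul service definition
for the keystone example could look like the sketch below -- the name,
port and check URL are illustrative:

    {
      "service": {
        "name": "keystone",
        "port": 5000,
        "check": {
          "http": "http://localhost:5000/v3",
          "interval": "10s"
        }
      }
    }
)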



The idea with Consul looks interesting.

But I don't get your issue with VIP address and spine-leaf network.

What we have:
- controller1 behind leaf1 A/B pair with MLAG
- controller2 behind leaf2 A/B pair with MLAG
- controller3 behind leaf3 A/B pair with MLAG

The VIP address is active on one controller server.
When the server fail then the VIP will move to another controller 
server.

Where do you see a SPOF in this configuration?



So leaf1, leaf2 and leaf3 have to share the same L2 domain, right (in an 
IPv4 network)?



Yes, they share the L2 domain, but we have ARP and ND suppression enabled.

It is an EVPN network where there is an L3 fabric with VxLANs between 
leafs and spines.


So we don't care where a server is connected. It can be connected to 
any leaf.


Ok, that sounds very interesting. Is it possible to share some 
internals? Which switch vendor/model do you use? What does your IP 
address schema look like?
If VxLAN is used between spines and leafs, are you using VxLAN 
networking for OpenStack as well? Where is your VTEP?




We have Mellanox switches with Cumulus Linux installed. Here is the 
documentation: 
https://docs.cumulusnetworks.com/display/DOCS/Ethernet+Virtual+Private+Network+-+EVPN


EVPN is a well known standard and it is also supported by Juniper, Cisco 
etc.
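
(For a flavour of it, the switch-side core of such a setup is only a few
lines of FRR/BGP config -- the values here are invented:

    router bgp 65001
      neighbor SPINES peer-group
      address-family l2vpn evpn
        neighbor SPINES activate
        advertise-all-vni
)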


We have standard VLANs between servers and leaf switches; they are 
mapped to VxLANs between leafs and spines. In our env every leaf switch 
is a VTEP. Servers have MLAG/CLAG connections to two leaf switches. We 
also have anycast gateways on the leaf switches. From the servers' point 
of view, our network is like a very big switch with hundreds of ports 
and standard VLANs.


We are using VxLAN networking for OpenStack, but it is configured on top 
of the network VxLANs; we don't mix them.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] SSL errors polling amphorae and missing tenant network interface

2018-10-19 Thread Michael Johnson
Hi Erik,

Sorry to hear you are still having certificate issues.

Issue #2 is probably caused by issue #1. Since we hot-plug the tenant
network for the VIP, one of the first steps after the worker connects
to the amphora agent is finishing the required configuration of the
VIP interface inside the network namespace on the amphroa.

If I remember correctly, you are attempting to configure Octavia with
the dual CA option (which is good for non-development use).

This is what I have for notes:

[certificates] gets the following:
cert_generator = local_cert_generator
ca_certificate = server CA's "server.pem" file
ca_private_key = server CA's "server.key" file
ca_private_key_passphrase = pass phrase for ca_private_key

[controller_worker]
client_ca = Client CA's ca_cert file

[haproxy_amphora]
client_cert = Client CA's client.pem file (I think with its key
concatenated, as rm_work said the other day)
server_ca = Server CA's ca_cert file
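
Expressed as a bare octavia.conf fragment it would look roughly like this
(the option names are as above; the file paths are placeholders, not
defaults):

    [certificates]
    cert_generator = local_cert_generator
    ca_certificate = /etc/octavia/certs/server_ca.cert.pem
    ca_private_key = /etc/octavia/certs/server_ca.key.pem
    ca_private_key_passphrase = <passphrase>

    [controller_worker]
    client_ca = /etc/octavia/certs/client_ca.cert.pem

    [haproxy_amphora]
    client_cert = /etc/octavia/certs/client.cert-and-key.pem
    server_ca = /etc/octavia/certs/server_ca.cert.pem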

That said, I can probably run through this and write something up next
week that is more step-by-step/detailed.

Michael

On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick
 wrote:
>
> Apologies for cross-posting, but in the event that these might be
> worth filing as bugs, I wanted the Octavia devs to see it as well...
>
> I've been wrestling with getting Octavia up and running and have
> become stuck on two issues. I'm hoping someone has run into these
> before. My google foo has come up empty.
>
> Issue 1:
> When the Octavia controller tries to poll the amphora instance, it
> tries repeatedly and eventually fails. The error on the controller
> side is:
>
> 2018-10-19 14:17:39.181 26 ERROR
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
> retries (currently set to 300) exhausted.  The amphora is unavailable.
> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
> SSLError(SSLError("bad handshake: Error([('rsa routines',
> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> 'tls_process_server_certificate', 'certificate verify
> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> 'tls_process_server_certificate', 'certificate verify
> failed')],)",),))
>
> On the amphora side I see:
> [2018-10-19 17:52:54 +] [1331] [DEBUG] Error processing SSL request.
> [2018-10-19 17:52:54 +] [1331] [DEBUG] Invalid request from
> ip=:::10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
> failure (_ssl.c:1754)
>
> I've generated certificates both with the script in the Octavia git
> repo, and with the Openstack Ansible playbook. I can see that they are
> present in /etc/octavia/certs.
>
> I'm using the Kolla (Queens) containers for the control plane so I'm
> sure I've satisfied all the python library constraints.
>
> Issue 2:
> I"m not sure how it gets configured, but the tenant network interface
> (ens6) never comes up. I can spawn other instances on that network
> with no issue, and I can see that Neutron has the port attached to the
> instance. However, in the instance this is all I get:
>
> ubuntu@amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: ens3:  mtu 9000 qdisc pfifo_fast
> state UP group default qlen 1000
> link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
> inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fe30:c460/64 scope link
>valid_lft forever preferred_lft forever
> 3: ens6:  mtu 1500 qdisc noop state DOWN group
> default qlen 1000
> link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff
>
> There's no evidence of the interface anywhere else including udev rules.
>
> Any help with either or both issues would be greatly appreciated.
>
> Cheers,
> Erik
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade

2018-10-19 Thread Emilien Macchi
On Fri, Oct 19, 2018 at 4:24 AM Giulio Fidente  wrote:

> 1) create the podman systemd unit
> 2) delete the docker container
>

We finally went with "stop the docker container"

> 3) start the podman container
>

and 4) delete the docker container later in THT upgrade_tasks.

And yes +1 to do the same in ceph-ansible if possible.
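
For anyone following along, the systemd unit created in step 1 looks
roughly like this sketch (container name and timeout invented):

    [Unit]
    Description=keystone container (managed by podman)
    After=network.target

    [Service]
    Restart=always
    ExecStart=/usr/bin/podman start -a keystone
    ExecStop=/usr/bin/podman stop -t 10 keystone

    [Install]
    WantedBy=multi-user.target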
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [placement] update 18-42

2018-10-19 Thread Chris Dent


HTML: https://anticdent.org/placement-update-18-42.html

After a gap from when I was away last week, here's this week's
placement update. The situation this week remains much the same as
last week: focus on specs and the bigger issues associated with
extraction.

# Most Important

The major factors that need attention are managing database
migrations and associated tooling and getting the ball rolling on
properly producing documentation. More on both of these things in
the extraction section below.

# What's Changed

mnaser found an [issue](https://launchpad.net/bugs/1798163) with the
migrations associated with consumer ids. A fix was created [in
nova](https://review.openstack.org/#/c/65/) and [ported into
placement](https://review.openstack.org/#/c/611165/) but it raised
some [questions on what to do with those
migrations](http://lists.openstack.org/pipermail/openstack-dev/2018-October/135796.html)
in the extracted placement. Some work also needs to be done to check
that the solutions will work in PostgreSQL, which might trip over its
stricter handling of GROUP BY clauses.

# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 16 (-1).
* [In progress placement bugs](https://goo.gl/vzGGDQ): 8.

# Specs

There's a [spec review
sprint](http://lists.openstack.org/pipermail/openstack-dev/2018-October/135748.html)
this coming Tuesday. This may be missing some newer specs because I
got exhausted keeping tabs on the ones that already exist.

* 
  Account for host agg allocation ratio in placement
  (Still in rocky/)

* 
  Add subtree filter for GET /resource_providers

* 
  Resource provider - request group mapping in allocation candidate

* 
  VMware: place instances on resource pool
  (still in rocky/)

* 
  Standardize CPU resource tracking

* 
  Allow overcommit of dedicated CPU
  (Has an alternative which changes allocations to a float)

* 
  List resource providers having inventory

* 
  Bi-directional enforcement of traits

* 
  Modelling passthrough devices for report to placement

* 
  Propose counting quota usage from placement and API database
  (A bit out of date but may be worth resurrecting)

* 
  Spec: allocation candidates in tree

* 
  [WIP] generic device discovery policy

* 
  Nova Cyborg interaction specification.

* 
  supporting virtual NVDIMM devices

* 
  Spec: Support filtering by forbidden aggregate

* 
  Proposes NUMA topology with RPs

* 
  Support initial allocation ratios

* 
  Count quota based on resource class

* 
  WIP: High Precision Event Timer (HPET) on x86 guests

* 
  Add support for emulated virtual TPM

* 
  Limit instance create max_count (spec) (has some concurrency
  issues related placement)

* 
  Adds spec for instance live resize

# Main Themes

## Making Nested Useful

Work on getting nova's use of nested resource providers happy and
fixing bugs discovered in placement in the process. This is creeping
ahead, but feels somewhat stalled out, presumably because people are
busy with other things.

* 

I feel like I'm missing some things in this area. Please let me know
if there are others. This is related:

* 
  Pass allocations to virt drivers when resizing

## Extraction

There continue to be three main tasks in regard to placement
extraction:

1. upgrade and integration testing
2. database schema migration and management
3. documentation publishing

The upgrade aspect of (1) is in progress with a [patch to
grenade](https://review.openstack.org/#/c/604454/) and a [patch to
devstack](https://review.openstack.org/#/c/600162/). This is very
close to working. A main blocker is needing a proper tool for
managing the creation and migration of database tables (more below).

My experiments with using gabbi-tempest are getting [a bit
closer](https://review.openstack.org/#/c/611678/).

Successful devstack is dependent on us having a reasonable solution
to (2). For the moment [a hacked up

[openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer

2018-10-19 Thread Juan Antonio Osorio Robles
Hello!


I would like to propose Bob Fournier (bfournie) as a core reviewer in
TripleO. His patches and reviews have spanned quite a wide range in our
project, and his reviews show great insight and quality. I think he would
be a great addition to the core team.

What do you folks think?


Best Regards



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer

2018-10-19 Thread Emilien Macchi
On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles <
jaosor...@redhat.com> wrote:

> I would like to propose Bob Fournier (bfournie) as a core reviewer in
> TripleO. His patches and reviews have spanned quite a wide range in our
> project, his reviews show great insight and quality and I think he would
> be a addition to the core team.
>
> What do you folks think?
>

Big +1, Bob is a solid contributor/reviewer. His knowledge has been
critical in all aspects of Hardware Provisioning integration, and in
other TripleO bits as well.
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] [contribute]

2018-10-19 Thread Leni Kadali Mutungi

Hi all.

I've downloaded the manila project from GitHub as a zip file, unpacked 
it and have run `git fetch --depth=1` and been progressively running 
`git fetch --deepen=5` to get the commit history I need. For future 
reference, would a shallow clone, e.g. `git clone --depth=1`, be enough to 
start working on the project or should one have the full commit history 
of the project?
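
For reference, the workflow described above as a runnable sketch (repo
URL assumed):

    git clone --depth=1 https://github.com/openstack/manila.git
    cd manila
    git fetch --deepen=5     # progressively pull in more history
    git fetch --unshallow    # or convert to a full clone in one go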


--
-- Kind regards,
Leni Kadali Mutungi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][NFS] Inexplicable utime permission denied when launching instance

2018-10-19 Thread Neil Jerram
Wracking my brains over this one, would appreciate any pointers...

Setup: Small test deployment with just 3 compute nodes, Queens on Ubuntu
Bionic. The first compute node is an NFS server for
/var/lib/nova/instances, and the other compute nodes mount that as NFS
clients.

Problem: Sometimes, when launching an instance which is scheduled to one of
the client nodes, nova-compute (in imagebackend.py) gets Permission Denied
(errno 13) when calling utime to touch the timestamp on the instance file.

Through various bits of debugging and hackery, I've established that:

- it looks like the problem never occurs when this is the call that
bootstraps the privsep setup; but it does occur quite frequently on later
calls

- when the problem occurs, retrying doesn't help (5 times, with 0.5s in
between)

- the instance file does exist, and is owned by root with read/write
permission for root

- the privsep helper is running as root

- the privsep helper receives and executes the request - so it's not a
problem with communication between nova-compute and the helper

- root is uid 0 on both NFS server and client

- NFS setup does not have the root_squash option

- there is some AppArmor setup, on both client and server, and I haven't
yet worked out whether that might be relevant.
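
One way to reproduce the failing call outside of nova (a sketch; the
instance path is an example, not a real one from my deployment):

    sudo python -c "import os; os.utime('/var/lib/nova/instances/<uuid>/disk', None)"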

Any ideas?

Many thanks,
  Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade

2018-10-19 Thread Sergii Golovatiuk
Hi,

On Fri, Oct 19, 2018 at 10:24 AM Giulio Fidente  wrote:

> On 10/14/18 5:07 PM, Emilien Macchi wrote:
> > I recently wrote a blog post about how we could upgrade an host from
> > Docker containers to Podman containers.
> >
> >
> http://my1.fr/blog/openstack-containerization-with-podman-part-3-upgrades/
> thanks Emilien this looks nice and I believe the basic approach
> consisting of:
>
> 1) create the podman systemd unit
> 2) delete the docker container
> 3) start the podman container
>

What about several chained containers? You may delete one and the next one
will fail. It would be better to stop a container and delete it only once
all dependent containers are migrated, started, and validated. What Emilien
described works for all cases. It would be nice to have the same procedure
for the Ceph case as well, IMHO.


> could be used to upgrade the Ceph containers as well (via ceph-ansible)
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Best Regards,
Sergii Golovatiuk
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [all] Naming the T release of OpenStack

2018-10-19 Thread Tony Breeds
On Thu, Oct 18, 2018 at 05:35:39PM +1100, Tony Breeds wrote:
> Hello all,
> As per [1] the nomination period for names for the T release have
> now closed (actually 3 days ago sorry).  The nominated names and any
> qualifying remarks can be seen at2].
> 
> Proposed Names
>  * Tarryall
>  * Teakettle
>  * Teller
>  * Telluride
>  * Thomas
>  * Thornton
>  * Tiger
>  * Tincup
>  * Timnath
>  * Timber
>  * Tiny Town
>  * Torreys
>  * Trail
>  * Trinidad
>  * Treasure
>  * Troublesome
>  * Trussville
>  * Turret
>  * Tyrone
> 
> Proposed Names that do not meet the criteria
>  * Train

I have re-worked my openstack/governance change[1] to ask the TC to accept
adding Train to the poll as (partially) described in [2].

I present the names above to the community and Foundation marketing team
for consideration.  The list above does contain Train, clearly if the TC
do not approve [1] Train will not be included in the poll when created.

I apologise for any offence or slight caused by my previous email in
this thread.  It was well intentioned albeit, with hindsight, poorly
thought through.

Yours Tony.

[1] https://review.openstack.org/#/c/611511/
[2] 
https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev