[Openstack-operators] [simplification] PTG Recap

2017-10-03 Thread Mike Perez
# Simplification PTG Recap

## Introduction
This goal started in the May 2017 leadership workshop [1]. We have been
collecting feedback from the community that OpenStack can be complex for
deployers, contributors, and ultimately the people we're all supporting: the
consumers of our clouds. The goal is purposely broad in response to that
feedback. As a community, we must work together and, from an objective
standpoint, set proper goals for this never-ending effort.

[1] - https://wiki.openstack.org/wiki/Governance/Foundation/8Mar2017BoardMeeting

## Moving Forward
We have a growing thread [1] on this topic, and the dev digest summary [2].
Let's move the discussion to this thread for better focus.

Let's recognize we're not going to solve this problem with just some group or
some code. It is going to be never-ending.

So far, the etherpad has let the community identify some of the known things
that make OpenStack complex. Some areas have more information than others.
Let's start researching the better-identified areas first. We can always
revisit the other areas as interest and more information come forward.

The three areas are Installation, Operation, and Upgrade … otherwise known as
I.O.U.

Below are the areas, with some snippets from the etherpad and from our 2017
user survey [3].


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/thread.html#122075
[2] - 
https://www.openstack.org/blog/2017/09/developer-mailing-list-digest-september-23-29-2017/
[3] - https://www.openstack.org/assets/survey/April2017SurveyReport.pdf

## Installation

### Etherpad summary
* Our documentation team is moving toward decentralizing install guides and
  more.
* We've been bridging the gap between project names and service names with the
  project navigator [1] and the service-types-authority repository [2].

### User Survey Feedback
* What we have today is varied installation/deployment models.
* Need the installation to become easier—the architecture is still too complex
  right now.
* Installation, particularly around TripleO and HA upgrade deployments, is
  very complicated.
* A common deployment and lifecycle management tool/framework would make things
  easier. Having every distribution use its own tools (TripleO, Fuel, Crowbar,
  ...) really doesn't help. And yes, I know that this is not OpenStack's fault,
  but if the community unites behind one tool (or maybe two), we could put some
  pressure on the vendors.
* Automate installation. Require consistent installation between projects.
* Standardized automated deployment methods to minimize the risk of splitting
  the developments in vendor-specific branches.
* Deployment is still a nightmare of complexity and riddled with failure unless
  you are covered in scars from previous deployments.
* Initial build-up needs to be much easier, such as using a simple scripted
  installer that analyzes the hardware and then can build a working OpenStack.
  When upgrades become available, it can do a rolling upgrade with zero
  downtime.

[1] - https://www.openstack.org/software/project-navigator/
[2] - http://git.openstack.org/cgit/openstack/service-types-authority/

## Upgrades

### Etherpad summary
* It's easier to burn down clouds than to go from Newton -> Ocata -> etc.
* It's recognized that things are getting better and will continue to improve,
  assuming operators partner with developers, as with the skip-level upgrade
  effort.
* More requests on publishing binaries. Let's refer back to our discussion on
  publishing binary images [2] and the dev digest version [3].

### User Survey Feedback

#### End of Life Upstream
The lifecycle could use a lot of attention. Most large customers move slowly
and thus are running older versions, which are EOL upstream sometimes before
they even deploy them. Doing in-place upgrades is risky business with just
a one- or two-release jump, so the prospect of trying to jump 4 or 5 releases
to get to a current, non-EOL version is daunting and results in either a lot of
outages or simply green-fielding new releases and letting the old die on the
vine. This causes significant operational overhead as getting tenants to move
to a new deploy entirely is a big ask, and you end up operating multiple
versions.

#### Containerizing OpenStack Itself
Many organizations appear to be moving toward containerizing their OpenStack
control plane. Continued work on multi-version interoperability would allow
organizations to upgrade a lot more seamlessly and rapidly by deploying
newer-versioned containers in parallel with their existing older-versioned
containers. And it may have a profoundly positive effect on the upgrade and
lifecycle for larger deployments.

#### Bugs
* The biggest challenge is to upgrade the production system since there are
  a lot of dependencies and bugs that we are facing.
* Releases need more feature and bugfix backporting.

#### Longer Development Cycles
Stop coming out with all 

Re: [Openstack-operators] [openstack-dev] [nova] key_pair update on rebuild (a whole lot of conversations)

2017-10-03 Thread Matt Riedemann

On 10/3/2017 3:16 PM, Sean Dague wrote:

There is currently a spec up for being able to specify a new key_pair
name during the rebuild operation in Nova -
https://review.openstack.org/#/c/375221/

For those not completely familiar with Nova operations, rebuild triggers
the "reset this vm to initial state" by throwing out all the disks, and
rebuilding them from the initial glance images. It does however keep the
IP address and device models when you do that. So it's useful for
ephemeral but repeating workloads, where you'd rather not have the
network information change out from under you.
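
For illustration, a minimal sketch of what rebuild looks like from the CLI
today, plus roughly what the spec would add (the key name option is only the
spec's proposal, shown commented out because it is not an existing flag):

   # Recreate the server's disks from the given image while keeping its IP
   # and device model:
   openstack server rebuild --image <image-uuid-or-name> <server-uuid>

   # Roughly what https://review.openstack.org/#/c/375221/ proposes
   # (hypothetical option, not in the API or CLI today):
   # openstack server rebuild --image <image> --key-name <new-key> <server>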


We also talked quite a bit about rebuild with volume-backed instances 
today, and the fact the root disk isn't replaced during rebuild in that 
case, for which there are many reported bugs...




The spec is a little vague about when this becomes really useful,
because this will not save you from "I lost my private key, and I have
important data on that disk". Because the disk is destroyed. That's the
point of rebuild. We once added this preserve_ephemeral flag to rebuild
for TripleO on ironic, but it's so nasty we've scoped it to only work
with ironic backends. Ephemeral should mean ephemeral.

Rebuild bypasses the scheduler. A rebuilt server stays on the same host
as it was before, which means the operation has a good chance of being
faster than a DELETE + CREATE, as the image cache on that host should
already have the base image for your instance.


It also means no chances for NoValidHost or resource claim failures.



A bunch of data was collected today in a lot of different IRC channels
(#openstack-nova, #openstack-infra, #openstack-operators).

= OpenStack Operators =

mnaser said that for their customers this would be useful. Keys get lost
often, but keeping the IP is actually valuable. They would also like this.

penick said that for their existing environment, they have a workflow
where this would be useful. But they are moving away from using nova for
key distribution because in Nova keys are user owned, which actually
works poorly given that everything else is project owned. So they are
building something to do key distribution after boot in the guest not
using nova's metadata.

Lots of people said they didn't use nova's keypair interfaces, they just
did it all in config management after the fact.

= Also on reboot? =

Because the reason people said they wanted it was: "I lost my private
key", the question at PTG was "does that mean you want it on reboot?"

But as we dive through the constraints of that, people that build "pet"
VMs typically delete or disable cloud-init (or similar systems) after
first boot. Without that kind of agent, this isn't going to work anyway.

So doing this on reboot as well seems very fragile and not very useful.

= Infra =

We asked the infra team if this is useful to them; the answer was no.
What would be useful to them is if keypairs could be updated. They use a
symbolic name for a keypair but want to do regular key rotation. Right
now they do this by deleting then recreating keypairs, but that does
mean there is a window where there is no keypair with that name, so
server creates fail.
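
As a concrete sketch of that workaround (existing commands; the keypair name
and key path are placeholders):

   # Rotate the key behind a fixed symbolic name; between the delete and the
   # create there is a window where no keypair with that name exists, so
   # server creates referencing it will fail:
   openstack keypair delete infra-root
   openstack keypair create --public-key ~/.ssh/new_key.pub infra-root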

It is agreed that something supporting key rotation in the future would
be handy, but that's not in this scope.

= Barbican =

In the tradition of making a simple fix a generic one, it does look like
there is a longer term part of this where Nova should really be able to
specify a Barbican resource url for a key so that things like rotation
could be dealt with in a system that specializes in that. It also would
address the very weird oddity of user vs. project scoping.

That's a bigger more nebulous thing. Other folks would need to be
engaged on that one.
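
For reference, storing a public key in Barbican is already easy from the CLI;
the missing piece is Nova being able to reference the resulting secret href
(the secret name below is just a placeholder):

   # Store a public key as a Barbican secret; the returned href is the kind
   # of URL a future Nova integration could point at instead of storing the
   # key itself:
   openstack secret store --name jenkins-pubkey \
       --payload "$(cat ~/.ssh/id_rsa.pub)"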


= Where I think we are? =

I think with all this data we're at the following:

Q: Should we add this to rebuild?
A: Yes, probably - after some enhancement to the spec *

* - we really should have much better use cases describing the situations it
is expected to be used in. We spend a lot of time, 2 and 3 years out, trying
to figure out how anyone would ever use a feature, and adding another one
without that context doesn't seem good

Q: should this also be on reboot?
A: NO - it would be too fragile


I also think figuring out a way to get Nova out of the key storage
business (which it really shouldn't be in) would be good. So if anyone
wants to tackle Nova using Barbican for keys, that would be ++. Rebuild
doesn't wait on that, but Barbican urls for keys seems like a much
better world to be in.

-Sean



Sean, thanks for summarizing the various discussions had today. I've 
also included the operators list on this.


--

Thanks,

Matt



Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Jonathan Proulx
On Tue, Oct 03, 2017 at 08:29:45PM +, Jeremy Stanley wrote:
:On 2017-10-03 16:19:27 -0400 (-0400), Jonathan Proulx wrote:
:[...]
:> This works in our OpenStack where it's our IP space so PTR record also
:> matches, not so well in public cloud where we can reserve an IP and
:> set forward DNS but not control its reverse mapping.
:[...]
:
:Not that it probably helps, but I consider any public cloud which
:doesn't give you some means of automatically setting reverse DNS
:(either through an API or delegation to your own nameservers) to be
:thoroughly broken, at least for Internet-facing use cases.

we wander off topic...and I wander waaay off topic below...

but I have exactly 1 instance in AWS where I care about this; perhaps I just
don't care enough to have found the answer, or perhaps for a count of 1 it's
not worth solving.


Then again, perhaps AWS is just actually the trash it seems to be to me.
I've been trying to like it since before there was an OpenStack, but the
more I try the less I can stand it. People who use AWS and complain about
OpenStack UX baffle me; there's a lot OpenStack can do to improve UX, but
it's waaay better than my AWS experiences. I mean, it was fewer steps to
enable IPv6 on my OpenStack provider networks than it was to get it working
in my AWS VPC, and neutron isn't the poster child for simplicity.


:-- 
:Jeremy Stanley







Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Jeremy Stanley
On 2017-10-03 16:19:27 -0400 (-0400), Jonathan Proulx wrote:
[...]
> This works in our OpenStack where it's our IP space so PTR record also
> matches, not so well in public cloud where we can reserve an IP and
> set forward DNS but not control its reverse mapping.
[...]

Not that it probably helps, but I consider any public cloud which
doesn't give you some means of automatically setting reverse DNS
(either through an API or delegation to your own nameservers) to be
thoroughly broken, at least for Internet-facing use cases.
-- 
Jeremy Stanley




Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Jonathan Proulx
On Tue, Oct 03, 2017 at 01:00:13PM -0700, Clint Byrum wrote:

:It's worth noting that AD and Kerberos were definitely not designed
:for clouds that have short lived VMs, so it does not surprise me that
:treating VMs as cattle and then putting them in AD would confuse it.

For instances we have that need Kerberos keytabs we specify the fixed
IP. This works in our OpenStack where it's our IP space so PTR record also
matches, not so well in public cloud where we can reserve an IP and
set forward DNS but not control its reverse mapping.

-Jon

:Excerpts from Tim Bell's message of 2017-10-03 18:46:31 +:
:> We use rebuild when reverting with snapshots. Keeping the same IP and 
hostname avoids some issues with Active Directory and Kerberos.
:> 
:> Tim
:> 
:> -Original Message-
:> From: Clint Byrum 
:> Date: Tuesday, 3 October 2017 at 19:17
:> To: openstack-operators 
:> Subject: Re: [Openstack-operators] [nova] Should we allow passing new
:> user_data during rebuild?
:> 
:> 
:> Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500:
:> > We plan on deprecating personality files from the compute API in a new 
:> > microversion. The spec for that is here:
:> > 
:> > https://review.openstack.org/#/c/509013/
:> > 
:> > Today you can pass new personality files to inject during rebuild, and 
:> > at the PTG we said we'd allow passing new user_data to rebuild as a 
:> > replacement for the personality files.
:> > 
:> > However, if the only reason one would need to pass personality files 
:> > during rebuild is because we don't persist them during the initial 
:> > server create, do we really need to also allow passing user_data for 
:> > rebuild? The initial user_data is stored with the instance during 
:> > create, and re-used during rebuild, so do we need to allow updating it 
:> > during rebuild?
:> > 
:> 
:> My personal opinion is that rebuild is an anti-pattern for cloud, and
:> should be frozen and deprecated. It does nothing but complicate Nova
:> and present challenges for scaling.
:> 
:> That said, if it must stay as a feature, I don't think updating the
:> user_data should be a priority. At that point, you've basically created an
:> entirely new server, and you can already do that by creating an entirely
:> new server.
:> 
:> 
:



Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Clint Byrum
I fully appreciate that there are users of it today, and that it is
a thing that will likely live for years.

Long lived VMs can use all sorts of features to make VMs work more like
precious long lived servers. However, supporting these cases directly
doesn't make OpenStack scalable or simple. Quite the opposite.

It's worth noting that AD and Kerberos were definitely not designed
for clouds that have short lived VMs, so it does not surprise me that
treating VMs as cattle and then putting them in AD would confuse it.

Excerpts from Tim Bell's message of 2017-10-03 18:46:31 +:
> We use rebuild when reverting with snapshots. Keeping the same IP and 
> hostname avoids some issues with Active Directory and Kerberos.
> 
> Tim
> 
> -Original Message-
> From: Clint Byrum 
> Date: Tuesday, 3 October 2017 at 19:17
> To: openstack-operators 
> Subject: Re: [Openstack-operators] [nova] Should we allow passing new
> user_data during rebuild?
> 
> 
> Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500:
> > We plan on deprecating personality files from the compute API in a new 
> > microversion. The spec for that is here:
> > 
> > https://review.openstack.org/#/c/509013/
> > 
> > Today you can pass new personality files to inject during rebuild, and 
> > at the PTG we said we'd allow passing new user_data to rebuild as a 
> > replacement for the personality files.
> > 
> > However, if the only reason one would need to pass personality files 
> > during rebuild is because we don't persist them during the initial 
> > server create, do we really need to also allow passing user_data for 
> > rebuild? The initial user_data is stored with the instance during 
> > create, and re-used during rebuild, so do we need to allow updating it 
> > during rebuild?
> > 
> 
> My personal opinion is that rebuild is an anti-pattern for cloud, and
> should be frozen and deprecated. It does nothing but complicate Nova
> and present challenges for scaling.
> 
> That said, if it must stay as a feature, I don't think updating the
> user_data should be a priority. At that point, you've basically created an
> entirely new server, and you can already do that by creating an entirely
> new server.
> 
> 



Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Tim Bell
We use rebuild when reverting with snapshots. Keeping the same IP and hostname 
avoids some issues with Active Directory and Kerberos.

Tim

-Original Message-
From: Clint Byrum 
Date: Tuesday, 3 October 2017 at 19:17
To: openstack-operators 
Subject: Re: [Openstack-operators] [nova] Should we allow passing new   
user_data during rebuild?


Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500:
> We plan on deprecating personality files from the compute API in a new 
> microversion. The spec for that is here:
> 
> https://review.openstack.org/#/c/509013/
> 
> Today you can pass new personality files to inject during rebuild, and 
> at the PTG we said we'd allow passing new user_data to rebuild as a 
> replacement for the personality files.
> 
> However, if the only reason one would need to pass personality files 
> during rebuild is because we don't persist them during the initial 
> server create, do we really need to also allow passing user_data for 
> rebuild? The initial user_data is stored with the instance during 
> create, and re-used during rebuild, so do we need to allow updating it 
> during rebuild?
> 

My personal opinion is that rebuild is an anti-pattern for cloud, and
should be frozen and deprecated. It does nothing but complicate Nova
and present challenges for scaling.

That said, if it must stay as a feature, I don't think updating the
user_data should be a priority. At that point, you've basically created an
entirely new server, and you can already do that by creating an entirely
new server.





Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Clint Byrum

Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500:
> We plan on deprecating personality files from the compute API in a new 
> microversion. The spec for that is here:
> 
> https://review.openstack.org/#/c/509013/
> 
> Today you can pass new personality files to inject during rebuild, and 
> at the PTG we said we'd allow passing new user_data to rebuild as a 
> replacement for the personality files.
> 
> However, if the only reason one would need to pass personality files 
> during rebuild is because we don't persist them during the initial 
> server create, do we really need to also allow passing user_data for 
> rebuild? The initial user_data is stored with the instance during 
> create, and re-used during rebuild, so do we need to allow updating it 
> during rebuild?
> 

My personal opinion is that rebuild is an anti-pattern for cloud, and
should be frozen and deprecated. It does nothing but complicate Nova
and present challenges for scaling.

That said, if it must stay as a feature, I don't think updating the
user_data should be a priority. At that point, you've basically created an
entirely new server, and you can already do that by creating an entirely
new server.



Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Matt Riedemann

On 10/3/2017 10:53 AM, Matt Riedemann wrote:
However, if the only reason one would need to pass personality files 
during rebuild is because we don't persist them during the initial 
server create, do we really need to also allow passing user_data for 
rebuild?


Given personality files were added to the rebuild API back in Diablo [1] 
with no explanation in the commit message why, my assumption above is 
just that, an assumption.


[1] 
https://github.com/openstack/nova/commit/cebc98176926f57016a508d5c59b11f55dfcf2b3


--

Thanks,

Matt



[Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Matt Riedemann
We plan on deprecating personality files from the compute API in a new 
microversion. The spec for that is here:


https://review.openstack.org/#/c/509013/

Today you can pass new personality files to inject during rebuild, and 
at the PTG we said we'd allow passing new user_data to rebuild as a 
replacement for the personality files.


However, if the only reason one would need to pass personality files 
during rebuild is because we don't persist them during the initial 
server create, do we really need to also allow passing user_data for 
rebuild? The initial user_data is stored with the instance during 
create, and re-used during rebuild, so do we need to allow updating it 
during rebuild?
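
For context, a minimal sketch of how user_data flows today (file and server
names are placeholders): it is supplied once at create time, stored with the
instance, and re-used on rebuild:

   # user_data is passed at create; cloud-init (or similar) consumes it on
   # first boot, and the stored copy is what rebuild re-uses today:
   openstack server create --image <image> --flavor <flavor> \
       --key-name <keypair> --user-data ./cloud-init.yaml my-server

   # The PTG proposal would optionally accept new user_data on rebuild
   # (hypothetical option, not an existing flag):
   # openstack server rebuild --image <image> --user-data ./new.yaml my-server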


--

Thanks,

Matt



Re: [Openstack-operators] [openstack-dev] [nova] reset key pair during rebuilding

2017-10-03 Thread Saverio Proto
Hello,

I agree this feature of injecting a new keypair would be of great
use. We are always dealing with users that can't access their VMs
anymore.

But AFAIU, here we are talking about injecting a new key at REBUILD. So
it does not fit the scenario of a staff member that leaves!

We hardly ever use the rebuild feature in our workflow. Our users
just use create and delete.

I think a feature where you can re-inject a new keypair into the VM at
any time would be more useful. Ahhh, it makes the users happy, but of
course it is a security nightmare :D

Cheers

Saverio



2017-09-27 11:15 GMT+02:00 Marcus Furlong :
> On 27 September 2017 at 09:23, Michael Still  wrote:
>>
>> Operationally, why would I want to inject a new keypair? The scenario I can
>> think of is that there's data in that instance that I want, and I've lost
>> the keypair somehow. Unless that data is on an ephemeral, its gone if we do
>> a rebuild.
>
> This is quite a common scenario - staff member who started the
> instance leaves, and you want to access data on the instance, or
> maintain/debug the service running on the instance.
>
> Hitherto, I have used direct db calls to update the key, so it would
> be nice if there was an API call to do so.
>
> Cheers,
> Marcus.
> --
> Marcus Furlong
>



[Openstack-operators] Ops Meetups Team Meeting 2017-10-3

2017-10-03 Thread Chris Morgan
Minutes and meeting log for today's IRC meeting may be found here:

Minutes:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-10-03-14.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-10-03-14.02.txt
Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-10-03-14.02.log.html

Of particular interest is the strong support for a Forum session at the
Sydney Summit to discuss OpenStack having LTS releases (Long Term Support).

Next meeting 14:00 EST on #openstack-operators

Chris

-- 
Chris Morgan 


Re: [Openstack-operators] [magnum] issue using magnum on Pike

2017-10-03 Thread Andy Wojnarek
Thank you very much! That makes a ton of sense.

How did you troubleshoot this? I couldn’t find anything intuitive about the 
error message. 

Do you think this is worthy of a bug submittal? Or is it strictly a build issue 
on SuSE’s part?

Thanks,
Andrew Wojnarek |  Sr. Systems Engineer| ATS Group, LLC
mobile 717.856.6901 | andy.wojna...@theatsgroup.com


On 10/3/17, 4:37 AM, "Spyros Trigazis"  wrote:

cc Andy Wojnarek and Erik McCormick

On 3 October 2017 at 10:32, Spyros Trigazis  wrote:
> Hello,
>
> It looks like the new docker module is not installed.
> The docker client moved from docker 1.x to 2.x and
> unfortunately they changed the name.
>
> Magnum Pike depends on python-docker 2.x.
> http://git.openstack.org/cgit/openstack/magnum/tree/requirements.txt?h=stable%2Fpike#n16
>
> Module named docker:
> https://github.com/docker/docker-py/blob/2.0.0/setup.py#L47
> https://pypi.python.org/pypi/docker
>
> Module named docker-py:
> https://github.com/docker/docker-py/blob/1.10.6/setup.py#L43
> https://pypi.python.org/pypi/docker-py
>
> The change on the name:
> https://github.com/docker/docker-py/commit/25aaec37b7c2e950b4a987ac151880061febb37a
>
> You need to install docker 2.x , I don't know if there are opensuse
> packages for it, I see only 1.10.4.
>
> # zypper info python-docker-py
> Loading repository data...
> Reading installed packages...
>
>
> Information for package python-docker-py:
> -
> Repository : OSS
> Name   : python-docker-py
> Version: 1.10.4-10.2
> Arch   : noarch
> Vendor : openSUSE
> Installed Size : 306.6 KiB
> Installed  : No
> Status : not installed
> Source package : python-docker-py-1.10.4-10.2.src
> Summary: Docker API Client
> Description:
> A docker API client in Python
>
> Cheers,
> Spyros





[Openstack-operators] [openstack-ansible] Meetings change

2017-10-03 Thread Jean-Philippe Evrard
Hello everyone,

Some people on this planet are more aware than others of this fact:
we have too many meetings in our lives.

I don't think OpenStack-Ansible should be so greedy as to take 8 hours of
your life a month for meetings. I therefore propose reducing to 4
meetings/month: 3 bug triages and 1 community meeting.

On top of that, attendance in meetings is low, so I'd rather we find,
all together, a timeslot that suits the majority of us.

I started this etherpad [1] to list timeslots. I'd like you to:
1) (Optionally) Add a timeslot that would suit you best
2) Vote for a timeslot in which you can regularly attend
   OpenStack-Ansible meetings

Please give your IRC nick too; that would help.

Thank you in advance.

Best regards,
Jean-Philippe Evrard (evrardjp)

[1] https://etherpad.openstack.org/p/osa-meetings-planification



Re: [Openstack-operators] [magnum] issue using magnum on Pike

2017-10-03 Thread Spyros Trigazis
cc Andy Wojnarek and Erik McCormick

On 3 October 2017 at 10:32, Spyros Trigazis  wrote:
> Hello,
>
> It looks like the new docker module is not installed.
> The docker client moved from docker 1.x to 2.x and
> unfortunately they changed the name.
>
> Magnum Pike depends on python-docker 2.x.
> http://git.openstack.org/cgit/openstack/magnum/tree/requirements.txt?h=stable%2Fpike#n16
>
> Module named docker:
> https://github.com/docker/docker-py/blob/2.0.0/setup.py#L47
> https://pypi.python.org/pypi/docker
>
> Module named docker-py:
> https://github.com/docker/docker-py/blob/1.10.6/setup.py#L43
> https://pypi.python.org/pypi/docker-py
>
> The change on the name:
> https://github.com/docker/docker-py/commit/25aaec37b7c2e950b4a987ac151880061febb37a
>
> You need to install docker 2.x , I don't know if there are opensuse
> packages for it, I see only 1.10.4.
>
> # zypper info python-docker-py
> Loading repository data...
> Reading installed packages...
>
>
> Information for package python-docker-py:
> -
> Repository : OSS
> Name   : python-docker-py
> Version: 1.10.4-10.2
> Arch   : noarch
> Vendor : openSUSE
> Installed Size : 306.6 KiB
> Installed  : No
> Status : not installed
> Source package : python-docker-py-1.10.4-10.2.src
> Summary: Docker API Client
> Description:
> A docker API client in Python
>
> Cheers,
> Spyros



Re: [Openstack-operators] [magnum] issue using magnum on Pike

2017-10-03 Thread Spyros Trigazis
Hello,

It looks like the new docker module is not installed.
The docker client moved from docker 1.x to 2.x and
unfortunately they changed the name.

Magnum Pike depends on python-docker 2.x.
http://git.openstack.org/cgit/openstack/magnum/tree/requirements.txt?h=stable%2Fpike#n16

Module named docker:
https://github.com/docker/docker-py/blob/2.0.0/setup.py#L47
https://pypi.python.org/pypi/docker

Module named docker-py:
https://github.com/docker/docker-py/blob/1.10.6/setup.py#L43
https://pypi.python.org/pypi/docker-py

The change on the name:
https://github.com/docker/docker-py/commit/25aaec37b7c2e950b4a987ac151880061febb37a

You need to install docker 2.x , I don't know if there are opensuse
packages for it, I see only 1.10.4.

# zypper info python-docker-py
Loading repository data...
Reading installed packages...


Information for package python-docker-py:
-
Repository : OSS
Name   : python-docker-py
Version: 1.10.4-10.2
Arch   : noarch
Vendor : openSUSE
Installed Size : 306.6 KiB
Installed  : No
Status : not installed
Source package : python-docker-py-1.10.4-10.2.src
Summary: Docker API Client
Description:
A docker API client in Python

Cheers,
Spyros
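
A quick way to check which client is actually installed (a sketch; the pip
command assumes installing from PyPI is acceptable in your environment):

   # Both the old docker-py 1.x package and the new docker 2.x package import
   # as "docker", so check the version rather than the import name:
   python -c "import docker; print(docker.version)"

   # If it prints 1.x, Magnum Pike needs the renamed package, e.g. via pip:
   pip install 'docker>=2.0.0'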
