Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Kris,

On Sep 30, 2015, at 4:26 PM, Kris G. Lindgren <klindg...@godaddy.com> wrote:

We are looking at deploying Magnum as an answer for how we do containers
company-wide at GoDaddy. I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain, and past
experience tells me this won't be practical or scale; however, from experience
I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal OpenStack cloud; about 1/4 of
the projects are currently doing some form of containers on their own, with
more joining every day. If all of these projects were to convert over to the
current Magnum configuration, we would suddenly be attempting to
support/configure ~1k Magnum clusters. Considering that everyone will want it
HA, we are looking at a minimum of 2 kube nodes per cluster + LBaaS VIPs +
floating IPs. From a capacity standpoint this is an excessive amount of
duplicated infrastructure to spin up in projects where people may be running
10-20 containers per project. From an operator support perspective this is a
special level of hell that I do not want to get into. Even if I am off by
75%, 250 still sucks.

Keep in mind that your Magnum bays can use the same floating IP addresses that
your containers do, and the container hosts are shared between the COE nodes
and the containers that make up the applications running in the bay. It is
possible to use private address space for that, and to proxy public-facing
access through a proxy layer that uses names to route connections to the
appropriate Magnum bay. That's how you can escape the problem of public IP
addresses as a scarce resource.

Also, if you use Magnum to start all those bays, they can all look the same,
rather than the ~1000 container environments you have today, which probably
don't look very similar from one to the next. Upgrading becomes much more
achievable when you have wider consistency. There is a new feature currently
in review called public baymodel that allows the cloud operator to define the
bay model, while individual tenants start bays based on that one common
"template". This is a way of centralizing most of your configuration, which
addresses a lot of the operational concern.

From my point of view, an ideal use case for companies like ours
(Yahoo/GoDaddy) would be to support hierarchical projects in Magnum. That way
we could create a project for each department, and then the subteams of those
departments can have their own projects. We create a bay per department.
Sub-projects, if they want to, can create their own bays (but support of the
kube cluster would then fall to that team). When a sub-project spins up a pod
on a bay, minions get created inside that team's sub-project, and the
containers in that pod run on the capacity that was spun up under that
project; the minions for each pod would be in a scaling group and as such
grow/shrink as dictated by load.

You can do this today by sharing your TLS certs. In fact, you could make the 
cert signing a bit more sophisticated than it is today, and allow each subteam 
to have a unique TLS cert that can auth against a common bay.

The above would make it so that we support a minimal, yet IMHO reasonable,
number of kube clusters, give people who can't or don't want to fall in line
with the provided resource a way to make their own, and still offer a "good
enough for a single company" level of multi-tenancy.

This is different than what Joshua was asking for with identities in Keystone,
because today's COEs themselves don't have modular identity solutions that are
implemented with multi-tenancy.

Imagine for a moment that you don't need to run your bays on Nova instances
that are virtual machines. What if you had an additional host aggregate that
could produce libvirt/lxc guests that you can use to form bays? Bays could
then be composed of nodes sourced from BOTH your libvirt/lxc host aggregate
(for hosting your COEs) and your normal KVM (or other hypervisor) host
aggregate for your apps to use. Then your bays (what you referred to as an
"excessive amount of duplicated infrastructure") effectively become processes
running on a much smaller number of compute nodes. You could do this by
specifying a different master_flavor_id and flavor_id such that these fall on
different host aggregates. As long as you are "all one company" and are not
concerned primarily with security isolation between neighboring COE master
nodes, that approach may actually be the right balance, and it would not
require an architectural shift or figuring out how to accomplish nested tenants.
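
To make the flavor split concrete, here is a rough sketch of the relevant
bits of a baymodel. It is purely illustrative: the flavor names are made up,
and it assumes the operator has already pinned those flavors to the
respective host aggregates (for example with the
AggregateInstanceExtraSpecsFilter):

    # Illustrative sketch only, not a tested recipe: the two baymodel fields
    # that decide where a bay's nodes land. COE masters use a flavor tied to
    # the libvirt/lxc aggregate, worker minions use a flavor tied to KVM.
    shared_baymodel = {
        'name': 'shared-k8s',              # hypothetical name
        'coe': 'kubernetes',
        'master_flavor_id': 'lxc.medium',  # flavor pinned to the libvirt/lxc aggregate
        'flavor_id': 'm1.medium',          # flavor pinned to the KVM aggregate
    }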

Adrian

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multipl

[openstack-dev] [QA] Meeting Thursday October 1st at 9:00 UTC

2015-09-30 Thread Ken'ichi Ohmichi
Hi everyone,

Please remember that the weekly OpenStack QA team IRC meeting will be on
Thursday, October 1st at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones, the next
meeting will be at:

04:00 EDT
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-09-30 Thread Ben Nemec
 

On 2015-09-30 01:08, Dougal Matthews wrote: 

> Hi,
> 
> What is the standard practice for defining public APIs for OpenStack
> libraries? As I am working on refactoring and updating tripleo-common I have
> to grep through the projects I know that use it to make sure I don't break
> anything.
> 
> Personally I would choose to have a policy of "If it is documented, it is
> public" because that is very clear and it still allows us to do internal
> refactoring.
> 
> Otherwise we could use __all__ to define what is public in each file, or
> assume everything that doesn't start with an underscore is public.

The last is the accepted Python convention:
https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references
and is in common use in other OpenStack libraries. It also integrates
properly with things like automatic API docs from Sphinx. I'd be -1 on
pretty much any other approach. 
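
For concreteness, a minimal sketch of that convention (the module and names
below are invented for illustration, not actual tripleo-common code):

    # tripleo_common/example.py -- hypothetical module, for illustration only
    __all__ = ['do_something']  # optional, but makes the public surface explicit

    def do_something(plan_name):
        """Public helper: part of the supported API and picked up by Sphinx."""
        return _build_plan(plan_name)

    def _build_plan(plan_name):
        # Leading underscore: internal detail, free to change without notice.
        return {'name': plan_name}

Sphinx autodoc ignores underscore-prefixed names unless you explicitly ask
for them (:private-members:), so the generated docs and the convention stay
in agreement.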


Re: [openstack-dev] Do not modify (or read) ERROR_ON_CLONE in devstack gate jobs

2015-09-30 Thread Takashi Yamamoto
Hi,

You removed the networking-midonet Tempest jobs for this reason. [1]
I want to revive them, but I'm not sure which methods are acceptable.
Can you explain a little?

I did some research on other repos:
- networking-ovn seems to use a raw "git clone" command to fetch the
openvswitch repo
- networking-odl seems to fetch a zip from their site. It also seems
to do some magic with Maven.

If OVN's way is acceptable, it's the easiest for us (midonet) to follow.

ODL's way is also OK for us to follow, if acceptable. However,
in the commit message [1] you seem to say that we (OpenStack) need to have
a Maven mirror.
Does that apply to ODL as well?

[1] https://review.openstack.org/#/c/227401/

On Fri, Sep 25, 2015 at 12:24 AM, James E. Blair  wrote:
> Hi,
>
> Recently we noted some projects modifying the ERROR_ON_CLONE environment
> variable in devstack gate jobs.  It is never acceptable to do that.  It
> is also not acceptable to read its value and alter a program's behavior.
>
> Devstack is used by developers and users to set up a simple OpenStack
> environment.  It does this by cloning all of the projects' git repos and
> installing them.
>
> It is also used by our CI system to test changes.  Because the logic
> regarding what state each of the repositories should be in is
> complicated, that is offloaded to Zuul and the devstack-gate project.
> They ensure that all of the repositories involved in a change are set up
> correctly before devstack runs.  However, they need to be identified in
> advance, and to ensure that we don't accidentally miss one, the
> ERROR_ON_CLONE variable is checked by devstack and if it is asked to
> clone a repository because it does not already exist (i.e., because it
> was not set up in advance by devstack-gate), it fails with an error
> message.
>
> If you encounter this, simply add the missing project to the $PROJECTS
> variable in your job definition.  There is no need to detect whether
> your program is being tested and alter its behavior (a practice which I
> gather may be popular but is falling out of favor).
>
> -Jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Cathy Zhang
Hi Armando and Kyle,

Thanks for your reply. Yes, the code is not ready for release yet. The code
size is not small, and we are working hard on getting it ready as soon as
possible.

Cathy

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Wednesday, September 30, 2015 6:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

On Wed, Sep 30, 2015 at 8:26 PM, Armando M. <arma...@gmail.com> wrote:


On 30 September 2015 at 16:02, Cathy Zhang <cathy.h.zh...@huawei.com> wrote:
Hi Kyle,

Is this only about the sub-projects that are ready for release? I do not see
the networking-sfc sub-project in the list. Does this mean we have done the
pypi registration for the networking-sfc project correctly, or is it not
checked because it is not ready for release yet?

Can't speak for Kyle, but with this many meaty patches in flight [1], I think
it's fair to assume that although the mechanisms are in place, Kyle is not
going to release the project at this time; the networking-sfc release is
independent, so we can publish the project when the time is ripe.


Armando is exactly spot on. The release of networking-sfc would appear to be
pretty early at this point. Once the patches land and the team has some
confidence in the API and its testing status, we'll look at releasing it.
Thanks!
Kyle

Cheers,
Armando

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/networking-sfc,n,z

Thanks,
Cathy

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Wednesday, September 30, 2015 11:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] pypi packages for networking sub-projects

Folks:
In trying to release some networking sub-projects recently, I ran into an issue 
[1] where I couldn't release some projects due to them not being registered on 
pypi. I have a patch out [2] which adds pypi publishing jobs, but before that 
can merge, we need to make sure all projects have pypi registrations in place. 
The following networking sub-projects do NOT have pypi registrations in place 
and need them created following the guidelines here [3]:
networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow the directions to give
openstackci "Owner" permissions, which allows for the publishing of packages
to pypi:
networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2], which will then give the
neutron-release team the ability to release pypi packages for those projects.
Thanks!
Kyle

[1] 
http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3] 
http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] PTL & Component Leads elections

2015-09-30 Thread Sergey Lukjanov
Hi folks,

I've just set up the voting system, and you should start receiving an email
with the subject "Poll: Fuel PTL Elections Fall 2015".

NOTE: Please don't forward the poll email; it contains a *personal* unique
token for the voting.

Thanks.

On Wed, Sep 30, 2015 at 3:28 AM, Vladimir Kuklin 
wrote:

> +1 to Igor. Do we have voting system set up?
>
> On Wed, Sep 30, 2015 at 4:35 AM, Igor Kalnitsky 
> wrote:
>
>> > * September 29 - October 8: PTL elections
>>
>> So, it's in progress. Where I can vote? I didn't receive any emails.
>>
>> On Mon, Sep 28, 2015 at 7:31 PM, Tomasz Napierala
>>  wrote:
>> >> On 18 Sep 2015, at 04:39, Sergey Lukjanov 
>> wrote:
>> >>
>> >>
>> >> Time line:
>> >>
>> >> PTL elections
>> >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
>> position
>> >> * September 29 - October 8: PTL elections
>> >
>> > Just a reminder that we have a deadline for candidates today.
>> >
>> > Regards,
>> > --
>> > Tomasz 'Zen' Napierala
>> > Product Engineering - Poland
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Selenium is now green - please pay attention to it

2015-09-30 Thread Richard Jones
Hi folks,

Selenium tests "gate-horizon-selenium-headless" are now green in master
again.

Please pay attention if it goes red. I will probably notice, but if I
don't, and you can't figure out what's going on, please feel free to get in
touch with me (r1chardj0n3s on IRC in #openstack-horizon, or email). Let's
try to keep it green!


Richard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] new yaml format for all.yml, need feedback

2015-09-30 Thread Swapnil Kulkarni
On Wed, Sep 30, 2015 at 11:24 PM, Jeff Peeler  wrote:

> The patch I just submitted[1] modifies the syntax of all.yml to use
> dictionaries, which changes how variables are referenced. The key
> point being in globals.yml, the overriding of a variable will change
> from simply specifying the variable to using the dictionary value:
>
> old:
> api_interface: 'eth0'
>
> new:
> network:
>   api_interface: 'eth0'
>
> This looks good.
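
For what it's worth, here's a quick sketch of what that means when the value
is rendered; plain Jinja2 for illustration only, not Kolla's actual code:

    # Illustration only: with nested dictionaries, templates reference the
    # value through the parent key.
    import jinja2

    flat = {'api_interface': 'eth0'}
    nested = {'network': {'api_interface': 'eth0'}}

    print(jinja2.Template("{{ api_interface }}").render(**flat))            # old style -> eth0
    print(jinja2.Template("{{ network.api_interface }}").render(**nested))  # new style -> eth0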


> Preliminary feedback on IRC sounded positive, so I'll go ahead and
> work on finishing the review immediately assuming that we'll go
> forward. Please ping me if you hate this change so that I can stop the
> work.
>
> [1] https://review.openstack.org/#/c/229535/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to address boot from volume failures

2015-09-30 Thread melanie witt
On Sep 30, 2015, at 14:45, Andrew Laski  wrote:
> 
> I have a slight preference for #1.  Nova is not buggy here, novaclient is, so I
> think we should contain the fix there.
> 
> Is using the v2 API an option?  That should also allow the 3 extra parameters 
> mentioned in #2.

+1. I have put up https://review.openstack.org/229669 in -W mode in case we 
decide to go that route. 

-melanie


Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-30 Thread Matt Fischer
Thanks for summarizing this, Mark. What's the best way to get feedback about
this to the TC? I'd love to see some of the items get added for consideration,
items which I think are common sense for anyone who can't just blow away
devstack and start over.

On Tue, Sep 29, 2015 at 11:32 AM, Mark Voelker  wrote:

>
> Mark T. Voelker
>
>
>
> > On Sep 29, 2015, at 12:36 PM, Matt Fischer  wrote:
> >
> >
> >
> > I agree with John Griffith. I don't have any empirical evidence to back
> > my "feelings" on that one, but it's true that we weren't able to enable
> > Cinder v2 until now.
> >
> > Which makes me wonder: When can we actually deprecate an API version? I
> > *feel* we are fast to jump on the deprecation when the replacement isn't
> > 100% ready yet for several versions.
> >
> > --
> > Mathieu
> >
> >
> > I don't think it's too much to ask that versions can't be deprecated
> until the new version is 100% working, passing all tests, and the clients
> (at least python-xxxclients) can handle it without issues. Ideally I'd like
> to also throw in the criteria that devstack, rally, tempest, and other
> services are all using and exercising the new API.
> >
> > I agree that things feel rushed.
>
>
> FWIW, the TC recently created an assert:follows-standard-deprecation tag.
> Ivan linked to a thread in which Thierry asked for input on it, but FYI the
> final language as it was approved last week [1] is a bit different than
> originally proposed.  It now requires one release plus 3 linear months of
> deprecated-but-still-present-in-the-tree as a minimum, and recommends at
> least two full stable releases for significant features (an entire API
> version would undoubtedly fall into that bucket).  It also requires that a
> migration path will be documented.  However to Matt’s point, it doesn’t
> contain any language that says specific things like:
>
> In the case of major API version deprecation:
> * $oldversion and $newversion must both work with
> [cinder|nova|whatever]client and openstackclient during the deprecation
> period.
> * It must be possible to run $oldversion and $newversion concurrently on
> the servers to ensure end users don’t have to switch overnight.
> * Devstack uses $newversion by default.
> * $newversion works in Tempest/Rally/whatever else.
>
> What it *does* do is require that a thread be started here on
> openstack-operators [2] so that operators can provide feedback.  I would
> hope that feedback like “I can’t get clients to use it so please don’t
> remove it yet” would be taken into account by projects, which seems to be
> exactly what’s happening in this case with Cinder v1.  =)
>
> I’d hazard a guess that the TC would be interested in hearing about
> whether you think that plan is a reasonable one (and given that TC election
> season is upon us, candidates for the TC probably would too).
>
> [1] https://review.openstack.org/#/c/207467/
> [2]
> http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59
>
> At Your Service,
>
> Mark T. Voelker
>
>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Liberty RC1 availability in Debian

2015-09-30 Thread Tom Fifield

On 30/09/15 19:58, Thomas Goirand wrote:

Hi everyone!

1/ Announcement
===

I'm pleased to announce, in advance of the final Liberty release, that
Liberty RC1 not only has been fully uploaded to Debian Experimental, but
also that the Tempest CI (which I maintain and which is a package-only CI, no
deployment tooling involved) shows that it's also fully installable and
working. There are still some failures, but these are, I am guessing, not
due to problems in the packaging, but rather to some Tempest setup problems
which I intend to address.

If you want to try out Liberty RC1 in Debian, you can either try it
using Debian Sid + Experimental (recommended), or use the Jessie
backport repository built out of Mirantis Jenkins server. Repositories
are listed at this address:

http://liberty-jessie.pkgs.mirantis.com/

2/ Quick note about Liberty Debian repositories
===

During DebConf 15, someone reported that the fact that the Jessie backports
are on a Mirantis address is disturbing.

Note that, while the above really is a non-Debian (ie: non-official,
private) repository, it only contains unmodified source packages, just
rebuilt for Debian Stable. Please don't be put off by the "tainted"
mirantis.com domain name; I could just as well have set up a debian.net
address (which has been on my todo list for a long time). But these are
still Debian-only packages. Everything there is straight out of Debian
repositories; nothing is added, modified or removed.

I believe that the Liberty release in Sid is currently working very well,
but I haven't tested it as much as the Jessie backport.

Starting with the Kilo release, I have been uploading packages to the
official Debian backports repositories. I will do so as well for the
Liberty release, after the final release is out, and after Liberty is
fully migrated to Debian Testing (the rule for stable-backports is that
packages *must* be available in Testing *first*, in order to provide an
upgrade path). So I do expect Liberty to be available from
jessie-backports maybe a few weeks *after* the final Liberty release.
Before that, use the unofficial Debian repositories.

3/ Horizon dependencies still in NEW queue
==

It is also worth noting that Horizon hasn't been fully FTP-master
approved, and that some packages are still sitting in the NEW queue.
This isn't the first release with such an issue with Horizon. I hope
that 1/ the FTP masters will approve the remaining packages soon and 2/ for
Mitaka, the Horizon team will take care to freeze external dependencies
(ie: new Javascript objects) earlier in the development cycle. I am
hereby proposing that the Horizon 3rd party dependency freeze happen
no later than Mitaka b2, so that we don't experience it again for the
next release. Note that this problem affects both Debian and Ubuntu, as
Ubuntu syncs dependencies from Debian.

5/ New packages in this release
===

You may have noticed that the below packages are now part of Debian:
- Manila
- Aodh
- ironic-inspector
- Zaqar (this one is still in the FTP masters NEW queue...)

I have also packaged a few more, but there are still blockers:
- Congress (antlr version is too low in Debian)
- Mistral

6/ Roadmap for Liberty final release


Next on my roadmap for the final release of Liberty is finishing upgrading
the remaining components to the latest version tested in the
gate. This has been done for most OpenStack deliverables, but about a
dozen are still at the lowest version supported by our global-requirements.

There's also some remaining work:
- more Neutron drivers
- Gnocchi
- Address the remaining Tempest failures, and widen the scope of tests
(add Sahara, Heat, Swift and others to the tested projects using the
Debian package CI)

I of course welcome everyone to test Liberty RC1 before the final
release, and report bugs on the Debian bug tracker if needed.

Also note that the Debian packaging CI is fully free software, and part
of Debian as well (you can look into the openstack-meta-packages package
in git.debian.org, and in openstack-pkg-tools). Contributions in this
field are also welcome.

7/ Thanks to Canonical & every OpenStack upstream projects
==

I'd like to point out that, even though I did the majority of the work
myself, for this release there was way more collaboration with
Canonical on the dependency chain. Indeed, for this Liberty release,
Canonical decided to upload every dependency to Debian first, and then
only sync from it. So a big thanks to the Canonical server team for
doing this community work together with me. I just hope we can push this
even further, especially trying to have consistency for the Nova and Neutron
binary package names, as it is an issue for the Puppet guys.

Last, I would like to hereby thank everyone who helped me fix issues
in these packages. Thank you if you've 

[openstack-dev] [fuel] Fuel 7.0 is released

2015-09-30 Thread Dmitry Borodaenko
We are proud to announce the release of Fuel 7.0, deployment and
management tool for OpenStack.

This release introduces support for OpenStack 2015.1.0 (Kilo) and
continues to improve Fuel's pluggability and flexibility:

- Fuel plugins can now reserve their own VIP addresses, define their own
  node roles, and extract granular deployment tasks from controllers to
  dedicated nodes. The plugin versioning system makes it easy to publish and
  apply maintenance updates to plugins.

- Networking templates allow you to redefine the set of network roles
  that can be assigned to node NICs and, combined with node group
  improvements, greatly expand the range of possible network topologies.

In addition to that, dozens of new features and deployment options were 
added, ranging from Neutron DVR and VXLAN, to customizing node hostnames
and assigning node labels, to more flexible bonding and NIC offloading
settings, just to name a few. Hundreds of bugs were fixed (1565 to be
precise), ranging from scalability and HA issues to user experience
complaints. 

Learn more about Fuel:
https://wiki.openstack.org/wiki/Fuel

Download the Fuel 7.0 ISO:
https://www.fuel-infra.org/

Many thanks to the Fuel team for the insight and the hard work that went
into making this release!

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Kyle Mestery
On Wed, Sep 30, 2015 at 8:26 PM, Armando M.  wrote:

>
>
> On 30 September 2015 at 16:02, Cathy Zhang 
> wrote:
>
>> Hi Kyle,
>>
>>
>>
>> Is this only about the sub-projects that are ready for release? I do not
>> see networking-sfc sub-project in the list. Does this mean we have done the
>> pypi registrations for the networking-sfc project correctly or it is not
>> checked because it is not ready for release yet?
>>
>
> Can't speak for Kyle, but with these many meaty patches in flight [1], I
> think it's fair to assume that although the mechanisms are in place, Kyle
> is not going to release the project at this time; networking-sfc release is
> independent, we can publish the project when the time is ripe.
>
>
Armando is exactly spot on. The release of networking-sfc would appear to
be pretty early at this point. Once the patches land and the team has some
confidence in the API and its testing status, we'll look at releasing it.

Thanks!
Kyle


> Cheers,
> Armando
>
> [1]
> https://review.openstack.org/#/q/status:open+project:openstack/networking-sfc,n,z
>
>
>>
>>
>> Thanks,
>>
>> Cathy
>>
>>
>>
>> *From:* Kyle Mestery [mailto:mest...@mestery.com]
>> *Sent:* Wednesday, September 30, 2015 11:55 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* [openstack-dev] [neutron] pypi packages for networking
>> sub-projects
>>
>>
>>
>> Folks:
>>
>> In trying to release some networking sub-projects recently, I ran into an
>> issue [1] where I couldn't release some projects due to them not being
>> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
>> but before that can merge, we need to make sure all projects have pypi
>> registrations in place. The following networking sub-projects do NOT have
>> pypi registrations in place and need them created following the guidelines
>> here [3]:
>>
>> networking-calico
>>
>> networking-infoblox
>>
>> networking-powervm
>>
>>
>>
>> The following pypi registrations did not follow directions to enable
>> openstackci has "Owner" permissions, which allow for the publishing of
>> packages to pypi:
>>
>> networking-ale-omniswitch
>>
>> networking-arista
>>
>> networking-l2gw
>>
>> networking-vsphere
>>
>>
>> Once these are corrected, we can merge [2] which will then allow the
>> neutron-release team the ability to release pypi packages for those
>> packages.
>>
>> Thanks!
>>
>> Kyle
>>
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
>> [2] https://review.openstack.org/#/c/229564/1
>> [3]
>> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Armando M.
On 30 September 2015 at 16:02, Cathy Zhang  wrote:

> Hi Kyle,
>
>
>
> Is this only about the sub-projects that are ready for release? I do not
> see networking-sfc sub-project in the list. Does this mean we have done the
> pypi registrations for the networking-sfc project correctly or it is not
> checked because it is not ready for release yet?
>

Can't speak for Kyle, but with this many meaty patches in flight [1], I
think it's fair to assume that although the mechanisms are in place, Kyle
is not going to release the project at this time; the networking-sfc release
is independent, so we can publish the project when the time is ripe.

Cheers,
Armando

[1]
https://review.openstack.org/#/q/status:open+project:openstack/networking-sfc,n,z


>
>
> Thanks,
>
> Cathy
>
>
>
> *From:* Kyle Mestery [mailto:mest...@mestery.com]
> *Sent:* Wednesday, September 30, 2015 11:55 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [neutron] pypi packages for networking
> sub-projects
>
>
>
> Folks:
>
> In trying to release some networking sub-projects recently, I ran into an
> issue [1] where I couldn't release some projects due to them not being
> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
> but before that can merge, we need to make sure all projects have pypi
> registrations in place. The following networking sub-projects do NOT have
> pypi registrations in place and need them created following the guidelines
> here [3]:
>
> networking-calico
>
> networking-infoblox
>
> networking-powervm
>
>
>
> The following pypi registrations did not follow directions to enable
> openstackci has "Owner" permissions, which allow for the publishing of
> packages to pypi:
>
> networking-ale-omniswitch
>
> networking-arista
>
> networking-l2gw
>
> networking-vsphere
>
>
> Once these are corrected, we can merge [2] which will then allow the
> neutron-release team the ability to release pypi packages for those
> packages.
>
> Thanks!
>
> Kyle
>
>
> [1]
> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
> [2] https://review.openstack.org/#/c/229564/1
> [3]
> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Liberty release - what is the correct version - 2015.2.0, 8.0.0 or 12.0.0?

2015-09-30 Thread Jeremy Stanley
On 2015-10-01 10:54:20 +1000 (+1000), Richard Jones wrote:
[...]
> I believe that if we are moving to semver, then 12.0.0 is
> appropriate.

Each project participating in the release is following semver
independently, with the first digit indicating the number of
OpenStack integrated releases in which that project has
participated. This means the version numbers will vary between
projects. For Horizon it's 8.0.0 but for, say, Nova it's 12.0.0 and
Sahara's 3.0.0. This was chosen specifically to avoid future
assumptions that the version numbers will remain in sync as they
will naturally diverge in coming cycles anyway.

For further details, see the thread starting around
http://lists.openstack.org/pipermail/openstack-dev/2015-June/067006.html
which lists all of them (noting there have since been some
corrections to that initial plan).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Liberty release - what is the correct version - 2015.2.0, 8.0.0 or 12.0.0?

2015-09-30 Thread Robert Collins
This thread

http://lists.openstack.org/pipermail/openstack-dev/2015-May/065211.html
(which carries on at
http://lists.openstack.org/pipermail/openstack-dev/2015-June/065278.html)

may help.

On 1 October 2015 at 13:54, Richard Jones  wrote:
> Hi all,
>
> Historically OpenStack releases are versioned .N as documented in the
> Release Naming wiki page[1]. The Liberty Release Schedule[2] has the version
> 2015.2.0. However, in Horizon land, I've been informed that OpenStack is
> moving to semver. I can't find any information about this move except a blog
> post by Doug Hellmann[3]. Is that correct? What is the version string
> supposed to be: 8.0.0 as has been used in Horizon's documentation[4], or
> 12.0.0 as hinted at by the blog post? I believe that if we are moving to
> semver, then 12.0.0 is appropriate.
>
>
>   Richard
>
> [1] https://wiki.openstack.org/wiki/Release_Naming
> [2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
> [3]
> https://doughellmann.com/blog/2015/05/29/openstack-server-version-numbering/
> [4]
> https://raw.githubusercontent.com/openstack/horizon/master/doc/source/topics/settings.rst
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Liberty release - what is the correct version - 2015.2.0, 8.0.0 or 12.0.0?

2015-09-30 Thread Richard Jones
Hi all,

Historically OpenStack releases are versioned .N as documented in the
Release Naming wiki page[1]. The Liberty Release Schedule[2] has the
version 2015.2.0. However, in Horizon land, I've been informed that
OpenStack is moving to semver. I can't find any information about this move
except a blog post by Doug Hellmann[3]. Is that correct? What is the
version string supposed to be: 8.0.0 as has been used in Horizon's
documentation[4], or 12.0.0 as hinted at by the blog post? I believe that
if we are moving to semver, then 12.0.0 is appropriate.


  Richard

[1] https://wiki.openstack.org/wiki/Release_Naming
[2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
[3]
https://doughellmann.com/blog/2015/05/29/openstack-server-version-numbering/
[4]
https://raw.githubusercontent.com/openstack/horizon/master/doc/source/topics/settings.rst
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][PTL] PTL Candidates Q&A Session

2015-09-30 Thread Mike Scherbakov
Vladimir,
we may be mixing the technical direction / tech debt roadmap with the process,
political, and people management work of a PTL.

The PTL definition in OpenStack [1] reflects many things which a PTL becomes
responsible for. This applies to Fuel as well.

I'd like to list some things here which I'd expect a PTL to be doing, most of
which will intersect with [1]:
- Participate in cross-project initiatives & resolution of issues around
it. Great example is puppet-openstack vs Fuel [2]
- Organize required processes around launchpad bugs & blueprints
- Personal feedback to Fuel contributors & public suggestions when
needed
- Define architecture direction & review majority of design specs. Rely on
Component Leads and Core Reviewers
- Ensure that roadmap & use cases are aligned with architecture work
- Resolve conflicts between core reviewers, component leads. Get people to
the same page
- Watch for code review queues and quality of reviews. Ensure discipline of
code review.
- Testing / coverage have to be kept at a high level

Considering all of the above, contributors have actually been working with all
of us and know who could best handle such hard work. I don't think a special
Q&A is needed. If there are concerns or particular process/tech questions we'd
like to discuss, those should just be opened as email threads.

[1] https://wiki.openstack.org/wiki/PTL_Guide
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066685.html

Thank you,

On Tue, Sep 29, 2015 at 3:47 AM Vladimir Kuklin 
wrote:

> Folks
>
> I think it is awesome we have three candidates for PTL position in Fuel. I
> read all candidates' emails (including mine own several times :-) ) and I
> got a slight thought of not being able to really differentiate the
> candidates platforms as they are almost identical from the high-level point
> of view. But we all know that the devil is in details. And this details
> will actually affect project future.
>
> Thus I thought about Q&A session at #fuel-dev channel in IRC. I think that
> this will be mutually benefitial for everyone to get our platforms a little
> bit more clear.
>
> Let's do it before or right at the start of actual voting so that our
> contributors can make better decisions based on this session.
>
> I suggest the following format:
>
> 1) 3 questions from electorate members - let's put them onto an etherpad
> 2) 2 questions from a candidate to his opponents (1 question per opponent)
> 3) external moderator - I suppose, @xarses as our weekly meeting moderator
> could help us
> 4) time and date - Wednesday or Thursday comfortable for both timezones,
> e.g. after 4PM UTC or right after fuel weekly meeting.
>
> What do you think, folks?
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] New driver submission deadline

2015-09-30 Thread Sean McGinnis
This message is for all new vendors looking to add a new Cinder driver
in the Mitaka release as well as any existing vendors that need to add a
new protocol/driver to what is already in tree.

There has been some discussion on the mailing list and in the IRC
channel about changes to our policy around submitting new drivers. While
this may lead to some changes after further discussion, I just want to
make it very clear that as of right now, there is no change to new
driver submission.

For the Mitaka release, according to our existing policy, the deadline
will be the M-1 milestone between December 1-3 [1].

Please read and understand all details for new driver submission
available on the Cinder wiki [2].

Requirements for a volume driver to be merged:
* The blueprint for your volume driver is submitted and approved.
* Your volume driver code is posted to gerrit and passing gate tests.
* Your volume driver code gerrit review page has results posted from
your CI [3], and is passing. Keep in mind that your CI must continue
running in order to stay in the release. This also includes future
releases.
* Your volume driver fulfills minimum features. [4]
* You meet all of the above at least by December 1st. Patches can take
quite some time to make it through gate leading up to a milestone. Do
not wait until the morning of the 1st to submit your driver!

To be clear:
* Your volume driver submission must meet *all* the items before we
review your code.
* If your volume driver is submitted after Mitaka-1, expect me to
reference this email and we'll request the volume driver to be
submitted in the N release.
* Even if you meet all of the above requirements by December 1st, it is
not guaranteed that your volume driver will be merged. You still need
to address all review comments in a timely manner and allow time for
gate testing to finish.

Initial merge is not a finish line and you are done. If third party CI
stops reporting, is unstable, or the core team has any reason to
question the quality of your driver, it may be removed at any time if
there is not cooperation to resolve any issues or concerns.


[1] https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
[2] https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver
[3] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
[4] 
http://docs.openstack.org/developer/cinder/devref/drivers.html#minimum-features

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-09-30 Thread Shinobu Kinjo
Is there any plan to merge those branches to master?
Or is there anything more that needs to be done?

Shinobu

- Original Message -
From: "Ben Swartzlander" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Saturday, September 26, 2015 9:27:58 AM
Subject: Re: [openstack-dev] [Manila] CephFS native driver

On 09/24/2015 09:49 AM, John Spray wrote:
> Hi all,
>
> I've recently started work on a CephFS driver for Manila.  The (early)
> code is here:
> https://github.com/openstack/manila/compare/master...jcsp:ceph

Awesome! This is something that's been talked about for quite some time,
and I'm pleased to see progress on making it a reality.

> It requires a special branch of ceph which is here:
> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>
> This isn't done yet (hence this email rather than a gerrit review),
> but I wanted to give everyone a heads up that this work is going on,
> and a brief status update.
>
> This is the 'native' driver in the sense that clients use the CephFS
> client to access the share, rather than re-exporting it over NFS.  The
> idea is that this driver will be useful for anyone who has such
> clients, as well as acting as the basis for a later NFS-enabled
> driver.

This makes sense, but have you given thought to the optimal way to
provide NFS semantics for those who prefer that? Obviously you can pair
the existing Manila generic driver with Cinder running on Ceph, but I
wonder how that would compare to some kind of Ganesha bridge that
translates between NFS and CephFS. Is that something you've looked into?

> The export location returned by the driver gives the client the Ceph
> mon IP addresses, the share path, and an authentication token.  This
> authentication token is what permits the clients access (Ceph does not
> do access control based on IP addresses).
>
> It's just capable of the minimal functionality of creating and
> deleting shares so far, but I will shortly be looking into hooking up
> snapshots/consistency groups, albeit for read-only snapshots only
> (cephfs does not have writeable shapshots).  Currently deletion is
> just a move into a 'trash' directory, the idea is to add something
> later that cleans this up in the background: the downside to the
> "shares are just directories" approach is that clearing them up has a
> "rm -rf" cost!

All snapshots are read-only... The question is whether you can take a 
snapshot and clone it into something that's writable. We're looking at 
allowing for different kinds of snapshot semantics in Manila for Mitaka. 
Even if there's no create-share-from-snapshot functionality, a readable
snapshot is still useful and something we'd like to enable.

The deletion issue sounds like a common one, although if you don't have 
the thing that cleans them up in the background yet I hope someone is 
working on that.

> A note on the implementation: cephfs recently got the ability (not yet
> in master) to restrict client metadata access based on path, so this
> driver is simply creating shares by creating directories within a
> cluster-wide filesystem, and issuing credentials to clients that
> restrict them to their own directory.  They then mount that subpath,
> so that from the client's point of view it's like having their own
> filesystem.  We also have a quota mechanism that I'll hook in later to
> enforce the share size.

So quotas aren't enforced yet? That seems like a serious issue for any 
operator except those that want to support "infinite" size shares. I 
hope that gets fixed soon as well.

> Currently the security here requires clients (i.e. the ceph-fuse code
> on client hosts, not the userspace applications) to be trusted, as
> quotas are enforced on the client side.  The OSD access control
> operates on a per-pool basis, and creating a separate pool for each
> share is inefficient.  In the future it is expected that CephFS will
> be extended to support file layouts that use RADOS namespaces, which
> are cheap, such that we can issue a new namespace to each share and
> enforce the separation between shares on the OSD side.

I think it will be important to document all of these limitations. I 
wouldn't let them stop you from getting the driver done, but if I was a 
deployer I'd want to know about these details.

> However, for many people the ultimate access control solution will be
> to use a NFS gateway in front of their CephFS filesystem: it is
> expected that an NFS-enabled cephfs driver will follow this native
> driver in the not-too-distant future.

Okay, this answers part of my question above, but how do you expect the
NFS gateway to work? Ganesha has been used successfully in the past.

> This will be my first openstack contribution, so please bear with me
> while I come up to speed with the submission process.  I'll also be in
> Tokyo for the summit next month, so I hope to meet other interested
> parties there.

Welcome, and I look forward to meeting you in Tokyo!

-Ben


Re: [openstack-dev] [Neutron] Release of a neutron sub-project

2015-09-30 Thread Vadivel Poonathan
Kyle,

We referenced Arista's setup/config files when we set up the pypi entry for
our plugin. So if it is OK for Arista, then it should be OK for
ale-omniswitch too, I believe. In another email you said Arista was OK when
you searched on Google instead of doing a pypi search. So can you please
check ale-omniswitch again as well and confirm?

If it still has an issue, can you please give me some pointers on where
to enable the openstackci owner permission?

Thanks,
Vad
--

The following pypi registrations did not follow directions to enable
openstackci has "Owner" permissions, which allow for the publishing of

packages to pypi:

networking-ale-omniswitch
networking-arista


On Wed, Sep 30, 2015 at 11:56 AM, Kyle Mestery  wrote:

> On Tue, Sep 29, 2015 at 8:04 PM, Kyle Mestery  wrote:
>
>> On Tue, Sep 29, 2015 at 2:36 PM, Vadivel Poonathan <
>> vadivel.openst...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> As per the Sub-Project Release process - i would like to tag and release
>>> the following sub-project as part of upcoming Liberty release.
>>> The process says talk to one of the member of 'neutron-release' group. I
>>> couldn’t find a group mail-id for this group. Hence I am sending this email
>>> to the dev list.
>>>
>>> I just have removed the version from setup.cfg and got the patch merged,
>>> as specified in the release process. Can someone from the neutron-release
>>> group makes this sub-project release.
>>>
>>>
>>
>> Vlad, I'll do this tomorrow. Find me on IRC (mestery) and ping me there
>> so I can get your IRC NIC in case I have questions.
>>
>>
> It turns out that the networking-ale-omniswitch pypi setup isn't correct,
> see [1] for more info and how to correct. This turned out to be ok, because
> it's forced me to re-examine the other networking sub-projects and their
> pypi setup to ensure consistency, which the thread found here [1] will
> resolve.
>
> Once you resolve this ping me on IRC and I'll release this for you.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/075880.html
>
>
>> Thanks!
>> Kyle
>>
>>
>>>
>>> ALE Omniswitch
>>> Git: https://git.openstack.org/cgit/openstack/networking-ale-omniswitch
>>> Launchpad: https://launchpad.net/networking-ale-omniswitch
>>> Pypi: https://pypi.python.org/pypi/networking-ale-omniswitch
>>>
>>> Thanks,
>>> Vad
>>> --
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [infra] split integration jobs

2015-09-30 Thread Andrew Woodward
Emilien,

What image is being used to spawn the instance? We see 300 sec as a good
timeout in Fuel with a CirrOS image. The time can usually be cut
substantially, if the image is of any size, by using Ceph for ephemeral...

On Wed, Sep 30, 2015 at 4:37 PM Jeremy Stanley  wrote:

> On 2015-09-30 17:14:27 -0400 (-0400), Emilien Macchi wrote:
> [...]
> > I like #3 but we are going to consume more CI resources (that's why I
> > put [infra] tag).
> [...]
>
> I don't think adding one more job is going to put a strain on our
> available resources. In fact it consumes just about as much to run a
> single job twice as long since we're constrained on the number of
> running instances in our providers (ignoring for a moment the
> spin-up/tear-down overhead incurred per job which, if you're
> talking about long-running jobs anyway, is less wasteful than it is
> for lots of very quick jobs). The number of puppet changes and
> number of jobs currently run on each is considerably lower than a
> lot of our other teams as well.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] should puppet-neutron manage third party software?

2015-09-30 Thread Steven Hillman (sthillma)
Makes sense to me.

Opened a bug to track the migration of agents/n1kv_vem.pp out of
puppet-neutron during the M-cycle:
https://bugs.launchpad.net/puppet-neutron/+bug/1501535

Thanks.
Steven Hillman

On 9/29/15, 9:23 AM, "Emilien Macchi"  wrote:

>My suggestion:
>
>* patch master to send deprecation warning if third party repositories
>are managed in our current puppet-neutron module.
>* do not manage third party repositories from now and do not accept any
>patch containing this kind of code.
>* in the next cycle, we will consider deleting legacy code that used to
>manage third party software repos.
>
>Thoughts?
>
>On 09/25/2015 12:32 PM, Anita Kuno wrote:
>> On 09/25/2015 12:14 PM, Edgar Magana wrote:
>>> Hi There,
>>>
>>> I just added my comment on the review. I do agree with Emilien. There
>>>should be specific repos for plugins and drivers.
>>>
>>> BTW. I love the sdnmagic name  ;-)
>>>
>>> Edgar
>>>
>>>
>>>
>>>
>>> On 9/25/15, 9:02 AM, "Emilien Macchi"  wrote:
>>>
 In our last meeting [1], we were discussing whether or not to manage
 external packaging repositories for Neutron plugin dependencies.

 Current situation:
 puppet-neutron installs packages (like neutron-plugin-*) and
 configures Neutron plugins (configuration files like
 /etc/neutron/plugins/*.ini).
 Some plugins (Cisco) are doing more: they install third party packages
 (not part of OpenStack), from external repos.

 The question is: should we continue that way and accept that kind of
 patch [2]?

 I vote for no: managing external packages & external repositories should
 be up to an external module.
 Example: my SDN tool is called "sdnmagic":
 1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
 configure the .ini file(s) to make it work in Neutron
 2/ create puppet-sdnmagic that will take care of everything else:
 install sdnmagic, manage packaging (and specific dependencies),
 repositories, etc.
 I am -1 on having puppet-neutron handle it. We are not managing SDN solutions:
 we are enabling puppet-neutron to work with them.

 I would like to find a consensus here, that will be consistent across
 *all plugins* without exception.


 Thanks for your feedback,

 [1] http://goo.gl/zehmN2
 [2] https://review.openstack.org/#/c/209997/
 -- 
 Emilien Macchi

>>> 
>>>
>>>__
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> 
>> I think the data point provided by the Cinder situation needs to be
>> considered in this decision:
>>https://bugs.launchpad.net/manila/+bug/1499334
>> 
>> The bug report outlines the issue, but the tl;dr is that one Cinder
>> driver changed their licensing on a library required to run in-tree
>>code.
>> 
>> Thanks,
>> Anita.
>> 
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>
>-- 
>Emilien Macchi
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [infra] split integration jobs

2015-09-30 Thread Jeremy Stanley
On 2015-09-30 17:14:27 -0400 (-0400), Emilien Macchi wrote:
[...]
> I like #3 but we are going to consume more CI resources (that's why I
> put [infra] tag).
[...]

I don't think adding one more job is going to put a strain on our
available resources. In fact it consumes just about as much to run a
single job twice as long since we're constrained on the number of
running instances in our providers (ignoring for a moment the
spin-up/tear-down overhead incurred per job which, if you're
talking about long-running jobs anyway, is less wasteful than it is
for lots of very quick jobs). The number of puppet changes and
number of jobs currently run on each is considerably lower than a
lot of our other teams as well.
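
(A quick back-of-the-envelope illustration of the node-time argument above; the numbers below are purely hypothetical and only show why one long job and two shorter jobs cost roughly the same.)

overhead_min = 5      # per-job spin-up/tear-down, assumed
long_job_min = 90     # one job running the whole scenario, assumed
split_job_min = 45    # each of two jobs running half the scenario, assumed

one_long_job = overhead_min + long_job_min            # 95 node-minutes
two_split_jobs = 2 * (overhead_min + split_job_min)   # 100 node-minutes

print(one_long_job, two_split_jobs)
# The totals are nearly identical: the split only adds one extra overhead slot,
# which matters far less for long-running jobs than for many very quick ones.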
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] should puppet-neutron manage third party software?

2015-09-30 Thread Emilien Macchi


On 09/29/2015 12:23 PM, Emilien Macchi wrote:
> My suggestion:
> 
> * patch master to send a deprecation warning if third party repositories
> are managed in our current puppet-neutron module.
> * do not manage third party repositories from now on, and do not accept any
> patch containing this kind of code.
> * in the next cycle, we will consider deleting legacy code that used to
> manage third party software repos.
> 
> Thoughts?

Silence probably means lazy consensus.
I submitted a patch: https://review.openstack.org/#/c/229675/ - please
review.

I also contacted Cisco and they acknowledged it, and will work on
puppet-n1kv to externalize third party software.


> On 09/25/2015 12:32 PM, Anita Kuno wrote:
>> On 09/25/2015 12:14 PM, Edgar Magana wrote:
>>> Hi There,
>>>
>>> I just added my comment on the review. I do agree with Emilien. There 
>>> should be specific repos for plugins and drivers.
>>>
>>> BTW. I love the sdnmagic name  ;-)
>>>
>>> Edgar
>>>
>>>
>>>
>>>
>>> On 9/25/15, 9:02 AM, "Emilien Macchi"  wrote:
>>>
 In our last meeting [1], we discussed whether or not to manage external
 packaging repositories for Neutron plugin dependencies.

 Current situation:
 puppet-neutron installs packages (like neutron-plugin-*) and
 configures Neutron plugins (configuration files like
 /etc/neutron/plugins/*.ini).
 Some plugins (Cisco) are doing more: they install third party packages
 (not part of OpenStack), from external repos.

 The question is: should we continue that way and accept that kind of
 patch [2]?

 I vote for no: managing external packages & external repositories should
 be up to an external module.
 Example: my SDN tool is called "sdnmagic":
 1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
 configure the .ini file(s) to make it work in Neutron
 2/ create puppet-sdnmagic that will take care of everything else:
 install sdnmagic, manage packaging (and specific dependencies),
 repositories, etc.
 I am -1 on having puppet-neutron handle it. We are not managing SDN solutions:
 we are enabling puppet-neutron to work with them.

 I would like to find a consensus here, that will be consistent across
 *all plugins* without exception.


 Thanks for your feedback,

 [1] http://goo.gl/zehmN2
 [2] https://review.openstack.org/#/c/209997/
 -- 
 Emilien Macchi

>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> I think the data point provided by the Cinder situation needs to be
>> considered in this decision: https://bugs.launchpad.net/manila/+bug/1499334
>>
>> The bug report outlines the issue, but the tl;dr is that one Cinder
>> driver changed their licensing on a library required to run in-tree code.
>>
>> Thanks,
>> Anita.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Kris G. Lindgren
We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain and pas experience 
tells me this wont be practical/scale, however from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert of to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spinup in projects where people maybe running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.   Even if I am off by 
75%,  250 still sucks.

From my point of view an ideal use case for companies like ours (yahoo/godaddy) 
would be able to support hierarchical projects in magnum.  That way we could 
create a project for each department, and then the subteams of those 
departments can have their own projects.  We create a a bay per department.  
Sub-projects if they want to can support creation of their own bays (but 
support of the kube cluster would then fall to that team).  When a sub-project 
spins up a pod on a bay, minions get created inside that teams sub projects and 
the containers in that pod run on the capacity that was spun up  under that 
project, the minions for each pod would be a in a scaling group and as such 
grow/shrink as dictated by load.

The above would make it so where we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don’t want to fall inline with 
the provided resource a way to make their own and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead. I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications. I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative overhead of
>simply doing a bay create for the various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that
>today. Maybe in the future they will. Magnum was designed to present an
>integration point between COEs and OpenStack today, not five years down
>the road. It's not as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum
>vs a full-on integration with OpenStack within the COE itself. However,
>that model, which is what I believe you proposed, is a huge design change to
>each COE which would overly complicate the COE at the gain of increased
>density. I personally don’t feel that pain is worth the gain.


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-09-30 Thread Hongbin Lu
+1 for both. Welcome!

From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: September-30-15 7:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] New Core Reviewers

+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto 
mailto:adrian.o...@rackspace.com>> wrote:
Core Reviewers,

I propose the following additions to magnum-core:

+Vilobh Meshram (vilobhmm)
+Hua Wang (humble00)

Please respond with +1 to agree or -1 to veto. This will be decided by either a 
simple majority of existing core reviewers, or by lazy consensus concluding on 
2015-10-06 at 00:00 UTC, in time for our next team meeting.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Cathy Zhang
Hi Kyle,

Is this only about the sub-projects that are ready for release? I do not see 
the networking-sfc sub-project in the list. Does this mean we have done the pypi 
registration for the networking-sfc project correctly, or was it not checked 
because it is not ready for release yet?

Thanks,
Cathy

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Wednesday, September 30, 2015 11:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] pypi packages for networking sub-projects

Folks:
In trying to release some networking sub-projects recently, I ran into an issue 
[1] where I couldn't release some projects due to them not being registered on 
pypi. I have a patch out [2] which adds pypi publishing jobs, but before that 
can merge, we need to make sure all projects have pypi registrations in place. 
The following networking sub-projects do NOT have pypi registrations in place 
and need them created following the guidelines here [3]:
networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow the directions to give 
openstackci "Owner" permissions, which allow for the publishing of packages 
to pypi:
networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2], which will then give the 
neutron-release team the ability to release pypi packages for those projects.
Thanks!
Kyle

[1] 
http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3] 
http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-09-30 Thread Davanum Srinivas
+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto 
wrote:

> Core Reviewers,
>
> I propose the following additions to magnum-core:
>
> +Vilobh Meshram (vilobhmm)
> +Hua Wang (humble00)
>
> Please respond with +1 to agree or -1 to veto. This will be decided by
> either a simple majority of existing core reviewers, or by lazy consensus
> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>
> Thanks,
>
> Adrian Otto
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
Totally get it,

And it's interesting to see the boundaries that are being pushed,

Also interesting to know the state of the world, and the state of magnum
and the state of COE systems. I'm somewhat surprised that they lack
multi-tenancy in any kind of manner (but I guess I'm not too surprised,
it's a feature that many don't add on until later, for better or
worse...), especially kubernetes (coming from google), but not entirely
shocked by it ;-)

Insightful stuff, thanks :)

Steven Dake (stdake) wrote:
> Joshua,
> 
> If you share resources, you give up multi-tenancy.  No COE system has the
> concept of multi-tenancy (kubernetes has some basic implementation but it
> is totally insecure).  Not only does multi-tenancy have to “look like” it
> offers multiple tenants isolation, but it actually has to deliver the
> goods.
> 
> I understand that at first glance a company like Yahoo may not want
> separate bays for their various applications because of the perceived
> administrative overhead.  I would then challenge Yahoo to go deploy a COE
> like kubernetes (which has no multi-tenancy or a very basic implementation
> of such) and get it to work with hundreds of different competing
> applications.  I would speculate the administrative overhead of getting
> all that to work would be greater than the administrative overhead of
> simply doing a bay create for the various tenants.
> 
> Placing tenancy inside a COE seems interesting, but no COE does that
> today.  Maybe in the future they will.  Magnum was designed to present an
> integration point between COEs and OpenStack today, not five years down
> the road.  It's not as if we took shortcuts to get to where we are.
> 
> I will grant you that density is lower with the current design of Magnum
> vs a full on integration with OpenStack within the COE itself.  However,
> that model which is what I believe you proposed is a huge design change to
> each COE which would overly complicate the COE at the gain of increased
> density.  I personally don’t feel that pain is worth the gain.
> 
> Regards,
> -steve
> 
> 
> On 9/30/15, 2:18 PM, "Joshua Harlow"  wrote:
> 
>> Wouldn't that limit the ability to share/optimize resources then and
>> increase the number of operators needed (since each COE/bay would need
>> its own set of operators managing it)?
>>
>> If all tenants are in a single openstack cloud, and under say a single
>> company then there isn't much need for management isolation (in fact I
>> think said feature is actually a anti-feature in a case like this).
>> Especially since that management is already by keystone and the
>> project/tenant&  user associations and such there.
>>
>> Security isolation I get, but if the COE is already multi-tenant aware
>> and that multi-tenancy is connected into the openstack tenancy model,
>> then it seems like that point is nil?
>>
>> I get that the current tenancy boundary is the bay (aka the COE right?)
>> but is that changeable? Is that ok with everyone, it seems oddly matched
>> to say a company like yahoo, or other private cloud, where one COE would
>> I think be preferred and tenancy should go inside of that; vs a eggshell
>> like solution that seems like it would create more management and
>> operability pain (now each yahoo internal group that creates a bay/coe
>> needs to figure out how to operate it? and resources can't be shared
>> and/or orchestrated across bays; hmm, seems like not fully using a COE
>> for what it can do?)
>>
>> Just my random thoughts, not sure how much is fixed in stone.
>>
>> -Josh
>>
>> Adrian Otto wrote:
>>> Joshua,
>>>
>>> The tenancy boundary in Magnum is the bay. You can place whatever
>>> single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
>>> Swarm). This allows you to use native tools to interact with the COE in
>>> that bay, rather than using an OpenStack specific client. If you want to
>>> use the OpenStack client to create both bays, pods, and containers, you
>>> can do that today. You also have the choice, for example, to run kubectl
>>> against your Kubernetes bay, if you so desire.
>>>
>>> Bays offer both a management and security isolation between multiple
>>> tenants. There is no intent to share a single bay between multiple
>>> tenants. In your use case, you would simply create two bays, one for
>>> each of the yahoo-mail.XX tenants. I am not convinced that having an
>>> uber-tenant makes sense.
>>>
>>> Adrian
>>>
 On Sep 30, 2015, at 1:13 PM, Joshua Harlow>>> >  wrote:

 Adrian Otto wrote:
> Thanks everyone who has provided feedback on this thread. The good
> news is that most of what has been asked for from Magnum is actually
> in scope already, and some of it has already been implemented. We
> never aimed to be a COE deployment service. That happens to be a
> necessity to achieve our more ambitious goal: We want to provide a
> compelling Containers-as-a-Service solution for OpenStack clouds in a
> way 

[openstack-dev] [puppet] prepare 5.2.0 and 6.1.0 releases

2015-09-30 Thread Emilien Macchi
Hi,

I would like to organize a "release day" sometime soon, to release
5.2.0 (Juno) [1] and 6.1.0 (Kilo) [2].

Also, we will take the opportunity of that day to consolidate our
process and bring more documentation [3].

If you have backport needs, please make sure they are all sent in
Gerrit, so our core team can review them.

If there are any volunteers to help with that process (documentation,
launchpad, release notes, reviewing backports), please raise your hand
on IRC.

Once we release 5.2.0 and 6.1.0, we will schedule the 7.0.0 (Liberty)
release (probably end-october/early-november), but for now we're still
waiting for UCA & RDO Liberty stable packaging.

Thanks!

[1] https://goo.gl/U767kI
[2] https://goo.gl/HPuVfA
[3] https://wiki.openstack.org/wiki/Puppet/releases
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] New Core Reviewers

2015-09-30 Thread Adrian Otto
Core Reviewers,

I propose the following additions to magnum-core:

+Vilobh Meshram (vilobhmm)
+Hua Wang (humble00)

Please respond with +1 to agree or -1 to veto. This will be decided by either a 
simple majority of existing core reviewers, or by lazy consensus concluding on 
2015-10-06 at 00:00 UTC, in time for our next team meeting.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ops] Operator Local Patches

2015-09-30 Thread Matt Fischer
Is the purge-deleted command a replacement for nova-manage db archive-deleted? It
hasn't worked for several cycles and so I assume it's abandoned.
On Sep 30, 2015 4:16 PM, "Matt Riedemann" 
wrote:

>
>
> On 9/29/2015 6:33 PM, Kris G. Lindgren wrote:
>
>> Hello All,
>>
>> We have some pretty good contributions of local patches on the etherpad.
>>   We are going through right now and trying to group patches that
>> multiple people are carrying and patches that people may not be carrying
>> but solves a problem that they are running into.  If you can take some
>> time and either add your own local patches that you have to the ether
>> pad or add +1's next to the patches that are laid out, it would help us
>> immensely.
>>
>> The etherpad can be found at:
>> https://etherpad.openstack.org/p/operator-local-patches
>>
>> Thanks for your help!
>>
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> From: "Kris G. Lindgren"
>> Date: Tuesday, September 22, 2015 at 4:21 PM
>> To: openstack-operators
>> Subject: Re: Operator Local Patches
>>
>> Hello all,
>>
>> Friendly reminder: If you have local patches and haven't yet done so,
>> please contribute to the etherpad at:
>> https://etherpad.openstack.org/p/operator-local-patches
>>
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> From: "Kris G. Lindgren"
>> Date: Friday, September 18, 2015 at 4:35 PM
>> To: openstack-operators
>> Cc: Tom Fifield
>> Subject: Operator Local Patches
>>
>> Hello Operators!
>>
>> During the ops meetup in Palo Alto we were talking about sessions for
>> Tokyo. A session that I proposed, which got a bunch of +1's, was about
>> local patches that operators were carrying.  From my experience this is
>> done to either implement business logic,  fix assumptions in projects
>> that do not apply to your implementation, implement business
>> requirements that are not yet implemented in openstack, or fix scale
>> related bugs.  What I would like to do is get a working group together
>> to do the following:
>>
>> 1.) Document local patches that operators have (even those that are in
>> gerrit right now waiting to be committed upstream)
>> 2.) Figure out commonality in those patches
>> 3.) Either upstream the common fixes to the appropriate projects or
>> figure out if a hook can be added to allow people to run their code at
>> that specific point
>> 4.) 
>> 5.) Profit
>>
>> To start this off, I have documented every patch, along with a
>> description of what it does and why we did it (where needed), that
>> GoDaddy is running [1].  What I am asking is that the operator community
>> please update the etherpad with the patches that you are running, so
>> that we have a good starting point for discussions in Tokyo and beyond.
>>
>> [1] - https://etherpad.openstack.org/p/operator-local-patches
>> ___
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> I saw this originally on the ops list and it's a great idea - cat herding
> the bazillion ops patches and seeing what common things rise to the top
> would be helpful.  Hopefully some of that can then be pushed into the
> projects.
>
> There are a couple of things I could note that are specifically operator
> driven which could use eyes again.
>
> 1. purge deleted instances from nova database:
>
>
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/purge-deleted-instances-cmd.html
>
> The spec is approved for mitaka, the code is out for review.  If people
> could test the change out it'd be helpful to vet its usefulness.
>
> 2. I'm trying to revive a spec that was approved in liberty but the code
> never landed:
>
> https://review.openstack.org/#/c/226925/
>
> That's for force resetting quotas for a project/user so that on the next
> pass it gets recalculated. A question came up about making the user
> optional in that command so it's going to require a bit more review before
> we re-approve for mitaka since the design changes slightly.
>
> 3. mgagne was good enough to propose a patch upstream to neutron for a
> script he had out of tree:
>
> https://review.openstack.org/#/c/221508/
>
> That's a tool to delete empty linux bridges.  The neutron linuxbridge
> agent used to remove those automatically but it caused race problems with
> nova so that was removed, but it'd still be good to have a tool to remove
> them as needed.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenSta

[openstack-dev] [election][TC] Candidacy

2015-09-30 Thread Rochelle Grober
Hello People!

I am tossing one of my hats into the ring to run for TC.  Yes, I believe you
could call me a "diversity candidate" as I'm not much of a developer any more,
but I think my skills would be a great addition to the excellent people who are
on the TC (past and present).

My background:  I am currently an architect with Huawei Technologies.  My role
is "OpenStack" and as such, I am liaison to many areas an groups in the
OpenStack community and I am liaison to Huawei engineers and management for the
OpenStack community.  I focus energy on all parts of Software products that
aren't directly writing code.  I am an advocate for quality, for effective and
efficient process, and for the downstream stakeholders (Ops, Apps developers,
Users, Support Engineers, Docs, Training, etc).  I am currently active in:
* DefCore
* RefStack
* Product Working Group
* Logging Working Group (cofounder)
* Ops community
* Peripherally, Tailgaters
* Women of OpenStack
* Diversity Working Group

What I would like to help the TC and the community with:
* Interoperability across deployed clouds begins with cross project
communications and the realization that  each engineer and each project is
connected and influential in how the OpenStack ecosystem works, responds, and
grows. When OpenStack was young, there were two projects and everyone knew
each other, even if they didn't live in the same place.  Just as processes
become more formal when startups grow to be mid-sized companies, OpenStack
has formalized much as it has exploded in number of participants.  We need to
continue to transform Developer, Ops and other community lore into useful
documentation. We are at the point where we really need to focus our energies
and our intelligence on how to effectively span projects and communities via
better communications.  I'm already doing this to some extent.  I'd like to
help the TC do that to a greater extent.
* In the past two years, I've seen the number of "horizontal" 
projects grow
almost as significantly as the "vertical" projects.  These cross functional
projects, with libraries, release, configuration management, docs, QA, etc.,
have also grown in importance in maintaining the quality and velocity of
development.  Again, cross-functional needs are being addressed, and I want to
help the TC be more proactive in identifying needs and seeding the teams with
senior OpenStack developers (and user community advisors where useful).
* The TC is the conduit between, translator of and champion for the
developers to the OpenStack Board.  They have a huge responsibility and not
enough time, energy or resources to address all the challenges.  I am ready to
work on the challenges and help develop the strategic vision needed to keep on
top of the current and new opportunities always arising and always needing some
thoughtful analysis and action.

That said, I have my own challenges to address.  I know my company will support
me in my role as a TC member, but they will also demand more of my time,
opinions, presence and participation specifically because of the TC position.
I also am still struggling to make inroads on the logging issues I've been
attempting to wrangle into better shape.  I've gotten lots of support from the
community on this (thanks, folks, you know who you are;-), but it still gives
me pause for thought that I, myself need to keep working on my effectiveness.

Whether on the TC or not, I will continue to contribute as much as I can to the
community in the ways that I do best.  And you will continue to see me at the
summits, the midcycles, the meetups, the mailing lists and IRC (hopefully more
there as I'm trying to educate my company how they can provide us the access we
need without compromising their corporate rules).

Thank you for reading this far and considering me for service on the TC.

--Rocky

ps.  I broke Gerrit on my laptop, so infra is helping me, but I've stumped them 
and wanted to get this out.  TLDR: this ain't in the elections repository yet
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Steven Dake (stdake)
Joshua,

If you share resources, you give up multi-tenancy.  No COE system has the
concept of multi-tenancy (kubernetes has some basic implementation but it
is totally insecure).  Not only does multi-tenancy have to “look like” it
offers multiple tenants isolation, but it actually has to deliver the
goods.

I understand that at first glance a company like Yahoo may not want
separate bays for their various applications because of the perceived
administrative overhead.  I would then challenge Yahoo to go deploy a COE
like kubernetes (which has no multi-tenancy or a very basic implementation
of such) and get it to work with hundreds of different competing
applications.  I would speculate the administrative overhead of getting
all that to work would be greater than the administrative overhead of
simply doing a bay create for the various tenants.

Placing tenancy inside a COE seems interesting, but no COE does that
today.  Maybe in the future they will.  Magnum was designed to present an
integration point between COEs and OpenStack today, not five years down
the road.  It's not as if we took shortcuts to get to where we are.

I will grant you that density is lower with the current design of Magnum
vs a full on integration with OpenStack within the COE itself.  However,
that model which is what I believe you proposed is a huge design change to
each COE which would overly complicate the COE at the gain of increased
density.  I personally don’t feel that pain is worth the gain.

Regards,
-steve


On 9/30/15, 2:18 PM, "Joshua Harlow"  wrote:

>Wouldn't that limit the ability to share/optimize resources then and
>increase the number of operators needed (since each COE/bay would need
>its own set of operators managing it)?
>
>If all tenants are in a single openstack cloud, and under say a single
>company then there isn't much need for management isolation (in fact I
>think said feature is actually an anti-feature in a case like this).
>Especially since that management is already handled by keystone and the
>project/tenant & user associations and such there.
>
>Security isolation I get, but if the COE is already multi-tenant aware
>and that multi-tenancy is connected into the openstack tenancy model,
>then it seems like that point is nil?
>
>I get that the current tenancy boundary is the bay (aka the COE right?)
>but is that changeable? Is that ok with everyone, it seems oddly matched
>to say a company like yahoo, or other private cloud, where one COE would
>I think be preferred and tenancy should go inside of that; vs an eggshell
>like solution that seems like it would create more management and
>operability pain (now each yahoo internal group that creates a bay/coe
>needs to figure out how to operate it? and resources can't be shared
>and/or orchestrated across bays; hmm, seems like not fully using a COE
>for what it can do?)
>
>Just my random thoughts, not sure how much is fixed in stone.
>
>-Josh
>
>Adrian Otto wrote:
>> Joshua,
>>
>> The tenancy boundary in Magnum is the bay. You can place whatever
>> single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
>> Swarm). This allows you to use native tools to interact with the COE in
>> that bay, rather than using an OpenStack specific client. If you want to
>> use the OpenStack client to create both bays, pods, and containers, you
>> can do that today. You also have the choice, for example, to run kubectl
>> against your Kubernetes bay, if you so desire.
>>
>> Bays offer both a management and security isolation between multiple
>> tenants. There is no intent to share a single bay between multiple
>> tenants. In your use case, you would simply create two bays, one for
>> each of the yahoo-mail.XX tenants. I am not convinced that having an
>> uber-tenant makes sense.
>>
>> Adrian
>>
>>> On Sep 30, 2015, at 1:13 PM, Joshua Harlow >> > wrote:
>>>
>>> Adrian Otto wrote:
 Thanks everyone who has provided feedback on this thread. The good
 news is that most of what has been asked for from Magnum is actually
 in scope already, and some of it has already been implemented. We
 never aimed to be a COE deployment service. That happens to be a
 necessity to achieve our more ambitious goal: We want to provide a
 compelling Containers-as-a-Service solution for OpenStack clouds in a
 way that offers maximum leverage of what’s already in OpenStack,
 while giving end users the ability to use their favorite tools to
 interact with their COE of choice, with the multi-tenancy capability
 we expect from all OpenStack services, and simplified integration
 with a wealth of existing OpenStack services (Identity,
 Orchestration, Images, Networks, Storage, etc.).

 The areas we have disagreement are whether the features offered for
 the k8s COE should be mirrored in other COE’s. We have not attempted
 to do that yet, and my suggestion is to continue resisting that
 temptation because it is not al

Re: [openstack-dev] Announcing Liberty RC1 availability in Debian

2015-09-30 Thread Thomas Goirand
On 09/30/2015 07:25 PM, Jordan Pittier wrote:
> We are not used to reading "thanks" messages from you :) So I enjoy this
> email even more !

I am well aware that I have a reputation within the community for
complaining too much. I'm the world champion of starting monster troll
threads by mistake. :)

Though mostly, I do like everyone I've approached so far (except maybe 2
people out of a few hundred, which is unavoidable), and feel like we
have an awesome, very helpful and friendly community.

It is my hope that everyone understands the number of "WTF" situations I
face every day due to what I do, and that I'm close to burning out
at the end of each release. Liberty isn't an exception. Seeing that
Tempest finally ran yesterday evening filled me with joy. These last
remaining 15 days before the final release will be painful, even though
I'm nearly done for this cycle: I do need holidays...

So let me do it once more: thanks everyone! :)

Looking forward to meet so many friends in Tokyo,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Large Deployments Team][Performance Team] New informal working group suggestion

2015-09-30 Thread Rogon, Kamil
Hello,

Thanks Dina for bringing up this great idea.



My team at Intel has been working on performance testing, so we will 
likely be part of that project.

The performance aspect at large scale is an obstacle for enterprise 
deployments. For that reason the Win The Enterprise 
group may also be 
interested in this topic.



Regards,

Kamil Rogon



Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173
80-298 Gdansk



From: Dina Belova [mailto:dbel...@mirantis.com]
Sent: Wednesday, September 30, 2015 10:27 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Large Deployments Team][Performance Team] New 
informal working group suggestion



Sandeep,



sorry for the late response :) I'm hoping to define 'spheres of interest' and 
most painful moments using people's experience at the Tokyo summit, and we'll find 
out what needs to be tested most and can be actually done. You can share your 
ideas of what needs to be tested and focused on in 
 
https://etherpad.openstack.org/p/openstack-performance-issues etherpad, this 
will be a pool of ideas I'm going to use in Tokyo.



I can either create irc channel for the discussions or we can use 
#openstack-operators channel as LDT is using it for the communication. After 
Tokyo summit I'm planning to set Doodle voting for the time people will be 
comfortable with to have periodic meetings :)



Cheers,

Dina



On Fri, Sep 25, 2015 at 1:52 PM, Sandeep Raman mailto:sandeep.ra...@gmail.com> > wrote:

On Tue, Sep 22, 2015 at 6:27 PM, Dina Belova mailto:dbel...@mirantis.com> > wrote:

Hey, OpenStackers!



I'm writing to propose organising a new informal team to work specifically on 
OpenStack performance issues. This will be a sub-team of the already existing 
Large Deployments Team, and I suppose it will be a good idea to gather people 
interested in OpenStack performance in one room and identify what issues are 
worrying contributors, what can be done, and share results of performance 
research :)



Dina, I'm focused on performance and scale testing [no coding background]. How 
can I contribute and what is the expectation from this informal team?



So please volunteer to take part in this initiative. I hope there will be many 
people interested and we'll be able to use a cross-project session slot 
to meet in Tokyo and hold a 
kick-off meeting.



I'm not coming to Tokyo. How could I still be part of the discussions, if any? I 
also feel it would be good to have an IRC channel for perf-scale discussion. Let me 
know your thoughts.



I would like to apologise for writing to two mailing lists at the same time, 
but I want to make sure that all possibly interested people will notice the 
email.



Thanks and see you in Tokyo :)



Cheers,

Dina



-- 

Best regards,

Dina Belova

Senior Software Engineer

Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.



smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ops] Operator Local Patches

2015-09-30 Thread Matt Riedemann



On 9/29/2015 6:33 PM, Kris G. Lindgren wrote:

Hello All,

We have some pretty good contributions of local patches on the etherpad.
  We are going through right now and trying to group patches that
multiple people are carrying and patches that people may not be carrying
but solves a problem that they are running into.  If you can take some
time and either add your own local patches that you have to the ether
pad or add +1's next to the patches that are laid out, it would help us
immensely.

The etherpad can be found at:
https://etherpad.openstack.org/p/operator-local-patches

Thanks for your help!

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Tuesday, September 22, 2015 at 4:21 PM
To: openstack-operators
Subject: Re: Operator Local Patches

Hello all,

Friendly reminder: If you have local patches and haven't yet done so,
please contribute to the etherpad at:
https://etherpad.openstack.org/p/operator-local-patches

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Friday, September 18, 2015 at 4:35 PM
To: openstack-operators
Cc: Tom Fifield
Subject: Operator Local Patches

Hello Operators!

During the ops meetup in Palo Alto we were talking about sessions for
Tokyo. A session that I proposed, which got a bunch of +1's, was about
local patches that operators were carrying.  From my experience this is
done to either implement business logic,  fix assumptions in projects
that do not apply to your implementation, implement business
requirements that are not yet implemented in openstack, or fix scale
related bugs.  What I would like to do is get a working group together
to do the following:

1.) Document local patches that operators have (even those that are in
gerrit right now waiting to be committed upstream)
2.) Figure out commonality in those patches
3.) Either upstream the common fixes to the appropriate projects or
figure out if a hook can be added to allow people to run their code at
that specific point
4.) 
5.) Profit

To start this off, I have documented every patch, along with a
description of what it does and why we did it (where needed), that
GoDaddy is running [1].  What I am asking is that the operator community
please update the etherpad with the patches that you are running, so
that we have a good starting point for discussions in Tokyo and beyond.

[1] - https://etherpad.openstack.org/p/operator-local-patches
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I saw this originally on the ops list and it's a great idea - cat 
herding the bazillion ops patches and seeing what common things rise to 
the top would be helpful.  Hopefully some of that can then be pushed 
into the projects.


There are a couple of things I could note that are specifically operator 
driven which could use eyes again.


1. purge deleted instances from nova database:

http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/purge-deleted-instances-cmd.html

The spec is approved for mitaka, the code is out for review.  If people 
could test the change out it'd be helpful to vet its usefulness.


2. I'm trying to revive a spec that was approved in liberty but the code 
never landed:


https://review.openstack.org/#/c/226925/

That's for force resetting quotas for a project/user so that on the next 
pass it gets recalculated. A question came up about making the user 
optional in that command so it's going to require a bit more review 
before we re-approve for mitaka since the design changes slightly.


3. mgagne was good enough to propose a patch upstream to neutron for a 
script he had out of tree:


https://review.openstack.org/#/c/221508/

That's a tool to delete empty linux bridges.  The neutron linuxbridge 
agent used to remove those automatically but it caused race problems 
with nova so that was removed, but it'd still be good to have a tool to 
remove them as needed.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Remove nova-network as a deployment option in Fuel?

2015-09-30 Thread Mike Scherbakov
Hi team,
where do we stand with it now? I remember there was a plan to remove
nova-network support in 7.0, but we've delayed it due to vcenter/dvr or
something which was not ready for it.

Can we delete it now? The earlier in the cycle we do it, the easier it will
be.

Thanks!
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Murali R
Yes, sfc without nsh is what I am looking into and I am thinking ovn can
have a better approach.

I did an implementation of sfc around nsh that used ovs & flows from a custom
ovs-agent back in mar-may. I added fields in the ovs agent to send additional
info for actions as well. The Neutron side was quite trivial. But the solution
required a build of ovs that listens on a different port to handle the
nsh header, so it doubled the number of tunnels. The ovs code we used/modified
was either from the link you sent or some other similar impl from Cisco
folks (I don't recall) that had actions and conditional commands for the
field. My thought was to have generic ovs code to compare or set actions on any
configured address field, but I haven't thought through much
on how to do that. In any case, with ovn we cannot define custom flows
directly on ovs, so that approach is dated now. But I am hoping a similar
feature can be added to ovn which can transpose some header field into geneve
options.
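
(Purely as an illustration of the Geneve option/TLV layout being referred to here; the option class, type and "service path id" payload below are made-up values, and this is not OVN or OVS code.)

import struct

def geneve_option(opt_class, opt_type, data):
    # Pad the option data to a multiple of 4 bytes, as Geneve requires.
    data += b'\x00' * ((-len(data)) % 4)
    length_words = len(data) // 4  # the length field counts 4-byte multiples
    # 16-bit option class, 8-bit type, 3 reserved bits + 5-bit length.
    header = struct.pack('!HBB', opt_class, opt_type, length_words & 0x1f)
    return header + data

# e.g. carrying a value copied from an incoming header as a single TLV:
service_path_id = 42  # hypothetical
opt = geneve_option(0xffff, 0x01, struct.pack('!I', service_path_id))
print(opt.hex())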

I am trying something right now with ovn and will be attending ovs
conference in nov. I am skipping openstack summit to attend something else
in far-east during that time. But lets keep the discussion going and
collaborate if you work on sfc.

On Wed, Sep 30, 2015 at 2:11 PM, Russell Bryant  wrote:

> On 09/30/2015 04:09 PM, Murali R wrote:
> > Russel,
> >
> > For instance if I have a nsh header embedded in vxlan in the incoming
> > packet, I was wondering if I can transfer that to geneve options
> > somehow. This is just an example. I may have other header info, either
> > in vxlan or ip, that needs to enter the ovn network, and if we have
> > generic ovs commands to handle that, it will be useful. If the commands
> > don't exist but are extensible, then I can do that as well.
>
> Well, OVS itself doesn't support NSH yet.  There are patches on the OVS
> dev mailing list for it, though.
>
> http://openvswitch.org/pipermail/dev/2015-September/060678.html
>
> Are you interested in SFC?  I have been thinking about that and don't
> think it will be too hard to add support for it in OVN.  I'm not sure
> when I'll work on it, but it's high on my personal todo list.  If you
> want to do it with NSH, that will require OVS support first, of course.
>
> If you're interested in more generic extensibility of OVN, there's at
> least going to be one talk about that at the OVS conference in November.
>  If you aren't there, it will be on video.  I'm not sure what ideas they
> will be proposing.
>
> Since we're on the OpenStack list, I assume we're talking in the
> OpenStack context.  For any feature we're talking about, we also have to
> talk about how that is exposed through the Neutron API.  So, "generic
> extensibility" doesn't immediately make sense for the Neutron case.
>
> SFC certainly makes sense.  There's a Neutron project for adding an SFC
> API and from what I've seen so far, I think we'll be able to extend OVN
> such that it can back that API.
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to address boot from volume failures

2015-09-30 Thread Andrew Laski

On 09/30/15 at 05:03pm, Sean Dague wrote:

Today we attempted to branch devstack and grenade for liberty, and are
currently blocked because in liberty with openstack client and
novaclient, it's not possible to boot a server from volume using just
the volume id.

That's because of this change in novaclient -
https://review.openstack.org/#/c/221525/

That was done to resolve the issue that strong schema validation in Nova
started rejecting the kinds of calls that novaclient was making for boot
from volume, because the bdm 1 and 2 code was sharing common code and
got a bit tangled up. So 3 bdm v2 params were being sent on every request.

However, https://review.openstack.org/#/c/221525/ removed the ==1 code
path. If you pass in just {"vda": "$volume_id"} the code falls through,
volume id is lost, and nothing is booted. This is how the devstack
exercises and osc recommends booting from volume. I expect other people
might be doing that as well.
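
(For readers following along, a minimal sketch of the two request shapes being discussed, using python-novaclient; the endpoint, credentials, flavor and volume id below are illustrative assumptions, not the exact devstack/osc code.)

from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client

# Assumed endpoint and credentials, for illustration only.
auth = v3.Password(auth_url='http://example.com:5000/v3',
                   username='demo', password='secret', project_name='demo',
                   user_domain_id='default', project_domain_id='default')
nova = client.Client('2', session=session.Session(auth=auth))

my_flavor = 'assumed-flavor-id'
volume_id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'  # hypothetical bootable volume

# Legacy bdm (v1) style: map a device name straight to the volume id.
nova.servers.create(
    name='bfv-legacy',
    image=None,                 # no image, boot from the volume
    flavor=my_flavor,
    block_device_mapping={'vda': volume_id},
)

# bdm v2 style: an explicit list of mappings, which avoids the ==1 code path.
nova.servers.create(
    name='bfv-v2',
    image=None,
    flavor=my_flavor,
    block_device_mapping_v2=[{
        'boot_index': 0,
        'uuid': volume_id,
        'source_type': 'volume',
        'destination_type': 'volume',
        'delete_on_termination': False,
    }],
)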

There seem to be a few options going forward:

1) fix the client without a revert

This would bring back a ==1 code path, which is basically just setting
volume_id, and moving on. This means that until people upgrade their
client they lose access to this function on the server.

2) revert the client and loosen up schema validation

If we revert the client to the old code, we also need to accept the fact
that novaclient has been sending 3 extra parameters to this API call
for as long as people can remember. We'd need to relax the nova schema to
let those in and just accept that people are going to pass those.

3) fix osc and novaclient cli to not use this code path. This will also
require that everyone upgrade both of those to not explode in the common
case of specifying boot from volume on the command line.

I slightly lean towards #2 on a compatibility front, but it's a chunk of
change at this point in the cycle, so I don't think there is a clear win
path. It would be good to collect opinions here. The bug tracking this
is - https://bugs.launchpad.net/python-openstackclient/+bug/1501435


I have a slight preference for #1.  Nova is not buggy here, novaclient 
is, so I think we should contain the fix there.


Is using the v2 API an option?  That should also allow the 3 extra 
parameters mentioned in #2.




-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
Wouldn't that limit the ability to share/optimize resources then and 
increase the number of operators needed (since each COE/bay would need 
its own set of operators managing it)?


If all tenants are in a single openstack cloud, and under say a single 
company then there isn't much need for management isolation (in fact I 
think said feature is actually an anti-feature in a case like this). 
Especially since that management is already handled by keystone and the 
project/tenant & user associations and such there.


Security isolation I get, but if the COE is already multi-tenant aware 
and that multi-tenancy is connected into the openstack tenancy model, 
then it seems like that point is nil?


I get that the current tenancy boundary is the bay (aka the COE right?) 
but is that changeable? Is that ok with everyone, it seems oddly matched 
to say a company like yahoo, or other private cloud, where one COE would 
I think be preferred and tenancy should go inside of that; vs an eggshell 
like solution that seems like it would create more management and 
operability pain (now each yahoo internal group that creates a bay/coe 
needs to figure out how to operate it? and resources can't be shared 
and/or orchestrated across bays; hmm, seems like not fully using a COE 
for what it can do?)


Just my random thoughts, not sure how much is fixed in stone.

-Josh

Adrian Otto wrote:

Joshua,

The tenancy boundary in Magnum is the bay. You can place whatever
single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
Swarm). This allows you to use native tools to interact with the COE in
that bay, rather than using an OpenStack specific client. If you want to
use the OpenStack client to create both bays, pods, and containers, you
can do that today. You also have the choice, for example, to run kubectl
against your Kubernetes bay, if you so desire.

Bays offer both a management and security isolation between multiple
tenants. There is no intent to share a single bay between multiple
tenants. In your use case, you would simply create two bays, one for
each of the yahoo-mail.XX tenants. I am not convinced that having an
uber-tenant makes sense.

Adrian


On Sep 30, 2015, at 1:13 PM, Joshua Harlow mailto:harlo...@outlook.com>> wrote:

Adrian Otto wrote:

Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.



So an interesting question, but how is tenancy going to work, will
there be a keystone tenancy <-> COE tenancy adapter? From my
understanding a whole bay (COE?) is owned by a tenant, which is great
for tenants that want to ~experiment~ with a COE but seems disjoint
from the end goal of an integrated COE where the tenancy model of both
keystone and the COE is either the same or is adapted via some adapter
layer.

For example:

1) Bay that is connected to uber-tenant 'yahoo'

1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
...

All of that tenancy information is in keystone, not replicated/synced
into the COE (or in some other COE specific disjoint system).

Thoughts?

Thi

[openstack-dev] [puppet] [infra] split integration jobs

2015-09-30 Thread Emilien Macchi
Hello,

Today our Puppet OpenStack Integration jobs are deploying:
- mysql / rabbitmq
- keystone in wsgi with apache
- nova
- glance
- neutron with openvswitch
- cinder
- swift
- sahara
- heat
- ceilometer in wsgi with apache

Currently WIP:
- Horizon
- Trove

The status of the jobs is that some tempest tests (related to compute)
are failing randomly. Most of the failures are because of timeouts:

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/e374fd1/logs/neutron/server.txt.gz#_2015-09-30_18_38_32_425

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/e374fd1/logs/nova/nova-compute.txt.gz#_2015-09-30_18_38_34_799

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/e374fd1/logs/nova/nova-compute.txt.gz#_2015-09-30_18_38_12_636

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/1d88f34/logs/nova/nova-compute.txt.gz#_2015-09-30_20_26_34_730

The timeouts happen because Nova needs more than 300s (default) to spawn
a VM. Neutron is barely able to keep up with Nova's requests.

It's obvious we have reached the Jenkins slaves' resource limits.


We have 3 options:

#1 increase timeouts and try to give more time to services to accomplish
what they need to do.

#2 drop some services from our testing scenario.

#3 split our scenario to have scenario001 and scenario002.

I feel like #1 is not really a scalable idea, since we are going to test
more and more services.

I don't like #2 because we want to test all our modules, not just a
subset of them.

I like #3 but we are going to consume more CI resources (that's why I
put [infra] tag).
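
To make #3 a bit more concrete, a purely illustrative split could look like
the sketch below (the service groupings are hypothetical, not a decided
layout):

# Hypothetical example only: one possible way to split the current single
# scenario into two smaller ones; the actual grouping is up to the team.
SCENARIO001 = ['mysql', 'rabbitmq', 'keystone', 'nova', 'glance',
               'neutron', 'cinder']
SCENARIO002 = ['mysql', 'rabbitmq', 'keystone', 'swift', 'sahara',
               'heat', 'ceilometer', 'horizon', 'trove']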


Side note: we have some non-voting upgrade jobs that we don't really pay
attention to now, because of a lack of time to work on them. They consume 2
slaves. If resources are a problem, we can drop them and replace them with
the 2 new integration jobs.

So I propose option #3 and:
* drop the upgrade jobs if infra says we're using too many resources with 2
more jobs
* replace them with the 2 new integration jobs
or option #3 by adding 2 more jobs with a new scenario, where the services
would be split.

Any feedback from Infra / Puppet teams is welcome,
Thanks,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Russell Bryant
On 09/30/2015 04:09 PM, Murali R wrote:
> Russell,
> 
> For instance if I have an NSH header embedded in vxlan in the incoming
> packet, I was wondering if I can transfer that to geneve options
> somehow. This is just an example. I may have other header info either
> in vxlan or ip that needs to enter the ovn network, and if we have
> generic ovs commands to handle that, it will be useful. If the commands
> don't exist but are extensible then I can do that as well.

Well, OVS itself doesn't support NSH yet.  There are patches on the OVS
dev mailing list for it, though.

http://openvswitch.org/pipermail/dev/2015-September/060678.html

Are you interested in SFC?  I have been thinking about that and don't
think it will be too hard to add support for it in OVN.  I'm not sure
when I'll work on it, but it's high on my personal todo list.  If you
want to do it with NSH, that will require OVS support first, of course.

If you're interested in more generic extensibility of OVN, there's at
least going to be one talk about that at the OVS conference in November.
 If you aren't there, it will be on video.  I'm not sure what ideas they
will be proposing.

Since we're on the OpenStack list, I assume we're talking in the
OpenStack context.  For any feature we're talking about, we also have to
talk about how that is exposed through the Neutron API.  So, "generic
extensibility" doesn't immediately make sense for the Neutron case.

SFC certainly makes sense.  There's a Neutron project for adding an SFC
API and from what I've seen so far, I think we'll be able to extend OVN
such that it can back that API.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-30 Thread Rich Megginson

On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:

Gilles Dubreuil  writes:


On 30/09/15 03:43, Rich Megginson wrote:

On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:

On 15/09/15 19:55, Sofer Athlan-Guyot wrote:

Gilles Dubreuil  writes:


On 15/09/15 06:53, Rich Megginson wrote:

On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:

Hi,

Gilles Dubreuil  writes:


A. The 'composite namevar' approach:

  keystone_tenant {'projectX::domainY': ... }
B. The 'meaningless name' approach:

 keystone_tenant {'myproject': name='projectX',
domain=>'domainY',
...}

Notes:
- Actually using both combined should work too, with the domain
parameter supposedly overriding the domain part of the name.
- Please look at [1] for some background on the two
approaches:

The question
-
Decide between the two approaches, the one we would like to
retain for
puppet-keystone.

Why it matters?
---
1. Domain names are mandatory in every user, group or project.
Besides
the backward compatibility period mentioned earlier, where no domain
means using the default one.
2. Long term impact
3. The two approaches are not completely equivalent, which has different
consequences on future usage.

I can't see why they couldn't be equivalent, but I may be missing
something here.

I think we could support both.  I don't see it as an either/or
situation.


4. Being consistent
5. Therefore the community to decide

Pros/Cons
--
A.

I think it's the B: meaningless approach here.


 Pros
   - Easier names

That's subjective; creating unique and meaningful names doesn't look easy
to me.

The point is that this allows choice - maybe the user already has some
naming scheme, or wants to use a more "natural" meaningful name -
rather
than being forced into a possibly "awkward" naming scheme with "::"

keystone_user { 'heat domain admin user':
  name => 'admin',
  domain => 'HeatDomain',
  ...
}

keystone_user_role {'heat domain admin user@::HeatDomain':
  roles => ['admin']
  ...
}


 Cons
   - Titles have no meaning!

They have meaning to the user, not necessarily to Puppet.


   - Cases where 2 or more resources could exist

This seems to be the hardest part - I still cannot figure out how
to use
"compound" names with Puppet.


   - More difficult to debug

More difficult than it is already? :P


   - Titles mismatch when listing the resources (self.instances)

B.
 Pros
   - Unique titles guaranteed
   - No ambiguity between resource found and their title
 Cons
   - More complicated titles
My vote

I would love to have approach A for easier names.
But I've seen the challenge of maintaining the providers behind the
curtains, and the confusion it creates with names/titles when we're not
sure about the domain we're dealing with.
Also I believe that supporting self.instances consistently with
meaningful names is saner.
Therefore I vote B

+1 for B.

My view is that this should be the advertised way, but the other
method
(meaningless) should be there if the user needs it.

So as far as I'm concerned the two idioms should co-exist.  This
would
mimic what is possible with all puppet resources.  For instance
you can:

 file { '/tmp/foo.bar': ensure => present }

and you can

 file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
present }

The two refer to the same resource.

Right.


I disagree, using the name for the title is not creating a composite
name. The latter requires adding at least another parameter to be part
of the title.

Also, in the case of the file resource, a path/filename is a unique name,
which is not the case for an OpenStack user, which might exist in several
domains.

I actually added the meaningful name case in:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html


But that doesn't work very well because without adding the domain to
the
name, the following fails:

keystone_tenant {'project_1': domain => 'domain_A', ...}
keystone_tenant {'project_1': domain => 'domain_B', ...}

And adding the domain makes it a de-facto 'composite name'.

I agree that my example is not similar to what the keystone provider has
to do.  What I wanted to point out is that users of puppet are used to
having this kind of *interface*: one where you put something
meaningful in the title and one where you put something meaningless.
The fact that the meaningful one is a compound one shouldn't matter to
the user.


There is a big blocker to making use of the domain name as a parameter:
the limitation of autorequire.

Autorequire doesn't support any parameter other than the
resource type, and it expects the resource title (or a list of titles) [1].

So, for instance, if keystone_user requires the tenant project1 from
domain1, then the resource title must be 'project1::domain1', because
otherwise there is no way to specify 'domain1':


Yeah, I kept forgetting this is only about resource relationship/ord

[openstack-dev] [nova] how to address boot from volume failures

2015-09-30 Thread Sean Dague
Today we attempted to branch devstack and grenade for liberty, and are
currently blocked because in liberty with openstack client and
novaclient, it's not possible to boot a server from volume using just
the volume id.

That's because of this change in novaclient -
https://review.openstack.org/#/c/221525/

That was done to resolve the issue that strong schema validation in Nova
started rejecting the kinds of calls that novaclient was making for boot
from volume, because the bdm v1 and v2 code was sharing common code and
got a bit tangled up. So 3 bdm v2 params were being sent on every request.

However, https://review.openstack.org/#/c/221525/ removed the ==1 code
path. If you pass in just {"vda": "$volume_id"} the code falls through,
the volume id is lost, and nothing is booted. This is how the devstack
exercises do it, and how osc recommends booting from volume. I expect
other people might be doing that as well.
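
For illustration, the kind of ==1 special case being discussed looks roughly
like the sketch below. This is a hypothetical reconstruction in Python, not
the actual novaclient code, and the legacy field names are only an
approximation:

def legacy_bdm_from_mapping(block_device_mapping):
    # Hypothetical sketch, not the real novaclient implementation: a single
    # mapping like {"vda": "<volume-id>"} is turned into one legacy block
    # device mapping entry instead of being silently dropped.
    if len(block_device_mapping) == 1:
        device, volume_id = next(iter(block_device_mapping.items()))
        return [{'device_name': device,
                 'volume_id': volume_id,
                 'delete_on_termination': False}]
    # Anything else falls through to the newer bdm v2 handling.
    return None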

There seem to be a few options going forward:

1) fix the client without a revert

This would bring back a ==1 code path, which is basically just setting
volume_id, and move on. This means that until people upgrade their
client they lose access to this function on the server.

2) revert the client and loosen up schema validation

If we revert the client to the old code, we also need to accept the fact
that novaclient has been sending 3 extra parameters to this API call for
as long as people can remember. We'd need to relax the nova schema to
let those in and just accept that people are going to pass those.

3) fix osc and the novaclient cli to not use this code path. This will also
require everyone to upgrade both of those so they don't explode in the
common case of specifying boot from volume on the command line.

I slightly lean towards #2 on a compatibility front, but it's a chunk of
change at this point in the cycle, so I don't think there is a clear win
path. It would be good to collect opinions here. The bug tracking this
is - https://bugs.launchpad.net/python-openstackclient/+bug/1501435

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-30 Thread Thomas Goirand
On 09/25/2015 05:00 PM, Ryan Brown wrote:
> I believe the 72 limit is derived from 80-8 (terminal width - tab width)

If I'm not mistaken, 72 is because of the email format limitation.

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Joshua,

The tenancy boundary in Magnum is the bay. You can place whatever single-tenant 
COE you want into the bay (Kubernetes, Mesos, Docker Swarm). This allows you to 
use native tools to interact with the COE in that bay, rather than using an 
OpenStack specific client. If you want to use the OpenStack client to create 
bays, pods, and containers, you can do that today. You also have the 
choice, for example, to run kubectl against your Kubernetes bay, if you so 
desire.

Bays offer both a management and security isolation between multiple tenants. 
There is no intent to share a single bay between multiple tenants. In your use 
case, you would simply create two bays, one for each of the yahoo-mail.XX 
tenants. I am not convinced that having an uber-tenant makes sense.

Adrian

On Sep 30, 2015, at 1:13 PM, Joshua Harlow 
<harlo...@outlook.com> wrote:

Adrian Otto wrote:
Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.


So an interesting question, but how is tenancy going to work, will there be a 
keystone tenancy <-> COE tenancy adapter? From my understanding a whole bay 
(COE?) is owned by a tenant, which is great for tenants that want to 
~experiment~ with a COE but seems disjoint from the end goal of an integrated 
COE where the tenancy model of both keystone and the COE is either the same or 
is adapted via some adapter layer.

For example:

1) Bay that is connected to uber-tenant 'yahoo'

  1.1) Pod inside bay that is connected to tenant 
'yahoo-mail.us'
  1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
  ...

All of that tenancy information is in keystone, not replicated/synced into the 
COE (or in some other COE specific disjoint system).

Thoughts?

This one becomes especially hard if said COE(s) don't even have a tenancy model 
in the first place :-/

Thanks,

Adrian

On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni <devdatta.kulka...@rackspace.com> wrote:

+1 Hongbin.

From the perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu <hongbin...@huawei.com> Sent: 
Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of  the main goal.

Best regards, Hongbin


From: Jay Lau [mailto:jay.lau@gmail.com] Sent: September-29-15
10:57 PM To: OpenStack Development Mailing List (not for usage
quest

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow

Adrian Otto wrote:

Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.



So an interesting question, but how is tenancy going to work, will there 
be a keystone tenancy <-> COE tenancy adapter? From my understanding a 
whole bay (COE?) is owned by a tenant, which is great for tenants that 
want to ~experiment~ with a COE but seems disjoint from the end goal of 
an integrated COE where the tenancy model of both keystone and the COE 
is either the same or is adapted via some adapter layer.


For example:

1) Bay that is connected to uber-tenant 'yahoo'

   1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
   1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
   ...

All of that tenancy information is in keystone, not replicated/synced into 
the COE (or in some other COE specific disjoint system).


Thoughts?

This one becomes especially hard if said COE(s) don't even have a 
tenancy model in the first place :-/
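
As one purely hypothetical illustration of the kind of adapter layer being
described above (this is not an existing Magnum component), keystone could
stay the source of truth while a thin shim derives a per-project scope in
the COE:

class TenancyAdapter(object):
    """Hypothetical keystone <-> COE tenancy shim, for illustration only."""

    def __init__(self, keystone_client, coe_client):
        self.keystone = keystone_client
        self.coe = coe_client

    def scope_for_project(self, project_id):
        # Tenancy data stays in keystone; the COE only ever sees a derived
        # scope name (for example 'tenant-yahoo-mail-us').
        project = self.keystone.projects.get(project_id)
        return 'tenant-%s' % project.name.lower().replace('.', '-')

    def run_pod(self, project_id, pod_spec):
        # coe_client.run() is an assumed, COE-specific call.
        return self.coe.run(pod_spec, scope=self.scope_for_project(project_id))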



Thanks,

Adrian


On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni  wrote:

+1 Hongbin.

From the perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu Sent: Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of  the main goal.

Best regards, Hongbin


From: Jay Lau [mailto:jay.lau@gmail.com] Sent: September-29-15
10:57 PM To: OpenStack Development Mailing List (not for usage
questions) Subject: Re: [openstack-dev] [magnum]swarm + compose =
k8s?



+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially since we are also working on Magnum UI, the Magnum UI should
export some interfaces to enable end users to create container
applications, not only coe deployments.

I hope that Magnum can be treated as another "Nova" which is
focused on container service. I know it is difficult to unify all
of the concepts in the different coes (k8s has pod, service, rc; swarm
only has container; nova only has VM, PM with different
hypervisors), but this deserves some deep dive and thinking to see
how we can move forward.





On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz
wrote: definitely ;), but there are some thoughts on Tom's email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should only focus on deployment (I feel we will become another
Puppet/Chef/Ansi

Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Murali R
Russell,

For instance if I have an NSH header embedded in vxlan in the incoming
packet, I was wondering if I can transfer that to geneve options somehow.
This is just an example. I may have other header info either in vxlan or
ip that needs to enter the ovn network, and if we have generic ovs commands
to handle that, it will be useful. If the commands don't exist but are
extensible then I can do that as well.





On Wed, Sep 30, 2015 at 12:49 PM, Russell Bryant  wrote:

> On 09/30/2015 03:29 PM, Murali R wrote:
> > Russell,
> >
> > Are any additional options fields used in geneve between hypervisors at
> > this time? If so, how do they translate to vxlan when it hits gw? For
> > instance, I am interested to see if we can translate a custom header
> > info in vxlan to geneve headers and vice-versa.
>
> Yes, geneve options are used. Specifically, there are three pieces of
> metadata sent: a logical datapath ID (the logical switch, or network),
> the source logical port, and the destination logical port.
>
> Geneve is only used between hypervisors. VxLAN is only used between
> hypervisors and a VTEP gateway. In that case, the additional metadata is
> not included. There's just a tunnel ID in that case, used to identify
> the source/destination logical switch on the VTEP gateway.
>
> > And if there are flow
> > commands available to add conditional flows at this time or if it is
> > possible to extend if need be.
>
> I'm not quite sure I understand this part.  Could you expand on what you
> have in mind?
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] How to selectively enable new services?

2015-09-30 Thread Steven Hardy
Hi all,

So I wanted to start some discussion on $subject, because atm we have a
couple of patches adding support for new services (which is great!):

Manila: https://review.openstack.org/#/c/188137/
Sahara: https://review.openstack.org/#/c/220863/

So, firstly I am *not* aiming to be any impediment to those landing, and I
know they have been in-progress for some time.  These look pretty close to
being ready to land and overall I think new service integration is a very
good thing for TripleO.

However, given the recent evolution towards the "big tent" of OpenStack, I
wanted to get some ideas on what an effective way to selectively enable
services would look like, as I can imagine not all users of TripleO want to
deploy all-the-services all of the time.

I was initially thinking we simply have e.g. "EnableSahara" as a boolean in
overcloud-without-mergepy, and wire that into the puppet manifests, such
that the services are not configured/started.  However comments in the
Sahara patch indicate it may be more complex than that, in particular
requiring changes to the loadbalancer puppet code and os-cloud-config.

This is all part of the more general "composable roles" problem, but is
there an initial step we can take, which will make it easy to simply
disable services (and ideally not pay the cost of configuring them at all)
on deployment?

Interested in people's thoughts on this - has anyone already looked into it,
or is there any existing pattern we can reuse?

As mentioned above, not aiming to block anything on this, I guess we can
figure it out and retro-fit it to whatever services folks want to
selectively disable later if needed.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Sukhdev Kapur
Hey Kyle,

I have updated the ownership of networking-l2gw. I have +1'd your patch. As
soon as it merges the ACLs for the L2GW project will be fine as well.

Thanks for confirming about the networking-arista.

With this both of these packages should be good to go.

Thanks
-Sukhdev


On Wed, Sep 30, 2015 at 11:55 AM, Kyle Mestery  wrote:

> Folks:
>
> In trying to release some networking sub-projects recently, I ran into an
> issue [1] where I couldn't release some projects due to them not being
> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
> but before that can merge, we need to make sure all projects have pypi
> registrations in place. The following networking sub-projects do NOT have
> pypi registrations in place and need them created following the guidelines
> here [3]:
>
> networking-calico
> networking-infoblox
> networking-powervm
>
> The following pypi registrations did not follow directions to enable
> openstackci with "Owner" permissions, which allow for the publishing of
> packages to pypi:
>
> networking-ale-omniswitch
> networking-arista
> networking-l2gw
> networking-vsphere
>
> Once these are corrected, we can merge [2] which will then allow the
> neutron-release team the ability to release pypi packages for those
> packages.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
> [2] https://review.openstack.org/#/c/229564/1
> [3]
> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election][TC] TC Candidacy

2015-09-30 Thread Joshua Harlow

Hi folks,

I'd like to propose my candidacy for the technical committee
elections.

I've been involved in OpenStack for around ~four~ years now, working
to help integrate it into various Yahoo! systems and infrastructure.
I've been involved with integration and creation (and maturation) of
many projects (and libraries); for example rpm and venv packaging (via
anvil), cloud-init (a related tool), doc8 (a doc checking tool),
taskflow (an oslo library), tooz (an oslo library), automaton (an oslo
library), kazoo (a dependent library) and more.

As mentioned above, my contributions to OpenStack have been at the
project and library level. My experience in oslo (a group of
folks that specialize in cross-project libraries and reduction of
duplication across projects) has helped me grow and gain knowledge
about how to work across various projects. Now I would like to help
OpenStack projects become ~more~ excellent technically. I'd like to
be able  to leverage (and share) the experience I have gained at
Yahoo! to help make OpenStack that much better (we have tens of
thousands of VMs and thousands of hypervisors, tens of
thousands of baremetal instances split across many clusters with
varying network topology and layout).

I'd like to join the TC to aid some of the on-going work that helps
overhaul pieces of OpenStack to make them more scalable, more fault
tolerant, and in all honesty more ~modern~. I believe we (as a TC)
need to perform ~more~ outreach to projects and provide more advice
and guidance with respect to which technologies will help them scale
in the long term (for example instead of reinventing service discovery
solutions and/or distributed locking, use other open source solutions
that provide it already in a battle-hardened manner) proactively
instead of reactively.

I believe some of this can be solved by trying to make sure the TC is
on-top of: https://review.openstack.org/#/q/status:open+project:openstack
/openstack-specs,n,z and ensuring proposed/accepted cross-project
initiatives do not linger. (I'd personally rather have a cross-project
spec be reviewed and marked as not applicable vs. having a spec
linger.)

In summary, I would like to focus on helping this outreach and
involvement become better (and yes some of that outreach goes beyond
the OpenStack community), helping get OpenStack projects onto scalable
solutions (where applicable) and help make OpenStack become a cloud
solution that can work well for all (instead of work well for small
clouds and not work so well for large ones). Of course on-going
efforts need to conclude (tags for example) first but I hope that as a
TC member I can help promote work on OpenStack that helps the long
term technical sustainability (at small and megascale) of OpenStack
become better.

TLDR; work on getting TC to get more involved with the technical
outreach of OpenStack; reduce focus on approving projects and tags
and hopefully work to help the focus become on the long term technical
sustainability of OpenStack (at small and megascale); using my own
experiences to help in this process //

Thanks for considering me,

Joshua Harlow

--

Yahoo!

http://stackalytics.com/report/users/harlowja

Official submission @ https://review.openstack.org/229591

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Russell Bryant
On 09/30/2015 03:29 PM, Murali R wrote:
> Russell,
> 
> Are any additional options fields used in geneve between hypervisors at
> this time? If so, how do they translate to vxlan when it hits gw? For
> instance, I am interested to see if we can translate a custom header
> info in vxlan to geneve headers and vice-versa. 

Yes, geneve options are used. Specifically, there are three pieces of
metadata sent: a logical datapath ID (the logical switch, or network),
the source logical port, and the destination logical port.

Geneve is only used between hypervisors. VxLAN is only used between
hypervisors and a VTEP gateway. In that case, the additional metadata is
not included. There's just a tunnel ID in that case, used to identify
the source/destination logical switch on the VTEP gateway.

> And if there are flow
> commands available to add conditional flows at this time or if it is
> possible to extend if need be.

I'm not quite sure I understand this part.  Could you expand on what you
have in mind?

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Kyle Mestery
Sukhdev, you're right, for some reason that one didn't show up in a pypi
search on pypi itself, but does in google. And it is correctly owned [1].

[1] https://pypi.python.org/pypi/networking_arista

On Wed, Sep 30, 2015 at 2:21 PM, Sukhdev Kapur 
wrote:

> Hey Kyle,
>
> I am a bit confused by this. I just checked networking-arista and see that
> the co-owner of the project is openstackci.
> I also checked the [1] and [2] and the settings for networking-arista are
> correct as well.
>
> What else is missing which makes you put networking-arista in the second
> category?
> Please advise.
>
> Thanks
> -Sukhdev
>
>
> [1] - jenkins/jobs/projects.yaml
> 
> [2] - zuul/layout.yaml
> 
>
> On Wed, Sep 30, 2015 at 11:55 AM, Kyle Mestery 
> wrote:
>
>> Folks:
>>
>> In trying to release some networking sub-projects recently, I ran into an
>> issue [1] where I couldn't release some projects due to them not being
>> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
>> but before that can merge, we need to make sure all projects have pypi
>> registrations in place. The following networking sub-projects do NOT have
>> pypi registrations in place and need them created following the guidelines
>> here [3]:
>>
>> networking-calico
>> networking-infoblox
>> networking-powervm
>>
>> The following pypi registrations did not follow directions to enable
>> openstackci with "Owner" permissions, which allow for the publishing of
>> packages to pypi:
>>
>> networking-ale-omniswitch
>> networking-arista
>> networking-l2gw
>> networking-vsphere
>>
>> Once these are corrected, we can merge [2] which will then allow the
>> neutron-release team the ability to release pypi packages for those
>> packages.
>>
>> Thanks!
>> Kyle
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
>> [2] https://review.openstack.org/#/c/229564/1
>> [3]
>> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread John Belamaric
Kyle,

I have taken care of this for networking-infoblox. Please let me know if 
anything else is necessary.

Thanks,
John

On Sep 30, 2015, at 2:55 PM, Kyle Mestery 
<mest...@mestery.com> wrote:

Folks:

In trying to release some networking sub-projects recently, I ran into an issue 
[1] where I couldn't release some projects due to them not being registered on 
pypi. I have a patch out [2] which adds pypi publishing jobs, but before that 
can merge, we need to make sure all projects have pypi registrations in place. 
The following networking sub-projects do NOT have pypi registrations in place 
and need them created following the guidelines here [3]:

networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow directions to enable 
openstackci with "Owner" permissions, which allow for the publishing of packages 
to pypi:

networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2] which will then allow the 
neutron-release team the ability to release pypi packages for those packages.

Thanks!
Kyle

[1] 
http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3] 
http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Murali R
Russell,

Are any additional options fields used in geneve between hypervisors at
this time? If so, how do they translate to vxlan when it hits gw? For
instance, I am interested to see if we can translate a custom header info
in vxlan to geneve headers and vice-versa. And if there are flow commands
available to add conditional flows at this time or if it is possible to
extend if need be.

Thanks
Murali

On Sun, Sep 27, 2015 at 1:14 PM, Russell Bryant  wrote:

> On 09/27/2015 02:26 AM, WANG, Ming Hao (Tony T) wrote:
> > Russell,
> >
> > Thanks for your valuable information.
> > I understood Geneve is some kind of tunnel format for network
> virtualization encapsulation, just like VxLAN.
> > But I'm still confused by the connection between Geneve and VTEP.
> > I suppose VTEP should be on behalf of "VxLAN Tunnel Endpoint", which
> should be used for VxLAN only.
> >
> > Does it become some "common tunnel endpoint" in OVN, and can be also
> used as a tunnel endpoint for Geneve?
>
> When using VTEP gateways, both the Geneve and VxLAN protocols are being
> used.  Packets between hypervisors are sent using Geneve.  Packets
> between a hypervisor and the gateway are sent using VxLAN.
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Sukhdev Kapur
Hey Kyle,

I am a bit confused by this. I just checked networking-arista and see that
the co-owner of the project is openstackci.
I also checked the [1] and [2] and the settings for networking-arista are
correct as well.

What else is missing which makes you put networking-arista in the second
category?
Please advise.

Thanks
-Sukhdev


[1] - jenkins/jobs/projects.yaml

[2] - zuul/layout.yaml


On Wed, Sep 30, 2015 at 11:55 AM, Kyle Mestery  wrote:

> Folks:
>
> In trying to release some networking sub-projects recently, I ran into an
> issue [1] where I couldn't release some projects due to them not being
> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
> but before that can merge, we need to make sure all projects have pypi
> registrations in place. The following networking sub-projects do NOT have
> pypi registrations in place and need them created following the guidelines
> here [3]:
>
> networking-calico
> networking-infoblox
> networking-powervm
>
> The following pypi registrations did not follow directions to enable
> openstackci with "Owner" permissions, which allow for the publishing of
> packages to pypi:
>
> networking-ale-omniswitch
> networking-arista
> networking-l2gw
> networking-vsphere
>
> Once these are corrected, we can merge [2] which will then allow the
> neutron-release team the ability to release pypi packages for those
> packages.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
> [2] https://review.openstack.org/#/c/229564/1
> [3]
> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] new yaml format for all.yml, need feedback

2015-09-30 Thread Sam Yaple
Also in favor if it lands before Liberty. But I don't want to see a format
change straight into Mitaka.

Sam Yaple

On Wed, Sep 30, 2015 at 1:03 PM, Steven Dake (stdake) 
wrote:

> I am in favor of this work if it lands before Liberty.
>
> Regards
> -steve
>
>
> On 9/30/15, 10:54 AM, "Jeff Peeler"  wrote:
>
> >The patch I just submitted[1] modifies the syntax of all.yml to use
> >dictionaries, which changes how variables are referenced. The key
> >point being in globals.yml, the overriding of a variable will change
> >from simply specifying the variable to using the dictionary value:
> >
> >old:
> >api_interface: 'eth0'
> >
> >new:
> >network:
> >api_interface: 'eth0'
> >
> >Preliminary feedback on IRC sounded positive, so I'll go ahead and
> >work on finishing the review immediately assuming that we'll go
> >forward. Please ping me if you hate this change so that I can stop the
> >work.
> >
> >[1] https://review.openstack.org/#/c/229535/
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-30 Thread Jastrzebski, Michal
Thanks everyone!

I really appreciate this, and I hope to help make kolla an even better project 
than it is right now (and right now it's pretty cool ;)). We have a great 
community, very diverse and very dedicated. It's a pleasure to work with all of 
you, and let's keep up the great work in the following releases :)

Thank you again,
Michał

> -Original Message-
> From: Steven Dake (stdake) [mailto:std...@cisco.com]
> Sent: Wednesday, September 30, 2015 8:05 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for 
> core
> reviewer
> 
> Michal,
> 
> The vote was unanimous.  Welcome to the Kolla Core Reviewer team.  I have
> added you to the appropriate gerrit group.
> 
> Regards
> -steve
> 
> 
> From: Steven Dake <std...@cisco.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-d...@lists.openstack.org>
> Date: Tuesday, September 29, 2015 at 3:20 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-d...@lists.openstack.org>
> Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core
> reviewer
> 
> 
> 
>   Hi folks,
> 
>   I am proposing Michal for core reviewer.  Consider my proposal as a
> +1 vote.  Michal has done a fantastic job with rsyslog, has done a nice job
> overall contributing to the project for the last cycle, and has really 
> improved his
> review quality and participation over the last several months.
> 
>   Our process requires 3 +1 votes, with no veto (-1) votes.  If you're
> uncertain, it is best to abstain :)  I will leave the voting open for 1 week 
> until
> Tuesday October 6th or until there is a unanimous decision or a  veto.
> 
>   Regards
>   -steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [Sahara] Block Device Driver updates

2015-09-30 Thread Ivan Kolodyazhny
Hi team,

I know that the Block Device Driver (BDD) is not popular in the Cinder community.
The main issues were:

* the driver is not well maintained
* it doesn't meet the minimum feature set
* there is no CI for it
* it's not the Cinder way/it works only when the instance and volume are created
on the same host
* etc

AFAIK, it's widely used in the Sahara & Hadoop communities because it is
fast. I won't discuss the driver's performance in this thread; I'll share my
performance test results once I finish them.

I'm going to share driver updates with you on the issues above.

1) The driver is not well maintained - we are working on it right now and will
fix any issues found. We've got a devstack plugin [1] for this driver.

2) It doesn't meet the minimum feature set - I've filed a blueprint [2] for
it. There are patches that implement the needed features in gerrit [3].

3) There is no CI for it - In the Cinder community, we've got a strong
requirement that each driver must have CI. I absolutely agree with that.
That's why a new infra job is proposed [4].

4) It works only when the instance and volume are created on the same host -
I've filed a blueprint [5], but after testing I found that it's already
implemented by [6].


I hope I've answered all the questions that were asked in IRC and in the
comments for [6]. I will do my best to support this driver, and I will
propose a fix to remove it if the community decides to delete it from the
cinder tree.


[1] https://github.com/openstack/devstack-plugin-bdd
[2]
https://blueprints.launchpad.net/cinder/+spec/block-device-driver-minimum-features-set
[3]
https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/block-device-driver-minimum-features-set,n,z
[4] https://review.openstack.org/228857
[5]
https://blueprints.launchpad.net/cinder/+spec/block-device-driver-via-iscsi
[6] https://review.openstack.org/#/c/200039/


Regards,
Ivan Kolodyazhny
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Release of a neutron sub-project

2015-09-30 Thread Kyle Mestery
On Tue, Sep 29, 2015 at 8:04 PM, Kyle Mestery  wrote:

> On Tue, Sep 29, 2015 at 2:36 PM, Vadivel Poonathan <
> vadivel.openst...@gmail.com> wrote:
>
>> Hi,
>>
>> As per the Sub-Project Release process - I would like to tag and release
>> the following sub-project as part of upcoming Liberty release.
>> The process says to talk to one of the members of the 'neutron-release' group. I
>> couldn’t find a group mail-id for this group. Hence I am sending this email
>> to the dev list.
>>
>> I just have removed the version from setup.cfg and got the patch merged,
>> as specified in the release process. Can someone from the neutron-release
>> group make this sub-project release?
>>
>>
>
> Vlad, I'll do this tomorrow. Find me on IRC (mestery) and ping me there so
I can get your IRC nick in case I have questions.
>
>
It turns out that the networking-ale-omniswitch pypi setup isn't correct;
see [1] for more info and how to correct it. This turned out to be ok, because
it forced me to re-examine the other networking sub-projects and their
pypi setup to ensure consistency, which the thread found here [1] will
resolve.

Once you resolve this ping me on IRC and I'll release this for you.

Thanks!
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075880.html


> Thanks!
> Kyle
>
>
>>
>> ALE Omniswitch
>> Git: https://git.openstack.org/cgit/openstack/networking-ale-omniswitch
>> Launchpad: https://launchpad.net/networking-ale-omniswitch
>> Pypi: https://pypi.python.org/pypi/networking-ale-omniswitch
>>
>> Thanks,
>> Vad
>> --
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Kyle Mestery
Folks:

In trying to release some networking sub-projects recently, I ran into an
issue [1] where I couldn't release some projects due to them not being
registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
but before that can merge, we need to make sure all projects have pypi
registrations in place. The following networking sub-projects do NOT have
pypi registrations in place and need them created following the guidelines
here [3]:

networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow directions to enable
openstackci with "Owner" permissions, which allow for the publishing of
packages to pypi:

networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2] which will then allow the
neutron-release team the ability to release pypi packages for those
packages.

Thanks!
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3]
http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Models and validation for v2

2015-09-30 Thread Kairat Kushaev
Agreed. That's why I am asking about the reasoning. Perhaps we need to
figure out how to get rid of this in glanceclient.
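
For reference, the pattern in question is roughly the following: a simplified
sketch of the schema-fetch-plus-warlock-model flow, not the exact glanceclient
code (http_client here is an assumed requests-like helper):

import warlock

def list_images(http_client):
    # Extra round trip: the image schema is fetched from the server first.
    schema = http_client.get('/v2/schemas/image').json()
    Image = warlock.model_factory(schema)
    resp = http_client.get('/v2/images').json()
    # Constructing the warlock model validates every item of a read-only
    # response against the schema, which is the step being questioned here.
    return [Image(**image) for image in resp['images']]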

Best regards,
Kairat Kushaev

On Wed, Sep 30, 2015 at 7:04 PM, Jay Pipes  wrote:

> On 09/30/2015 09:31 AM, Kairat Kushaev wrote:
>
>> Hi All,
>> In short, I am wondering why we are validating responses from the
>> server when we are doing
>> image-show, image-list, member-list, metadef-namespace-show and other
>> read-only requests.
>>
>> AFAIK, we are building warlock models when receiving responses from the
>> server (see [0]). Each model requires a schema to be fetched from the glance
>> server. It means that each time we are doing image-show, image-list,
>> image-create, member-list and others we are requesting the schema from the
>> server. AFAIU, we are using models to dynamically validate that an object
>> is in accordance with the schema, but is that the case when glanceclient
>> receives responses from the server?
>>
>> Could somebody please explain me the reasoning of this implementation?
>> Have I missed some use cases where validation is required for server
>> responses?
>>
>> I also noticed that we have already faced some issues with such an
>> implementation that lead to "mocking" the validation ([1][2]).
>>
>
> The validation should not be done for responses, only ever for requests (and
> it's unclear that there is value in doing this on the client side at all,
> IMHO).
>
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-30 Thread Steven Dake (stdake)
Michal,

The vote was unanimous.  Welcome to the Kolla Core Reviewer team.  I have added 
you to the appropriate gerrit group.

Regards
-steve


From: Steven Dake <std...@cisco.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 3:20 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core 
reviewer

Hi folks,

I am proposing Michal for core reviewer.  Consider my proposal as a +1 vote.  
Michal has done a fantastic job with rsyslog, has done a nice job overall 
contributing to the project for the last cycle, and has really improved his 
review quality and participation over the last several months.

Our process requires 3 +1 votes, with no veto (-1) votes.  If you're uncertain, 
it is best to abstain :)  I will leave the voting open for 1 week until Tuesday 
October 6th or until there is a unanimous decision or a  veto.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] new yaml format for all.yml, need feedback

2015-09-30 Thread Steven Dake (stdake)
I am in favor of this work if it lands before Liberty.

Regards
-steve


On 9/30/15, 10:54 AM, "Jeff Peeler"  wrote:

>The patch I just submitted[1] modifies the syntax of all.yml to use
>dictionaries, which changes how variables are referenced. The key
>point being in globals.yml, the overriding of a variable will change
>from simply specifying the variable to using the dictionary value:
>
>old:
>api_interface: 'eth0'
>
>new:
>network:
>api_interface: 'eth0'
>
>Preliminary feedback on IRC sounded positive, so I'll go ahead and
>work on finishing the review immediately assuming that we'll go
>forward. Please ping me if you hate this change so that I can stop the
>work.
>
>[1] https://review.openstack.org/#/c/229535/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-30 Thread Shiv Haris
Hi David,

Exactly what Tim mentioned in his email – there are 2 VMs.

The VM that I published has a README file in the home directory when you log in 
with the credentials vagrant/vagrant.

Looking forward to your feedback.

-Shiv



From: Tim Hinrichs [mailto:t...@styra.com]
Sent: Wednesday, September 30, 2015 10:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi David,

There are 2 VM images for Congress that we're working on simultaneously: Shiv's 
and Alex's.

1. Shiv's image is to help new people understand some of the use cases Congress 
was designed for.  The goal is to include a bunch of use cases that we have 
working.

2. Alex's image is the one we'll be using for the hands-on-lab in Tokyo.  This 
one accompanies the Google doc instructions for the Hands On Lab: 
https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub.

It sounds like you might be using Shiv's image with Alex's hands-on-lab 
instructions, so the instructions won't necessarily line up with the image.

Tim



On Wed, Sep 30, 2015 at 9:45 AM KARR, DAVID 
mailto:dk0...@att.com>> wrote:
I think I’m seeing similar errors, but I’m not certain.  With the OVA I 
downloaded last night, when I run “./rejoin-stack.sh”, I get “Couldn’t find 
./stack-screenrc file; have you run stack.sh yet?”

Concerning the original page with setup instructions, at 
https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
, I note that the login user and password are different (probably obvious), 
and obviously the required path to “cd” to.

Also, after starting the VM, the instructions say to run “ifconfig” to get the 
IP address of the VM, and then to ssh to the VM.  This seems odd.  If I’ve 
already done “interact with the console”, then I’m already logged into the 
console.  The instructions also describe how to get to the Horizon client from 
your browser.  I’m not sure what this should say now.

From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Friday, September 25, 2015 3:35 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Thanks Alex, Zhou,

I get errors from congress when I do a re-join. These errors seem to be due to the 
order in which the services are coming up. Hence I still depend on running 
stack.sh after the VM is up and running. Please try out the new VM – also 
advise if you need to add any of your use cases. Also, re-join starts “screen” – 
do we expect the end user to know how to use “screen”?

I do understand that running “stack.sh” takes time to run – but it does not do 
things that appear to be any kind of magic which we want to avoid in order to 
get the user excited.

I have uploaded a new version of the VM please experiment with this and let me 
know:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova

(root: vagrant password: vagrant)

-Shiv



From: Alex Yip [mailto:a...@vmware.com]
Sent: Thursday, September 24, 2015 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I was able to make devstack run without a network connection by disabling 
tempest.  So, I think it uses the loopback IP address, and that does not 
change, so rejoin-stack.sh works without a network at all.



- Alex






From: Zhou, Zhenzan mailto:zhenzan.z...@intel.com>>
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Rejoin-stack.sh works only if its IP was not changed. So using NAT network and 
fixed ip inside the VM can help.

BR
Zhou Zhenzan

From: Alex Yip [mailto:a...@vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM which takes a 
minute or so.  Then I run rejoin-stack.sh which takes just another minute or 
so.  It's really not that bad, and rejoin-stack.sh restores vms and openstack 
state that was running before.



- Alex






From: Shiv Haris mailto:sha...@brocade.com>>
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user 
instantiates the Usecase-VM. However, creating an OVA file is possible only when 
the VM is halted, which means OpenStack is not running and the user will have to 
run devstack again (which is time consuming) when the VM is restarted.

The option is to take a snapsh

[openstack-dev] [kolla] new yaml format for all.yml, need feedback

2015-09-30 Thread Jeff Peeler
The patch I just submitted[1] modifies the syntax of all.yml to use
dictionaries, which changes how variables are referenced. The key
point being in globals.yml, the overriding of a variable will change
from simply specifying the variable to using the dictionary value:

old:
api_interface: 'eth0'

new:
network:
  api_interface: 'eth0'

Preliminary feedback on IRC sounded positive, so I'll go ahead and
work on finishing the review immediately assuming that we'll go
forward. Please ping me if you hate this change so that I can stop the
work.

[1] https://review.openstack.org/#/c/229535/
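
As an illustration of what this means for anyone referencing these variables, here
is a minimal sketch using plain PyYAML and Jinja2 (not code from the Kolla tree,
and assuming Ansible's usual Jinja2 semantics): an override now has to supply the
enclosing dictionary, and templates address the value through it.

    import yaml
    from jinja2 import Template

    old_vars = yaml.safe_load("api_interface: 'eth0'")
    new_vars = yaml.safe_load("network:\n  api_interface: 'eth0'")

    # Old flat reference:
    print(Template("{{ api_interface }}").render(old_vars))          # eth0
    # New dictionary reference:
    print(Template("{{ network.api_interface }}").render(new_vars))  # eth0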

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][glance] glance-stable-maint group refresher

2015-09-30 Thread Mikhail Fedosin
Thank you for your confidence in me, folks! I'll be happy to maintain the
stability of our project and continue working on its improvements.

Best regards,
Mike

On Wed, Sep 30, 2015 at 4:28 PM, Nikhil Komawar 
wrote:

>
>
> On 9/30/15 8:46 AM, Kuvaja, Erno wrote:
>
> Hi all,
>
>
>
> I’d like to propose the following changes to the glance-stable-maint team:
>
> 1)  Removing Zhi Yan Liu from the group; unfortunately he has moved
> on to other ventures and is not actively participating in our operations
> anymore.
>
> +1 (always welcome back)
>
> 2)  Adding Mike Fedosin to the group; Mike has been reviewing and
> backporting patches to glance stable branches and is working with the right
> mindset. I think he would be a great addition to share the workload around.
>
> +1 (definitely)
>
>
>
> Best,
>
> Erno (jokke_) Kuvaja
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
>
> Thanks,
> Nikhil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-30 Thread Sofer Athlan-Guyot
Gilles Dubreuil  writes:

> On 30/09/15 03:43, Rich Megginson wrote:
>> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>>>
>>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
 Gilles Dubreuil  writes:

> On 15/09/15 06:53, Rich Megginson wrote:
>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>> Hi,
>>>
>>> Gilles Dubreuil  writes:
>>>
 A. The 'composite namevar' approach:

  keystone_tenant {'projectX::domainY': ... }
B. The 'meaningless name' approach:

 keystone_tenant {'myproject': name='projectX',
 domain=>'domainY',
 ...}

 Notes:
- Actually using both combined should work too with the domain
 supposedly overriding the name part of the domain.
- Please look at [1] this for some background between the two
 approaches:

 The question
 -
 Decide between the two approaches, the one we would like to
 retain for
 puppet-keystone.

 Why it matters?
 ---
 1. Domain names are mandatory in every user, group or project.
 Besides
 the backward compatibility period mentioned earlier, where no domain
 means using the default one.
 2. Long term impact
 3. Both approaches are not completely equivalent, which has different
 consequences on future usage.
>>> I can't see why they couldn't be equivalent, but I may be missing
>>> something here.
>> I think we could support both.  I don't see it as an either/or
>> situation.
>>
 4. Being consistent
 5. Therefore the community to decide

 Pros/Cons
 --
 A.
>>> I think it's the B: meaningless approach here.
>>>
 Pros
   - Easier names
>>> That's subjective, creating unique and meaningful names don't look
>>> easy
>>> to me.
>> The point is that this allows choice - maybe the user already has some
>> naming scheme, or wants to use a more "natural" meaningful name -
>> rather
>> than being forced into a possibly "awkward" naming scheme with "::"
>>
>>keystone_user { 'heat domain admin user':
>>  name => 'admin',
>>  domain => 'HeatDomain',
>>  ...
>>}
>>
>>keystone_user_role {'heat domain admin user@::HeatDomain':
>>  roles => ['admin']
>>  ...
>>}
>>
 Cons
   - Titles have no meaning!
>> They have meaning to the user, not necessarily to Puppet.
>>
   - Cases where 2 or more resources could exist
>> This seems to be the hardest part - I still cannot figure out how
>> to use
>> "compound" names with Puppet.
>>
   - More difficult to debug
>> More difficult than it is already? :P
>>
   - Titles mismatch when listing the resources (self.instances)

 B.
 Pros
   - Unique titles guaranteed
   - No ambiguity between resource found and their title
 Cons
   - More complicated titles
 My vote
 
 I would love to have the approach A for easier name.
 But I've seen the challenge of maintaining the providers behind the
 curtains and the confusion it creates with name/titles and when
 not sure
 about the domain we're dealing with.
 Also I believe that supporting self.instances consistently with
 meaningful name is saner.
 Therefore I vote B
>>> +1 for B.
>>>
>>> My view is that this should be the advertised way, but the other
>>> method
>>> (meaningless) should be there if the user need it.
>>>
>>> So as far as I'm concerned the two idioms should co-exist.  This
>>> would
>>> mimic what is possible with all puppet resources.  For instance
>>> you can:
>>>
>>> file { '/tmp/foo.bar': ensure => present }
>>>
>>> and you can
>>>
>>> file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
>>> present }
>>>
>>> The two refer to the same resource.
>> Right.
>>
> I disagree, using the name for the title is not creating a composite
> name. The latter requires adding at least another parameter to be part
> of the title.
>
> Also in the case of the file resource, a path/filename is a unique
> name,
> which is not the case of an Openstack user which might exist in several
> domains.
>
> I actually added the meaningful name case in:
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html
>
>
> But that doesn't work very well because without adding the domain to
> the
> name, the following fails:
>
> keystone_tenant {'project_1': domain =

Re: [openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) for core reviewer

2015-09-30 Thread Dave Wilde
+1 from me as well

--
Dave Wilde
Sent with Airmail


On September 30, 2015 at 03:51:48, Jesse Pretorius 
(jesse.pretor...@gmail.com) wrote:

Hi everyone,

I'd like to propose that Steve Lewis (stevelle) be added as a core reviewer.

He has made an effort to consistently keep up with doing reviews in the last 
cycle and always makes an effort to ensure that his responses are made after 
thorough testing where possible. I have found his input to be valuable.

--
Jesse Pretorius
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Liberty RC1 availability in Debian

2015-09-30 Thread Jordan Pittier
On Wed, Sep 30, 2015 at 1:58 PM, Thomas Goirand  wrote:

> Hi everyone!
>
> 1/ Announcement
> ===
>
> I'm pleased to announce, in advance of the final Liberty release, that
> Liberty RC1 not only has been fully uploaded to Debian Experimental, but
> also that the Tempest CI (which I maintain and is a package only CI, no
> deployment tooling involved), shows that it's also fully installable and
> working. There are still some failures, but these are, I am guessing, not
> due to problems in the packaging, but rather some Tempest setup problems
> which I intend to address.
>
> If you want to try out Liberty RC1 in Debian, you can either try it
> using Debian Sid + Experimental (recommended), or use the Jessie
> backport repository built out of Mirantis Jenkins server. Repositories
> are listed at this address:
>
> http://liberty-jessie.pkgs.mirantis.com/
>
> 2/ Quick note about Liberty Debian repositories
> ===
>
> During Debconf 15, someone reported that the fact the Jessie backports
> are on a Mirantis address is disturbing.
>
> Note that, while the above really is a non-Debian (ie: non official
> private) repository, it only contains unmodified source packages, only
> just rebuilt for Debian Stable. Please don't be afraid by the tainted
> "mirantis.com" domain name, I could have as well set a debian.net
> address (which has been on my todo list for a long time). But it is
> still Debian only packages. Everything there is straight out of Debian
> repositories, nothing added, modified or removed.
>
> I believe that Liberty release in Sid, is currently working very well,
> but I haven't tested it as much as the Jessie backport.
>
> Started with the Kilo release, I have been uploading packages to the
> official Debian backports repositories. I will do so as well for the
> Liberty release, after the final release is out, and after Liberty is
> fully migrated to Debian Testing (the rule for stable-backports is that
> packages *must* be available in Testing *first*, in order to provide an
> upgrade path). So I do expect Liberty to be available from
> jessie-backports maybe a few weeks *after* the final Liberty release.
> Before that, use the unofficial Debian repositories.
>
> 3/ Horizon dependencies still in NEW queue
> ==
>
> It is also worth noting that Horizon hasn't been fully FTP master
> approved, and that some packages are still remaining in the NEW queue.
> This isn't the first release with such an issue with Horizon. I hope
> that 1/ FTP masters will approve the remaining packages soon 2/ for
> Mitaka, the Horizon team will care about freezing external dependencies
> (ie: new Javascript objects) earlier in the development cycle. I am
> hereby proposing that the Horizon 3rd party dependency freeze happens
> not later than Mitaka b2, so that we don't experience it again for the
> next release. Note that this problem affects both Debian and Ubuntu, as
> Ubuntu syncs dependencies from Debian.
>
> 5/ New packages in this release
> ===
>
> You may have noticed that the below packages are now part of Debian:
> - Manila
> - Aodh
> - ironic-inspector
> - Zaqar (this one is still in the FTP masters NEW queue...)
>
> I have also packaged a few more, but there are still blockers:
> - Congress (antlr version is too low in Debian)
> - Mistral
>
> 6/ Roadmap for Liberty final release
> 
>
> Next on my roadmap for the final release of Liberty, is finishing to
> upgrade the remaining components to the latest version tested in the
> gate. It has been done for most OpenStack deliverables, but about a
> dozen are still in the lowest version supported by our global-requirements.
>
> There's also some remaining work:
> - more Neutron drivers
> - Gnocchi
> - Address the remaining Tempest failures, and widen the scope of tests
> (add Sahara, Heat, Swift and others to the tested projects using the
> Debian package CI)
>
> I of course welcome everyone to test Liberty RC1 before the final
> release, and report bugs on the Debian bug tracker if needed.
>
> Also note that the Debian packaging CI is fully free software, and part
> of Debian as well (you can look into the openstack-meta-packages package
> in git.debian.org, and in openstack-pkg-tools). Contributions in this
> field are also welcome.
>
> 7/ Thanks to Canonical & every OpenStack upstream projects
> ==
>
> I'd like to point out that, even though I did the majority of the work
> myself, for this release, there was a way more collaboration with
> Canonical on the dependency chain. Indeed, for this Liberty release,
> Canonical decided to upload every dependency to Debian first, and then
> only sync from it. So a big thanks to the Canonical server team for
> doing community work with me together. I just hope we could push this
> even further, especially trying to hav

Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-30 Thread Tim Hinrichs
Hi David,

There are 2 VM images for Congress that we're working on simultaneously:
Shiv's and Alex's.

1. Shiv's image is to help new people understand some of the use cases
Congress was designed for.  The goal is to include a bunch of use cases
that we have working.

2. Alex's image is the one we'll be using for the hands-on-lab in Tokyo.
This one accompanies the Google doc instructions for the Hands On Lab:
https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
.

It sounds like you might be using Shiv's image with Alex's hands-on-lab
instructions, so the instructions won't necessarily line up with the image.

Tim



On Wed, Sep 30, 2015 at 9:45 AM KARR, DAVID  wrote:

> I think I’m seeing similar errors, but I’m not certain.  With the OVA I
> downloaded last night, when I run “./rejoin-stack.sh”, I get “Couldn’t find
> ./stack-screenrc file; have you run stack.sh yet?”
>
>
>
> Concerning the original page with setup instructions, at
> https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
> , I note that the login user and password are different (probably
> obvious), and obviously the required path to “cd” to.
>
>
>
> Also, after starting the VM, the instructions say to run “ifconfig” to get
> the IP address of the VM, and then to ssh to the VM.  This seems odd.  If
> I’ve already done “interact with the console”, then I’m already logged into
> the console.  The instructions also describe how to get to the Horizon
> client from your browser.  I’m not sure what this should say now.
>
>
>
> *From:* Shiv Haris [mailto:sha...@brocade.com]
> *Sent:* Friday, September 25, 2015 3:35 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> Thanks Alex, Zhou,
>
>
>
> I get errors from congress when I do a re-join. These errors seem to be due
> to the order in which the services are coming up. Hence I still depend on
> running stack.sh after the VM is up and running. Please try out the new VM
> – also advise if you need to add any of your use cases. Also re-join starts
> “screen” – do we expect the end user to know how to use “screen”.
>
>
>
> I do understand that running “stack.sh” takes time to run – but it does
> not do things that appear to be any kind of magic which we want to avoid in
> order to get the user excited.
>
>
>
> I have uploaded a new version of the VM please experiment with this and
> let me know:
>
>
>
> http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova
>
>
>
> (root: vagrant password: vagrant)
>
>
>
> -Shiv
>
>
>
>
>
>
>
> *From:* Alex Yip [mailto:a...@vmware.com ]
> *Sent:* Thursday, September 24, 2015 5:09 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> I was able to make devstack run without a network connection by disabling
> tempest.  So, I think it uses the loopback IP address, and that does not
> change, so rejoin-stack.sh works without a network at all.
>
>
>
> - Alex
>
>
>
>
> --
>
> *From:* Zhou, Zhenzan 
> *Sent:* Thursday, September 24, 2015 4:56 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> Rejoin-stack.sh works only if its IP was not changed. So using NAT network
> and fixed ip inside the VM can help.
>
>
>
> BR
>
> Zhou Zhenzan
>
>
>
> *From:* Alex Yip [mailto:a...@vmware.com ]
> *Sent:* Friday, September 25, 2015 01:37
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> I have been using images, rather than snapshots.
>
>
>
> It doesn't take that long to start up.  First, I boot the VM which takes a
> minute or so.  Then I run rejoin-stack.sh which takes just another minute
> or so.  It's really not that bad, and rejoin-stack.sh restores vms and
> openstack state that was running before.
>
>
>
> - Alex
>
>
>
>
> --
>
> *From:* Shiv Haris 
> *Sent:* Thursday, September 24, 2015 10:29 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Congress] Congress Usecases VM
>
>
>
> Hi Congress folks,
>
>
>
> I am looking for ideas. We want the Openstack to be running when the user
> instantiates the Usecase-VM. However creating a OVA file is possible only
> when the VM is halted which means Openstack is not running and the user
> will have to run devstack again (which is time consuming) when the VM is
> restarted.
>
>
>
> The option is to take a snapshot. It appears that taking a snapshot of the
> VM and using it in another setup is not very straight forward. It involves
> modifying the .vbox file and seems that it is prone to user errors. I am
> leaning towards halting the machine and generating an OVA file.
>
>
>
> I am looking for s

Re: [openstack-dev] Infra needs Gerrit developers

2015-09-30 Thread Wayne Warren
I am definitely interested in helping out with this as I feel the pain
of gerrit, particularly around text entry...

Not a huge fan of Java but might be able to take on some low-hanging
fruit once I've had a chance to tackle the JJB 2.0 API.

Maybe this is the wrong place to discuss, but is there any chance the
Gerrit project might consider a move toward Clojure as its primary
language? I suspect this could be done in a way that slowly deprecates
the use of Java over time but would need to spend time investigating
the current Gerrit architecture before making any strong claims about
this.

On Tue, Sep 29, 2015 at 3:30 PM, Zaro  wrote:
> Hello All,
>
> I believe you are all familiar with Gerrit.  Our community relies on it
> quite heavily and it is one of the most important applications in our CI
> infrastructure. I work on the OpenStack-infra team and I've been hacking on
> Gerrit for a while. I'm the infra team's sole Gerrit developer. I also test
> all our Gerrit upgrades prior to infra upgrading Gerrit.  There are many
> Gerrit feature and bug fix requests coming from the OpenStack community;
> however, due to limited resources it has been a challenge to meet those
> requests.
>
> I've been fielding some of those requests and trying to make Gerrit better
> for OpenStack.  I was wondering whether there are any other folks in our
> community who might also like to hack on a large scale java application
> that's being used by many corporations and open source projects in the
> world.  If so this is an opportunity for you to contribute.  I'm hoping to
> get more OpenStackers involved with the Gerrit community so we can
> collectively make OpenStack better.  If you would like to get involved let
> the openstack-infra folks know[1] and we will try help get you going.
>
> For instance our last attempt to upgrade Gerrit failed due to a bug[2]
> that makes repos unusable on a diff timeout.  This bug is still not fixed,
> so a nice way to contribute is to help us fix things like this so we can
> continue to use newer versions of Gerrit.
>
> [1] in #openstack-infra or on openstack-in...@lists.openstack.org
> [2] https://code.google.com/p/gerrit/issues/detail?id=3424
>
>
> Thank You.
> - Khai (AKA zaro)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [election][TC] TC Candidacy

2015-09-30 Thread Barrett, Carol L
Mike - Congrats on your new position! Looking forward to working with you.
Carol

-Original Message-
From: Mike Perez [mailto:thin...@gmail.com] 
Sent: Wednesday, September 30, 2015 1:55 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [election][TC] TC Candidacy

Hi all!

I'm announcing my candidacy for a position on the OpenStack Technical Committee.

On October 1st I will be employed by the OpenStack Foundation as a 
Cross-Project Developer Coordinator to help bring focus and support to 
cross-project initiatives within the cross-project specs, Def Core, The Product 
Working group, etc.

I feel the items below have enabled others across this project to strive for 
quality. If you would all have me as a member of the Technical Committee, you 
can help me to enable more quality work in OpenStack.

* I have been working in OpenStack since 2010. I spent a good amount of my time
  working on OpenStack in my free time before being paid full time to work on
  it. It has been an important part of my life, and rewarding to see what we
  have all achieved together.

* I was PTL for the Cinder project in the Kilo and Liberty releases for two
  cross-project reasons:
  * Third party continuous integration (CI).
  * Stop talking about rolling upgrades, and actually make it happen for
operators.

* I led the effort in bringing third party continuous integration to the
  Cinder project for more than 60 different drivers. [1]
  * I removed 25 different storage drivers from Cinder to bring quality to the
project to ensure what was in the Kilo release would work for operators.
I did what I believed was right, regardless of whether it would cost me
re-election for PTL [2].
  * In my conversations with other projects, this has enabled others to
follow the same effort. Continuing this trend of quality cross-project will
be my next focus.

* During my first term as PTL for Cinder, the team (with much respect to Thang
  Pham) worked on an effort to end the rolling upgrade problem, not just for
  Cinder, but for *all* projects.
  * First step was making databases independent from services via Oslo
versioned objects.
  * In Liberty we have a solution coming that helps with RPC versioned messages
to allow upgrading services independently.

* I have attempted to help with diversity in our community.
  * Helped lead our community to raise $17,403 for the Ada Initiative [3],
which was helping address gender-diversity with a focus in open source.
  * For the Vancouver summit, I helped bring in the ally skills workshops from
the Ada Initiative, so that our community can continue to be a welcoming
environment [4].

* Within the Cinder team, I have enabled all to provide good documentation for
  important items in our release notes in Kilo [5] and Liberty [6].
  * Other projects have reached out to me after Kilo feeling motivated for this
same effort. I've explained in the August 2015 Operators midcycle sprint
that I will make this a cross-project effort in order to provide better
communication to our operators and users.

* I started an OpenStack Dev List summary in the OpenStack Weekly Newsletter
  (What you need to know from the developer's list), in order to enable others
  to keep up with the dev list on important cross-project information. [7][8]

* I created the Cinder v2 API which has brought consistency in
  request/responses with other OpenStack projects.
  * I documented Cinder v1 and Cinder v2 API's. Later on I created the Cinder
API reference documentation content. The attempt here was to enable others
to have somewhere to start, to continue quality documentation with
continued developments.

Please help me to do more positive work in this project. It would be an honor 
to be member of your technical committee.


Thank you,
Mike Perez

Official Candidacy: https://review.openstack.org/#/c/229298/2
Review History: https://review.openstack.org/#/q/reviewer:170,n,z
Commit History: https://review.openstack.org/#/q/owner:170,n,z
Stackalytics: http://stackalytics.com/?user_id=thingee
Foundation: https://www.openstack.org/community/members/profile/4840
IRC Freenode: thingee
Website: http://thing.ee


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054614.html
[2] - 
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:cinder-driver-removals,n,z
[3] - 
http://lists.openstack.org/pipermail/openstack-dev/2014-October/047892.html
[4] - http://lists.openstack.org/pipermail/openstack-dev/2015-May/064156.html
[5] - 
https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#OpenStack_Block_Storage_.28Cinder.29
[6] - 
https://wiki.openstack.org/wiki/ReleaseNotes/Liberty#OpenStack_Block_Storage_.28Cinder.29
[7] - 
http://www.openstack.org/blog/2015/09/openstack-community-weekly-newsletter-sept-12-18/
[8] - 
http://www.openstack.org/blog/2015/09/openstack-weekly-community-newsletter-sept-19-25/

_

Re: [openstack-dev] [cinder] The Absurdity of the Milestone-1 Deadline for Drivers

2015-09-30 Thread Ben Swartzlander

On 09/30/2015 12:11 PM, Mike Perez wrote:

On 13:29 Sep 28, Ben Swartzlander wrote:

I've always thought it was a bit strange to require new drivers to
merge by milestone 1. I think I understand the motivations of the
policy. The main motivation was to free up reviewers to review "other
things" and this policy guarantees that for 75% of the release
reviewers don't have to review new drivers. The other motivation was
to prevent vendors from turning up at the last minute with crappy
drivers that needed a ton of work, by encouraging them to get started
earlier, or forcing them to wait until the next cycle.

I believe that the deadline actually does more harm than good.

First of all, to those that don't want to spend time on driver
reviews, there are other solutions to that problem. Some people do
want to review the drivers, and those who don't can simply ignore
them and spend time on what they care about. I've heard people who
spend time on driver reviews say that the milestone-1 deadline
doesn't mean they spend less time reviewing drivers overall, it just
all gets crammed into the beginning of each release. It should be
obvious that setting a deadline doesn't actually affect the amount of
reviewer effort, it just concentrates that effort.

Some bad assumptions here:

* Nobody said they didn't want to review drivers.

* "Crammed" is completely an incorrect word here. An example with last release,
   we only had 3/17 drivers trying to get in during the last week of the
   milestone [1]. I don't think you're very active in Cinder to really judge how
   well the team has worked together to get these drivers in a timely way with
   vendors.


Those are fair points. No argument. I think I managed to obscure my main 
point with too many assumptions and rhetoric though.


Let me restate my argument as simply as possible.

Drivers are relatively low risk to the project. They're a lot of work to 
review due to the size, but the risk of missing bugs is small because 
those bugs will affect only the users who choose to deploy the given 
driver. Also drivers are well understood, so the process of reviewing 
them is straightforward.


New features are high risk. Even a small change to the manager or API 
code can have dramatic impact on all users of Cinder. Larger changes 
that touch multiple modules in different areas must be reviewed by 
people who understand all of Cinder just to get basic assurance that 
they do what they say. Finding bugs in these kinds of changes is tricky. 
Reading the code only gets you so far, and automated testing only 
scratches the surface. You have to run the code and try it out. These 
things take time and core team time is a limited and precious resource.


Now, if you have some high risk changes and some low risk changes, which 
do you think it makes sense to work on early in the release, and which 
do you think is safe to merge at the last minute? I asked myself that 
question and decided that I'd rather do high risk stuff early and low 
risk stuff later. Based on that belief, I'm making a suggestion to move 
the deadlines around.




The argument about crappy code is also a lot weaker now that there
are CI requirements which force vendors to spend much more time up
front and clear a much higher quality bar before the driver is even
considered for merging. Drivers that aren't ready for merge can
always be deferred to a later release, but it seems weird to defer
drivers that are high quality just because they're submitted during
milestones 2 or 3.

"Crappy code" ... I don't know where that's coming from. If anything, CI has
helped get the drivers in faster to get rid of what you call "cramming".



That's good. If that's true, then I would think it supports an argument 
that the deadlines are unnecessary because the underlying problem 
(limited reviewer time) has been solved.




All the the above is just my opinion though, and you shouldn't care
about my opinions, as I don't do much coding and reviewing in Cinder.
There is a real reason I'm writing this email...

In Manila we added some major new features during Liberty. All of the
new features merged in the last week of L-3. It was a nightmare of
merge conflicts and angry core reviewers, and many contributors
worked through a holiday weekend to bring the release together. While
asking myself how we can avoid such a situation in the future, it
became clear to me that bigger features need to merge earlier -- the
earlier the better.

When I look at the release timeline, and ask myself when is the best
time to merge new major features, and when is the best time to merge
new drivers, it seems obvious that *features* need to happen early
and drivers should come *later*. New major features require FAR more
review time than new drivers, and they require testing, and even
after they merge they cause merge conflicts that everyone else has to
deal with. Better that that work happens in milestones 1 and 2 than 
right before feature freeze. New dri

Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-30 Thread Harm Weites

Looks like he passed 3 already, but here's another +1 :)

Steven Dake (stdake) schreef op 2015-09-30 00:20:

Hi folks,

I am proposing Michal for core reviewer. Consider my proposal as a +1
vote. Michal has done a fantastic job with rsyslog, has done a nice
job overall contributing to the project for the last cycle, and has
really improved his review quality and participation over the last
several months.

Our process requires 3 +1 votes, with no veto (-1) votes. If you're
uncertain, it is best to abstain :) I will leave the voting open for 1
week until Tuesday October 6th or until there is a unanimous decision
or a veto.

Regards
-steve
 
 
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) for core reviewer

2015-09-30 Thread Kevin Carter
+1 from me


--

Kevin Carter
IRC: cloudnull


From: Jesse Pretorius 
Sent: Wednesday, September 30, 2015 3:51 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) 
for core reviewer

Hi everyone,

I'd like to propose that Steve Lewis (stevelle) be added as a core reviewer.

He has made an effort to consistently keep up with doing reviews in the last 
cycle and always makes an effort to ensure that his responses are made after 
thorough testing where possible. I have found his input to be valuable.

--
Jesse Pretorius
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-30 Thread KARR, DAVID
I think I'm seeing similar errors, but I'm not certain.  With the OVA I 
downloaded last night, when I run "./rejoin-stack.sh", I get "Couldn't find 
./stack-screenrc file; have you run stack.sh yet?"

Concerning the original page with setup instructions, at 
https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
, I note that the login user and password are different (probably obvious), 
and obviously the required path to "cd" to.

Also, after starting the VM, the instructions say to run "ifconfig" to get the 
IP address of the VM, and then to ssh to the VM.  This seems odd.  If I've 
already done "interact with the console", then I'm already logged into the 
console.  The instructions also describe how to get to the Horizon client from 
your browser.  I'm not sure what this should say now.

From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Friday, September 25, 2015 3:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Thanks Alex, Zhou,

I get errors from congress when I do a re-join. These errors seem to be due to the 
order in which the services are coming up. Hence I still depend on running 
stack.sh after the VM is up and running. Please try out the new VM - also 
advise if you need to add any of your use cases. Also, re-join starts "screen" - 
do we expect the end user to know how to use "screen"?

I do understand that running "stack.sh" takes time to run - but it does not do 
things that appear to be any kind of magic which we want to avoid in order to 
get the user excited.

I have uploaded a new version of the VM please experiment with this and let me 
know:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova

(root: vagrant password: vagrant)

-Shiv



From: Alex Yip [mailto:a...@vmware.com]
Sent: Thursday, September 24, 2015 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I was able to make devstack run without a network connection by disabling 
tempest.  So, I think it uses the loopback IP address, and that does not 
change, so rejoin-stack.sh works without a network at all.



- Alex






From: Zhou, Zhenzan mailto:zhenzan.z...@intel.com>>
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Rejoin-stack.sh works only if its IP was not changed. So using NAT network and 
fixed ip inside the VM can help.

BR
Zhou Zhenzan

From: Alex Yip [mailto:a...@vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM which takes a 
minute or so.  Then I run rejoin-stack.sh which takes just another minute or 
so.  It's really not that bad, and rejoin-stack.sh restores vms and openstack 
state that was running before.



- Alex






From: Shiv Haris mailto:sha...@brocade.com>>
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user 
instantiates the Usecase-VM. However, creating an OVA file is possible only when 
the VM is halted, which means OpenStack is not running and the user will have to 
run devstack again (which is time consuming) when the VM is restarted.

The option is to take a snapshot. It appears that taking a snapshot of the VM 
and using it in another setup is not very straightforward. It involves 
modifying the .vbox file and seems to be prone to user error. I am 
leaning towards halting the machine and generating an OVA file.

I am looking for suggestions 

Thanks,

-Shiv


From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all I apologize for not making it at the meeting yesterday, could not 
cut short another overlapping meeting.

Also, Tim thanks for the feedback. I have addressed some of the issues you 
posed however I am still working on some of the subtle issues raised. Once I 
have addressed all I will post another VM by end of the week.

-Shiv


From: Tim Hinrichs [mailto:t...@styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's 
going on MUCH more quickly.

Some thoughts.
- The i

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Thanks everyone who has provided feedback on this thread. The good news is that 
most of what has been asked for from Magnum is actually in scope already, and 
some of it has already been implemented. We never aimed to be a COE deployment 
service. That happens to be a necessity to achieve our more ambitious goal: We 
want to provide a compelling Containers-as-a-Service solution for OpenStack 
clouds in a way that offers maximum leverage of what’s already in OpenStack, 
while giving end users the ability to use their favorite tools to interact with 
their COE of choice, with the multi-tenancy capability we expect from all 
OpenStack services, and simplified integration with a wealth of existing 
OpenStack services (Identity, Orchestration, Images, Networks, Storage, etc.).

The area where we have disagreement is whether the features offered for the k8s COE 
should be mirrored in other COEs. We have not attempted to do that yet, and my 
suggestion is to continue resisting that temptation because it is not aligned 
with our vision. We are not here to re-invent container management as a hosted 
service. Instead, we aim to integrate prevailing technology, and make it work 
great with OpenStack. For example, adding docker-compose capability to Magnum 
is currently out-of-scope, and I think it should stay that way. With that said, 
I’m willing to have a discussion about this with the community at our upcoming 
Summit.

An argument could be made for feature consistency among various COE options 
(Bay Types). I see this as a relatively low value pursuit. Basic features like 
integration with OpenStack Networking and OpenStack Storage services should be 
universal. Whether you can present a YAML file for a bay to perform internal 
orchestration is not important in my view, as long as there is a prevailing way 
of addressing that need. In the case of Docker Bays, you can simply point a 
docker-compose client at it, and that will work fine.

Thanks,

Adrian

> On Sep 30, 2015, at 8:58 AM, Devdatta Kulkarni 
>  wrote:
> 
> +1 Hongbin.
> 
> From perspective of Solum, which hopes to use Magnum for its application 
> container scheduling
> requirements, deep integration of COEs with OpenStack services like Keystone 
> will be useful.
> Specifically, I am thinking that it will be good if Solum can depend on 
> Keystone tokens to deploy 
> and schedule containers on the Bay nodes instead of having to use COE 
> specific credentials. 
> That way, container resources will become first class components that can be 
> monitored 
> using Ceilometer, access controlled using Keystone, and managed from within 
> Horizon.
> 
> Regards,
> Devdatta
> 
> 
> From: Hongbin Lu 
> Sent: Wednesday, September 30, 2015 9:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>   
> 
> +1 from me as well.
>  
> I think what makes Magnum appealing is the promise to provide 
> container-as-a-service. I see coe deployment as a helper to achieve the 
> promise, instead of  the main goal.
>  
> Best regards,
> Hongbin
>  
> 
> From: Jay Lau [mailto:jay.lau@gmail.com]
> Sent: September-29-15 10:57 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>  
> 
> 
> +1 to Egor, I think that the final goal of Magnum is container as a service 
> but not coe deployment as a service. ;-)
> 
> Especially we are also working on Magnum UI, the Magnum UI should export some 
> interfaces to enable end user can create container applications but not only 
> coe deployment.
> 
> I hope that the Magnum can be treated as another "Nova" which is focusing on 
> container service. I know it is difficult to unify all of the concepts in 
> different coe (k8s has pod, service, rc, swarm only has container, nova only 
> has VM, PM with different hypervisors), but this deserves some deep dive and 
> thinking to see how we can move forward. 
> 
> 
> 
>  
> 
> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz  wrote:
> definitely ;), but there are some thoughts on Tom’s email.
> 
> I agree that we shouldn't reinvent APIs, but I don’t think Magnum should only 
> focus on deployment (I feel we will become another Puppet/Chef/Ansible module 
> if we do it ):)
> I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the 
> OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to step in to 
> Kub/Mesos/Swarm communities for that.
> 
> —
> Egor
> 
> From: Adrian Otto 
> mailto:adrian.o...@rackspace.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Tuesday, September 29, 2015 at 08:44
> To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> 
> This is definitely a topic we should cover in Tokyo.
> 
> On Sep 29,

Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-30 Thread Jeff Peeler
On Tue, Sep 29, 2015 at 6:20 PM, Steven Dake (stdake)  wrote:
> Hi folks,
>
> I am proposing Michal for core reviewer.  Consider my proposal as a +1 vote.
> Michal has done a fantastic job with rsyslog, has done a nice job overall
> contributing to the project for the last cycle, and has really improved his
> review quality and participation over the last several months.

Agreed, +1!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] The Absurdity of the Milestone-1 Deadline for Drivers

2015-09-30 Thread Mike Perez
On 13:29 Sep 28, Ben Swartzlander wrote:
> I've always thought it was a bit strange to require new drivers to
> merge by milestone 1. I think I understand the motivations of the
> policy. The main motivation was to free up reviewers to review "other
> things" and this policy guarantees that for 75% of the release
> reviewers don't have to review new drivers. The other motivation was
> to prevent vendors from turning up at the last minute with crappy
> drivers that needed a ton of work, by encouraging them to get started
> earlier, or forcing them to wait until the next cycle.
> 
> I believe that the deadline actually does more harm than good.
> 
> First of all, to those that don't want to spend time on driver
> reviews, there are other solutions to that problem. Some people do
> want to review the drivers, and those who don't can simply ignore
> them and spend time on what they care about. I've heard people who
> spend time on driver reviews say that the milestone-1 deadline
> doesn't mean they spend less time reviewing drivers overall, it just
> all gets crammed into the beginning of each release. It should be
> obvious that setting a deadline doesn't actually affect the amount of
> reviewer effort, it just concentrates that effort.

Some bad assumptions here:

* Nobody said they didn't want to review drivers.

* "Crammed" is completely an incorrect word here. An example with last release,
  we only had 3/17 drivers trying to get in during the last week of the
  milestone [1]. I don't think you're very active in Cinder to really judge how
  well the team has worked together to get these drivers in a timely way with
  vendors.

> The argument about crappy code is also a lot weaker now that there
> are CI requirements which force vendors to spend much more time up
> front and clear a much higher quality bar before the driver is even
> considered for merging. Drivers that aren't ready for merge can
> always be deferred to a later release, but it seems weird to defer
> drivers that are high quality just because they're submitted during
> milestones 2 or 3.

"Crappy code" ... I don't know where that's coming from. If anything, CI has
helped get the drivers in faster to get rid of what you call "cramming".

> All the the above is just my opinion though, and you shouldn't care
> about my opinions, as I don't do much coding and reviewing in Cinder.
> There is a real reason I'm writing this email...
>
> In Manila we added some major new features during Liberty. All of the
> new features merged in the last week of L-3. It was a nightmare of
> merge conflicts and angry core reviewers, and many contributors
> worked through a holiday weekend to bring the release together. While
> asking myself how we can avoid such a situation in the future, it
> became clear to me that bigger features need to merge earlier -- the
> earlier the better.
> 
> When I look at the release timeline, and ask myself when is the best
> time to merge new major features, and when is the best time to merge
> new drivers, it seems obvious that *features* need to happen early
> and drivers should come *later*. New major features require FAR more
> review time than new drivers, and they require testing, and even
> after they merge they cause merge conflicts that everyone else has to
> deal with. Better that that work happens in milestones 1 and 2 than
> right before feature freeze. New drivers can come in right before
> feature freeze as far as I'm concerned. Drivers don't cause merge
> conflicts, and drivers don't need huge amounts of testing (presumably
> the CI system ensure some level of quality).
> 
> It also occurs to me that new features which require driver
> implementation (hello replication!) *really* should go in during the
> first milestone so that drivers have time to implement the feature
> during the same release.

I disagree. You're under the assumption that there is an intention of getting
a feature being worked on in Liberty to be ready for Liberty.

No.

I've expressed this numerous times at the Cinder midcycle sprint you attended
that I did not want to see drivers working on replication in their driver.

> So I'm asking the Cinder core team to reconsider the milestone-1
> deadline for drivers, and to change it to a deadline for new major
> features (in milestone-1 or milestone-2), and to allow drivers to
> merge whenever*. This is the same pitch I'll be making to the Manila
> core team. I've been considering this idea for a few weeks now but I
> wanted to wait until after PTL elections to suggest it here.

During the release, a feature can be worked on, but adoption in drivers can be
difficult. Since just about every driver is behind on potential features, I'd
rather see driver maintainers focused on those features that have been ready
for some time, not what we just merged a week ago.

There is no good reason to rush and sacrifice quality for the sake of vendors
wanting the latest and greatest feature. We're more mature than that.


Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-30 Thread Ryan Hallisey
Way to go Michal! +1

-Ryan

- Original Message -
From: "Swapnil Kulkarni" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, September 30, 2015 5:00:27 AM
Subject: Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for 
core reviewer



On Wed, Sep 30, 2015 at 3:50 AM, Steven Dake (stdake) < std...@cisco.com > 
wrote: 



Hi folks, 

I am proposing Michal for core reviewer. Consider my proposal as a +1 vote. 
Michal has done a fantastic job with rsyslog, has done a nice job overall 
contributing to the project for the last cycle, and has really improved his 
review quality and participation over the last several months. 

Our process requires 3 +1 votes, with no veto (-1) votes. If you're uncertain, it 
is best to abstain :) I will leave the voting open for 1 week until Tuesday 
October 6th or until there is a unanimous decision or a veto. 

+1 :) 




Regards 
-steve 

__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-09-30 Thread Chris Friesen

On 09/30/2015 06:03 AM, Koniszewski, Pawel wrote:

From: Murray, Paul (HP Cloud) [mailto:pmur...@hpe.com]



- migrate suspended instances


I'm not sure I understand this correctly. When a user calls 'nova suspend' I
thought that it actually "hibernates" the VM and saves its memory state to disk
[2][3]. In such a case there is nothing to "live" migrate - shouldn't
cold-migration/resize solve this problem?


A "suspend" currently uses a libvirt API (dom.managedSave()) that results in the 
use of a libvirt-managed hibernation file. (So nova doesn't know the filename.) 
 I've only looked at it briefly, but it seems like it should be possible to 
switch to virDomainSave(), which would let nova specify the file to save, and 
therefore allow cold migration of the suspended instance.


Chris
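
For reference, a minimal sketch of the two libvirt calls being compared, using
the libvirt Python bindings; the domain name and file path are made up for
illustration, and this is not Nova code:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')  # hypothetical domain name

    USE_MANAGED_SAVE = True  # toggle to compare the two approaches

    if USE_MANAGED_SAVE:
        # What suspend does today: libvirt picks (and hides) the location of
        # the saved-memory file, so the caller never learns the filename.
        dom.managedSave(0)
    else:
        # The alternative described above: the caller chooses the file, so it
        # could be copied to another host and restored there.
        state_file = '/var/lib/nova/instances/suspend-state.save'  # made-up path
        dom.save(state_file)
        conn.restore(state_file)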

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election] [TC] TC Candidacy

2015-09-30 Thread Edgar Magana
Hello Developers and Community,

I would like to submit my candidacy for the TC election.

I have been involved in OpenStack activities since April 2011, when I became
one of the founders of the networking project known as Quantum at that
time. I have been a core reviewer for Neutron since November 2011. I helped
to create the networking guide and contributed to multiple chapters. I have
spoken at many OpenStack summits, meet-ups and conferences. I have been very
active in the Operators meet-up moderating multiple sessions. In few words
"I love to evangelize OpenStack".

Over the last four years I have gained experience in project management and leadership
from different team perspectives: technology vendors, a networking start-up,
and over a year as an OpenStack operator. This last one has been very interesting
compared with my previous ones because of my focus on a production-ready
cloud powered by OpenStack. Running a high-scale production cloud is giving me
a different perspective on how the platform should be delivered as a product
ready to use and that is what I will be bringing to the TC and to all project
members and PTLs.

As a TC member my main focus will be to close any existing gap between the
development teams and their customers, who are the OpenStack users and
operators, among others. In my operator role I have validated documentation and
best practices on OpenStack deployment and operations, I have provided all
possible feedback, and I want to do more in this area. I believe we can make
OpenStack better if we open the TC to members who have deployed pure OpenStack
with no vendor-specific guidelines or any specific distribution influence.

I strongly believe we will make OpenStack more solid and integrated. I will
work as a cross-project liaison in order to reach this goal. I will continue my
work of evangelizing the newest OpenStack projects, guiding them toward the
best adoption process by the community, and I will also help them achieve the
best integration with other open-source technologies.

No matter the result of the election, I will continue my work on
OpenStack with passion and courage. This is the best project ever and it can
be even better. Let's inject some fresh ideas into the TC and keep making
this platform the de-facto cloud management system for all operators, whether
they run public, private or hybrid clouds.

Thank you so much for reading and considering my humble aspiration to the TC!

--
Edgar Magana
IRC: emagana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Models and validation for v2

2015-09-30 Thread Jay Pipes

On 09/30/2015 09:31 AM, Kairat Kushaev wrote:

Hi All,
In short terms, I am wondering why we are validating responses from
server when we are doing
image-show, image-list, member-list, metadef-namespace-show and other
read-only requests.

AFAIK, we are building warlock models when receiving responses from
server (see [0]). Each model requires schema to be fetched from glance
server. It means that each time we are doing image-show, image-list,
image-create, member-list and others we are requesting schema from the
server. AFAIU, we are using models to dynamically validate that object
is in accordance with schema but is it the case when glance receives
responses from the server?

Could somebody please explain to me the reasoning behind this implementation?
Have I missed some use cases where validation is required for server
responses?

I also noticed that we already faced some issues with such an
implementation that lead to "mocking" validation ([1][2]).


The validation should not be done for responses, only ever requests (and 
it's unclear that there is value in doing this on the client side at 
all, IMHO).
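
To make that concrete, here is a rough sketch of the request-only approach
(endpoint, auth and schema handling are simplified assumptions, not the actual
glanceclient code): fetch the image schema once, validate outgoing requests
against it, and return server responses as plain dicts:

import jsonschema
import requests

GLANCE = 'http://glance.example.com:9292'  # placeholder endpoint, auth omitted

_schema_cache = {}

def get_schema(name):
    # Fetch and cache the schema instead of re-fetching it on every call.
    if name not in _schema_cache:
        _schema_cache[name] = requests.get(
            '%s/v2/schemas/%s' % (GLANCE, name)).json()
    return _schema_cache[name]

def create_image(body):
    # Validate only what we send to the server.
    jsonschema.validate(body, get_schema('image'))
    return requests.post('%s/v2/images' % GLANCE, json=body).json()

def show_image(image_id):
    # Trust the server's response: no warlock model, no validation.
    return requests.get('%s/v2/images/%s' % (GLANCE, image_id)).json()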


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Devdatta Kulkarni
+1 Hongbin.

From the perspective of Solum, which hopes to use Magnum for its application 
container scheduling requirements, deep integration of COEs with OpenStack 
services like Keystone will be useful. 
Specifically, I am thinking that it will be good if Solum can depend on 
Keystone tokens to deploy and schedule containers on the Bay nodes instead of 
having to use COE-specific credentials. 
That way, container resources will become first-class components that can be 
monitored using Ceilometer, access controlled using Keystone, and managed from 
within Horizon.

Regards,
Devdatta


From: Hongbin Lu 
Sent: Wednesday, September 30, 2015 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
  

+1 from me as well.
 
I think what makes Magnum appealing is the promise to provide 
container-as-a-service. I see coe deployment as a helper to achieve the 
promise, instead of  the main goal.
 
Best regards,
Hongbin
 

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
  


+1 to Egor, I think that the final goal of Magnum is container as a service but 
not coe deployment as a service. ;-)

Especially since we are also working on the Magnum UI, the Magnum UI should 
export some interfaces to enable end users to create container applications, 
not only coe deployments.

I hope that Magnum can be treated as another "Nova" which is focusing on 
container service. I know it is difficult to unify all of the concepts in 
different coes (k8s has pod, service, rc; swarm only has container; nova only 
has VM, PM with different hypervisors), but this deserves some deep dive and 
thinking to see how we can move forward. 
 


 

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz  wrote:
definitely ;), but there are some thoughts on Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum should only 
focus on deployment (I feel we will become another Puppet/Chef/Ansible module 
if we do that) :)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the 
OpenStack ecosystem (Neutron/Cinder/Barbican/etc.), even if we need to step in 
to the Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: 
"openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely deprecate 
the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be very 
difficult and probably a wasted effort trying to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive docker compose 
is just command line tool which doesn’t have any api or scheduling feat

From: Egor Guz 
To: 
"openstack-dev@lists.openstack.org" 

Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also, I believe docker compose is just a command line tool which doesn’t have any 
api or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

—
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>

Re: [openstack-dev] [murano] suggestion on commit message title format for the murano-apps repository

2015-09-30 Thread Alexey Khivin
Let's discuss in more detail how it should be.

I have prepared a draft. please take a look
https://review.openstack.org/#/c/229477/




2015-09-25 13:16 GMT+03:00 Kirill Zaitsev :

> Looks reasonable to me! Could you maybe document that on HACKING.rst in
> the repo? We could vote on the commit itself.
>
> --
> Kirill Zaitsev
> Murano team
> Software Engineer
> Mirantis, Inc
>
> On 25 Sep 2015 at 02:14:09, Alexey Khivin (akhi...@mirantis.com) wrote:
>
> Hello everyone
>
> Almost every commit message in the murano-apps repository contains the
> name of the application it relates to.
>
> I suggest specifying the application within the commit message title using a
> strict and uniform format.
>
>
> For example, something like this:
>
> [ApacheHTTPServer] Utilize Custom Network selector
> 
> [Docker/Kubernetes ] Fix typo
> 
>
> instead of this:
>
> Utilize Custom Network selector in Apache App
> Fix typo in Kubernetes Cluster app 
>
>
> I think it would be useful for readability of the messages list
>
> --
> Regards,
> Alexey Khivin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Alexey Khivin

Skype: khivin
+79169167297

+7 (495) 640-4904 (office)
+7 (495) 646-56-27 (fax)
Moscow, Russia, Vorontsovskaya St. 35B, bld.3
www.mirantis.ru, www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Anant Patil's message of 2015-09-30 00:10:52 -0700:

Hi,

One of remaining items in convergence is detecting and handling engine
(the engine worker) failures, and here are my thoughts.

Background: Since the work is distributed among heat engines, by some
means heat needs to detect the failure and pick up the tasks from failed
engine and re-distribute or run the task again.

One of the simple way is to poll the DB to detect the liveliness by
checking the table populated by heat-manage. Each engine records its
presence periodically by updating current timestamp. All the engines
will have a periodic task for checking the DB for liveliness of other
engines. Each engine will check for timestamp updated by other engines
and if it finds one which is older than the periodicity of timestamp
updates, then it detects a failure. When this happens, the remaining
engines, as and when they detect the failures, will try to acquire the
lock for in-progress resources that were handled by the engine which
died. They will then run the tasks to completion.

Another option is to use a coordination library like the community owned
tooz (http://docs.openstack.org/developer/tooz/) which supports
distributed locking and leader election. We use it to elect a leader
among heat engines and that will be responsible for running periodic
tasks for checking state of each engine and distributing the tasks to
other engines when one fails. The advantage, IMHO, will be simplified
heat code. Also, we can move the timeout task to the leader which will
run time out for all the stacks and sends signal for aborting operation
when timeout happens. The downside: an external resource like
Zookeper/memcached etc are needed for leader election.



It's becoming increasingly clear that OpenStack services in general need
to look at distributed locking primitives. There's a whole spec for that
right now:

https://review.openstack.org/#/c/209661/


As the author of said spec (Chronicles of a DLM) I fully agree that we 
shouldn't be reinventing this (again, and again). Also as the author of 
that spec, I'd like to encourage others to get involved in adding their 
use-cases/stories to it. I have done some initial analysis of projects 
and documented some of the recreation of DLM-like things in it, and I'm 
very much open to including others' stories as well. In the end I hope we 
can pick a DLM (ideally a single one) that has a wide community, is 
structurally sound, is easily usable & operable, is open, and will help 
achieve and grow (what I think are) the larger long-term goals (and 
health) of many openstack projects.


Nicely formatted RST (for the latest uploaded spec) also viewable at:

http://docs-draft.openstack.org/61/209661/22/check/gate-openstack-specs-docs/ced42e7//doc/build/html/specs/chronicles-of-a-dlm.html#chronicles-of-a-distributed-lock-manager



I suggest joining that conversation, and embracing a DLM as the way to
do this.

Also, the leader election should be per-stack, and the leader selection
should be heavily weighted based on a consistent hash algorithm so that
you get even distribution of stacks to workers. You can look at how
Ironic breaks up all of the nodes that way. They're using a similar lock
to the one Heat uses now, so the two projects can collaborate nicely on
a real solution.
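
As a concrete illustration of the tooz option being discussed, here is a hedged
sketch of a heat-engine taking a per-resource distributed lock before doing
work (backend URL, member id and lock naming are illustrative assumptions, not
an agreed design):

from tooz import coordination

coordinator = coordination.get_coordinator(
    'zookeeper://zk1.example.com:2181',   # or memcached://, redis://, ...
    b'heat-engine-host1')                 # unique member id for this engine
coordinator.start()

def run_convergence_task(stack_id, resource_id, do_work):
    # Only one engine at a time may act on this resource. If the holder dies,
    # the backend releases or expires the lock and another engine can take
    # the task over.
    lock_name = ('stack-%s-resource-%s' % (stack_id, resource_id)).encode()
    with coordinator.get_lock(lock_name):
        do_work()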

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Ryan Rossiter


On 9/29/2015 11:00 PM, Monty Taylor wrote:

*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate 
the network and storage fabrics that exist in an OpenStack cloud with 
kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's not 
interesting ... an ansible playbook can do that in 5 minutes. OTOH - 
deploying some kube into a cloud in such a way that it shares a tenant 
network with some VMs that are there - that's good stuff and I think 
actually provides significant value.

+1 on sharing the tenant network with VMs.

When I look at Magnum being an OpenStack project, I see it winning by 
integrating itself with the other projects and making containers just 
work in your cloud. Here's the scenario I would want a cloud with Magnum 
to support (though it may be very pie-in-the-sky):


I want to take my container, replicate it across 3 container host VMs 
(each of which lives on a different compute host), stick a Neutron LB in 
front of it, and hook it up to the same network as my 5 other VMs.


This way, it handles my containers in a service, and integrates 
beautifully with my existing OpenStack cloud.


On 09/29/2015 10:57 PM, Jay Lau wrote:

+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially since we are also working on the Magnum UI, the Magnum UI should
export some interfaces to enable end users to create container applications,
not only coe deployments.

I hope that Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all of
the concepts in different coes (k8s has pod, service, rc; swarm only has
container; nova only has VM, PM with different hypervisors), but this
deserves some deep dive and thinking to see how we can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com> wrote:

definitely ;), but there are some thoughts on Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should only focus on deployment (I feel we will become another
Puppet/Chef/Ansible module if we do that) :)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the
OpenStack ecosystem (Neutron/Cinder/Barbican/etc.), even if we need to
step in to the Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
<daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely
deprecate the COE specific APIs such as pod/service/rc and 
container.


As we now support Mesos, Kubernetes and Docker Swarm its going to be
very difficult and probably a wasted effort trying to consolidate
their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question:
should pod/service/rc be deprecated if the user can easily get to
the k8s api?
Even if we want to orchestrate these in a Heat template, the
corresponding heat resources can just interface with k8s instead of
Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive

Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-30 Thread Zane Bitter

On 29/09/15 12:05, Ihar Hrachyshka wrote:

On 25 Sep 2015, at 16:44, Ihar Hrachyshka  wrote:

Hi all,

releases are approaching, so it’s the right time to start some bike shedding on 
the mailing list.

Recently I was told several times [1][2] that I violate our commit message 
requirement [3] for the message lines, which says: "Subsequent lines should be 
wrapped at 72 characters.”

I agree that very long commit message lines can be bad, f.e. if they are 200+ 
chars. But <= 79 chars?.. Don’t think so. Especially since we have 79 chars 
limit for the code.

We had a check for the line lengths in openstack-dev/hacking before but it was 
killed [4] as per openstack-dev@ discussion [5].

I believe commit message lines of <=80 chars are absolutely fine and should not 
get -1 treatment. I propose to raise the limit for the guideline on wiki 
accordingly.

Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
[2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
[3]: 
https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
[4]: https://review.openstack.org/#/c/142585/
[5]: 
http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519

Ihar


Thanks everyone for replies.

Now I realize WHY we do it with 72 chars and not 80 chars (git log output). :) 
I updated the wiki page with how to configure Vim to enforce the rule. I also 
removed the notion of gating on commit messages because those checks were 
removed recently.


Thanks Ihar! FWIW, vim has had built-in support for setting that width 
since at least 7.2, and I suspect long before (for me it's in 
/usr/share/vim/vim74/ftplugin/gitcommit.vim). AFAIK the only thing you 
need in your .vimrc to take advantage is:


if has("autocmd")
  filetype plugin indent on
endif " has("autocmd")

This is included in the example vimrc file that ships with vim, so I 
think better advice for 99% of people would be to just install the 
example vimrc file if they don't already have a ~/.vimrc. (There are 
*lots* of other benefits too.) I've updated the wiki to reflect that, I 
hope you don't mind :)


It'd be great if anyone who didn't have it set up already could try this 
though, since it's been many, many years since it has not worked 
automagically for me ;)


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Hongbin Lu
+1 from me as well.

I think what makes Magnum appealing is the promise to provide 
container-as-a-service. I see coe deployment as a helper to achieve the 
promise, instead of the main goal.

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

+1 to Egor, I think that the final goal of Magnum is container as a service but 
not coe deployment as a service. ;-)

Especially since we are also working on the Magnum UI, the Magnum UI should export some 
interfaces to enable end users to create container applications, not only 
coe deployments.
I hope that Magnum can be treated as another "Nova" which is focusing on 
container service. I know it is difficult to unify all of the concepts in 
different coes (k8s has pod, service, rc; swarm only has container; nova only 
has VM, PM with different hypervisors), but this deserves some deep dive and 
thinking to see how we can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz 
<e...@walmartlabs.com> wrote:
definitely ;), but there are some thoughts on Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum should only 
focus on deployment (I feel we will become another Puppet/Chef/Ansible module 
if we do that) :)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the 
OpenStack ecosystem (Neutron/Cinder/Barbican/etc.), even if we need to step in 
to the Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com> wrote:


+1

From: Tom Cammann 
<tom.camm...@hpe.com>
Reply-To: 
"openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely deprecate 
the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be very 
difficult and probably a wasted effort trying to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive docker compose 
is just command line tool which doesn’t have any api or scheduling feat

From: Egor Guz 
<e...@walmartlabs.com>
To: 
"openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also, I believe docker compose is just a command line tool which doesn’t have any 
api or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

—
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com>

Re: [openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread Dulko, Michal
On Wed, 2015-09-30 at 02:29 -0700, Clint Byrum wrote:
> Excerpts from Anant Patil's message of 2015-09-30 00:10:52 -0700:
> > Hi,
> > 
> > One of remaining items in convergence is detecting and handling engine
> > (the engine worker) failures, and here are my thoughts.
> > 
> > Background: Since the work is distributed among heat engines, by some
> > means heat needs to detect the failure and pick up the tasks from failed
> > engine and re-distribute or run the task again.
> > 
> > One of the simple way is to poll the DB to detect the liveliness by
> > checking the table populated by heat-manage. Each engine records its
> > presence periodically by updating current timestamp. All the engines
> > will have a periodic task for checking the DB for liveliness of other
> > engines. Each engine will check for timestamp updated by other engines
> > and if it finds one which is older than the periodicity of timestamp
> > updates, then it detects a failure. When this happens, the remaining
> > engines, as and when they detect the failures, will try to acquire the
> > lock for in-progress resources that were handled by the engine which
> > died. They will then run the tasks to completion.
> > 
> > Another option is to use a coordination library like the community owned
> > tooz (http://docs.openstack.org/developer/tooz/) which supports
> > distributed locking and leader election. We use it to elect a leader
> > among heat engines and that will be responsible for running periodic
> > tasks for checking state of each engine and distributing the tasks to
> > other engines when one fails. The advantage, IMHO, will be simplified
> > heat code. Also, we can move the timeout task to the leader which will
> > run time out for all the stacks and sends signal for aborting operation
> > when timeout happens. The downside: an external resource like
> > Zookeper/memcached etc are needed for leader election.
> > 
> 
> It's becoming increasingly clear that OpenStack services in general need
> to look at distributed locking primitives. There's a whole spec for that
> right now:
> 
> https://review.openstack.org/#/c/209661/
> 
> I suggest joining that conversation, and embracing a DLM as the way to
> do this.
> 
> Also, the leader election should be per-stack, and the leader selection
> should be heavily weighted based on a consistent hash algorithm so that
> you get even distribution of stacks to workers. You can look at how
> Ironic breaks up all of the nodes that way. They're using a similar lock
> to the one Heat uses now, so the two projects can collaborate nicely on
> a real solution.

It is worth mentioning that there's also an idea of using both Tooz and a
hash ring approach [1].

There was an enormously big discussion on this list when Cinder faced a
similar problem [2]. It finally became a discussion on whether we need a
common solution for DLM in OpenStack [3]. In the end Cinder is currently
trying to achieve A/A capabilities by using CAS DB operations. The
detection of failed services is still being discussed, but the most mature
solution to this problem was described in [4]. It is based on database
checks.
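
For readers who haven't followed those threads, a simplified sketch of such
database-based liveness checks might look like this (table and column names
are invented for illustration, not Heat's or Cinder's actual schema):

import datetime

import sqlalchemy as sa

engine = sa.create_engine('mysql+pymysql://heat:secret@db.example.com/heat')
meta = sa.MetaData()
services = sa.Table(
    'engine_services', meta,
    sa.Column('engine_id', sa.String(36), primary_key=True),
    sa.Column('updated_at', sa.DateTime))

REPORT_INTERVAL = datetime.timedelta(seconds=60)

def heartbeat(conn, engine_id):
    # Each engine periodically records that it is still alive.
    conn.execute(services.update()
                 .where(services.c.engine_id == engine_id)
                 .values(updated_at=datetime.datetime.utcnow()))

def dead_engines(conn):
    # Engines whose last heartbeat is older than the reporting interval are
    # presumed dead; their in-progress work can be taken over by survivors.
    cutoff = datetime.datetime.utcnow() - REPORT_INTERVAL
    rows = conn.execute(sa.select([services.c.engine_id])
                        .where(services.c.updated_at < cutoff))
    return [row.engine_id for row in rows]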

Given that many projects are facing similar problems (well, it's not a
surprise that a distributed system faces the general problems of
distributed systems…), we should certainly discuss how to approach that
class of issues. That's why a cross-project Design Summit session on the
topic was proposed [5] (this one is by harlowja, but I know that Mike
Perez also wanted to propose such a session).

[1] https://review.openstack.org/#/c/195366/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-July/070683.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071262.html
[4] http://gorka.eguileor.com/simpler-road-to-cinder-active-active/
[5] http://odsreg.openstack.org/cfp/details/8
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread Anant Patil
On 30-Sep-15 14:59, Clint Byrum wrote:
> Excerpts from Anant Patil's message of 2015-09-30 00:10:52 -0700:
>> Hi,
>>
>> One of remaining items in convergence is detecting and handling engine
>> (the engine worker) failures, and here are my thoughts.
>>
>> Background: Since the work is distributed among heat engines, by some
>> means heat needs to detect the failure and pick up the tasks from failed
>> engine and re-distribute or run the task again.
>>
>> One of the simple way is to poll the DB to detect the liveliness by
>> checking the table populated by heat-manage. Each engine records its
>> presence periodically by updating current timestamp. All the engines
>> will have a periodic task for checking the DB for liveliness of other
>> engines. Each engine will check for timestamp updated by other engines
>> and if it finds one which is older than the periodicity of timestamp
>> updates, then it detects a failure. When this happens, the remaining
>> engines, as and when they detect the failures, will try to acquire the
>> lock for in-progress resources that were handled by the engine which
>> died. They will then run the tasks to completion.
>>
>> Another option is to use a coordination library like the community owned
>> tooz (http://docs.openstack.org/developer/tooz/) which supports
>> distributed locking and leader election. We use it to elect a leader
>> among heat engines and that will be responsible for running periodic
>> tasks for checking state of each engine and distributing the tasks to
>> other engines when one fails. The advantage, IMHO, will be simplified
>> heat code. Also, we can move the timeout task to the leader which will
>> run time out for all the stacks and sends signal for aborting operation
>> when timeout happens. The downside: an external resource like
>> Zookeper/memcached etc are needed for leader election.
>>
> 
> It's becoming increasingly clear that OpenStack services in general need
> to look at distributed locking primitives. There's a whole spec for that
> right now:
> 
> https://review.openstack.org/#/c/209661/
> 
> I suggest joining that conversation, and embracing a DLM as the way to
> do this.
> 

Thanks Clint for pointing to this.

> Also, the leader election should be per-stack, and the leader selection
> should be heavily weighted based on a consistent hash algorithm so that
> you get even distribution of stacks to workers. You can look at how
> Ironic breaks up all of the nodes that way. They're using a similar lock
> to the one Heat uses now, so the two projects can collaborate nicely on
> a real solution.
>

From each stack, all the resources are distributed among heat engines,
so the work is evenly distributed at the resource level. I need to investigate
this more. Thoughts are welcome.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread Anant Patil
On 30-Sep-15 18:13, Ryan Brown wrote:
> On 09/30/2015 03:10 AM, Anant Patil wrote:
>> Hi,
>>
>> One of remaining items in convergence is detecting and handling engine
>> (the engine worker) failures, and here are my thoughts.
>>
>> Background: Since the work is distributed among heat engines, by some
>> means heat needs to detect the failure and pick up the tasks from failed
>> engine and re-distribute or run the task again.
>>
>> One of the simple way is to poll the DB to detect the liveliness by
>> checking the table populated by heat-manage. Each engine records its
>> presence periodically by updating current timestamp. All the engines
>> will have a periodic task for checking the DB for liveliness of other
>> engines. Each engine will check for timestamp updated by other engines
>> and if it finds one which is older than the periodicity of timestamp
>> updates, then it detects a failure. When this happens, the remaining
>> engines, as and when they detect the failures, will try to acquire the
>> lock for in-progress resources that were handled by the engine which
>> died. They will then run the tasks to completion.
> 
> Implementing our own locking system, even a "simple" one, sounds like a 
> recipe for major bugs to me. I agree with your assessment that tooz is a 
> better long-run decision.
> 
>> Another option is to use a coordination library like the community owned
>> tooz (http://docs.openstack.org/developer/tooz/) which supports
>> distributed locking and leader election. We use it to elect a leader
>> among heat engines and that will be responsible for running periodic
>> tasks for checking state of each engine and distributing the tasks to
>> other engines when one fails. The advantage, IMHO, will be simplified
>> heat code. Also, we can move the timeout task to the leader which will
>> run time out for all the stacks and sends signal for aborting operation
>> when timeout happens. The downside: an external resource like
>> Zookeper/memcached etc are needed for leader election.
> 
> That's not necessarily true. For single-node installations (devstack, 
> TripleO underclouds, etc) tooz offers file and IPC backends that don't 
> need an extra service. Tooz's MySQL/PostgreSQL backends only provide 
> distributed locking functionality, so we may need to depend on the 
> memcached/redis/zookeeper backends for multi-node installs.
> 

Definitely, for single-node installations one can rely on IPC as the
backend. As a convention, defaulting to the IPC provider for single-node
setups would be helpful for running heat in devstack or a development
environment. From a holistic perspective, I am referring to an external
resource because most deployments are multi-node with active-active
HA.
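
To sketch the leader-election side of this (untested, with a placeholder
backend URL and group name, and not tied to any particular tooz version), the
idea would be roughly:

import time

from tooz import coordination

coordinator = coordination.get_coordinator('memcached://127.0.0.1:11211',
                                           b'heat-engine-host1')
coordinator.start()

group = b'heat-engines'
try:
    coordinator.create_group(group).get()
except coordination.GroupAlreadyExist:
    pass
coordinator.join_group(group).get()

def on_elected_leader(event):
    # Only the elected leader runs the centralized periodic work: stack
    # timeouts, checking other engines' liveness, redistributing orphaned
    # tasks.
    print('this engine is now leader of %s' % event.group_id)

coordinator.watch_elected_as_leader(group, on_elected_leader)

while True:
    coordinator.heartbeat()
    coordinator.run_watchers()
    time.sleep(1)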

> Even if tooz doesn't provide everything we need, I'm sure patches
> would be welcome.
>
I am sure when we dive in, we will find use cases for tooz as well.

>> In the long run, IMO, using a library like tooz will be useful for heat.
>> A lot of boiler plate needed for locking and running centralized tasks
>> (such as timeout) will not be needed in heat. Given that we are moving
>> towards distribution of tasks and horizontal scaling is preferred, it
>> will be advantageous to use them.
>>
>> Please share your thoughts.
>>
>> - Anant


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Models and validation for v2

2015-09-30 Thread Kairat Kushaev
Hi All,
In short terms, I am wondering why we are validating responses from server
when we are doing
image-show, image-list, member-list, metadef-namespace-show and other
read-only requests.

AFAIK, we are building warlock models when receiving responses from server
(see [0]). Each model requires schema to be fetched from glance server. It
means that each time we are doing image-show, image-list, image-create,
member-list and others we are requesting schema from the server. AFAIU, we
are using models to dynamically validate that object is in accordance with
schema but is it the case when glance receives responses from the server?

Could somebody please explain to me the reasoning behind this implementation?
Have I missed some use cases where validation is required for server responses?

I also noticed that we already faced some issues with such an implementation
that lead to "mocking" validation ([1][2]).


[0]:
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L185
[1]:
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L47
[2]: https://bugs.launchpad.net/python-glanceclient/+bug/1501046

Best regards,
Kairat Kushaev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][glance] glance-stable-maint group refresher

2015-09-30 Thread Nikhil Komawar


On 9/30/15 8:46 AM, Kuvaja, Erno wrote:
>
> Hi all,
>
>  
>
> I’d like to propose the following changes to the glance-stable-maint team:
>
> 1)  Removing Zhi Yan Liu from the group; unfortunately he has
> moved on to other ventures and is not actively participating in our
> operations anymore.
>
+1 (always welcome back)
>
> 2)  Adding Mike Fedosin to the group; Mike has been reviewing and
> backporting patches to glance stable branches and is working with the
> right mindset. I think he would be a great addition to help share the
> workload around.
>
+1 (definitely)
>
>  
>
> Best,
>
> Erno (jokke_) Kuvaja
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-09-30 Thread Dmitry Tantsur

On 09/30/2015 03:15 PM, Ryan Brown wrote:

On 09/30/2015 04:08 AM, Dougal Matthews wrote:

Hi,

What is the standard practice for defining public APIs for OpenStack
libraries? As I am working on refactoring and updating tripleo-common,
I have
to grep through the projects I know that use it to make sure I don't
break
anything.


The API working group exists, but they focus on REST APIs so they don't
have any guidelines on library APIs.


Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.


I think assuming that anything without a leading underscore is public
might be too broad. For example, that would make all of libutils
ostensibly a "stable" interface. I don't think that's what we want,
especially this early in the lifecycle.

In heatclient, we present "heatclient.client" and "heatclient.exc"
modules as the main public API, and put versioned implementations in
modules.


I'd recommend avoiding things like 'heatclient.client', as in a big 
application it would lead to imports like


 from heatclient import client as heatclient

:)

What I did for ironic-inspector-client was to make a couple of the most 
important things available directly on the ironic_inspector_client top-level 
module, and everything else under ironic_inspector_client.v1 (modulo some 
legacy).




heatclient
|- client
|- exc
\- v1
   |- client
   |- resources
   |- events
   |- services

I think versioning the public API is the way to go, since it will make
it easier to maintain backwards compatibility while new needs/uses evolve.


++






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-09-30 Thread Ryan Brown

On 09/30/2015 04:08 AM, Dougal Matthews wrote:

Hi,

What is the standard practice for defining public APIs for OpenStack
libraries? As I am working on refactoring and updating tripleo-common, I have
to grep through the projects I know that use it to make sure I don't break
anything.


The API working group exists, but they focus on REST APIs so they don't 
have any guidelines on library APIs.



Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.


I think assuming that anything without a leading underscore is public 
might be too broad. For example, that would make all of libutils 
ostensibly a "stable" interface. I don't think that's what we want, 
especially this early in the lifecycle.


In heatclient, we present "heatclient.client" and "heatclient.exc" 
modules as the main public API, and put versioned implementations in 
modules.


heatclient
|- client
|- exc
\- v1
  |- client
  |- resources
  |- events
  |- services

I think versioning the public API is the way to go, since it will make 
it easier to maintain backwards compatibility while new needs/uses evolve.
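
To make that concrete, a hypothetical tripleo-common layout along those lines
(module and class names below are invented for illustration, not an existing
API) could re-export the blessed names at the top level and keep everything
else private:

# Hypothetical layout:
#
#   tripleo_common/
#   |- __init__.py      <- re-exports the blessed public names
#   |- exc.py
#   \- v1/
#      |- __init__.py
#      \- plans.py

# tripleo_common/__init__.py
from tripleo_common import exc                   # noqa
from tripleo_common.v1.plans import PlanManager  # noqa

# Only the names listed here are public; anything else (e.g. internal utils)
# stays private and is free to change between releases.
__all__ = ['PlanManager', 'exc']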


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel][shotgun] do we still use subs?

2015-09-30 Thread Alexander Gordeev
Hello fuelers,

My question is related to the shotgun tool [1], which is invoked in
order to generate the diagnostic snapshot.

It can substitute particular sensitive data such as
credentials/hostnames/IPs/etc. with meaningless values. This is done by
the Subs [2] object driver.

However, it seems that subs is not used anymore. Well, at least it was
turned off by default for fuel 5.1 [3] and newer. I wasn't able to find
any traces of its usage in the code in the fuel-web repo.

It seems that this piece of code for subs could be ditched. Even more, it
should be ditched, as it looks like a fifth wheel from the project
architecture point of view: shotgun is entirely about getting the
actual logs, not about corrupting them unpredictably with sed-style
substitutions.
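
To illustrate what is meant here, a toy example of that kind of blanket
substitution (not the actual Subs driver code) looks roughly like this:

import re

SUBS = [
    # Replace anything that looks like an IPv4 address.
    (re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b'), 'xx.xx.xx.xx'),
    # Blank out password values.
    (re.compile(r'(password\s*[=:]\s*)\S+', re.IGNORECASE), r'\1******'),
]

def scrub(line):
    for pattern, replacement in SUBS:
        line = pattern.sub(replacement, line)
    return line

print(scrub('2015-09-30 auth failed for 10.20.0.2 password=secret'))
# -> 2015-09-30 auth failed for xx.xx.xx.xx password=******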

Proper log sanitization is another story entirely. I doubt it
could be fitted into shotgun while being effective and/or well designed
at the same time.

Perhaps I missed something and subs is still being used actively.
So folks, don't hesitate to respond if you know something which helps
to shed light on subs.

Let's discuss anything related to subs or even vote on its removal.
Maybe we need to wait for another 2 years to pass until we can
finally get rid of it.

Let me know your thoughts.

Thanks!


[1] https://github.com/stackforge/fuel-web/tree/master/shotgun
[2] 
https://github.com/stackforge/fuel-web/blob/master/shotgun/shotgun/driver.py#L165-L233
[3] 
https://github.com/stackforge/fuel-web/blob/stable/5.1/nailgun/nailgun/settings.yaml

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

